The introduction of the Cyber Resilience Act (CRA) marks a major shift for the software industry: for the first time, manufacturers are being held accountable for the cybersecurity of their products. Olle E. Johansson, a long-time open source developer and contributor to the Asterisk PBX project, explains how this new regulation reshapes the role of software creators and introduces the need for transparency across the entire supply chain.

In this episode, Johansson breaks down the complexity of today's software supply ecosystems, where manufacturers rely heavily on open source components and end users struggle to identify vulnerabilities buried deep in third-party dependencies. With the CRA in place, the burden now falls on manufacturers to not only track but also report on the components in their products. That includes actively communicating which vulnerabilities affect users, and which do not.

To make this manageable, Johansson introduces the Transparency Exchange API (TEA), a project rooted in the OWASP CycloneDX standard. What started as a simple Software Bill of Materials (SBOM) delivery mechanism has evolved into a broader platform for sharing vulnerability information, attestations, documentation, and even cryptographic data necessary for the post-quantum transition. Standardizing this API through Ecma International is a major step toward a scalable, automated supply chain security infrastructure.

The episode also highlights the importance of automation and shared data formats in enabling companies to react quickly to threats like Log4j. Johansson notes that, historically, security teams spent countless hours manually assessing whether they were affected by a specific vulnerability. The Transparency Exchange API aims to change that by automating the entire feedback loop from developer to manufacturer to end user.

Although still in beta, the project is gaining traction, with organizations like the Apache Foundation integrating it into their release processes.
Johansson emphasizes that community feedback is essential and invites listeners to engage through GitHub to help shape the project's future.

For Johansson, OWASP stands for global knowledge and collaboration in application security. As Europe's regulatory influence grows, initiatives like this are essential to build a stronger, more accountable software ecosystem.

GUEST: Olle E. Johansson | Co-Founder, SBOM Europe | https://www.linkedin.com/in/ollejohansson/

HOST: Sean Martin, Co-Founder at ITSPmagazine and Host of Redefining CyberSecurity Podcast | https://www.seanmartin.com

SPONSORS
Manicode Security: https://itspm.ag/manicode-security-7q8i

RESOURCES
CycloneDX/transparency-exchange-api on GitHub: https://github.com/CycloneDX/transparency-exchange-api
VIDEO: The Cyber Resilience Act: How the EU is Reshaping Digital Product Security | With Sarah Fluchs: https://youtu.be/c30eG5kzqnY
Learn more and catch more stories from OWASP AppSec Global 2025 Barcelona coverage: https://www.itspmagazine.com/owasp-global-appsec-barcelona-2025-application-security-event-coverage-in-catalunya-spain
Catch all of our event coverage: https://www.itspmagazine.com/technology-and-cybersecurity-conference-coverage
Want to tell your Brand Story Briefing as part of our event coverage? Learn More
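A concrete illustration of the Log4j-style assessment the episode describes: once a manufacturer publishes a CycloneDX SBOM, checking whether a product ships an affected component becomes a mechanical lookup instead of hours of manual work. A minimal sketch in Python; the SBOM snippet is a hand-written subset of the CycloneDX JSON format, and the naive version comparison is purely illustrative, not part of the TEA specification:

```python
import json

# A minimal CycloneDX-style SBOM (illustrative subset of the real JSON format).
SBOM_JSON = """
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "components": [
    {"type": "library", "name": "log4j-core", "version": "2.14.1",
     "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1"},
    {"type": "library", "name": "jackson-databind", "version": "2.15.2",
     "purl": "pkg:maven/com.fasterxml.jackson.core/jackson-databind@2.15.2"}
  ]
}
"""

def affected_components(sbom: dict, name: str, fixed_version: tuple) -> list:
    """Return components matching `name` whose version is below the fixed release."""
    hits = []
    for comp in sbom.get("components", []):
        if comp["name"] != name:
            continue
        # Naive dotted-integer comparison; real tooling uses proper version ranges.
        version = tuple(int(part) for part in comp["version"].split("."))
        if version < fixed_version:
            hits.append(comp)
    return hits

sbom = json.loads(SBOM_JSON)
# Log4Shell was fixed in log4j-core 2.17.1 on the 2.x line.
for comp in affected_components(sbom, "log4j-core", (2, 17, 1)):
    print(f"AFFECTED: {comp['name']} {comp['version']}")
```

A real pipeline would fetch SBOMs automatically (the gap TEA targets) and match components against advisories by purl rather than by name and a hard-coded fix version.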
When starting to build a new website, you are facing a major challenge. Which framework should you use? Angular, React, Vue, Svelte? They are all based on JavaScript and can be the right choice depending on your needs. But do you really need one of these frameworks? Why would you not just stick to Java and use one of the many great libraries that are available for it?

Guests
Martijn Dashorst
https://www.linkedin.com/in/dashorst/
https://mastodon.social/@dashorst
https://twitter.com/dashorst
https://martijndashorst.com

Marcus Hellberg
https://www.linkedin.com/in/marcushellberg/
https://mstdn.social/@marcushellberg
https://twitter.com/marcushellberg
https://marcushellberg.dev/

Podcast Host
Frank Delporte
https://foojay.social/@frankdelporte
https://www.linkedin.com/in/frankdelporte/

Links
Wicket
https://wicket.apache.org/
https://builtwithwicket.tumblr.com
https://nightlies.apache.org/wicket/guide/10.x/single.html
https://wicket.apache.org/start/quickstart.html
https://wicket.apache.org/learn/#migrations
https://github.com/apache/wicket
https://twitter.com/apache_wicket

Vaadin
https://vaadin.com/
https://start.vaadin.com
https://github.com/vaadin/
https://vaadin.com/components
https://twitter.com/vaadin
https://foojay.io/?s=vaadin
https://foojay.io/today/video-vaadin-drag-drop-support-its-so-easy/
https://foojay.io/today/enterprise-java-application-development-with-jakarta-ee-and-vaadin/
https://foojay.io/today/how-to-style-a-vaadin-application/
https://foojay.io/today/blink-a-led-on-raspberry-pi-with-vaadin/

Thymeleaf / htmx
https://www.thymeleaf.org/
https://htmx.org/
https://foojay.io/today/book-review-modern-frontends-with-htmx/
https://foojay.io/today/new-book-taming-thymeleaf/
https://foojay.io/today/controlling-an-lcd-display-with-spring-and-thymeleaf-on-the-raspberry-pi/

Content
00:00 Introduction of the topic and guests
01:37 About Apache Wicket
03:26 About Vaadin
06:37 How these frameworks exchange data between server and client
09:38 Comparing to Thymeleaf
11:16 About htmx
https://foojay.io/today/book-review-modern-frontends-with-htmx/
14:42 How the Apache Foundation works
https://apache.org/
19:20 License model of Vaadin
21:26 Wicket and Vaadin "in the wild"
https://vaadin.com/blog/liukuri-uses-vaadin-flow-to-help-finnish-households-navigate-the-energy-crisis
https://liukuri.fi/
https://api.pi4j.com/
https://4drums.media/
26:03 Java developers can build full web applications with only Java without being full-stack
27:47 Could JavaFX become a web-development framework?
29:35 About WebComponents
32:14 How the company Vaadin is making money from open source
34:31 The future of Wicket, htmx, Vaadin,…
39:55 Which kind of project to build with Wicket or Vaadin
46:18 Links
48:54 Searching Vaadin docs with AI
https://marcushellberg.dev/how-to-build-a-custom-chatgpt-assistant-for-your-documentation
51:21 Conclusions

Music
Barbershop John - Hermine Deurloo
Synapse by Shane Ivers - https://www.silvermansound.com
In this episode of the "Giant Robots Smashing Into Other Giant Robots" podcast, host Victoria Guido delves into the intersection of technology, product development, and personal passions with her guests Henry Yin, Co-Founder and CTO of Merico, and Maxim Wheatley, the company's first employee and Community Leader. They are joined by Joe Ferris, CTO of thoughtbot, as a special guest co-host. The conversation begins with a casual exchange about rock climbing, revealing that both Henry and Victoria share this hobby, which provides a unique perspective on their professional roles in software development.

Throughout the podcast, Henry and Maxim discuss the journey and evolution of Merico, a company specializing in data-driven tools for developers. They explore the early stages of Merico, highlighting the challenges and surprises encountered while seeking product-market fit and the strategic pivot from focusing on open-source funding allocation to developing a comprehensive engineering metric platform. This shift in focus led to the creation of Apache DevLake, an open-source project contributed to by Merico and later donated to the Apache Software Foundation, reflecting the company's commitment to transparency and community-driven development.

The episode also touches on future challenges and opportunities in the field of software engineering, particularly the integration of AI and machine learning tools in the development process. Henry and Maxim emphasize the potential of AI to enhance developer productivity and the importance of data-driven insights in improving team collaboration and software delivery performance. Joe contributes to the discussion with his own experiences and perspectives, particularly on the importance of process over individual metrics in team management.

Merico (https://www.merico.dev/)
Follow Merico on GitHub (https://github.com/merico-dev), LinkedIn (https://www.linkedin.com/company/merico-dev/), or X (https://twitter.com/MericoDev).
Apache DevLake (https://devlake.apache.org/)
Follow Henry Yin on LinkedIn (https://www.linkedin.com/in/henry-hezheng-yin-88116a52/).
Follow Maxim Wheatley on LinkedIn (https://www.linkedin.com/in/maximwheatley/) or X (https://twitter.com/MaximWheatley).
Follow thoughtbot on X (https://twitter.com/thoughtbot) or LinkedIn (https://www.linkedin.com/company/150727/).
Become a Sponsor (https://thoughtbot.com/sponsorship) of Giant Robots!

Transcript:

VICTORIA: This is the Giant Robots Smashing Into Other Giant Robots podcast, where we explore the design, development, and business of great products. I'm your host, Victoria Guido. And with me today is Henry Yin, Co-Founder and CTO of Merico, and Maxim Wheatley, the first employee and Community Leader of Merico, creating data-driven developer tools for forward-thinking devs. Thank you for joining us.

HENRY: Thanks for having us.

MAXIM: Glad to be here, Victoria. Thank you.

VICTORIA: And we also have a special guest co-host today, the CTO of thoughtbot, Joe Ferris.

JOE: Hello.

VICTORIA: Okay. All right. So, I met Henry and Maxim at the 7CTOs Conference in San Diego back in November. And I understand that Henry, you are also an avid rock climber.

HENRY: Yes. I know you were also in Vegas during Thanksgiving. And I sort of have [inaudible 00:49] of a tradition to go to Vegas every Thanksgiving to Red Rock National Park. Yeah, I'd love to know more about how was your trip to Vegas this Thanksgiving.

VICTORIA: Yes. I got to go to Vegas as well. We had a bit of rain, actually. So, we try not to climb on sandstone after the rain and ended up doing some sport climbing on limestone around the Blue Diamond Valley area; a little bit light on climbing for me, actually, but still beautiful out there. I loved being in Red Rock Canyon outside of Las Vegas. And I do find that there's just a lot of developers and engineers who have an affinity for climbing. I'm not sure what exactly that connection is.
But I know, Joe, you also have a little bit of climbing and mountaineering experience, right?

JOE: Yeah. I used to climb a good deal. I actually went climbing for the first time in, like, three years this past weekend, and it was truly pathetic. But you have to [laughs] start somewhere.

VICTORIA: That's right. And, Henry, how long have you been climbing for?

HENRY: For about five years. I like to spend my time in nature when I'm not working: hiking, climbing, skiing, scuba diving, all of the good outdoor activities.

VICTORIA: That's great. And I understand you were bouldering in Vegas, right? Did you go to Kraft Boulders?

HENRY: Yeah, we went to Kraft, also Red Spring. It was a surprise for me. I was able to upgrade my outdoor bouldering grade to V7 this year at Red Spring and Monkey Wrench. There were always some surprises for me. When I went to Red Rock National Park last year, I met Alex Honnold there, who was shooting a documentary, and he was really, really friendly. So, really enjoying every Thanksgiving trip to Vegas.

VICTORIA: That's awesome. Yeah, well, congratulations on V7. That's great. It's always good to get a new grade. And I'm kind of in the same boat with Joe, where I'm just constantly restarting my climbing career. So [laughs], I haven't had a chance to push a grade like that in a little while. But that sounds like a lot of fun.

HENRY: Yeah, it's really hard to be consistent on climbing when you have, like, a full-time job, and then there's so much going on in life. It's always a challenge.

VICTORIA: Yeah. But a great way to, like, connect with other people, and make friends, and spend time outdoors. So, I still really appreciate it, even if I'm not maybe progressing as much as I could be. That's wonderful. So, tell me, how did you and Maxim actually meet? Did you meet through climbing or the outdoors?

MAXIM: We actually met through AngelList, which I really recommend to anyone who's really looking to get into startups.
When Henry and I met, Merico was essentially just starting. I had this eagerness to explore something really early stage where I'd get to do all of the interesting kind of cross-functional things that come with that territory, touching on product and marketing, on fundraising, kind of being a bit of everything. And I was eager to look into something that was applying, you know, machine learning, data analytics in some really practical way. And I came across what Hezheng Henry and the team were doing in terms of just extracting useful insights from codebases. And we ended up connecting really well. And I think the previous experience I had was a good fit for the team, and the rest was history. And we've had a great time building together for the last five years.

VICTORIA: Yeah. And tell me a little bit more about your background and what you've been bringing to the Merico team.

MAXIM: I think, like a lot of people in startups, I consider myself a member of the Island of Misfit Toys, in the sense that there was no kind of clear-cut linear pathway through my journey, but a really exciting and productive one nonetheless. So, I began studying neuroscience at Georgetown University in Washington, D.C. I was about to go to medical school and, in my high school years, had explored entrepreneurship in a really basic way. I think, like many people do, finding ways to monetize my hobbies and really kind of getting infected with that bug that I could create something, make money from it, and kind of be the master of my own destiny, for lack of less cliché terms. So, not long after graduating, I started my first job, which recruited me into a seed-stage venture capital firm, and from there, I had the opportunity to help early-stage startups and invest in them. I was managing a startup accelerator out there. From there, produced a documentary that followed those startups.
Not long after all of that, I ended up co-founding a consumer electronics company where I was leading product, so doing lots of mechanical, electrical, and a bit of software engineering. And without taking too long, those were certainly kind of two of the more formative things. But one way or another, I've spent my whole career now in startups and, especially, early-stage ones. Something I was eager to do was kind of take some of the high-level abstract science that I had learned in my undergraduate and kind of apply some of those frameworks to some of the things that I do today.

VICTORIA: That's super interesting. And now I'm curious about you, Henry, and your background. And what led you to get the idea for Merico?

HENRY: Yeah. My professional career is actually much simpler because Merico was my first company and my first job. Before Merico, I was a PhD student at UC Berkeley studying computer science. My research was an intersection of software engineering and machine learning. And back then, we were tackling this research problem of how do we fairly measure the developer contributions in a software project? And the reason we were interested in this project has to do with the open-source funding problem. So, let's say an open-source project gets 100k donations from Google. How can the maintainers automatically distribute all of the donations to sometimes hundreds or thousands of contributors according to their varying levels of contribution? So, that was the problem we were interested in. We did research on this for about a year. We published a paper. And later on, you know, we started the company with my, you know, co-authors. And that's how the story began for Merico.

VICTORIA: I really love that. And maybe you could tell me just a little bit more about what Merico is and why a company may be interested in trying out your services.

HENRY: The product we're currently offering actually is a little bit different from what we set out to build.
At the very beginning, we were building this platform for the open-source funding problem where, given an open-source project, we could automatically, using algorithms, measure developer contributions and automatically distribute donations to all developers. But then we encountered some technical and business challenges. So, we took out the metrics component from the previous idea and launched this new product in the engineering metric space. And this time, we focus on helping engineering leaders better understand the health of their engineering work. So, this is the Merico analytics platform that we're currently offering to software engineering teams.

JOE: It's interesting. I've seen some products that try to judge the health of a codebase, but it sounds like this is more trying to judge the health of the team.

MAXIM: Yeah, I think that's generally fair to say. As we've evolved, we've certainly liked to describe ourselves as, you know, I think a lot of people are familiar with observability tools, which help ultimately ascertain, like, the performance of the technology, right? Like, it's assessing, visualizing, chopping up the machine-generated data. And we thought there would be a tremendous amount of value in being, essentially, observability for the human-generated data. And I think, ultimately, what we found on our journey is that there's a tremendous amount of frustration, especially in larger teams, not in looking to use a tool like that for any kind of, like, policing type thing, right? Like, no one who's doing it right is looking to figure out, like, oh, who's underperforming, or who do we need to yell at? But really trying to figure out, like, where are the strengths? Like, how can we improve our processes? How can we make sure we're delivering better software more reliably, more sustainably? Like, how are we balancing that trade-off between new features, upgrades and managing tech debt and bugs?
We've ultimately just worked tirelessly to, hopefully, fill in those blind spots for people. And so far, I'm pleased to say that the reception has been really positive. We've, I think, tapped into a somewhat subtle but nonetheless really important pain point for a lot of teams around the world.

VICTORIA: Yeah. And, Henry, you said that you started it based on some of the research that you did at UC Berkeley. I also understand you leaned on the DevOps research from DORA. Can you tell me a little bit more about that and what you found insightful from the research that was out there and already existed?

MAXIM: So, I think what's really funny, and it really speaks to, I think, the importance in product development of just getting out there and speaking with your potential users or actual users, is that despite all of the deep, deep research we had done on the topic of understanding engineering, we really hadn't touched on DORA too much. And this is probably going back about five years now. Henry and I were taking a customer meeting with an engineering leader at Yahoo out in the Bay Area. He kind of revealed this to us basically where he's like, "Oh, you guys should really look at incorporating DORA into this thing. Like, all of the metrics, all of the analytics you're building super cool, super interesting, but DORA really has this great framework, and you guys should look into it." And in hindsight, I think we can now [chuckles], honestly, admit to ourselves, even if it maybe was a bit embarrassing at the time, where both Henry and I were like, "What? What is that? Like, what's DORA?" And we ended up looking into it and since then, have really become evangelists for the framework. And I'll pass it to Henry to talk about, like, what that journey has looked like.

HENRY: Thanks, Maxim. I think what's cool about DORA is, in terms of using metrics, there's always this challenge called Goodhart's Law, right?
So, whenever a metric becomes a target, the metric ceases to be a good metric, because people are going to find ways to game the metric. So, I think what's cool about DORA is that it actually offers not just one metric but four key metrics that bring balance, covering both stability and velocity. So, when you look at DORA metrics, you can't just optimize for velocity and sacrifice your stability. But you have to look at all four metrics at the same time, and that's harder to game. So, I think that's why it's become more and more popular in the industry as the starting point for using metrics for data-driven engineering.

VICTORIA: Yeah. And I like how DORA also represents it as the metrics and how they apply to where you are in the lifecycle of your product. So, I'm curious: with Merico, what kind of insights do you think engineering leaders can gain from having this data that will unlock some of their team's potential?

MAXIM: So, I think one of the most foundational things, before we get into any detailed metrics, is that it's more important than ever, especially given that so many of us are remote, right? Where the general processes of software engineering are generally difficult to understand, right? They're nuanced. They tend to kind of happen in relative isolation until a PR is reviewed and merged. And it can be challenging, of course, to understand what's being done, how consistently, how well, like, where are the good parts, where are the bad parts. And I think that problem gets really exacerbated, especially in a remote setting where no one is necessarily in the same place. So, on a foundational level, I think we've really worked hard to solve that challenge, where just being able to see, like, how are we doing?
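Henry's point about the four key metrics balancing each other can be made concrete with a toy computation. The sketch below uses an invented per-deployment record format (real data would come from CI/CD pipelines and incident tooling, which is what DevLake's connectors collect); because all four DORA metrics are derived from the same records, gaming one tends to show up in another:

```python
from datetime import datetime
from statistics import median

# Invented record format: one entry per production deployment.
deployments = [
    {"deployed": datetime(2024, 1, 1), "committed": datetime(2023, 12, 30), "failed": False},
    {"deployed": datetime(2024, 1, 3), "committed": datetime(2024, 1, 2), "failed": True,
     "restored": datetime(2024, 1, 3, 4, 0)},
    {"deployed": datetime(2024, 1, 5), "committed": datetime(2024, 1, 4), "failed": False},
    {"deployed": datetime(2024, 1, 8), "committed": datetime(2024, 1, 5), "failed": False},
]

def dora_metrics(deploys, window_days=7):
    """Compute the four DORA key metrics from a window of deployment records."""
    freq = len(deploys) / window_days  # deployment frequency, per day
    # Lead time for changes: commit -> deploy, median in hours.
    lead = median((d["deployed"] - d["committed"]).total_seconds() / 3600 for d in deploys)
    failures = [d for d in deploys if d["failed"]]
    cfr = len(failures) / len(deploys)  # change failure rate
    # Time to restore service: deploy -> restored, median hours over failures.
    mttr = (median((d["restored"] - d["deployed"]).total_seconds() / 3600
                   for d in failures) if failures else 0.0)
    return {"deploy_freq_per_day": freq, "lead_time_h": lead,
            "change_failure_rate": cfr, "time_to_restore_h": mttr}

print(dora_metrics(deployments))
```

Pushing deployment frequency alone, for instance, would surface immediately as a rising change failure rate or restore time computed from the very same records.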
And to that point, I think what we've found, before anyone even dives too deep into all of the insights that we can deliver, is that there's a tremendous amount of appetite for anyone who's looking to get into that practice of constant improvement and figuring out how to level up the work they're doing, just setting close benchmarks, figuring out, like, okay, when we talk about more nebulous or maybe subjective terms like speed, or quality, what does good look like? What does consistent look like? Being able to just tie those things to something that really kind of unifies the vocabulary is something I always like to say, where, okay, now, even if we're not focused on a specific metric, or we don't have a really particular goal in mind that we want to assess, now we're at least starting the conversation as a team from a place where, when we talk about quality, we have something that's shared between us. We understand what we're referring to. And when we're talking about speed, we can also have something consistent to talk about there. And within all of that, I think one of the most powerful things is it helps to really kind of ground the conversations around the trade-offs, right? There's always that common saying: the triangle of trade-offs, where it's, like, you can have it cheap, you can have it fast, and you can have it good, but you can only have two. And I think with DORA, with all of these different frameworks with many metrics, it helps to really solidify what those trade-offs look like. And that's, for me at least, been one of the most impactful things to watch: our global users have really started evolving their practices with it.

HENRY: Yeah. And I want to add to Maxim's answer. But before that, I just want to quickly mention how our products are structured. So, Merico actually has an open-source component and a proprietary component. The open-source component is called Apache DevLake.
It's an open-source project we created first within Merico and later on donated to the Apache Software Foundation. And now, it's one of the most popular engineering metrics tools out there. And then, on top of that, we built a SaaS offering called DevInsight Cloud, which is powered by Apache DevLake. So, with DevLake, the open-source project, you can set up your data connections, connect DevLake to all of the dev tools you're using, and then we collect data. And then we provide many different flavors of dashboards for our users. And many of those dashboards are structured around different questions engineering teams might want to ask. For example, how fast are we responding to our customer requirements? For that question, we will look at metrics like change lead time. Or, for the question, how accurate is our planning for the sprint? In that case, the dashboard will show metrics relating to the percentage of issues we can deliver every sprint versus our plan. So, based on the questions that the team wants to answer, we provide different dashboards that help them extract insights using the data from their DevOps tools.

JOE: It's really interesting you donated it to Apache. And I feel like the hybrid SaaS open-source model is really common. And I've become more and more skeptical of it over the years as companies start out open source, and then once they start getting competitors, they change the license. But by donating it to Apache, you sort of sidestep that potential trust issue.

MAXIM: Yeah, you've hit the nail on the head with that one because, in many ways, for us, engaging with Apache in the way that we have was, I think, ultimately born out of the observations we had about the shortcomings of other products in the space. For one, very practical: we realized quickly that if we wanted to offer the most complete visibility possible, it would require connections to so many different products, right?
I think anyone can look at their engineering toolchain and identify perhaps 7, 9, 10 different things they're using on a day-to-day basis. Oftentimes, those aren't shared between companies, too. So, I think part one was just figuring out, like, okay, how do we build a framework that makes it easy for developers to build a plugin and contribute to the project if there's something they want to incorporate that isn't already supported? And I think that was kind of part one. Part two is, I think, much more important and far more profound, which is developer trust, right? We saw so many different products out there that claimed to deliver these insights but really had this kind of black-box approach, right? Where data goes in, something happens, insights come out. How's it doing that? How's it weighting things? What's it calculating? What variables are incorporated? All of that is a mystery. And that really leads to developers, rightfully, not having a basis to trust what's actually being shown to them. So, for us, it was this perspective of, what's the maximum amount of transparency that we could possibly offer? Well, open source is probably the best answer to that question. We made sure the entirety of the codebase is something they can take a look at, they can modify. They can dive into the underlying queries and algorithms and how everything is working to gain a total sense of trust in how this thing is working. And if I need to modify something to account for some nuanced details of how our team works, we can also do that. And to your point, you know, I think it's definitely something I would agree with, that one of the worst things we see in the open-source community is that companies will be kind of open source in name only, right? Where it's really more of a marketing or kind of sales thing than anything, where it's like, oh, let's tap into the good faith of open source.
But really, somehow or another, through bait and switch, through partial open source, through license changes, whatever it is, we're open source in name only but really a proprietary, closed-source product. So, for us, donating the core of DevLake to the Apache Foundation was essentially our way of really, like, you know, walking the talk, right? Where no one can doubt at this point, like, oh, is this thing suddenly going to have the license changed? Is this suddenly going to go closed-source? Like, the answer to that now is a definitive no, because it is now part of that ecosystem. And I think with the aspirations we've had to build something that is not just a tool but, hopefully, long-term becomes, like, foundational technology, I think that gives people confidence and faith that this is something they can really invest in. They can really plumb it into their processes in a deep and meaningful way with no concerns whatsoever that something is suddenly going to change that makes all of that work, you know, something that they didn't expect.

JOE: I think a lot of companies guard their source code like it's their secret sauce, but my experience has been more that it's the secret shame [laughs].

HENRY: [laughs]

MAXIM: There's no doubt that, in my role with especially our open-source product, driving our community, we've really seen the magic of what a community-driven product can be. And open source, I think, is the most kind of true expression of a community-driven product, where we have a Slack community with nearly 1,000 developers in it now. Naturally, right? Some of those developers are in there just to ask questions and answer questions. Some are intensely involved, right? They're suggesting improvements. They're suggesting new features. They're finding ways to refine things. And it really is that, like, fantastic culture that I'm really proud that we've cultivated, where best idea ships, right?
If you've got a good idea, throw it into a GitHub issue or a comment. Let's see how the community responds to it. Let's see if someone wants to pick it up. Let's see if someone wants to submit a PR. If it's good, it goes into production, and then the entire community benefits. And, for me, that's something I've found endlessly exciting.

HENRY: Yeah. I think Joe made a really good point on the secret sauce part because I don't think the source code is our secret sauce. There's no rocket science in DevLake. If we break it down, it's really just some UI/UX plus data pipelines. I think what's making DevLake successful is really the trust and collaboration that we're building with the open-source community. When it comes to trust, I think there are two aspects. First of all, trust in the metric accuracy, right? Because with a lot of proprietary software, you don't know how they are calculating the metrics. If people don't know how the metrics are calculated, they can't really trust them and use them. And secondly, the trust that they can always use this software, and there's no vendor lock-in. And when it comes to collaboration, we were seeing that many of our data sources and dashboards were contributed not by our core developers but by the community. And the community really, you know, brings in their insights and their use cases into DevLake and makes DevLake, you know, more successful and more applicable to more teams in different areas of software engineering.

MID-ROLL AD: Are you an entrepreneur or start-up founder looking to gain confidence in the way forward for your idea? At thoughtbot, we know you're tight on time and investment, which is why we've created targeted 1-hour remote workshops to help you develop a concrete plan for your product's next steps. Over four interactive sessions, we work with you on research, product design sprint, critical path, and presentation prep so that you and your team are better equipped with the skills and knowledge for success.
Find out how we can help you move the needle at tbot.io/entrepreneurs.

VICTORIA: I understand you've taken some innovative approaches on using AI in your open-source repositories to respond to issues and questions from your developers. So, can you tell me a little bit more about that?

HENRY: Absolutely. I self-identify as a builder. And one characteristic of a builder is to always chase after the dream of building infinite things within a finite lifespan. So, I was always thinking about how we can be more productive, how we can, you know, get better at getting better. And so, this year, you know, AI is huge, and there are so many AI-powered tools that can help us achieve more in terms of delivering software. And then, internally, we had a hackathon, and one project coming out of it is an AI-powered coding assistant called DevChat. And we have made it public at devchat.ai. But we've been closely following, you know, what are the other AI-powered tools that can make, you know, software developers' or open-source maintainers' lives easier? And we've been observing that there are more and more open-source projects adopting AI chatbots to help them, you know, respond to GitHub issues. So, I recently did a case study on a pretty popular open-source project called LangChain. It's the hot kid in the AI space right now. And it's using a chatbot called Dosu to help respond to issues. I had some interesting findings from the case study.

VICTORIA: In what ways was that chatbot really helpful, and in what ways did it not really work that well?

HENRY: Yeah, I was thinking of how to measure the effectiveness of that chatbot. And I realized that there is a feature that's built into GitHub, which is the reaction to a comment. So, how the chatbot works is, whenever there is a new issue, the chatbot would basically run a retrieval-augmented generation pipeline and then use an LLM to generate a response to the issue.
And then people leave reactions to that comment by the chatbot; mostly, it's thumbs up and thumbs down. So, what I did is I collected all of the issues from the LangChain repository and looked at how many thumbs up and thumbs down the Dosu chatbot got, you know, from all of the comments it left on the issues. What I found is that across the 2,600 issues that the Dosu chatbot helped with, it got around 900 thumbs up and 1,300 thumbs down. So, then it comes to how we interpret this data, right? The fact that it got more thumbs down than thumbs up doesn't mean that it's actually not useful, or harmful, to the developers. To answer that question, I actually looked at some examples of thumbs-up and thumbs-down comments. And what I found is the thumbs down doesn't mean that the chatbot is harmful. Mostly, the developers are signaling to the open-source maintainers that your chatbot is not helping in this case, and we need human intervention. But with the thumbs up, the chatbot is actually helping a lot. There's one issue where someone posted a question, and the chatbot just wrote the code and basically made a suggestion on how to resolve the issue. And the human response was, "Damn, it worked." That was very surprising to me, and it made me consider, you know, adopting similar technology and AI-powered tools for our own open-source project. VICTORIA: That's very cool. Well, I want to go back to the beginning of Merico. And when you first got started, and you were trying to understand your customers and what they need, was there anything surprising in that early discovery process that made you change your strategy? HENRY: So, one challenge we faced when we first explored the open-source funding allocation problem space is that our algorithm looks at the Git repository. But with software engineering, especially with open-source collaboration, there are so many activities happening outside of open-source repos on GitHub. 
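Henry's reaction tally can be reproduced with the GitHub REST API: listing a repository's issue comments returns, for each comment, the author login and a `reactions` summary that includes "+1" and "-1" counts. The sketch below is a hedged illustration, not Henry's actual script; the bot login `dosubot` and the stubbed comment objects are assumptions, and the HTTP fetching is left out so the counting logic stands on its own.

```python
from collections import Counter

def tally_bot_reactions(comments, bot_login="dosubot"):
    """Count thumbs-up/down reactions on a bot's issue comments.

    `comments` is an iterable of comment objects shaped like the GitHub
    REST API's issue-comment responses: each has a `user.login` and a
    `reactions` summary with "+1" and "-1" counts. The login "dosubot"
    is an assumption for illustration.
    """
    totals = Counter()
    for c in comments:
        if c["user"]["login"] != bot_login:
            continue  # skip human comments; we only grade the bot
        totals["+1"] += c["reactions"].get("+1", 0)
        totals["-1"] += c["reactions"].get("-1", 0)
    return totals

# Stubbed API responses standing in for real fetched comments:
sample = [
    {"user": {"login": "dosubot"}, "reactions": {"+1": 2, "-1": 3}},
    {"user": {"login": "dosubot"}, "reactions": {"+1": 1, "-1": 0}},
    {"user": {"login": "human"},   "reactions": {"+1": 5, "-1": 0}},
]
print(tally_bot_reactions(sample))
```

Interpreting the totals still needs the qualitative step Henry describes: a thumbs-down here usually flags "needs human intervention" rather than "the bot did harm."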
For example, I might be an evangelist, and my day-to-day work might be, you know, engaging in community work, talking about the open-source project at conferences. And all of those things were not captured by our algorithm, which was only looking at the GitHub repository at the time. So, that was one of the technical challenges we faced, and it led us to switch over to more of the system-driven metrics side. VICTORIA: Gotcha. Over the years, how has Merico grown? What has changed between when you first started and today? HENRY: So, one thing is the team size. When we just got started, we only had, you know, the three co-founders and Maxim. And now we have grown to a team of 70 team members, and we have a fully distributed team across multiple continents. So, those are pretty interesting dynamics to handle. And we learned a lot about how to build an effective, cohesive team along the way. And in terms of product, DevLake now, you know, has more than 900 developers in our Slack community, and we track over 360 companies using DevLake. So, we've definitely come a long way since we started the journey. And yeah, tomorrow, actually, Maxim and I are going to host our end-of-year Apache DevLake Community Meetup, featuring Nathen Harvey, Google's DORA team lead. Yeah, we've definitely made some progress since we've been working on Merico for four years. VICTORIA: Well, that's exciting. Well, say hi to Nathen for me. I helped take over DevOps DC with some of the other organizers when he was running it way back in the day, so [laughs] that's great. What challenges do you see on the horizon for Merico and DevLake? MAXIM: One of the challenges I think about a lot, and I think it's front of mind for many people, especially with software engineering, but at this point, nearly every profession, is what does AI mean for everything we're doing? 
What does the future look like where developers are maybe producing the majority of their code through prompt-based approaches versus code-based approaches, right? How do we start thinking about how we coherently assess that? Like, how do you maybe redefine what the value is in a scenario where, if we fast forward a few years, the AI is so good that the code is essentially perfect? What does success look like then? How do you start thinking about what a good team is if everyone is shooting out 9-out-of-10 PRs nearly every time because they're all using a unified framework supported by AI? So, I think that's certainly one of the challenges I envision in the future. I think, really practically, too, many startups have been contending with the macro climate and the fundraising climate. You know, I think many of the companies out there, us included, had better conditions in 2019 and 2020 to raise funds at more favorable valuations and perhaps more relaxed terms, given the climate of the public markets and, you know, monetary policy. That, obviously, is something we're all experiencing, and it has tightened things up: revenue expectations, or now higher expectations on getting to a highly profitable place; the benchmark is set a lot higher there. So, I think it's not a challenge that's unique to us in any way at all. I think it's true for almost every company out there. It's now about thinking in a more disciplined way about how you meet the market demands without compromising on the product vision and without compromising on the roadmap and the strategies that you've put in place that are working but are maybe coming under a little bit more pressure, given the new set of rules that have been laid out for all of us. VICTORIA: Yeah, that is going to be a challenge. And do you see the company and the product solving some of those challenges in a unique way? 
HENRY: I've been thinking about how AI can fulfill the promise of making developers 10x developers. I'm an early adopter and big fan of GitHub Copilot. I think it really helps with writing, like, the boilerplate code. But I think it's improving my productivity by maybe 20% to 30%. That's still pretty far away from 10x. So, I'm thinking about how Merico's solutions can help fill the gap a little bit. In terms of Apache DevLake and its SaaS offering, I think we are helping with, like, team collaboration and measuring software delivery performance, and how the team can improve as a whole. And then, recently, we had a spin-off, which is the AI-powered coding assistant DevChat. And that's more about empowering individual developers with, like, testing and refactoring, these common workflows. And one big thing for us in the future is how we can combine these two components: the team collaboration and improvement tool, DevLake, with the individual coding assistant, DevChat, and how they can be integrated to empower developers. I think that's the big question for Merico ahead. JOE: Have you used Merico to judge the contributions of AI to a project? HENRY: [laughs] So, actually, after we pivoted to engineering metrics, we now focus less on individual contribution because that can sometimes be counterproductive. Because whenever you visualize that, people will sometimes become defensive and try to optimize for the metrics that measure individual contributions. So, nowadays, we no longer offer that kind of metric within DevLake, if that makes sense. MAXIM: And that kind of goes back to one of Victoria's earlier questions about, like, what surprised us in the journey. Early on, we had this very benevolent perspective, you know, I would want to kind of underline that, that we never sought to be judging individuals in a negative way. 
We were looking to find ways to make it useful, even to a point of finding ways...like, we explored different ways to give developers badges and different kind of accomplishment milestones, like, things to kind of signal their strengths and accomplishments. But I think what we've found in that journey is that...and I would really kind of say this strongly. I think the only way that metrics of any kind serve an organization is when they support a healthy culture. And to that end, what we found is that we always like to preach, like, it's processes, not people. It's figuring out if you're hiring correctly, if you're making smart decisions about who's on the team. I think you have to operate with a default assumption within reason that those people are doing their best work. They're trying to move the company forward. They're trying to make good decisions to better serve the customers, better serve the company and the product. With that in mind, what you're really looking to do is figure out what is happening within the underlying processes that get something from thought to production. And how do you clear the way for people? And I think that's really been a big kind of, you know, almost like a tectonic shift for our company over the years is really kind of fully transitioning to that. And I think, in some ways, DORA has represented kind of almost, like, a best practice for, like, processes over people, right? It's figuring out between quality and speed; how are you doing? Where are those trade-offs? And then, within the processes that account for those outcomes, how can you really be improving things? So, I would say, for us, that's, like, been kind of the number one thing there is figuring out, like, how do we keep doubling down on processes, not people? 
And how do we really make sure that we're not just telling people that we're on their side and we're taking a, you know, a very humanistic perspective on wanting to improve the lives of people but actually doing it with the product? HENRY: But putting the challenge of measuring individual contributions aside, I'm as curious as Joe about AI's role in software engineering. I expect to see more and more involvement of AI, gradually, you know, replacing low-level and medium-level and, in the future, even high-level tasks for humans, so we can just focus on, like, the objective instead of the implementation. VICTORIA: I can imagine, especially if you're starting to integrate AI tools into your systems and you're growing your company at scale, some of the ability to have a natural intuition about what's going on really becomes a challenge, and the data that you can derive from some of these products could help you make better decisions and all different types of things. So, I'm kind of curious to hear from Joe; with your history of open-source contribution and being a part of many different development teams, what kind of information do you wish that you had to help you make decisions in your role? JOE: Yeah, that's an interesting question. I've used some tools that try to identify problem spots in the code. But it'd be interesting to see the results of tools that analyze problem spots in the process. Like, I'd like to learn more about how that works. HENRY: I'm curious; one question for Joe. What is your favorite non-AI-powered code scanning tool that you find useful for yourself or for your team? JOE: I think the most common static analysis tool I use is something to find the Git churn in a repository. Some of this probably is because I've worked mostly on projects these days with dynamic languages. So, there's kind of a limit to how much static analysis you can do of, you know, a Ruby or a Python codebase. 
But just by analyzing which parts of the application change the most, you can find which parts are likely to be the buggiest and the most complex. I think every application tends to involve some central model. Like, if you're making an e-commerce site, then probably products are going to have a lot of the core logic, and purchases will have a lot of the core logic. And identifying those centers of gravity just through the Git statistics has helped me find places that need to be reworked. HENRY: That's really interesting. Is it something like a hotspot analysis? And when you find a hotspot, would you invest more resources in, like, refactoring the hotspot to make it more maintainable? JOE: Right, exactly. Like, you can use the statistics to see which files you should look at. And then, usually, when you actually go into the files, especially if you look at some of the changes to the files, it's pretty clear that, for example, a class has become too large or something has become too tightly coupled. HENRY: Gotcha. VICTORIA: Yeah. And so, if you could go back in time, five years ago, and give yourself some advice when you first started along this journey, what advice would you give yourself? MAXIM: I'll answer the question in two ways: first for the company and then for myself personally. I think for the company, what I would say is, especially when you're in that kind of pre-product-market-fit space, and you're maybe struggling to figure out how to solve a challenge that really matters, I think you need to really think carefully about, like, how would you yourself be using your product? And if you're finding reasons you wouldn't, like, really, really pay careful attention to those. And I think, for us, early on in our journey, we ultimately found ourselves asking, okay, we're a smaller, earlier-stage team. Perhaps, like, small improvements in productivity or quality aren't going to necessarily move the needle. 
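The churn analysis Joe describes boils down to counting how often each file appears in the commit log. A minimal sketch, assuming a plain `git` checkout and Python (not a specific tool from the conversation): `git log --format= --name-only` prints one changed path per line with blank lines between commits, and the counting is kept in a separate function so it works on any list of paths.

```python
import subprocess
from collections import Counter

def churn_from_paths(paths):
    """Tally how many commits touched each (non-blank) file path."""
    return Counter(p for p in paths if p.strip())

def git_churn(repo_dir=".", top=10):
    """Return the `top` most frequently changed files in a repository."""
    out = subprocess.run(
        ["git", "log", "--format=", "--name-only"],
        cwd=repo_dir, capture_output=True, text=True, check=True,
    ).stdout
    return churn_from_paths(out.splitlines()).most_common(top)

# The counting logic on stubbed log output (paths are hypothetical):
sample = ["app/models/product.rb", "", "app/models/product.rb", "README.md"]
print(churn_from_paths(sample).most_common(2))
# [('app/models/product.rb', 2), ('README.md', 1)]
```

Files at the top of the churn list are the "centers of gravity" Joe mentions; cross-referencing them with a quick read of their recent diffs often points straight at refactoring candidates.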
That's one of the reasons maybe we're not using this. Maybe our developers are already at bandwidth. So, it's not a question of unlocking more bandwidth or figuring out where there are weak points or bottlenecks at that level, but maybe how we can dial in our own processes to let the whole team function more effectively. And I think, for us, the more we started thinking through that lens of, like, what's useful to us, what's solving a pain point for us, in many ways, DevLake was born out of that exact thinking. And now DevLake is used by hundreds of companies around the world and has, you know, this nearly thousand-developer community that supports it. And I think that's a testament to the power of that. For me, personally, if I were to go back five years, you know, I'm grateful to say there isn't a whole lot I would necessarily change. But I think if there's anything that I would, it would just be to consistently be braver in sharing ideas, right? I think Merico has done a great job, and it's something I'm so proud of for us as a team, of really embracing new ideas and really making sure, like, the best idea ships, right? There isn't a title. There isn't a level of seniority that determines whether or not someone has a right to suggest something or improve something. And I think with that in mind, for me as a technical person but not a member of technical staff, so to speak, there were many occasions, for me personally, where I felt like, okay, maybe because of that, I shouldn't necessarily weigh in on certain things. And I think what I've found, and it's a trust-building thing as well, is, like, even if you're wrong, even if your suggestion maybe misunderstands something or isn't quite on target, there's still a tremendous amount of value in just being able to share a perspective and share a recommendation and push it out there. 
And I think with that in mind, like, it's something I would encourage myself and encourage everybody else in a healthy company to feel comfortable to just keep sharing because, ultimately, it's an accuracy-by-volume game to a certain degree, right? If I come up with one idea, then I've got one swing at the bat. But if we as a collective come up with 100 ideas that we consider intelligently, we've got a much higher chance of a handful of those really pushing us forward. So, for me, that would be advice I would give myself and to anybody else. HENRY: I'll follow the same structure, so I'll start with the advice at the company level and then the advice to myself as an individual. So, at the company level, I think my advice would be to fail fast, because every company needs to go through this exploration phase trying to find their product-market fit, and they will have to test, you know, a couple of ideas before they find the right fit for themselves; the same was true for us. And I wish that we had had more structure in exploring these ideas and had set deadlines, you know, set milestones for us to quickly test and filter out bad ideas and accelerate the exploration process. So, fail fast would be my suggestion at the company level. At an individual level, I would say it's more about adapting to my CTO role, because when I started the company, I still had that, you know, graduate student hustle mindset. I love writing code myself. And it's okay if I spent 100% of my time writing code when the company was, you know, at five people, right? But it's not okay [chuckles] when we have, you know, a team of 40 engineers. So, I wish I had had that realization earlier and had transitioned to a real CTO role earlier, focusing more on, like, technical evangelism or building out the technical and non-technical infrastructure to help my engineering teams be successful. VICTORIA: Well, I really appreciate that. And is there anything else that you all would like to promote today? 
HENRY: So, if you're an engineering leader looking to measure, you know, some metrics and adopt a more data-driven approach to improving your software delivery performance, check out Apache DevLake. It's an open-source project, free to use, and it has some great dashboards and supports various data sources. And join our community. We have a pretty vibrant community on Slack. And there are a lot of developers and engineering leaders discussing how they can get more value out of data and metrics and improve software delivery performance. MAXIM: Yeah. And to add to that, something I think we've found consistently is there are plenty of data skeptics out there, rightfully so. I think a lot of analytics of every kind are really not very good, right? And so, I think people are rightfully frustrated or even traumatized by them. And for the data skeptics out there, I would invite them to dive into the DevLake community and pose your challenges, right? If you think this stuff doesn't make sense or you have concerns about it, come join the conversation, because that's really where the most productive discussions end up coming from: not from people mutually high-fiving each other for a successful implementation of DORA. The really exciting moments come from the people in the community who are challenging it and saying, "You know what? Here's where I don't necessarily think something is useful, or where I think it could be improved." And it's something that's not up to us as individuals to either bless or to deny. That's where the community gets really exciting: those discussions. So, I would say, if you're a data skeptic, come and dive in, and so long as you're respectful, challenge it. And by doing so, you'll hopefully not only help yourself but really help everybody, which is what I love about this stuff so much. JOE: I'm curious, does Merico use Merico? HENRY: Yes. We've been dogfooding ourselves a lot. 
And a lot of the product improvement ideas actually come from our own dogfooding process. For example, there was one time we looked at a dashboard that has this issue change lead time. And we found our issue change lead time, you know, went up in the past few months. And then we were trying to interpret whether that was a good thing or a bad thing, because just looking at a single metric doesn't tell us the story behind the change in the metric. So, we actually improved the dashboard to include some, you know, covariates of the metric, some other related metrics, to help explain the trend of the metric. So yeah, dogfooding is always useful in improving a product. VICTORIA: That's great. Well, thank you all so much for joining. I really enjoyed our conversation. You can subscribe to the show and find notes along with a complete transcript for this episode at giantrobots.fm. If you have questions or comments, email us at hosts@giantrobots.fm. And you can find me on Twitter @victori_ousg. This podcast is brought to you by thoughtbot and produced and edited by Mandy Moore. Thanks for listening. See you next time.
William Kwok speaks with Doc Searls and Shawn Powers about Apache SeaTunnel, an exciting and extremely useful open-source way to synchronize multiple databases. Hosts: Doc Searls and Shawn Powers Guest: William Kwok Download or subscribe to this show at https://twit.tv/shows/floss-weekly Think your open source project should be on FLOSS Weekly? Email floss@twit.tv. Thanks to Lullabot's Jeff Robbins, web designer and musician, for our theme music. Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit
We wish you a happy new year and kick off right away with a few personal adventures, including practical use cases for the new ChatGPT AI from OpenAI. We have some catching up to do in the news, so we focus on the essentials: new CPUs from Intel and AMD, a new old graphics card from NVIDIA, and... a new Linux distribution that promises to be immutable? From the rumor mill we bring Apple with its new mixed reality headset, as well as speculation about end-to-end encryption for iCloud Drive, which is now available as an opt-in in the US. Also included: the request to rename the Apache Foundation, the LastPass hack and possible courses of action should you be affected, and a closing rant about Discord and Telegram as support channels for projects. Enjoy!
Indigenous tech group asks Apache Foundation to change its name - https://arstechnica.com/gadgets/2023/01/indigenous-tech-group-asks-apache-foundation-to-change-its-name/ Native Americans urge Apache Software Foundation to ditch its name - https://www.theregister.com/2023/01/11/native_american_apache_software_foundation/ Intel Launches 4th Gen Xeon Scalable "Sapphire Rapids", Xeon CPU Max Series - https://www.phoronix.com/review/intel-xeon-sapphire-rapids-max Intel Xeon Platinum 8490H "Sapphire Rapids" Performance Benchmarks - https://www.phoronix.com/review/intel-xeon-platinum-8490h After big delays, Sapphire Rapids arrives, full of accelerators and superlatives - https://www.theregister.com/2023/01/10/after_big_delays_intels_new/ Inside Intel's Delays in Delivering a Crucial New Microprocessor - https://www.nytimes.com/2023/01/10/technology/intel-sapphire-rapids-microprocessor.html The State of JS 2022 - https://2022.stateofjs.com/en-US/ #apache #nativeamericans #intel #xeon #sapphirerapids #stateofjs #stateofjs2022 === RSS - https://anchor.fm/s/b1bf48a0/podcast/rss --- Send in a voice message: https://podcasters.spotify.com/pod/show/edodusi/message
What does the layer underneath Docker actually look like? And how does Kubernetes interact with containers? In episode 46 we clarified which problem Docker actually solves. The container ecosystem, however, is far bigger. So this episode is dedicated to the layer below. We discuss the modularization of Docker, the extracted high-level runtime containerd, how Kubernetes deals with Docker containers, whether Docker containers are the only kind of container Kubernetes supports, what the Container Runtime Interface (CRI) is, what the Open Container Initiative (OCI) is, and whether you, too, can program your own high-level container runtime. Bonus: what the Linux mafia is, and why there will soon be an Austrian container runtime. Feedback (also welcome as a voice message) Email: stehtisch@engineeringkiosk.dev Twitter: https://twitter.com/EngKiosk WhatsApp +49 15678 136776 We're also happy to cover your audio feedback in one of the next episodes: just send an audio file via email or WhatsApp voice message to +49 15678 136776. Links: Engineering Kiosk #46 Which problem does Docker solve?: https://engineeringkiosk.dev/podcast/episode/46-welches-problem-l%C3%B6st-docker/ c't - magazine for computer technology: https://www.heise.de/ct/ iX - magazine for professional information technology: https://www.heise.de/ix/ High-level container runtime containerd: https://containerd.io/ Getting started with containerd: https://github.com/containerd/containerd/blob/main/docs/getting-started.md contaiNERD CTL - Docker-compatible CLI for containerd, with support for Compose, Rootless, eStargz, OCIcrypt, IPFS: https://github.com/containerd/nerdctl shim: https://de.wikipedia.org/wiki/Shim_(Informatik) CoreOS: https://de.wikipedia.org/wiki/Core_OS rkt: https://github.com/rkt/rkt Container Runtime Interface (CRI): https://kubernetes.io/docs/concepts/architecture/cri/ CRI plugin of containerd: https://github.com/containerd/containerd/tree/main/pkg/cri High-level container runtime CRI-O: https://cri-o.io/ Cloud Native Computing Foundation: https://www.cncf.io/ Linux Foundation: https://www.linuxfoundation.org/ Apache Foundation: https://www.apache.org/ gVisor - Container Security Platform: https://gvisor.dev/ minikube - local Kubernetes cluster: https://minikube.sigs.k8s.io/docs/ Open Container Initiative: https://opencontainers.org/ Chapters (00:00:00) Intro (00:00:58) Computerbild, c't, and iX, and one layer deeper than "What is Docker" (00:04:31) The next layer down after docker run: the high-level runtime containerd (00:07:05) What is a container lifecycle, and what's the difference between high-level and low-level runtimes? (00:09:37) Is containerd still a part of Docker? (00:10:35) Can containerd be used by clients other than Docker? (00:13:06) Does Kubernetes also use Docker? dockershim, CoreOS, rkt (00:16:01) Kubernetes' Container Runtime Interface (CRI) (00:19:44) Kubernetes, API server, kubelet, and the high-level runtime (00:20:58) An alternative high-level container runtime to containerd: CRI-O (00:24:15) The Cloud Native Computing Foundation (CNCF) (00:28:16) Which is better, containerd or CRI-O? (00:29:57) dockershim, and does Kubernetes still support Docker? (00:32:10) Docker containers are actually Open Container Initiative (OCI) images and containers (00:33:21) Where do the Open Container Initiative (OCI) and its standards come from? (00:34:41) Summary, and your options for writing your own runtime (00:36:31) Outlook on the next container episode, and feedback Hosts Wolfgang Gassler (https://twitter.com/schafele) Andy Grunwald (https://twitter.com/andygrunwald) Feedback (also welcome as a voice message) Email: stehtisch@engineeringkiosk.dev Twitter: https://twitter.com/EngKiosk WhatsApp +49 15678 136776
In this episode we speak to Sergei Egorov, CEO of AtomicJar, the company behind Testcontainers, a library that helps with integration testing for containerized applications. We discuss the challenges of developing container-based applications, how to orchestrate containers for testing, the future of cloud development environments, and whether the Apple M1 chip has come too late. About Sergei Egorov: Sergei Egorov is CEO & co-founder of AtomicJar - the company behind Testcontainers, on a mission to make integration testing easy and enjoyable for developers. He is a Java Champion, an active member of the open source community, a member of the Apache Foundation, and on the Reactive Foundation TOC. Other things mentioned: Docker, Kubernetes, Google Cloud Run, Heroku, Josh Wong, Twelve-Factor App, WebAssembly, Scaffold, Buildpacks, KO for Go, LocalStack, Lambda, DynamoDB, Quarkus, GitHub Codespaces, MacBook Pro M1, Notion. Let us know what you think on Twitter: https://twitter.com/consoledotdev https://twitter.com/davidmytton https://twitter.com/bsideup Or by email: hello@console.dev About Console: Console is the place developers go to find the best tools. Our weekly newsletter picks out the most interesting tools and new releases. We keep track of everything - dev tools, devops, cloud, and APIs - so you don't have to. Sign up for free at: https://console.dev Recorded: 2022-04-07.
Today on the show we're talking about streaming data and streaming applications with Apache Kafka. We're joined by Kris Jenkins, Developer Advocate from Confluent, and Rob Walters, Product Manager at MongoDB, who will discuss how you can leverage this technology to your benefit and use it in your applications. Kafka is traditionally used for building real-time streaming data pipelines and real-time streaming applications. It began its life in 2010 at LinkedIn and made its way to the public open-source space through the Apache Foundation in 2011. Since then, the use of Kafka has grown massively, and it's estimated that approximately 30% of all Fortune 500 companies are already using Kafka in one way or another. A great example of why you might want to use Kafka would be capturing all of the user activity that happens on your website. As users visit your website, they're interacting with links on the page and scrolling up and down. This is potentially a large volume of data. You may want to store it to understand how users are interacting with your website in real time. Kafka will aid in this process by ingesting and storing all of this activity data while serving up reads for applications on the other side. Conversation highlights include: [03:38] What is Kafka? [05:29] At the heart of every database [08:03] The difference between Kafka and a database [09:03] What Kafka's architecture looks like [12:03] Kafka as a data backbone of system architecture [14:06] MongoDB and Kafka working together [15:40] What are "Topics" in Kafka? [17:53] Change stream events [19:58] Kafka's history [22:07] MongoDB Connector, and Kafka via Confluent Cloud [25:53] Popular use cases using Kafka and MongoDB [27:48] Kafka and stream processing with games and event data [29:13] KSQL and processing against the stream of data [30:59] developer.confluent.io, a place to learn everything about Kafka
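The website-activity example in the notes above comes down to publishing key/value byte pairs to a Kafka topic. The sketch below is a hedged illustration, not from the episode: the event fields, the topic name "user-activity", and the use of the `kafka-python` client are assumptions, and the actual producer call is shown only in a comment so the serialization logic runs on its own. Keying by user ID means all of one user's events land in the same partition, preserving per-user ordering.

```python
import json

def encode_activity_event(user_id, action, page):
    """Serialize one user-activity event as a Kafka key/value pair.

    Kafka messages are opaque bytes on the wire; JSON is just one
    common choice of value encoding.
    """
    key = user_id.encode("utf-8")
    value = json.dumps(
        {"user": user_id, "action": action, "page": page}
    ).encode("utf-8")
    return key, value

# With a broker running, publishing could look roughly like this
# (kafka-python client; broker address and topic name are assumptions):
#
#   from kafka import KafkaProducer
#   producer = KafkaProducer(bootstrap_servers="localhost:9092")
#   key, value = encode_activity_event("u42", "click", "/products/7")
#   producer.send("user-activity", key=key, value=value)
#   producer.flush()

key, value = encode_activity_event("u42", "click", "/products/7")
print(key, value)
```

A consumer on the other side of the topic would decode the same JSON and feed it into the real-time dashboards or stream processors (e.g. ksqlDB) discussed in the episode.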
https://go.dok.community/slack https://dok.community/ From the DoK Day EU 2022 (https://youtu.be/Xi-h4XNd5tE) What does Kubernetes provide that allows us to reduce the complexity of Apache Cassandra while making it better suited for cloud native deployments? That was the question we started with as we began a mission to bring Cassandra closer to Kubernetes and eliminate the redundancy. Many great open source databases have been adapted to run on Kubernetes without relying on the deep ecosystem of projects that it takes to run in Kubernetes (there is a difference). This talk will discuss the design and implementation of the Astra Serverless Database, which re-architected Apache Cassandra to run only on Kubernetes infrastructure. Built to be optimized for multi-tenancy and auto-scaling, we set out with a design goal to completely separate compute and storage. Decoupling different aspects of Cassandra into scalable services and relying on the benefits of Kubernetes and its ecosystem created a simpler, more powerful database service than a standalone, bare-metal Cassandra cluster. The entire system is now built on Apache Cassandra, Stargate, Etcd, Prometheus, and object storage like MinIO or Ceph. In this talk we will discuss the downstream changes coming to several open source projects based on the work we have done. Jake is a lead developer and software architect at DataStax with over 20 years of experience in the areas of distributed systems, finance, and manufacturing. He is a member of the Apache Foundation and is on the project committee of the Apache Cassandra, Arrow, and Thrift projects. Jake has a reputation for developing creative solutions to solve difficult problems and fostering a culture of trust and innovation. He believes the best software is built by small, diverse teams who are encouraged to think freely. Jake received his B.S. in Computer Science from Lehigh University along with a minor in Cognitive Science.
In episode 71 of the Entre Dev y Ops podcast we talk about the open source Lura Project. Blog Entre Dev y Ops - https://www.entredevyops.es Telegram Entre Dev y Ops - https://t.me/entredevyops Twitter Entre Dev y Ops - https://twitter.com/entredevyops LinkedIn Entre Dev y Ops - https://www.linkedin.com/company/entredevyops/ Patreon Entre Dev y Ops - https://www.patreon.com/edyo Amazon Entre Dev y Ops - https://amzn.to/2HrlmRw Links mentioned: Podcast 62: KrakenD - https://www.entredevyops.es/podcasts/podcast-62.html Lura Project - https://luraproject.org/ Linux Foundation - https://linuxfoundation.org/ Apache Foundation - https://www.apache.org/ CNCF - https://www.cncf.io/ AsyncAPI - https://www.asyncapi.com/ Oracle Identity and Access Management - https://www.oracle.com/security/identity-management/ Microsoft - https://www.microsoft.com/ Adevinta - https://www.adevinta.com/ Letgo - https://www.letgo.com/ Newrelic - https://newrelic.com/ Ebay - https://www.ebay.com/ GitHub: Exploring the dependencies of a repository - https://docs.github.com/en/enterprise-cloud@latest/code-security/supply-chain-security/understanding-your-software-supply-chain/exploring-the-dependencies-of-a-repository Lura Project v2.0.1 - https://github.com/luraproject/lura/releases/tag/v2.0.1 KrakenD - https://www.krakend.io/ KrakenD Support - https://www.krakend.io/support/
The Log4j vulnerability called Log4Shell is impacting nearly every big tech company. The industry makes trillions off of Apache's open source software yet records show the Apache Foundation received only $2 million from the industry to fund their operations. Is it time they invest more in securing open source software? See more like this: http://lon.tv/ww and subscribe! http://lon.tv/s VIDEO INDEX: 00:00 - Intro 00:53 - What is LOG4J? 01:36 - Log4Shell Vulnerability 02:13 - Timeline of the Log4Shell exploit 03:06 - Apple's Vulnerability 04:15 - FTC Response and Minimal Federal Privacy Regulations 07:21 - FTC Warning on Open Source Vulnerability 08:27 - The industry gave Apache only $2 million last year 09:52 - Question: Should Big Tech Pay for Open Source? 10:23 - Supporter Thank Yous 11:13 - Helping The Channel 11:33 - My Other Channels 12:37 - Conclusion Subscribe to my email list to get a weekly digest of upcoming videos! - http://lon.tv/email See my second channel for supplementary content: http://lon.tv/extras Join the Facebook group to connect with me and other viewers! http://lon.tv/facebookgroup Visit the Lon.TV store to purchase some of my previously reviewed items! http://lon.tv/store Read more about my transparency and disclaimers: http://lon.tv/disclosures Want to help the channel? Start a Member subscription or give a one time tip! http://lon.tv/support or contribute via Venmo! lon@lon.tv Follow me on Facebook! http://facebook.com/lonreviewstech Follow me on Twitter! http://twitter.com/lonseidman Catch my longer interviews and wrap-ups in audio form on my podcast! http://lon.tv/itunes http://lon.tv/stitcher or the feed at http://lon.tv/podcast/feed.xml We are a participant in the Amazon Services LLC Associates Program, an affiliate advertising program designed to provide a means for us to earn fees by linking to Amazon.com and affiliated sites.
--- Support this podcast: https://anchor.fm/lon-seidman/support
A new episode of the series "AI for Exponential Business" on the unique AI Channel of Trust "Exponential Trust Times" by AI Exponential Thinker. Our guest is Dr. Chris Mattmann, Chief Technology & Innovation Officer at NASA's Jet Propulsion Laboratory. He is the author of several books, with the latest, “Machine Learning with TensorFlow”, coming soon. Chris is a board member at the Apache Foundation. Dr. Lobna Karoui is pleased to welcome Dr. Chris Mattmann in this new podcast episode. Part 1 - more coming soon in Part 2, with exclusive insights about how to achieve a career at NASA and questions from the audience. Subscribe via www.aiexponentialthinker.com Dr. Lobna Karoui is an executive AI strategy and growth advisor and exponential digital transformer for Fortune 500 & CAC40 companies, with two decades of experience building AI products and services for millions of users. She is the president of AI Exponential Thinker, which aims to inspire and empower 1 million young boys and girls by 2025 about trust technologies and AI opportunities. Dr. Karoui is one of the 1,000 AI experts at global scale who signed the "Autonomous Weapons Letter" in 2014 with Stephen Hawking and Elon Musk. She is an international speaker and interviewer recognized as an AI expert by Forbes, Bloomberg and MIT. Follow us and subscribe at www.aiexponentialthinker.com or via contact@aiexponentialthinker.com to interact with our guests and meet great speakers and mentors from organizations such as Amazon, WEF, Harvard and more.
A new episode of the series "AI for Exponential Business" on the unique AI Channel of Trust by AI Exponential Thinker. Our guest is Dr. Chris Mattmann, AI Director at NASA's Jet Propulsion Laboratory. He is the author of several books, with the latest, “Machine Learning with TensorFlow”, coming soon. Chris is a board member at the Apache Foundation. Dr. Lobna Karoui is pleased to welcome Dr. Chris Mattmann in this new podcast episode. Dr. Lobna Karoui is an executive AI strategy and growth advisor and exponential digital transformer for Fortune 500 & CAC40 companies, with two decades of experience building AI products and services for millions of users. She is the president of AI Exponential Thinker, which aims to inspire and empower 1 million young boys and girls by 2025 about trust technologies and AI opportunities. Dr. Karoui is one of the 1,000 AI experts at global scale who signed the "Autonomous Weapons Letter" in 2014 with Stephen Hawking and Elon Musk. She is an international speaker and interviewer recognized as an AI expert by Forbes, Bloomberg and MIT. Follow us and subscribe at www.aiexponentialthinker.com or via contact@aiexponentialthinker.com to interact with our guests and meet great speakers and mentors from organizations such as Amazon, WEF, Harvard and more.
Originally published May 14, 2018. The Kubernetes ecosystem consists of enterprises, vendors, open source projects, and individual engineers. The Cloud Native Computing Foundation was created to balance the interests of all the different groups within the cloud native community. CNCF has similarities to the Linux Foundation and the Apache Foundation. CNCF helps to guide open source projects in the Kubernetes ecosystem, including Prometheus, Fluentd, and Envoy. With the help of the CNCF, these projects can find common ground where possible. KubeCon is a conference organized by the Cloud Native Computing Foundation. I attended the most recent KubeCon in Copenhagen. KubeCon was a remarkably well-run conference, and the attendees were excited and optimistic. As much traction as Kubernetes has, it is still very early days, and it was fun to talk to people and forecast what the future might bring. At KubeCon, I sat down with Chris Aniszczyk and Dan Kohn, who are the COO and director of the CNCF. I was curious about how to scale an organization like the CNCF. In some ways, it is like scaling a government. Kubernetes is growing faster than Linux grew, and the applications of Kubernetes are as numerous as those of Linux. Different constituencies want different things out of Kubernetes, and as those constituencies rapidly grow in number, how do you maintain diplomacy among competing interests? It's not an easy task, and that diplomacy has been established by keeping in mind lessons from previous open source projects.
Originally published May 14, 2018. The Kubernetes ecosystem consists of enterprises, vendors, open source projects, and individual engineers. The Cloud Native Computing Foundation was created to balance the interests of all the different groups within the cloud native community. CNCF has similarities to the Linux Foundation and the Apache Foundation. CNCF helps to guide open… The post Cloud Native Computing Foundation with Chris Aniszczyk and Dan Kohn Holiday Repeat appeared first on Software Engineering Daily.
In episode 12 of the Javaswag podcast we talked with Alexey Zinoviev about machine learning inside Apache Spark and Apache Ignite. 00:03:03 How did it all start? 00:06:31 What is a machine learning task? 00:09:46 Is computing statistics already ML? Is predicting an event already ML? So when is it ML? 00:13:13 DevOps ML Engineer, QA ML Developer, Business ML Analyst and other jobs of the future 00:20:43 Why do data scientists write in Python? 00:22:04 At what point did Java appear in data science? 00:24:49 What came before Apache Spark? 00:29:29 The Spark ML module 00:35:22 Why did Apache Spark win in the ETL world? 00:37:07 The history of SparkML 00:40:28 How to write a new algorithm for Apache Spark 00:44:03 Apache Spark 3.0 00:48:12 Spark - "a dump of JARs from Maven Central" 00:50:46 Apache Spark is moving toward data scientists, but they are Pythonistas 00:52:56 Open source products backed by a single company 00:55:05 Apache Ignite 01:03:40 ML in Apache Ignite 01:09:41 How to design the API of an ML library 01:15:55 How did Ignite get into the Apache Foundation? 01:16:52 Which algorithms were implemented first in Apache Ignite? 01:21:35 Comparing Ignite and Spark feature by feature 01:25:32 The future of Ignite ML 01:31:17 How to become a committer in Ignite, and which areas you can contribute to 01:38:30 How to get into data science in 2020: Vorontsov's course and the secret data science chat Guest - https://twitter.com/zaleslaw Podcast Telegram channel t.me/javaswag Chat t.me/javaswag_chat
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us for the innovations of the future! Today we're going to cover one of the most important and widely distributed server platforms ever: the Apache Web Server. Today, Apache servers account for around 44% of the 1.7 billion web sites on the Internet. But at one point it was zero. And this is crazy: it's down from over 70% in 2010. Tim Berners-Lee had put the first website up in 1991, and what we now know as the web was slowly growing. The story begins in 1994 with the National Center for Supercomputing Applications at the University of Illinois, Urbana-Champaign. Yup, NCSA is also the organization that gave us telnet and Mosaic, the web browser that would evolve into Netscape. After Rob McCool left NCSA, development of the HTTP daemon went a little, um, dormant. The codebase had forked, and the extensions and bug fixes needed to get merged into a common distribution. Apache is a free and open source web server that was initially created by Robert McCool and written in C in 1995. You can't make that name up. I'd always pictured him as a cheetah wearing sunglasses. Who knew that he'd build a tool that would host half of the web sites in the world. A tool that would go on to be built into plenty of computers so they can spin up sharing services. Times have changed since 1995. Originally the name was supposedly a cute reference to "a patchy server", given that it was based on lots of existing patches of craptostic code from NCSA. That NCSA HTTPd lineage is still alive and well, all the way up to the configuration files. For example, on a Mac these are stored at /private/etc/apache2/httpd.conf. The original Apache group consisted of * Brian Behlendorf * Roy T. Fielding * Rob Hartill * David Robinson * Cliff Skolnick * Randy Terbush * Robert S.
Thau * Andrew Wilson. And there were additional contributions from Eric Hagberg, Frank Peters, and Nicolas Pioch. Within a year of first shipping, Apache had become the most popular web server on the internet. The distributions and sites continued to grow to the point that the group formed the Apache Software Foundation, which would give financial, legal, and organizational support for Apache. They even started bringing other open source projects under that umbrella. Projects like Tomcat. And the distributions of Apache grew. Mod_ssl, which brought SSL functionality to Apache 1.3, was released in 1998. And it grew. The Apache Software Foundation was incorporated in 1999 to make sure the project outlived the participants and to bring other tools under the umbrella. The first conference, ApacheCon, came in 2000. Douglas Adams was there. I was not. There were 17 million web sites at the time. The number of web sites hosted on Apache servers continued to rise. Apache 2.0 was released in 2002. The number of web sites hosted on Apache servers continued to rise. By 2009, Apache was hosting over 100 million websites. By 2013, the project had added that it was named "out of respect for the Native American Indian tribe of Apache". The history isn't the only thing that was rewritten. Apache itself was rewritten and is now distributed as Apache 2.0. There were over 670 million web sites by then. And we hit 1 billion sites in 2014. I can't help but wonder what percentage are collections of fart jokes. Probably not nearly enough. But an estimated 75% are inactive sites. The job of a web server is to serve web pages on the internet. Those were initially flat HTML files but have gone on to include CGI, PHP, Python, Java, Javascript, and others. A web browser is then used to interpret those files. The browser accesses the .html or .htm file (or one of the many other file types that now exist), opens a page, and then loads the text, images, and included files, and processes any scripts.
Both use the HTTP protocol; thus the URL begins with http, or https if the site is being hosted over SSL. Apache is responsible for providing the access to those pages over that protocol. The way the scripts are interpreted is through mods. These include mod_php, mod_python, mod_perl, etc. The modular nature of Apache makes it infinitely extensible. OK, maybe not infinitely. Nothing's really infinite. But the loadable dynamic modules do make the system more extensible. For example, you can easily get TLS/SSL using mod_ssl. The great thing about Apache and its mods is that anyone can adapt the server for generic uses, and they allow you to get into some pretty specific needs. And the server, as well as each of those mods, has its source code available on the Interwebs. So if it doesn't do exactly what you want, you can tailor the server to your specific needs. For example, if you wanna' hate life, there's a mod for FTP. Out of the box, Apache logs connections, includes a generic expression parser, supports WebDAV and CGI, can support embedded Perl, PHP and Lua scripting, can be configured for per-user public_html web pages, supports htaccess to limit access to various directories as one of a few authorization access controls, and allows for very in-depth custom logging and log rotation. Those logs include things like the name and IP address of a host as well as geolocations. It can rewrite headers, URLs, and content. It's also simple to enable proxies. Apache, along with MySQL, PHP and Linux, became so popular that the term LAMP was coined, short for those products. The prevalence allowed the web development community to build hundreds or thousands of tools on top of Apache through the 90s and 2000s, including popular Content Management Systems, or CMS for short, such as WordPress, Mambo, and Joomla.
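To make the mod system a little more concrete, here is a rough sketch of what an httpd.conf fragment enabling mod_ssl and a virtual host might look like. The module paths, domain, certificate locations, and log name are all hypothetical and vary by distribution; this is an illustration, not a drop-in config.

```apache
# Load dynamic modules (hypothetical paths; check your distribution's layout)
LoadModule ssl_module modules/mod_ssl.so
LoadModule php_module modules/libphp.so

# A name-based virtual host served over SSL/TLS
<VirtualHost *:443>
    ServerName www.example.com
    DocumentRoot "/var/www/example"
    SSLEngine on
    SSLCertificateFile "/etc/ssl/certs/example.crt"
    SSLCertificateKeyFile "/etc/ssl/private/example.key"
    CustomLog "logs/example_ssl.log" combined
</VirtualHost>
```

Each mod you load adds its own directives, which is why one LoadModule line is usually all it takes to light up a whole feature like TLS or PHP interpretation.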
* Auto-indexing and content negotiation * Reverse proxy with caching * Multiple load balancing mechanisms * Fault tolerance and failover with automatic recovery * WebSocket, FastCGI, SCGI, AJP and uWSGI support with caching * Dynamic configuration * Name- and IP-address-based virtual servers * gzip compression and decompression * Server Side Includes * User and session tracking * Generic expression parser * Real-time status views * XML support Today we have several web servers to choose from. Engine-X, spelled Nginx, is a newer web server that was initially released in 2004. Apache uses a thread per connection and so can only process the number of threads available; by default 10,000 in Linux and macOS. Nginx doesn't use a thread per connection, so it can scale differently, and it is used by companies like Airbnb, Hulu, Netflix, and Pinterest. That 10,000 limit is easily controlled using concurrent connection limiting, request processing rate limiting, or bandwidth throttling. You can also scale with some serious load balancing and in-band health checks or with one of the many load balancing options. Having said that, Baidu.com, Apple.com, Adobe.com, and PayPal.com are all on Apache. We also have other web servers provided by cloud services like Cloudflare and Google slowly increasing in popularity. Tomcat is another web server. But Tomcat is almost exclusively used to run various Java servlets, EL, websockets, etc. Today, each of the open source projects under the Apache Foundation has a Project Management Committee. These provide direction and management of the projects. New members are added when someone who contributes a lot to the project gets nominated to be a contributor, and then a vote is held requiring unanimous support. Commits require three yes votes with no no votes. It's all ridiculously efficient in a very open source hacker kinda' way. The Apache server's impact on the open-source software community has been profound.
It is partly explained by the unique license from the Apache Software Foundation. The license was in fact written to protect the creators of Apache while giving access to the source code for others to hack away at it. The Apache License 1.1 was approved in 2000 and removed the requirement to attribute the use of the software in advertisements. Version 2 of the license came in 2004, which made the license easier to use for projects that weren't from the Apache Foundation, made GPL compatibility easier, and allowed a single reference for the whole project rather than attributing the license in every file. The open source nature of Apache was critical to the growth of the web as we know it today. There were other projects to build web servers, for sure. Heck, there were other protocols, like Gopher. But many died because of stringent licensing policies. Gopher did great until the University of Minnesota decided to charge for it. Then everyone realized it didn't have nearly as good graphics as the web. Today the web is one of the single largest growth engines of the global economy. And much of that is owed to Apache. So thanks, Apache, for helping us to alleviate a little of the suffering of the human condition for all creatures of the world. By the way, did you know you can buy hamster wheels on the web? Or cat food. Or flea meds for the dog. Speaking of which, I better get back to my chores. Thanks for taking time out of your busy schedule to listen! You should probably get to your chores as well, though. Sorry if I got you in trouble. But hey, thanks for tuning in to another episode of the History of Computing Podcast. We're lucky to have you. Have a great day!
Craigslist. Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us for the innovations of the future! Today we're going to look at the history of craigslist. It's 1995. The web is 4 years old. By the end of the year, there would be over 23,000 websites. Netscape released JavaScript, Microsoft released Internet Explorer, Sony released the PlayStation, Coolio released "Gangsta's Paradise", and, probably while singing along to "This Is How We Do It", veteran software programmer Craig Newmark made a list. Craig Alexander Newmark hails from Morristown, New Jersey, and after being a nerdy kid with thick black glasses and a pocket protector in high school, went off to Case Western, getting a bachelor's in 1975 and a master's in '77. This is where he was first given access to the ARPANET, which would later evolve into the internet as we know it today. He then spent 17 years at IBM during some of the most formative years of the young computer industry. This was when the hacker ethos formed, and anyone that went to college in the 70s would be well acquainted with Stewart Brand's Whole Earth Catalog, and yes, even employees of IBM would potentially have been steeped in the counterculture that helped contribute to that early hacker ethos. And as with many of us, Gibson's Neuromancer got him thinking about the potential of the web. Anyone working at that time would have also seen the rise of the Internet and the advent of email, and a lot of people were experimenting with side projects here and there. And people from all around the country that still believed in the ideals of that 60s counterculture still gravitated towards San Francisco, where Newmark moved to take a gig at Charles Schwab in 1993, where he was an early proponent of the web, exploring uses with a series of brown bag lunches. If you're going to San Francisco, make sure to wear flowers in your hair.
Newmark got to see some of the best of the WELL and Usenet and as with a lot of people when they first move to a new place, old Craig was in his early 40s with way too much free time on his hands. I've known lots of people these days that move to new cities and jump headfirst into Eventbrite, Meetup, or more recently, Facebook events, as a way of meeting new people. But nothing like that really existed in 1993. The rest of the country had been glued to their televisions, waiting for the OJ Simpson verdict while flipping back and forth between Seinfeld, Frasier, and Roseanne. Unforgiven with Clint Eastwood won Best Picture. I've never seen Seinfeld. I've seen a couple episodes of Frasier. I lived Roseanne so was never interested. So a lot of us missed all that early 90s pop culture. Instead of getting embroiled in Friends from 93 to 95, Craig took a stab at connecting people. He started simple, with an email list and ten or so friends. Things like getting dinner at Joe's digital diner. And arts events. Things he was interested in personally. People started to ask Craig to be added to the list. The list, which he just called craigslist, was originally for finding things to do but quickly grew into a wanted ad in a way - with people asking him to post their events or occasionally asking for him to mention an apartment or car, and of course, early email aficionados were a bit hackery so there was plenty of computer parts needed or available. It's even hard for me to remember what things were like back then. If you wanted to list a job, sell a car, sell furniture, or even put an ad to host a group meetup, you'd spend $5 to $50 for a two or three line blurb. You had to pick up the phone. And chances are you had a home phone. Cordless phones were all the rage then. And you had to dial a phone number. And you had to talk to a real life human being. All of this sounds terrible, right?!?! So it was time to build a website. 
When he first launched craigslist, you could rent apartments, post small business ads, sell cars, buy computers, and organize events. Similar to the email list, but on the web. This is a natural progression. Anyone who's managed a listserv will eventually find the groups become unwieldy, and if you don't build ways for people to narrow down what they want out of it, the groups and lists will split themselves into factions organically. Not that Craig had a vision for increasing page view times, bringing in advertisers, or getting more people to come to the site. But at first, there weren't that many categories. And the URL was www.craigslist.org. It was simple, and the text, like most hyperlinks at the time, was mostly blue. By the end of 1997 he was up to a million page views a month, and a few people were volunteering to help out with the site. Through 1998 the site started to lag behind, with postings not going up in a timely fashion and old stuff not being pruned quickly enough. It was clear that it needed more. In 1999 he made craigslist into a business. Being based in San Francisco, of course, venture capitalist friends were telling him to do much, much more, like selling banner ads. It was time to hire people. He didn't feel like he was great at interviewing people, and he couldn't fire people. But in '99 he got a resume from Jim Buckmaster. He hired him as the lead tech. Craigslist first expanded into different geographies by allowing users to basically filter to different parts of the Bay Area: San Francisco, South Bay, East Bay, North Bay, and Peninsula. Craig turned over operations of the company to Jim in 2000, and Craigslist expanded to Boston in Y2K, and once tests worked well, added Chicago, DC, Los Angeles, New York City, Portland, Sacramento, San Diego, and Seattle. I had friends in San Francisco and had used Craigslist - I lived in LA at the time and this was my first time being able to use it regularly at home.
Craig stayed with customer service, enjoying a connection with the organization. They added Sacramento, and 2001 saw the addition of Atlanta, Austin, Vancouver and Denver. Every time I logged in there were new cities, and new categories, even one to allow for "erotic services". Then in 2004 we saw Amsterdam, Tokyo, Paris, Bangalore, and Sao Paulo. As organizations grow they need capital. Craigslist wasn't necessarily aggressive about growth, but once they became a multi-million dollar company, there was risk of running out of cash. In 2004, eBay purchased 28.4 percent of the company. They expanded into Sydney and Melbourne. Craigslist also added new categories to make it easier to find specific things, like toys or things for babies, different types of living arrangements, ridesharing, etc. Was it the ridesharing category that inspired Travis Kalanick? Was it posts to rent a room for a weekend that inspired Airbnb? Was it the events page that inspired Eventbrite? In 2005, eBay launched Kijiji, an online classifieds service organized by cities. It's a similar business model to Craigslist's. By May they'd purchased Gumtree, a similar site serving the UK, South Africa and a number of other countries, and then purchased LoQuo and OpusForum.org. They were firmly getting into the same market as Craigslist. Craigslist continued to grow. And by 2008, eBay sued Craigslist, claiming Craigslist had diluted eBay's stake. Craigslist countered that Kijiji stole trade secrets. By 2008 over 40 million Americans used Craigslist every month, and they had helped people in more than 500 cities spread across more than 50 countries. Much larger than the other service. They didn't settle that suit for 7 years, with eBay finally selling its shares back to Craigslist in 2015. Over the years, there have been a number of other legal hurdles for Craigslist. In 2008, Craigslist added phone verification to the erotic services category and saw a drastic reduction in the number of ads.
They also teamed up with the National Center for Missing and Exploited Children as well as 43 US attorneys general, saw a reduction of over 90% in ads for erotic services over the next year, and donated all revenue from erotic services ads to charities. Craigslist later removed the category outright. The net effect was that many of those services got posted to the personals section. At the time, craigslist was the most used personals site in the US. Therefore, unable to police those, in 2010 Craigslist took the personals down as well. Craigslist was obviously making people ask a lot of questions. Newspaper revenue from classified advertisements went down by 14 to 20 percent in 2007, while online classified traffic shot up 23%. Again, disruption makes people ask questions. I am not a political person and don't like talking about politics. I had friends in prosecutors' offices at the time, and they would ask me about how an ad could get posted for an illegal activity, really looking at it from the perspective that Craigslist was facilitating sex work. But it's worth noting that a social change that resulted from that erotic services section was that a number of sex workers moved inside apartments rather than working on the street. They could screen potential customers, and those clients knew they would be leaving behind a trail of bits and bytes that might get them caught. As a result, homicide rates against females went down by 17 percent, and since the erotic services section of the site was shut down, those rates have risen back to the same levels. Other sites did spring up to facilitate the same services, such as Backpage. And each has been taken down or prosecuted as they spring up. To make it easier to do so, the Stop Enabling Sex Traffickers Act and the Allow States and Victims to Fight Online Sex Trafficking Act were passed in 2018. We know that the advent of the online world is changing a lot in society.
If I need some help around the house, I can just go to Craigslist and post an ad, and within an hour I usually have 50 messages. I don't love washing windows on the 2nd floor of the house - and now I don't have to. I did that work myself 20 years ago. Cars sold person to person sell for more than to dealerships. And out of great changes come people looking to exploit them. I don't post things to sell as much as I used to. The last few times I posted I got at least 2 or 3 messages asking if I am willing to ship items and offering to pay me after the items arrive. Obvious scams. Not that I haven't seen similar from eBay or Amazon, but at least there you would have recourse. Angie got a list in 1995 too. You can use Angie's List to check up on people offering to do services. But in my experience, few who respond to a craigslist ad are there, and most are gainfully employed elsewhere and just gigging on the side. Today Craigslist runs with around 50 people, and with revenue over $700 million. Classified advertising at large newspaper chains has dropped drastically. Alexa ranks craigslist as the 120th-ranked global site and 28th-ranked in the US, with people spending 9 minutes on the site on average. The top searches are cheap furniture, estate sales, and lawn mowers. And what's beautiful is that the site looks almost exactly like it looked when launched in the 90s. Still no banners. Still blue hyperlinks. Still some black text. Nothing fancy. Out of Craigslist we've gotten CL blob service, CL image service, and memcache cluster proxy. They contribute code to Haraka, Redis, and Sphinx. The Craigslist Charitable Fund helps support the Apache Foundation, the Free Software Foundation, Gnome Foundation, Mozilla Foundation, Open Source Initiative, OpenStreetMap.us, Perl Foundation, PostgreSQL, Python Software Foundation, and Software in the Public Interest. I meet a lot of entrepreneurs who want to "disrupt" an industry.
When I hear the self-proclaimed serial entrepreneurs who think they're all about the ideas but don't know how to actually make any of the ideas work talk about disruptive technologies, I have never heard one mention craigslist. There's a misconception that a lot of engineers don't have the ideas, that every Bill Gates needs a Paul Allen or that every Steve Jobs needs a Woz. Or I hear that starting companies is for young entrepreneurs, like those four were when starting Microsoft and Apple. Craig Newmark, a 20-year software veteran in his 40s, inspired Yelp, Uber, Nextdoor and thousands of other sites. And unlike many of those other organizations, he didn't have to go blow things up and build a huge company. They did something that their brethren from the early days on the WELL would be proud of: they diverted much of their revenues to the Craigslist Charitable Fund. Here, they sponsor four main categories of grant partners: * Environment and Transportation * Education, Rights, Justice, Reason * Non-Violence, Veterans, Peace * Journalism, Open Source, Internet You can find more on this at https://www.craigslist.org/about/charitable According to Forbes, Craig is a billionaire. But he's said that his "minimal profit" business model allows him to "give away tremendous amounts of money to the nonprofits I believe in", including Wikipedia, a similarly minded site. The stories of the history of computing are often full of people becoming "the richest person in the world" and organizations judged based on market share. But not only with the impact that the site has had, but also with those inspired by how he runs it, Craig Newmark shatters all of those misconceptions of how the world should work. These days you're probably most likely gonna' find him on craigconnects.org - "helping people do good work that matters." So think about this, my lovely listeners.
No matter how old you are, how bad your design skills, or how disruptive your idea will or won't be, anyone can parlay an idea that helps a few people into something that changes not only their own life but the lives of others, disrupts multiple industries, and doesn't have to create all the stress of trying to keep up with the tech Joneses. You can do great things if you want. Or you can listen to me babble. Thanks for doing that. We're lucky to have you join us.
An Open Source pioneer, Brian Behlendorf now leads the effort to build the infrastructure for trust as a service. In the past he helped build the foundations of the Web with the Apache Foundation and brought Open Source to the enterprise with Collab.net. At The Interval he’ll discuss his current work leading Hyperledger at the Linux Foundation to unlock blockchain’s potential beyond cryptocurrency. Brian Behlendorf is Executive Director for Hyperledger, a project of the Linux Foundation. Hyperledger is an open source collaborative effort created to advance cross-industry blockchain technologies. Previously he was the primary developer of the Apache Web server, the most popular web server software on the Internet, and a founding member of the Apache Software Foundation. He was the founding CTO of CollabNet and CTO of the World Economic Forum. Most recently, Behlendorf was a managing director at Mithril Capital Management LLC, a global technology investment firm. He is a long-serving board member of the Mozilla Foundation and the Electronic Frontier Foundation.
A little over a week ago, KubeCon and CloudNativeCon happened and our independent Roaring Roving Reporter Rubik Dave came back from Barcelona with a comprehensive report. Kubernetes As the kubernetes.io webpage tells us: "Kubernetes (K8s) is an open-source system for automating deployment, scaling, and management of containerized applications." As we discuss in the episode, Kubernetes forms a kind of middleware layer that performs orchestration of lightweight Docker containers. To be sure, you can use other container technologies, but Docker (and its companion project Moby) is what is most often used with Kubernetes. The biggest advantage of Kubernetes, I believe, is how it has standardized the way a microservices framework based on Docker container instances can be deployed and managed. There has been a myriad of other approaches that tried to solve that problem (and Dave gives a rather exhaustive list in the episode), but Kubernetes has emerged as the one best supported by the community. KubeCon And that is where KubeCon comes in: there are other, more developer-oriented conferences, but KubeCon is perhaps the largest event for Kubernetes consumers. Details on this year's event are available at the KubeCon | CloudNativeCon Europe 2019 website. If you missed this year's installment, take note that next year's Europe event will be in Amsterdam, March 30th to April 2nd. And if the American continent is more practical, you can join the community at the San Diego venue, November 18th to 21st. CloudNativeCon KubeCon has run together with CloudNativeCon for as long as I can figure out, and since Kubernetes is one of the larger "CNCF graduated" projects, that is not surprising. It also makes sense since microservices architectures are an excellent fit for cloud-based deployments, so a lot of the Kubernetes community is likely to also be part of the "cloud crowd".
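The "middleware layer that performs orchestration" idea boils down to desired-state reconciliation: you declare how many replicas you want, and a control loop converges reality toward that. Here is a toy sketch of that idea in Python - a simplification for illustration only, not actual Kubernetes controller code (the pod names are made up):

```python
# Toy reconciliation loop: the core idea behind Kubernetes controllers.
# The "spec" says how many replicas we want; the loop converges the
# actual set of (simulated) pods toward it.

def reconcile(desired_replicas, running):
    """Return the new list of running pod names after one reconcile pass."""
    running = list(running)
    while len(running) < desired_replicas:   # scale up: start missing pods
        running.append(f"pod-{len(running)}")
    while len(running) > desired_replicas:   # scale down: stop extras
        running.pop()
    return running

state = []
state = reconcile(3, state)   # spec asks for 3 replicas
print(state)                  # ['pod-0', 'pod-1', 'pod-2']
state = reconcile(1, state)   # spec changed: scale down
print(state)                  # ['pod-0']
```

The real system layers scheduling, health checks, and networking on top, but every controller follows this same observe-compare-act shape.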
Now, reading the CloudNative website, their charter in particular, it does seem to see its purpose in a similar vein to the Apache Foundation. However, the CloudNative folk recommend that the projects under its wing use the Apache 2.0 license, so they certainly don't appear to be in any kind of direct competition here... I think I feel a future podcast episode announcing itself! :D Please use the Contact Form on this blog or our twitter feed to send us your questions, or to suggest future episode topics you would like us to cover.
Is there really any advantage to building your software vs installing the package? We discuss when and why you might want to consider building it yourself. Plus some useful things Mozilla is working on and Cassidy joins us to tell us about elementary OS' big choice. Special Guests: Brent Gervais, Cassidy James Blaede, and Martin Wimpress.
A EULA in FOSS clothing, NetBSD with more LLVM support, Thoughts on FreeBSD 12.0, FreeBSD Performance against Windows and Linux on Xeon, Microsoft shipping NetBSD, and more. Headlines A EULA in FOSS clothing? There was a tremendous amount of reaction to and discussion about my blog entry on the midlife crisis in open source. As part of this discussion on HN, Jay Kreps of Confluent took the time to write a detailed response — which he shortly thereafter elevated into a blog entry. Let me be clear that I hold Jay in high regard, as both a software engineer and an entrepreneur — and I appreciate the time he took to write a thoughtful response. That said, there are aspects of his response that I found troubling enough to closely re-read the Confluent Community License — and that in turn has led me to a deeply disturbing realization about what is potentially going on here. To GitHub: Assuming that this is in fact a EULA, I think it is perilous to allow EULAs to sit in public repositories. It’s one thing to have one click through to accept a license (though again, that itself is dubious), but to say that a git clone is an implicit acceptance of a contract that happens to be sitting somewhere in the repository beggars belief. With efforts like choosealicense.com, GitHub has been a model in guiding projects with respect to licensing; it would be helpful for GitHub’s counsel to weigh in on their view of this new strain of source-available proprietary software and the degree to which it comes into conflict with GitHub’s own terms of service. To foundations concerned with software liberties, including the Apache Foundation, the Linux Foundation, the Free Software Foundation, the Electronic Frontier Foundation, the Open Source Initiative, and the Software Freedom Conservancy: the open source community needs your legal review on this! 
I don’t think I’m being too alarmist when I say that this is potentially a dangerous new precedent being set; it would be very helpful to have your lawyers offer their perspectives on this, even if they disagree with one another. We seem to be in some terrible new era of frankenlicenses, where the worst of proprietary licenses are bolted on to the goodwill created by open source licenses; we need your legal voices before these creatures destroy the village! NetBSD and LLVM NetBSD entering 2019 with more complete LLVM support I’m recently helping the NetBSD developers to improve the support for this operating system in various LLVM components. As you can read in my previous report, I’ve been focusing on fixing build and test failures for the purpose of improving the buildbot coverage. Previously, I’ve resolved test failures in LLVM, Clang, LLD, libunwind, openmp and partially libc++. During the remainder of the month, I’ve been working on the remaining libc++ test failures, improving the NetBSD clang driver and helping Kamil Rytarowski with compiler-rt. The process of upstreaming support to LLVM sanitizers has been finalized I’ve finished the process of upstreaming patches to LLVM sanitizers (almost 2000 LOC of local code) and submitted new improvements for the NetBSD support upstream. Today, out of the box (in the unpatched version) we have support for a variety of compiler-rt LLVM features: ASan (finds unauthorized memory access), UBSan (finds undefined behavior), TSan (finds threading bugs), MSan (finds uninitialized memory use), SafeStack (double stack hardening), Profile (code coverage), XRay (dynamic code tracing); while others such as Scudo (hardened allocator) or DFSan (generic data flow sanitizer) are not far from completeness. NetBSD support is no longer visibly lagging behind Linux in sanitizers, although there are still failing tests on NetBSD that are not observed on Linux.
On the other hand, there are features working on NetBSD that are not functional on Linux, like sanitizing programs during the early initialization process of the OS (Linux depends on /proc, which is mounted by startup programs, while NetBSD relies on sysctl(3) interfaces that are always available). News Roundup Thoughts on FreeBSD 12.0 Playing with FreeBSD this past week, I don’t feel as though there were any big surprises or changes in this release compared to FreeBSD 11. In typical FreeBSD fashion, progress tends to be evolutionary rather than revolutionary, and this release feels like a polished and improved incremental step forward. I like that the installer handles both UFS and ZFS guided partitioning now, and in a friendly manner. In the past I had trouble getting FreeBSD’s boot menu to work with boot environments, but that has been fixed for this release. I like the security options in the installer too. These are not new, but I think worth mentioning. FreeBSD, unlike most Linux distributions, offers several low-level security options (like hiding other users’ processes and randomizing PIDs) and I like having these presented at install time. It’s harder for people to attack what they cannot see, or predict, and FreeBSD optionally makes these little adjustments for us. Something which stands out about FreeBSD, compared to most Linux distributions I run, is that FreeBSD rarely holds the user’s hand, but also rarely surprises the user. This means there is more reading to do up front, and new users may struggle to get used to editing configuration files in a text editor. But FreeBSD rarely does anything unless told to do it. Updates rarely change the system’s behaviour, working technology rarely gets swapped out for something new, and the system and its applications never crashed during my trial. Everything was rock solid. The operating system may seem like a minimal, blank slate to new users, but it’s wonderfully dependable and predictable in my experience.
I probably wouldn’t recommend FreeBSD for desktop use. Its close relative, GhostBSD, ships with a friendly desktop and does special work to make end user applications run smoothly. But for people who want to run servers, possibly for years without change or issues, FreeBSD is a great option. It’s also an attractive choice, in my opinion, for people who like to build their system from the ground up, as you would with Debian’s server install or Arch Linux. Apart from the base tools and documentation, there is nothing on a FreeBSD system apart from what we put on it. FreeBSD 12.0 Performance Against Windows & Linux On An Intel Xeon Server Last week I posted benchmarks of Windows Server 2019 against various Linux distributions using a Tyan dual socket Intel Xeon server. In this article are some complementary results adding in the performance of FreeBSD 11.2 against the new FreeBSD 12.0 stable release of this leading BSD operating system. As some fun benchmarks to end out 2018, here are the results of FreeBSD 11.2/12.0 (including an additional run using GCC rather than Clang) up against Windows Server and several enterprise-ready Linux distributions. While FreeBSD 12.0 picked up just one win in the Windows/Linux comparisons run, the FreeBSD performance is moving in the right direction. FreeBSD 12.0 was certainly faster than FreeBSD 11.2 on this dual Intel Xeon Scalable server based on a Tyan 1U platform. Meanwhile, to no surprise given the data last week, Clear Linux was by far the fastest out-of-the-box operating system tested. I did run some extra benchmarks on FreeBSD 11.2/12.0 with this hardware: in total I ran 120 benchmarks for these BSD tests. Of the 120 tests, there were just 15 cases where FreeBSD 11.2 was faster than 12.0. Seeing FreeBSD 12.0 faster than 11.2 nearly 90% of the time is an accomplishment; with other operating systems we usually see more of a mixed bag on new releases, without such solidly better performance.
It was also great seeing the competitive performance out of FreeBSD when using the Clang compiler for the source-based tests compared to the GCC8 performance. Additional data is available via this OpenBenchmarking.org result file. How NetBSD came to be shipped by Microsoft Google cache in case the site is down In 2000, Joe Britt, Matt Hershenson and Andy Rubin formed Danger Incorporated. Danger developed the world’s first recognizable smartphone, the Danger HipTop. T-Mobile sold the first HipTop under the brand name Sidekick in October of 2002. Danger had a well-developed kernel that had been designed and built in house. The kernel came to be viewed as not core intellectual property and Danger started a search for a replacement. For business reasons, mostly to do with legal concerns over the GNU General Public License, Danger rejected Linux and began to consider BSD Unix as a replacement for the kernel. In 2006 I was hired by Mike Chen, the manager of the kernel development group, to investigate the feasibility of replacing the Danger kernel with a BSD kernel, to select the version of BSD to use, to develop a prototype and to develop the plan for adapting BSD to Danger’s requirements. NetBSD was easily the best choice among the BSD variations at the time because it had well-developed cross development tools. It was easy to use a NetBSD desktop running an Intel release to cross compile a NetBSD kernel and runtime for a device running an ARM processor. (Those interested in mailing list archaeology might be amused to investigate the NetBSD technical mailing list for mail from picovex, particularly from Bucky Katz at picovex.) We began product development on the specific prototype of the phone that would become the Sidekick LX2009 in 2007, and contracts for the phone were written with T-Mobile. We were about halfway through the two-year development cycle when Microsoft purchased Danger in 2008.
Microsoft would have preferred to ship the Sidekick running Windows CE rather than NetBSD, but a schedule analysis performed by me, and another by an independent outside contractor, indicated that doing so would result in unacceptable delay. Beastie Bits
* Unleashed 1.2 Released
* 35th CCC - Taming the Chaos: Can we build systems that actually work?
* Potholes to avoid when migrating to IPv6
* XScreenSaver 5.42
* SSH Examples and Tunnels
* Help request - mbuf(9) - request for comment
* NSA to release free Reverse Engineering Tool
* Running FreeBSD on a Raspberry Pi3 using a custom image created with crochet and poudriere
Feedback/Questions
* Dries - Let's talk a bit about VIMAGE jails
* ohb - Question About ZFS Root Dataset
* Micah - Active-Active NAS Sync recommendations
Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv
What is the startup world of Europe like? How does it differ from the tech startup scene in the U.S., and how is it similar? Duncan Davidson can tell you, and he does on today’s episode of CTO Studio. Duncan is the CTO in Residence at Microsoft for Startups Europe. On this show, we dive into the European tech world as well as his creation of Apache Ant. We also discuss which city is at the heart of the tech scene in Europe, what the general consensus is on Silicon Valley and a whole lot more. Join us to get the inside scoop on this episode of CTO Studio. In this episode, you’ll hear: How to balance getting it right with the need to change and adapt. When should you let go of something you created? How does the rest of the tech world feel about Silicon Valley? Why it's human nature to optimize to a single point of failure. Why do we need mentors as leaders in tech? And so much more! To start the show we take a quick dip into the German tech scene, how it's similar to and how it's different from the tech scene here in the U.S. Duncan gives us his perspective and experience on that topic before shifting to Apache Ant. I wanted to know what his process was like to get into the Apache Foundation. How did it happen? He created Ant because he needed it, but what happened then? Duncan says Ant was sneaky! There are multiple angles to this story, one of which is how he made so many mistakes during its creation but didn't know until later. Another is how it was a stealth tool: the way they got it into Apache was through Tomcat. He was working on Tomcat at Sun, and he wanted to be able to build Tomcat on his Mac, on his Windows machine, on his Unix machine. And that's where the impetus for Ant came from. The way they finagled this was that when they got Tomcat ready to go into the Apache Software Foundation, they needed to ship it with a build tool. They built Tomcat with Ant, so without Ant you couldn't build Tomcat.
He says someone could've written some make files, but they just slid it in! It wasn't without consequences, though. Once people figured out that Ant had gone out with Tomcat and what a big deal Ant had become, they were upset. Nothing could be done at that point. He says it is funny they were upset, because at the time he made Ant he talked with compiler folks and everyone else around and no one was really into it. But once it got out into the Apache community, to the folks using Tomcat, those same people who had passed on it became upset about the build script. I asked Duncan if they maintained the project as open source after that, and he said no. When it went out they shipped Ant and Tomcat together, and then he spent the next year working on a lot of community things around it, so he actually wasn't hands-on in the code for Ant. Tomcat got all the attention, and then some folks found Ant and those were the people who pushed it forward. They took it somewhere, so much so that when he was done with his year of community building to ensure everyone was happy, he wanted to go hack on Ant. He had lots of ideas about what he wanted to change with Ant: the XML format and a lot of other things. So he came into the Ant community with his ideas and people were like, who are you? They wanted him to go make patches to prove he was really part of the community! He went through about a week of that, and he wasn't particularly happy about it. Until he realized what was so awesome about the situation: he had had an idea that was something that needed to be done, he did it, and it went out into the world. Then other people actually took it and found so much value in it that they wanted to defend it. They really wanted to keep what it was, and they saw something in it, perhaps something he hadn't seen. We switched topics from there and I asked Duncan about Microsoft's move to bring German startups into their ecosystem.
What is his perspective on this after being in the Silicon Valley tech world? No matter what, Duncan says, everyone else in the world has mad respect for Silicon Valley and what is going on there. Everybody is in awe of it, even knowing its downsides. And those same people want to know what makes it so unique. How has Silicon Valley come to be what it is today? He always explains the history, including the schools and the military and the initial technology run in the 40s and 50s, and how it just continued to snowball from there. It is a special place, and people want to create it somewhere else. But Duncan acknowledges creating an environment like Silicon Valley is the kind of thing that takes decades. You need the universities, the spirit, the willingness to take risks, the ability to fail and to learn from failure. Some cultures don't have that as much, both here in the U.S. and elsewhere in the world. He says it is fascinating to see people's desire to emulate that kind of environment, but there's also a hesitancy and caution people have. He admits to feeling this, too. Today there are more and more geopolitical questions about people and borders: where they can go, where they can't go, and where they can and cannot work. The majority of our tech giants in the U.S. are on the West Coast, so could that concentration be an issue geopolitically? There does seem to be a strong urge to centralize. Duncan expands on this thought, as well as why, aside from science, we insist on revisiting lessons learned by previous generations. And speaking of learning, does he think the world is still learning from Silicon Valley? Yes, for sure. One place you see this is China. Right now it's really popular to talk about how magnificent and unique China is, but he had ignored it for the most part. However, he's been watching the maker communities in Shenzhen.
These groups are using all the fruits of the manufacturing being done in their area and repurposing those pieces. They can take all the pieces of an iPhone, for example, and use them for something else! He also talks about Naomi Wu and the Chinese innovation movement, what he does for startups in his role as CTO in Residence in Berlin, and how he got involved in TED Talks as their main photographer! You’ll hear those great stories and more on today’s episode of CTO Studio.
The Kubernetes ecosystem consists of enterprises, vendors, open source projects, and individual engineers. The Cloud Native Computing Foundation was created to balance the interests of all the different groups within the cloud native community. CNCF has similarities to the Linux Foundation and the Apache Foundation. CNCF helps to guide open source projects in the Kubernetes ecosystem. The post Cloud Native Computing Foundation with Chris Aniszczyk and Dan Kohn appeared first on Software Engineering Daily.
We try to answer what happens to an open source project after a developer's death, we tell you about the last bootstrapped tech company in Silicon Valley, we have an update to the NetBSD thread sanitizer, and we show how to use cabal on OpenBSD. This episode was brought to you by Headlines Life after death, for code (https://www.wired.com/story/giving-open-source-projects-life-after-a-developers-death/) YOU'VE PROBABLY NEVER heard of the late Jim Weirich or his software. But you've almost certainly used apps built on his work. Weirich helped create several key tools for Ruby, the popular programming language used to write the code for sites like Hulu, Kickstarter, Twitter, and countless others. His code was open source, meaning that anyone could use it and modify it. "He was a seminal member of the western world's Ruby community," says Justin Searls, a Ruby developer and co-founder of the software company Test Double. When Weirich died in 2014, Searls noticed that no one was maintaining one of Weirich's software-testing tools. That meant there would be no one to approve changes if other developers submitted bug fixes, security patches, or other improvements. Any tests that relied on the tool would eventually fail, as the code became outdated and incompatible with newer tech. The incident highlights a growing concern in the open-source software community. What happens to code after programmers pass away? Much has been written about what happens to social-media accounts after users die. But it's been less of an issue among programmers. In part, that's because most companies and governments relied on commercial software maintained by teams of people. But today, more programs rely on obscure but crucial software like Weirich's. Some open-source projects are well known, such as the Linux operating system or Google's artificial-intelligence framework TensorFlow. But each of these projects depends on smaller libraries of open-source code.
And those libraries depend on other libraries. The result is a complex, but largely hidden, web of software dependencies. That can create big problems, as in 2014 when a security vulnerability known as "Heartbleed" was found in OpenSSL, an open-source program used by nearly every website that processes credit- or debit-card payments. The software comes bundled with most versions of Linux, but was maintained by a small team of volunteers who didn't have the time or resources to do extensive security audits. Shortly after the Heartbleed fiasco, a security issue was discovered in another common open-source application called Bash that left countless web servers and other devices vulnerable to attack. There are surely more undiscovered vulnerabilities. Libraries.io, a group that analyzes connections between software projects, has identified more than 2,400 open-source libraries that are used in at least 1,000 other programs but have received little attention from the open-source community. Security problems are only one part of the issue. If software libraries aren't kept up to date, they may stop working with newer software. That means an application that depends on an outdated library may not work after a user updates other software. When a developer dies or abandons a project, everyone who depends on that software can be affected. Last year when programmer Azer Koçulu deleted a tiny library called Leftpad from the internet, it created ripple effects that reportedly caused headaches at Facebook, Netflix, and elsewhere. The Bus Factor The fewer people with ownership of a piece of software, the greater the risk that it could be orphaned. Developers even have a morbid name for this: the bus factor, meaning the number of people who would have to be hit by a bus before there's no one left to maintain the project. Libraries.io has identified about 3,000 open-source libraries that are used in many other programs but have only a handful of contributors. 
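The Leftpad incident above shows how small such a load-bearing dependency can be: the entire library did little more than pad a string on the left. A rough Python equivalent of that functionality (the original was a JavaScript package; this sketch is just for scale) fits in a few lines:

```python
def left_pad(value, width, fill=" "):
    """Pad value on the left with fill until it is at least width chars."""
    s = str(value)
    pad_char = str(fill)[0] if str(fill) else " "
    while len(s) < width:
        s = pad_char + s
    return s

print(left_pad("17", 5, "0"))  # 00017
print(left_pad("abc", 2))      # abc (already wide enough)
```

Thousands of projects transitively depended on roughly this much code, which is exactly the hidden-web-of-dependencies problem the article describes.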
Orphaned projects are a risk of using open-source software, though commercial software makers can leave users in a similar bind when they stop supporting or updating older programs. In some cases, motivated programmers adopt orphaned open-source code. That's what Searls did with one of Weirich's projects. Weirich's most-popular projects had co-managers by the time of his death. But Searls noticed one, the testing tool Rspec-Given, hadn't been handed off, and wanted to take responsibility for updating it. But he ran into a few snags along the way. Rspec-Given's code was hosted on the popular code-hosting and collaboration site GitHub, home to 67 million codebases. Weirich's Rspec-Given page on GitHub was the main place for people to report bugs or to volunteer to help improve the code. But GitHub wouldn't give Searls control of the page, because Weirich had not named him before he died. So Searls had to create a new copy of the code, and host it elsewhere. He also had to convince the operators of Ruby Gems, a “package-management system” for distributing code, to use his version of Rspec-Given, instead of Weirich's, so that all users would have access to Searls' changes. GitHub declined to discuss its policies around transferring control of projects. That solved potential problems related to Rspec-Given, but it opened Searls' eyes to the many things that could go wrong. “It's easy to see open source as a purely technical phenomenon,” Searls says. “But once something takes off and is depended on by hundreds of other people, it becomes a social phenomenon as well.” The maintainers of most package-management systems have at least an ad-hoc process for transferring control over a library, but that process usually depends on someone noticing that a project has been orphaned and then volunteering to adopt it. "We don't have an official policy mostly because it hasn't come up all that often," says Evan Phoenix of the Ruby Gems project. 
"We do have an adviser council that is used to decide these types of things case by case." Some package managers now monitor their libraries and flag widely used projects that haven't been updated in a long time. Neil Bowers, who helps maintain a package manager for the programming language Perl, says he sometimes seeks out volunteers to take over orphan projects. Bowers says his group vets claims that a project has been abandoned, and the people proposing to take it over. A 'Dead-Man's Switch' Taking over Rspec-Given inspired Searls, who was only 30 at the time, to make a will and a succession plan for his own open-source projects. There are other things developers can do to help future-proof their work. They can, for example, transfer the copyrights to a foundation, such as the Apache Foundation. But many open-source projects essentially start as hobbies, so programmers may not think to transfer ownership until it is too late. Searls suggests that GitHub and package managers such as Gems could add something like a "dead man's switch" to their platform, which would allow programmers to automatically transfer ownership of a project or an account to someone else if the creator doesn't log in or make changes after a set period of time. But a transition plan means more than just giving people access to the code. Michael Droettboom, who took over a popular mathematics library called Matplotlib after its creator John Hunter died in 2012, points out that successors also need to understand the code. "Sometimes there are parts of the code that only one person understands," he says. "The knowledge exists only in one person's head." That means getting people involved in a project earlier, ideally as soon as it is used by people other than the original developer. That has another advantage, Searls points out, in distributing the work of maintaining a project to help prevent developer burnout. 
The Last Bootstrapped Tech Company In Silicon Valley (https://www.forbes.com/sites/forbestechcouncil/2017/12/12/the-last-bootstrapped-tech-company-in-silicon-valley/2/#4d53d50f1e4d) My business partner, Matt Olander, and I were intimately familiar with the ups and downs of the Silicon Valley tech industry when we acquired the remnants of our then-employer BSDi's enterprise computer business in 2002 and assumed the roles of CEO and CTO. Fast-forward to today, and we still work in the same buildings where BSDi started in 1996, though you'd hardly recognize them today. As the business grew from a startup to a global brand, our success came from always ensuring we ran a profitable business. While that may sound obvious, keep in mind that we are in the heart of Silicon Valley where venture capitalists hunt for the unicorn company that will skyrocket to a billion-dollar valuation. Unicorns like Facebook and Twitter unquestionably exist, but they are the exception. Live By The VC, Die By The VC After careful consideration, Matt and I decided to bootstrap our company rather than seek funding. The first dot-com bubble had recently burst, and we were seeing close friends lose their jobs right and left at VC-funded companies based on dubious business plans. While we did not have much cash on hand, we did have a customer base and treasured those customers as our greatest asset. We concluded that meeting their needs was the surest path to meeting ours, and the rest would simply be details to address individually. This strategy ended up working so well that we have many of the same customers to this day. After deciding to bootstrap, we made a decision on a matter that has left egg on the face of many of our competitors: We seated sales next to support under one roof at our manufacturing facility in Silicon Valley. Dell's decision to outsource some of its support overseas in the early 2000s was the greatest gift it could have given us. 
Some of our sales and support staff have worked with the same clients for over a decade, and we concluded that no amount of funding could buy that mutual loyalty. While accepting venture capital or an acquisition may make you rich, it does not guarantee that your customers, employees or even business will be taken care of. Our motto is, “Treat your customers like friends and employees like family,” and we have an incredibly low employee turnover to show for it. Thanks to these principles, iXsystems has remained employee-owned, debt-free and profitable from the day we took it over -- all without VC funding, which is why we call ourselves the "last bootstrapped tech company in Silicon Valley." As a result, we now provide enterprise servers to thousands of customers, including top Fortune 500 companies, research and educational institutions, all branches of the military, and numerous government entities. Over time, however, we realized that we were selling more and more third-party data storage systems with every order. We saw this as a new opportunity. We had partnered with several storage vendors to meet our customers' needs, but every time we did, we opened a can of worms with regard to supporting our customers to our standards. Given a choice of risking being dragged down by our partners or outmaneuvered by competitors with their own storage portfolios, we made a conscious decision to develop a line of storage products that would not only complement our enterprise servers but tightly integrate with them. To accelerate this effort, we adopted the FreeNAS open-source software-defined storage project in 2009 and haven't looked back. The move enabled us to focus on storage, fully leveraging our experience with enterprise hardware and our open source heritage in equal measures. We saw many storage startups appear every quarter, struggling to establish their niche in a sea of competitors. 
We wondered how they'd instantly master hardware to avoid the partnering mistakes that we made years ago, given that storage hardware and software are truly inseparable at the enterprise level. We entered the storage market with the required hardware expertise, capacity and, most importantly, revenue, allowing us to develop our storage line at our own pace. Grow Up, But On Your Own Terms By not having the external pressure from VCs or shareholders that your competitors have, you're free to set your own priorities and charge fair prices for your products. Our customers consistently tell us how refreshing our sales and marketing approaches are. We consider honesty, transparency and responsible marketing the only viable strategy when you're bootstrapped. Your reputation with your customers and vendors should mean everything to you, and we can honestly say that the loyalty we have developed is priceless. So how can your startup venture down a similar path? Here's our advice for playing the long game: Relate your experiences to each fad: Our industry is a firehose of fads and buzzwords, and it can be difficult to distinguish the genuine trends from the flops. Analyze every new buzzword in terms of your own products, services and experiences, and monitor customer trends even more carefully. Some buzzwords will even formalize things you have been doing for years. Value personal relationships: Companies come and go, but you will maintain many clients and colleagues for decades, regardless of the hat they currently wear. Encourage relationship building at every level of your company because you may encounter someone again. Trust your instincts and your colleagues: No contractual terms or credit rating system can beat the instincts you will develop over time for judging the ability of individuals and companies to deliver. You know your business, employees and customers best. Looking back, I don't think I'd change a thing. 
We need to be in Silicon Valley for the prime customers, vendors and talent, and it's a point of pride that our customers recognize how different we are from the norm. Free of a venture capital “runway” and driven by these principles, we look forward to the next 20 years in this highly-competitive industry. Creating an AS for fun and profit (http://blog.thelifeofkenneth.com/2017/11/creating-autonomous-system-for-fun-and.html) At its core, the Internet is an interconnected fabric of separate networks. Each network which makes up the Internet is operated independently and only interconnects with other networks in clearly defined places. For smaller networks like your home, the interaction between your network and the rest of the Internet is usually pretty simple: you buy an Internet service plan from an ISP (Internet Service Provider), they give you some kind of hand-off through something like a DSL or cable modem, and give you access to "the entire Internet". Your router (which is likely also a WiFi access point and Ethernet switch) then only needs to know about two things: your local computers and devices are on one side, and the ENTIRE Internet is on the other side of that network link given to you by your ISP. For most people, that's the extent of what needs to be understood about how the Internet works. Pick the best ISP, buy a connection from them, and attach computers needing access to the Internet. And that's fine, as long as you're happy with only having one Internet connection from one vendor, who will lend you some arbitrary IP address(es) for the extent of your service agreement, but that starts not being good enough when you don't want to be beholden to a single ISP or a single connection for your connectivity to the Internet. That also isn't good enough if you are an Internet Service Provider, since you are then literally a part of the Internet. You can't assume that the entire Internet is in one direction when half of the Internet is actually in the other.
This is when you really have to start thinking about the Internet and treating it as a very large mesh of independent, interconnected organizations instead of an abstract cloud icon on the edge of your local network map. Which is pretty much never for most of us. Almost no one needs to consider the Internet at this level. The long flight of steps from DSL for your apartment up to being an integral part of the Internet means that, pretty much regardless of what level of Internet service you need for your projects, you can probably pay someone else to provide it and don't need to sit down and learn how BGP works and what an Autonomous System is. But let's ignore that for one second, and talk about how to become your own ISP. To become your own Internet Service Provider with customers who pay you to access the Internet, or your own web hosting provider with customers who pay you to be accessible from the Internet, or your own transit provider with customers who pay you to move their customers' packets to other people's customers, you need a few things:

- Your own public IP address space, allocated to you by an Internet numbering organization
- Your own Autonomous System Number (ASN) to identify your network as separate from everyone else's networks
- At least one router connected to a different autonomous system, speaking the Border Gateway Protocol to tell the rest of the Internet that your address space is accessible from your autonomous system

So... I recently set up my own autonomous system... and I don't really have a fantastic justification for it... My motivation was twofold: One of my friends and I sat down and figured out that splitting the cost of a rack in Hurricane Electric's FMT2 data center marginally lowered our monthly hosting expenses vs all the paid services we're using scattered across the Internet, which could all be condensed into this one rack.
And this first reason on its own is a perfectly valid justification for paying for co-location space at a data center like Hurricane Electric's, but isn't actually a valid reason for running it as an autonomous system, because Hurricane Electric will gladly let you use their address space for your servers hosted in their building. That's usually part of the deal when you pay for space in a data center: power, cooling, Internet connectivity, and your own IP addresses. Another one of my friends challenged me to do it as an Autonomous System. So admittedly, my justification for going through the additional trouble to set up this single rack of servers as an AS is a little more tenuous. I will readily admit that, more than anything else, this was a "hold my beer" sort of engineering moment, and not something that is at all needed to achieve what we actually needed (a rack to park all our servers in). But what the hell; I've figured out how to do it, so I figured it would make an entertaining blog post. So here's how I set up a multi-homed autonomous system on a shoe-string budget:

Step 1. Found a Company
Step 2. Get Yourself Public Address Space
Step 3. Find Yourself Multiple Other Autonomous Systems to Peer With
Step 4. Apply for an Autonomous System Number
Step 5. Source a Router Capable of Handling the Entire Internet Routing Table
Step 6. Turn it All On and Pray

And we're off to the races. At this point, Hurricane Electric is feeding us all ~700k routes for the Internet, we're feeding them our two routes for our local IPv4 and IPv6 subnets, and all that's left to do is order all our cross-connects to other ASes in the building willing to peer with us (mostly for fun) and load in all our servers to build our own personal corner of the Internet.
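The post doesn't show the actual router configuration (the author used a Cisco Sup720-BXL), but as a rough sketch of what Step 6 involves, announcing your prefix to an upstream looks something like this in FRR's bgpd syntax. The ASN and every address below are documentation-reserved placeholders, not values from the article:

```
! Sketch only: AS64496, 192.0.2.1 and 198.51.100.0/24 are
! documentation-reserved placeholders for your ASN, your upstream's
! router address, and your allocated address space.
router bgp 64496
 ! eBGP session to a transit provider such as Hurricane Electric (AS6939)
 neighbor 192.0.2.1 remote-as 6939
 address-family ipv4 unicast
  ! Announce your allocated IPv4 block to the world
  network 198.51.100.0/24
  neighbor 192.0.2.1 activate
 exit-address-family
```

In exchange for that one announcement, the upstream can feed you the full routing table, which is the ~700k routes mentioned below.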
The only major goof so far has been accidentally feeding the full IPv6 table to the first other peer that we turned on, but thankfully he has a much more powerful supervisor than the Sup720-BXL, so he just sent me an email to knock that off; a little fiddling with my BGP egress policies, and we were all set. In the end, setting up my own autonomous system wasn't exactly simple, and it was definitely not justified, but sometimes in life you just need to take the more difficult path. And there's a certain amount of pride in being able to claim that I'm part of the actual Internet. That's pretty neat. And of course, thanks to all of my friends who variously contributed parts, pieces, resources, and know-how to this on-going project. I had to pull in a lot of favors to pull this off, and I appreciate it. News Roundup One year checkpoint and Thread Sanitizer update (https://blog.netbsd.org/tnf/entry/one_year_checkpoint_and_thread) The past year started with bugfixes and the development of regression tests for ptrace(2) and related kernel features, as well as the continuation of bringing LLDB support and LLVM sanitizers (ASan + UBSan and partial TSan + MSan) to NetBSD. My plan for the next year is to finish implementing TSan and MSan support, followed by a long run of bug fixes for LLDB, ptrace(2), and other related kernel subsystems. TSan: In the past month, I've developed Thread Sanitizer far enough to have a subset of its tests pass on NetBSD, starting with addressing breakage related to the memory layout of processes. The reason for this breakage was narrowed down to the current implementation of ASLR, which was too aggressive and didn't allow enough space to be mapped for shadow memory. The fix was to disable ASLR, either per-process or globally on the system. The same will certainly apply to MSan executables. After some other corrections, I got TSan to work for the first time ever on October 14th.
This was a big achievement, so I've made a snapshot available. Capturing this execution under GDB was pure luck:

```
$ gdb ./a.out
GNU gdb (GDB) 7.12
Copyright (C) 2016 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64--netbsd".
Type "show configuration" for configuration details.
For bug reporting instructions, please see: .
Find the GDB manual and other documentation resources online at: .
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from ./a.out...done.
(gdb) r
Starting program: /public/llvm-build/a.out
[New LWP 2]
WARNING: ThreadSanitizer: data race (pid=1621)
  Write of size 4 at 0x000001475d70 by thread T1:
    #0 Thread1 /public/llvm-build/tsan.c:4:10 (a.out+0x46bf71)
  Previous write of size 4 at 0x000001475d70 by main thread:
    #0 main /public/llvm-build/tsan.c:10:10 (a.out+0x46bfe6)
  Location is global 'Global' of size 4 at 0x000001475d70 (a.out+0x000001475d70)
  Thread T1 (tid=2, running) created by main thread at:
    #0 pthread_create /public/llvm/projects/compiler-rt/lib/tsan/rtl/tsan_interceptors.cc:930:3 (a.out+0x412120)
    #1 main /public/llvm-build/tsan.c:9:3 (a.out+0x46bfd1)
SUMMARY: ThreadSanitizer: data race /public/llvm-build/tsan.c:4:10 in Thread1

Thread 2 received signal SIGSEGV, Segmentation fault.
```

I was able to get the above execution results around 10% of the time (being under a tracer had no positive effect on the frequency of successful executions).
I've managed to hit the following final results for this month, with another set of bugfixes and improvements:

check-tsan:
  Expected Passes    : 248
  Expected Failures  : 1
  Unsupported Tests  : 83
  Unexpected Failures: 44

At the end of the month, TSan can now reliably execute the same (already-working) program every time. The majority of failures are in tests verifying sanitization of correct mutex locking usage. There are still problems with NetBSD-specific libc and libpthread bootstrap code that conflicts with TSan. Certain functions (pthread_create(3), pthread_key_create(3), __cxa_atexit()) cannot be intercepted early during TSan initialization, and must be deferred until late enough for the sanitizer to work correctly. MSan: I've prepared scratch support for MSan on NetBSD to help research how far along it is. I've also cloned and adapted the existing FreeBSD bits; however, the code still needs more work and isn't functional yet. The number of passing tests (5) is negligible, and the port most likely does not work at all yet. The conclusion after this research is that TSan shall be finished first, as it touches similar code. In the future, there will likely be another round of iterating over the system structs and types and adding the missing ones for NetBSD. So far, this part has been done before executing the real MSan code. I've added one missing symbol, detected when attempting to link a test program with MSan. Sanitizers: The GCC team has merged the LLVM sanitizer code, which has resulted in almost-complete support for ASan and UBSan on NetBSD. It can be found in the latest GCC 8 snapshot, located in pkgsrc-wip/gcc8snapshot. Do note, though, that there is an issue with getting backtraces from libasan.so, which can be worked around by backtracing ASan events in a debugger. UBSan also passes all GCC regression tests and appears to work fine.
The code enabling sanitizers on the GCC/NetBSD frontend will be submitted upstream once the backtracing issue is fixed and I'm satisfied that there are no other problems. I've managed to upstream a large portion of generic+TSan+MSan code to compiler-rt and reduce local patches to only the ones that are in progress. This deals with any rebasing issues, and allows me to just focus on the delta that is being worked on. I've tried out the LLDB builds which have TSan/NetBSD enabled, and they built and started fine. However, there were some false positives related to the mutex locking/unlocking code. Plans for the next milestone: The general goals are to finish TSan and MSan and switch back to LLDB debugging. I plan to verify the impact of the TSan bootstrap initialization on the observed crashes and research the remaining failures. This work was sponsored by The NetBSD Foundation. The NetBSD Foundation is a non-profit organization and welcomes any donations to help us continue funding projects and services to the open-source community. Please consider visiting the following URL, and chip in what you can: The scourge of systemd (https://blog.ungleich.ch/en-us/cms/blog/2017/12/10/the-importance-of-devuan/) While this article is actually couched in terms of promoting Devuan, a de-systemd-ed version of Debian, it would seem the same logic could be applied to all of the BSDs. Let's say every car manufacturer recently discovered a new technology named "doord", which lets you open up car doors much faster than before. It only takes 0.05 seconds, instead of 1.2 seconds on average. So every time you open a door, you are much, much faster! Many of the manufacturers decide to implement doord, because the company providing doord makes it clear that it is beneficial for everyone. And in addition to opening doors faster, it also standardises things. How do you turn on your car? It is the same everywhere now; it is no longer necessary to look for the keyhole.
Unfortunately though, sometimes doord does not stop the engine. Or if it is cold outside, it stops the ignition process, because it takes too long. Doord also changes the way your navigation system works, because that is totally related to opening doors, but leads to some users being unable to navigate, which is accepted as collateral damage. In the end, you at least have faster door opening and a standard way to turn on the car. Oh, and if you are in a traffic jam and have to restart the engine often, it will stop restarting it after several times, because that's not what you are supposed to do. You can open the engine hood and tune that setting though, but it will be reset once you buy a new car. Some of you might now ask yourselves "Is systemd THAT bad?". And my answer is: No. It is even worse. Systemd developers split the community over a tiny detail that decreases stability significantly and increases complexity for not much real value. And this is not theoretical: We tried to build Data Center Light on Debian and Ubuntu, but servers that don't boot, that don't reboot, or systemd-resolved constantly interfering with our core network configuration made it too expensive to run Debian or Ubuntu. Yes, you read that right: too expensive. While I am writing here in flowery words, the reason to use Devuan comes down to hard calculated costs. We are a small team at ungleich and we simply don't have the time to fix problems caused by systemd on a daily basis. This is even without calculating the security risks that come with systemd. Using cabal on OpenBSD (https://deftly.net/posts/2017-10-12-using-cabal-on-openbsd.html) Since W^X became mandatory in OpenBSD (https://undeadly.org/cgi?action=article&sid=20160527203200), W^X'd binaries are only allowed to be executed from designated locations (mount points). If you used the auto partition layout during install, your /usr/local/ will be mounted with wxallowed.
For example, here is the entry for my current machine:

/dev/sd2g on /usr/local type ffs (local, nodev, wxallowed, softdep)

This is a great feature, but if you build applications outside of the wxallowed partition, you are going to run into some issues, especially in the case of cabal (python as well). Here is an example of what you would see when attempting to do cabal install pandoc:

qbit@slip[1]:~? cabal update
Config file path source is default config file.
Config file /home/qbit/.cabal/config not found.
Writing default configuration to /home/qbit/.cabal/config
Downloading the latest package list from hackage.haskell.org
qbit@slip[0]:~? cabal install pandoc
Resolving dependencies...
.....
cabal: user error (Error: some packages failed to install:
JuicyPixels-3.2.8.3 failed during the configure step. The exception was:
/home/qbit/.cabal/setup-exe-cache/setup-Simple-Cabal-1.22.5.0-x86_64-openbsd-ghc-7.10.3: runProcess: runInteractiveProcess: exec: permission denied (Permission denied)

The error isn't actually what it says. The untrained eye would assume a permissions issue. A quick check of dmesg reveals what is really happening:

/home/qbit/.cabal/setup-exe-cache/setup-Simple-Cabal-1.22.5.0-x86_64-openbsd-ghc-7.10.3(22924): W^X binary outside wxallowed mountpoint

OpenBSD is killing the above binary because it is violating W^X and hasn't been safely kept in its /usr/local corral! We could solve this problem quickly by marking our /home as wxallowed; however, this would be heavy-handed and reckless (we don't want to allow other potentially unsafe binaries to execute.. just the cabal stuff). Instead, we will build all our cabal stuff in /usr/local by using a symlink!

doas mkdir -p /usr/local/{cabal,cabal/build} # make our cabal and build dirs
doas chown -R user:wheel /usr/local/cabal # set perms
rm -rf ~/.cabal # kill the old non-working cabal
ln -s /usr/local/cabal ~/.cabal # link it!

We are almost there!
Some cabal packages build outside of ~/.cabal:

cabal install hakyll
.....
Building foundation-0.0.14...
Preprocessing library foundation-0.0.14...
hsc2hs: dist/build/Foundation/System/Bindings/Posix_hsc_make: runProcess: runInteractiveProcess: exec: permission denied (Permission denied)
Downloading time-locale-compat-0.1.1.3...
.....

Fortunately, all of the packages I have come across that do this respect the TMPDIR environment variable!

alias cabal='env TMPDIR=/usr/local/cabal/build/ cabal'

With this alias, you should be able to cabal without issue (so far pandoc, shellcheck and hakyll have all built fine)! TL;DR:

# This assumes /usr/local/ is mounted as wxallowed.
doas mkdir -p /usr/local/{cabal,cabal/build}
doas chown -R user:wheel /usr/local/cabal
rm -rf ~/.cabal
ln -s /usr/local/cabal ~/.cabal
alias cabal='env TMPDIR=/usr/local/cabal/build/ cabal'
cabal install pandoc

FreeBSD and APRS, or "hm what happens when none of this is well documented.." (https://adrianchadd.blogspot.co.uk/2017/10/freebsd-and-aprs-or-hm-what-happens.html) Here's another point along my quest for amateur radio on FreeBSD - bringing up basic APRS support. Yes, someone else has done the work, but in the normal open source way it was .. inconsistently documented. First is figuring out the hardware platform. I chose the following: a Baofeng UV5R2, since they're cheap, plentiful, and do both VHF and UHF; a cable to do sound level conversion and isolation (and yes, I really should post a circuit diagram and picture..); a USB sound device, primarily so I can whack it into FreeBSD/Linux devices to get a separate sound card for doing radio work; and a FreeBSD laptop (it'll become a Raspberry Pi + GPS + sensor + LCD thingy later, but this'll do to start with.) The Baofeng is easy - set it to the right frequency (VHF APRS sits on 144.390MHz), turn on VOX so I don't have to make up a PTT cable, done/done.
The PTT bit isn't that hard - one of the microphone jack pins is actually PTT (if you ground it, it engages PTT) so when you make the cable just ensure you expose a ground pin and PTT pin so you can upgrade it later. The cable itself isn't that hard either - I had a Baofeng handmic lying around (they're like $5) so I pulled it apart for the cable. I'll try to remember to take pictures of that. Here's a picture I found on the internet that shows the pinout: image (https://3.bp.blogspot.com/-58HUyt-9SUw/Wdz6uMauWlI/AAAAAAAAVz8/e7OrnRzN3908UYGUIRI1EBYJ5UcnO0qRgCLcBGAs/s1600/aprs-cable.png) Now, I went a bit further. I bought a bunch of 600 ohm isolation transformers for audio work, so I wired it up as follows: From the audio output of the USB sound card, I wired up a little attenuator - input is 2k to ground, then 10k to the input side of the transformer; then the output side of the transformer has a 0.01uF greencap capacitor to the microphone input of the Baofeng; From the Baofeng I just wired it up to the transformer, then the output side of that went into a 0.01uF greencap capacitor in series to the microphone input of the sound card. In both instances those capacitors are there as DC blockers. Ok, so that bit is easy. Then on to the software side. The normal way people do this stuff is "direwolf" on Linux. So, "pkg install direwolf" installed it. That was easy. Configuring it up was a bit less easy. I found this guide to be helpful (https://andrewmemory.wordpress.com/tag/direwolf/) FreeBSD has the example direwolf config in /usr/local/share/doc/direwolf/examples/direwolf.conf. Now, direwolf will run as a normal user (there's no rc.d script for it yet!) and by default runs out of the current directory. So:

$ cd ~
$ cp /usr/local/share/doc/direwolf/examples/direwolf.conf .
$ (edit it)
$ direwolf

Editing it isn't that hard - you need to change your callsign and the audio device.
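For reference, the two directives in direwolf.conf that matter here look roughly like this. The callsign and device path are placeholders, not the author's values:

```
# Minimal direwolf.conf edits (placeholder values):
# your amateur radio callsign, with an optional SSID suffix
MYCALL N0CALL-1
# the sound card to use; on FreeBSD this is an OSS /dev/dsp* node
ADEVICE /dev/dsp3
```

Everything else in the shipped example config can usually be left at its defaults for a first test.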
OK, here is the main undocumented bit for FreeBSD - the sound device can just be /dev/dsp. It isn't an ALSA name! Don't waste time trying to use ALSA names; instead, just find the device you want and reference it. For me the USB sound card shows up as /dev/dsp3 (which is very non-specific, as USB sound devices come and go, but that's a later problem!) but it's enough to bring it up. So yes, following the above guide, using the right sound device name resulted in a working APRS modem. Next up - something to talk to it. This is called 'xastir'. It's .. well, when you run it, you'll find exactly how old an X application it is. It's very nostalgically old. But, it is enough to get APRS positioning up and test both the TCP/IP side of APRS and the actual radio side. Here's the guide I followed: (https://andrewmemory.wordpress.com/2015/03/22/setting-up-direwolfxastir-on-a-raspberry-pi/) So, that was it! So far so good. It actually works well enough to decode and watch APRS traffic around me. I managed to get position information out to the APRS network over both TCP/IP and relayed via VHF radio. Beastie Bits Zebras All the Way Down - Bryan Cantrill (https://www.youtube.com/watch?v=fE2KDzZaxvE) Your impact on FreeBSD (https://www.freebsdfoundation.org/blog/your-impact-on-freebsd/) The Secret to a good Gui (https://bsdmag.org/secret-good-gui/) containerd hits v1.0.0 (https://github.com/containerd/containerd/releases/tag/v1.0.0) FreeBSD 11.1 Custom Kernels Made Easy - Configuring And Installing A Custom Kernel (https://www.youtube.com/watch?v=lzdg_2bUh9Y&t=) Debugging (https://pbs.twimg.com/media/DQgCNq6UEAEqa1W.jpg:large) *** Feedback/Questions Bostjan - Backup Tapes (http://dpaste.com/22ZVJ12#wrap) Philipp - A long time ago, there was a script (http://dpaste.com/13E8RGR#wrap) Adam - ZFS Pool Monitoring (http://dpaste.com/3BQXXPM#wrap) Damian - KnoxBug (http://dpaste.com/0ZZVM4R#wrap) ***
Human trafficking is being fought with the help of a supercomputer in a Defense Department DARPA project.
## PDF Yes. But the pdf and epub modules are the second generation; the Fileviewer module was the first generation. It uses Poppler, the popular PDF library on Linux, to convert PDF files into PNG images and display them in the browser. Several years later, the Mozilla Foundation created pdf.js, which allows browsers to display PDF files using HTML5 and JavaScript. Today pdf.js has become the default PDF viewer in Firefox. I wrote the PDF module to integrate it into Drupal. * So, this just integrates pdf.js into Drupal? * Can you create pdfs with the module? ## ePub Since Amazon launched the Kindle, the ebook market has been heating up, and Google and Apple soon joined the battle. EPUB, an open standard format chosen by many new competitors in this market, became popular. Thanks to Jake Hartnell, the author of epub.js (an open source JavaScript EPUB library), we can display EPUB files in the browser as well. So I wrote the epub module to integrate it into Drupal. Google Book Search has been renamed Google Books and become part of Play Books. Both Google and Amazon have HTML5 online readers now. Although epub.js is not as good as them, it has most of the features needed for an online ebook reader. * Do either of these provide search functionality? ## Apachesolr_file * How does Apachesolr_file fit into this? It's always easy to use Ctrl-F to search within one book. If you have thousands of books or even more, you need a full-text search engine to index them all. The Apachesolr_file module uses Solr, the Apache Foundation's popular full-text search engine, to index files. We already have the apachesolr module and the apachesolr_attachments module. The difference between them is that apachesolr_attachments was designed to index files attached to nodes, while apachesolr_file was designed to index file entities (a concept new in Drupal 7) for pure file management.
Not only pdf and epub but also other popular file formats like MS Word, Excel, PowerPoint… can be indexed by Solr (https://tika.apache.org/1.5/formats.html lists all the formats supported by Tika - the file parser used by Solr). So you can also use this module on an intranet for companies, schools and other organizations. ## Application * Do you know of any sites that are using these now? * What are some other applications you can see for these modules? ## NodeSquirrel Ad Have you heard of/used NodeSquirrel? Use "StartToGrow": it's a 12-month free upgrade from the Start plan to the Grow plan. So, using it means that the Grow plan will cost $5/month for the first year instead of $10. (10 GB storage on up to 5 sites)
Big Data is undeniably hot right now, and to many Hadoop is inextricably linked to the broader Big Data conversation. And yet, Hadoop has a reputation for being complex, and unpolished, and difficult, and ‘technical,’ and a host of other less-than-glowing attributes which might cause potential users to pause and take stock. Some of that reputation is, perhaps, undeserved, and many of those limitations are actively being addressed within the Apache Foundation's open source Hadoop projects. But there is clearly an opportunity for intermediaries who understand Hadoop, who can make it perform, and who can actively contribute back to those Apache projects. MapR Technologies is one of the better known of those intermediaries (alongside others such as Cloudera, which we also discuss), and the company has done much to encourage adoption and real use of Hadoop beyond the Silicon Valley bubble in which it emerged. In this podcast I talk with MapR Technologies CEO and co-founder, John Schroeder, to learn a little more about his company's approach and to gain his insight into the ways in which Big Data technologies such as Hadoop are being deployed at scale to address real business challenges.
News: Rails 3.0.12, 3.1.4 and 3.2.2 have been released. Comrade Konstantin, on the times and on himself. By the way, the book "Sinatra: Up and Running" mentioned in the interview is also quite good; it can be recommended as an academic guide for anyone who wants to understand how to properly build web (and other) middleware in Ruby, and all that. Deploy like on Heroku. On March 4, GitHub shipped an update related to the mass vulnerabilities on that site. On March 6, Vagrant 1.0 was released. On March 7, Bundler 1.1 was released. Lightrail - a lightweight Rails stack for JSON applications. Ruby 2.0 Enumerable::Lazy. Except.io - a service similar to airbrake.io. Discussion: Full-text search systems. Sphinx - a full-text search engine by Andrew Aksyonov. Full Text Search in Postgresql - the full-text search built into Postgresql. Elasticsearch. Solr - a full-text search server from the Apache Foundation. Lucene - a full-text search engine from the Apache Foundation. Unmasking Ivan Samsonov: Ivan's profile on Moi Krug; Ivan's profile on LinkedIn; Wheely - the company where Ivan currently works; RGGU - and this is where Ivan is currently studying.
Here’s a list of several of the things we discussed:

- How PostgreSQL got started
- Ingres
- The Apache Foundation
- The PostgreSQL core team and its role
- Data Warehousing
- It’s community property, like Linux
- The SQL Query Language
- The C Programming Language
- gcc
- Standardization
- Google Summer of Code
- XML Indexing
- XPath Support
- ISN/ISBN Data Type
- Array Data Types
- HStores (Dictionary or Hash)
- Full Text Search
- Tri-grams
- Sphinx
- Lucene
- Why people switch from MySQL: Performance, Reliability, Special Features
- Supports really complex queries
- Worry about the future of MySQL
- Skype – 200 Postgres servers
- Sky tools clustering platform
- Heroku
- San Francisco PostgreSQL User Group
- Differences between MySQL and PostgreSQL:
  - MySQL was originally written to please web developers
  - Postgres was written by DBAs
  - Postgres will throw out a feature they can’t stabilize
  - MySQL will accept a feature and then try to stabilize it
  - Postgres really allows you to run code inside the database
  - Postgres is more reliable and secure
  - Lowers admin cost due to better uptime
- Rails was originally built around MySQL
- You can get some boosts by bypassing the ORM and going directly to the database
- Full JSON support is upcoming
- Django
- The PostgreSQL Ruby driver
- ByteA binary data type
- Simplified data types (Text data type)
- Why people switch from PostgreSQL to MySQL:
  - MySQL has been commercially successful longer than Postgres
  - Vendor tools
  - Cheap hosting for MySQL
- A lot of things are designed to work out of the box with PostgreSQL
- PGSQL Novice list
- Postgres Open
- Postgres has a new version coming out soon (changelog)
- Postgres 9.2: Multi-core support
- Postgres included documentation
- Beginning Databases with Postgres – Dated but gives the basics

To hire Josh’s guys, go to http://pgexperts.com. Download 6.08 MB Download (iPod & iPhone) 4.61 MB
Discuss this episode in the Muse community Follow @MuseAppHQ on Twitter Show notes 00:00:00 - Speaker 1: There’s so many zillions of startups trying to try every single angle and opportunity in that area. And so the marginal return to investing your personal time in terms of the impact on the world might be relatively smaller there. Whereas there’s this whole space that I feel like is really under explored. And if you just make it about 80%, making a profit and 20% making a statement, that opens up all kinds of incredible opportunities. 00:00:29 - Speaker 2: Hello and welcome to Meta Muse. Muse is a tool for thought on iPad. This podcast isn’t about Muse the product, it’s about Muse the company and the small team behind it. I’m Adam Wiggins, joined by Mark McGranaghan. Hey, Adam. And Mark, since we last spoke, I am a father. Congrats. Yeah, it’s great, or at least the non-sleep deprived parts are great. I’m actually on parental leave right now, but I enjoy doing this podcast enough. I thought I could sneak back for just an hour here, but if my brain is not at full capacity, let’s just say you’ll have to carry things for us. OK. Now, way back in episode 4, we talked about our partnership model. And the context there was we were hiring the 5th member of our team, our engineering partner, and I’m happy to say we have through that process, we added Adam Wulf to the team, really great engineer with a particular specialty in inking, which is quite important for us, and he’s been doing great on the team, so we’re now 5.
And in the course of that, of course, we talked about kind of the nature of the company and how it’s different from other models, particularly the startup model. But I thought it would be good to first take an episode to talk more explicitly about what this somewhat unusual business structure we chose was, and then also, it’s been a year and a half, actually coming up on 2 years now, since we started this thing, so to be able to essentially say: how’s it going? Is this working out the way that we expected? And just to frame things up a little bit, a starting place and a point of inspiration for both of us is a book called Small Giants, and I read this many, many years ago, I think when I was in my startup lifestyle, I would say, but it had a big impact on me. The book basically profiles a bunch of, let’s call them, businesses that have an outsized impact. But they’re less about huge size or making it to the S&P 500 or something like that. So for example, they have Clif Bar in there, or Whole Foods, which I think at the time the book was written was really kind of an up-and-comer, an independent up-and-comer, or Union Square Cafe, which is a quite unique restaurant in the New York area, since expanded to other locations. And in the process of profiling these businesses, they showed kind of an alternative to, I think they were thinking more an alternative to the standard kind of public company path, but I at least read it as an alternative to the startup world, which at the time I was just completely immersed in. I kind of thought the only way to do things was the startup way, and this book suggested another path. 00:03:03 - Speaker 1: Yeah, that book was quite influential on me as well. So Adam, I’m curious, what from the book did you find yourself taking away the most and applying to your future adventures?
00:03:13 - Speaker 2: Yeah, well, in prep for this episode, I went and pulled out my Kindle highlights as a PDF and scanned through those a bit, and I have to say I’m not sure it’s actually a great book in terms of how it’s written, but there are just a couple of core ideas that really hit home. One of those is they talk about businesses with soul, or another term they use quite a bit is mojo, which is kind of a funny one. They talk about optimizing for mojo over growth. And growth, of course, a business exists to earn money, that’s its kind of practical function in the economy, and growth typically goes with that, it’s almost a requirement. So if you’re not growing, you’re stagnating. And that is taken to a real extreme in the startup world. I mean, Paul Graham even has an essay, Startup = Growth, which just says, that is your sole purpose for being: grow, grow, grow as fast as you can. And the counterpoint this book presents is that mojo, expressing something kind of artistically and having that soul, is something you can choose. Of course, you still need to pay attention to the business fundamentals. You do still need to grow, but you can choose to have maybe a different balance, where you say, you know what, this mojo thing, we want to optimize for that and have enough growth to be successful, but not have it be growth at the cost of absolutely everything else. 00:04:35 - Speaker 1: Yeah, exactly. For me, there are a few layers here. There’s that first layer of, OK, you don’t necessarily need to be a huge business or to grow really fast. As a sort of mechanical matter, there are existence proofs of businesses that haven’t gotten huge or grown that fast, and they’re doing just fine. OK, that’s great. That’s kind of the first layer. Then there’s this mojo idea of: you can use the business as a vehicle to accomplish something non-monetary, to make a statement, to do an artistic expression. And that’s something that was really important to me in starting this venture.
I’m gonna spend the next 5, 10 years of my mortal life working on this. I want it to be about something more than making money. And then there’s kind of a third layer, and I don’t know how much they get into this in the book or if you would even agree, but I think there’s a sort of arbitrage here, where there are so few businesses that are operating with mojo, as it were, that you can have a sort of outsized impact if you choose to do so and do it well. This is where I think the small giants can punch above their weight class. It’s because so few people are actually operating with this mojo, this sense of artistic expression, that when you do, you really stand out, even if you’re smaller. 00:05:38 - Speaker 2: Are there some examples of companies that come to mind for you that are high mojo? 00:05:43 - Speaker 1: The one that’s top of mind for me these days is Signal. I’m not sure if that’s the company name or the app name, but, you know, I’m referring to the company that makes the Signal app, and I would expect they’re quite small. I’m not actually sure about the size of the firm, but it can’t be that big, yet the impact that they’re having on the global discussion around the right of citizens to communicate privately is huge, and they could choose to have a huge impact going forward. So that’s one that’s kind of top of mind for me these days. 00:06:08 - Speaker 2: One that comes to mind for me is Panic. So they make kind of a variety of weird things, including, I don’t know, FTP clients, but also games. And now I think they’re working on a handheld game console. They’re probably an example of a company that has both mojo and a lot of growth, but maybe they took their time with that. The growth happened over, relatively speaking, a pretty long time period and could build up slowly over time. Another one I remember you speaking about, we talked about this before, is Vanguard. Tell me more about the unusual structure there, because I wasn’t familiar with it.
00:06:44 - Speaker 1: Yeah, so Vanguard is like one of the greatest business hacks of all time, and I feel like it’s an understudied story. So my understanding of Vanguard is the founder, I believe his last name is Bogle, wanted to make investing more accessible and more successful for individual retail investors, and he had this insight around indexing, whereby if you index into the market and operate those index funds in a very low-cost way, it would be very beneficial to the people who are investing. Now, he could have taken this insight and developed a huge and hugely profitable firm with it, but my understanding of what he did instead was this move where the firm is effectively owned by the people who invest in the funds. So essentially all the profits get plowed back into the funds in the form of lower fees. So he basically forwent a huge personal fortune to help bring low-cost index investing to the masses. And then it got to the point where it was so successful that it becomes quite hard to compete as a for-profit indexing firm, because you can’t plow all your profits back into lower fees, right? Or at least your investors wouldn’t necessarily approve. And that’s kind of the sense of almost art that he’s shared with the world, in the form of this somewhat unassailable venture to bring low-cost investing to the masses. 00:08:01 - Speaker 2: Index funds, you know, S&P 500, ETFs, whatever they’re called nowadays, this is this huge technology, or maybe you’d call it a social technology or just a financial tool or something, but it had this huge democratizing effect for individual investors compared to the managed mutual funds that came before. And yeah, the art part, as you say, you know, for me that is the reason I am in business: it is a vehicle for expressing something that matters to me about how I think the world should be or how it could be better, and the business and the mechanics of all that, how it’s incorporated, how it’s funded, how it earns money, all that stuff, is really a means to an end. Right, so optimizing for mojo, businesses with soul, expressing something artistically, that all sounds nice. What does this mean practically in terms of the business that you’re building? And here you start to think about these mechanics, which is, OK, you’ve got a group of people and you’ve got a thing they want to express, a product they want to bring into the world or a piece of art they want to create, depending on how you want to think about it. That needs time, it needs money, it needs organization, and that leads you into what I usually think of as kind of a container or a vehicle, which is typically a legal entity, could be a corporation or a nonprofit. And then there are certain models that fit with different kinds of businesses. So, for example, if you’re gonna open a restaurant, for a lot of people creating a certain kind of food and a certain kind of environment, that is very much an artistic activity for them. You certainly see that if you watch something like the Netflix series Chef’s Table on kind of the high end, but I think even more for your local corner restaurant, many times those businesses are not very lucrative. They’re open because people are really passionate about food and sharing a certain kind of experience with their customers. But there’s probably a certain kind of legal entity you’d form, and you’d probably get funding as a small bank loan or some other thing like that.
And that’s extremely different from: let me start a startup, move to Silicon Valley, join Y Combinator, get venture funding. Ultimately you still have the legal entity, a source of funding, a way to hire people or bring team members on board, and the sort of mission they’re signing up to, but the mechanics of them are very, very different. And there’s, you know, a list of other things as well, including nonprofits, or even pure artistic activities, art projects, Burning Man art installations, or starting a band, or, you know, writing a book or something like that. All of these need capital and ways to organize people. And there are legal mechanisms for that. And so knowing both the mechanisms, but also what you want to express, and therefore what is the right vehicle for that, I think that’s worth thinking through rather than reaching for a default, which is, I don’t know, everyone starts startups, so I’ll start a startup, for example. 00:10:45 - Speaker 1: Yep. Well, now you’ve got me thinking about the WallStreetBets stuff that’s going on on Reddit, and in that case, I guess the optimal vehicle was a series of memes. 00:10:55 - Speaker 2: That’s right. I do think it’s ever evolving, and you mostly mean that as a joke, but honestly, the internet has brought us some new structures, right? We have Kickstarter, for example, or Patreon. There are new ways, potentially, but in the end it is really about organizing groups of people. Probably if you’re a solo artist, you’re painting, you’re doing something individual, maybe this stuff matters less. But as soon as you have a group of people, over time they are investing their energy, their effort, their emotion, and certainly their money, and then you need mechanisms, governance, and understanding for both what we’re going to put into this and what we expect to get out of it and what our goals are and all that sort of thing.
So that brings us to the vehicle we created for Muse, which I think borrows elements from some of the different types of containers we’ve mentioned, but we think also has its own special blend. Can you explain a little bit what that container looks like? 00:11:47 - Speaker 1: Yeah, so first of all, we did believe that Muse needed to be a commercial entity, and there were maybe two main reasons. One is you need a significant amount of investment to develop a novel product like Muse and bring it to market. We’re talking about 3 to 5 engineers, or 3 to 5 staff members, for 1, 2, 3 years. So it’s not something you could do as a pure art project, say. Furthermore, if you have this vision of impacting the world in a particular way, it helps to have ongoing self-sustaining funding for it. So that’s another reason to make this a business versus a nonprofit or an art project. The meat of what makes Muse unique is how we treat the staff and the other participants around the business. And the top-level thing there was we wanted Muse to be the place that we wanted to work and the place that we wanted our collaborators to work. And that meant a few things. One is we wanted to be a relatively small team, which has a bunch of implications that we can talk about. We wanted everyone to feel like peers who were at the top of their craft and operating at the top of their game. And we wanted everyone to be treated as well and as fairly as possible. In particular, we didn’t want a sort of founder class versus an employee class, where they’re treated very differently, as in typical startups. And lastly, we wanted a sense of dynamism in the staff and the team, where people come and go, and that’s a very natural thing to happen, and you’re less kind of bound and handcuffed to the company. And furthermore, you’re also not constrained in how far you can rise, in terms of your impact and your influence and your ownership, just by virtue of when you joined.
It’s more a function of your contributions and commitment to the company. So those were kind of our goals that informed the structure, and then in terms of where we ended up: well, first of all, we did end up with a Delaware corporation, which is the standard vehicle for startups, among other things, mostly because that’s the best understood by all the potential participants, staff and investors, and has the best support for a variety of people having ownership in the firm, which was really important to us. But then where we went in a quite different direction was this idea of a partner. So at a typical startup, you have sort of three classes of people. You have the investors, you have the founders, and then you have all the employees, and they’re all treated very differently and have different economics in the firm, and those are a function of kind of how you join and how you come to be participating in the firm. And we wanted this model, like I was alluding to before, where it’s more like the staff members are peers with each other and have the opportunity to rise to that level over time, regardless of when they joined. So that’s where our partner model comes in, which is sort of drawn from the world of professional services firms, like law firms and accounting firms, and the idea that if you start a law firm, you get to put your name on the sign because you started it and you’re a partner right away, presumably. But also, over time, people can join and, through their contributions to the firm, their commitment, and their taking responsibility for the success of the business overall, they can eventually become a partner, just like the founding partners. So that’s sort of the idea that we have with the Muse partner. They’re someone who can become a peer with the other partners and have corresponding responsibilities at the firm.
So it’s not just that you’re responsible for being a good engineer; you’re responsible for helping basically direct how the business operates, making big business decisions and things like that, and you have a corresponding economic interest in the business, much more so on a percentage basis than a typical employee would have. So I guess if I had to summarize the partner idea: we want everyone to act like a real owner in the business, and in order to do that fairly, you need to actually make them a real owner in the business. 00:15:24 - Speaker 2: One way to understand the business structure, or how the container is different, is to compare and contrast with other options. You mentioned taking investment; we did take some seed funding from a lovely firm called Harrison Metal, who happily turned out to be understanding of, or at least willing to try out, the weird model here. But you could compare to other ways of doing this. So bootstrapping, for example, and there are a few different approaches to this. I’ve done this in past businesses, where you essentially do consulting work on the side, or maybe it’s kind of related, where you try to sell your product to someone but also do some consulting with them at the same time, and that helps you pay the bills until such time as the product is self-sustaining. Or, something you see a lot in the iOS developer world is what I call these indie devs. Many times they have multiple apps, but it’s usually one person, or maybe two people tops, and they can craft an app in a pretty short amount of time, a few months. Maybe they’re doing it on the side, maybe they have some passive income from existing apps, or maybe they’re just doing it in their extra time alongside a job, and they can do that reasonably in 6 months, put it out on the App Store, and then start making not a huge amount of money, but enough to make it pretty worthwhile for a single person.
But as you pointed out, for Muse, which first of all has this very forward-thinking aim of trying to reinvent a lot of these gestures, the human-computer interaction aspects, the tablet power-user interface, there was just a big investment first on the research side when we were in the research lab. But then even once we left the lab and were trying to take this kind of validated prototype and turn it into a product people can really use, that just took a lot of time, a lot of iterations, in a way that, let’s say, a safer kind of app wouldn’t. And similarly, there’s something that I do think is common in the startup world, which is big investments in design and brand. You expect this from Slack and Tesla and Apple, and certainly any up-and-comer startup; you have the money to be able to put a lot of effort into that sort of thing. And maybe we didn’t want to be quite at that level, but I also felt that a lot of investment there was, first, part of what we wanted to express artistically, and second, I think, necessary for it to be successful. So that sort of says, OK, the iOS indie developer path or bootstrap path is really not viable. We need a little more upfront capital than that. But then you can compare it to startups, where, in fact, by startup standards, the amount of money we’ve taken is ridiculously small. I don’t think it would even count as a pre-seed. And furthermore, coming up on 2 years into this, we’re a 5-person team with no particular plans to expand, but in the startup model you’re expected to really quickly scale out the team, be 8 people, 10 people, 12 people in that first year or first 2 years. And so from that perspective, with the 5-person team, we would be growing much too slowly, but we felt that that rapid team growth, first of all, wasn’t necessarily quite the kind of environment we wanted to work in, and second, it wasn’t quite right for what we wanted to express with the product.
And so we ended up in this middle ground that was neither the bootstrapper path nor the startup path, and that led us to thinking, OK, how do we get some investment, and be able to make that investment in things like design and brand and exploring this more radical interface, but not necessarily go down the you-gotta-become-a-unicorn startup path. 00:18:49 - Speaker 1: Yeah, exactly. Another way to think about the funding situation would be: as you get more funding and you have more external investors and owners, you tend to have fewer degrees of freedom. So at the extreme end, if you’re a large publicly traded company, you are in many respects, including basically legally, at the whims of the owners. They can more or less insist that you act purely in their best fiduciary interests, and if they don’t like what you’re doing, they can take over your company by various means. And at the other extreme, you would have the art project, where you’re in your house and you can do whatever you want. And, you know, in some respects it’s nice to be doing the art project, you have infinite degrees of freedom, but then you don’t necessarily have the capital and the collaborators and the teammates, in a sense, to help you accomplish a bigger mission. So, when we were looking at funding the venture, we wanted to, A, go in the direction of raising a little bit of funding, but no more than we strictly needed to, and B, in order to minimize the extent to which raising that funding impinged on the desired degrees of freedom in the firm, raise the funding from people who were aligned with our sense of mojo, if you will, or with what we wanted to do with the venture, and who were therefore not going to use the fact that they were investors and owners as a way to shape the business in a direction that wouldn’t fit with what we wanted to do. So being aligned with the investors was important, I think.
00:20:10 - Speaker 2: Another piece of the puzzle, on funding and money flow generally, is that all businesses go through this cycle of needing upfront capital. Even if you’re a lemonade stand, you gotta get the lemons and the pitcher and the cups and the poster board and the marker so you can make your sign. Everyone needs some amount of capital, and there’s always this cycle where initially you’re in the red: you’ve put in capital but you haven’t produced a functioning business yet, and you hope to get there over time. That time period can be very long. I’m gonna say for, you know, a business like Amazon, maybe it took them a decade plus to get to cash-flow positive, whereas maybe for more bootstrappy things you expect to get there basically right away. For us, we wanted to have enough capital to make the investments we knew were necessary to even get a product that people would want to use or pay for. But it was also important to me, or it was part of what I wanted to express with the business, to make a self-sustaining business, where the product exists because people are paying for it, not because of continuous injections of venture capital. And partially this comes from my experience in the startup world, both with my own companies and other companies I’ve advised. In the end, you will always serve the needs of the people who give you money; that’s just kind of the physics. You can resist that in some ways, but in the long term, you’ll always converge to that. And so if your customers are the ones giving you money, then they’re the ones you’re serving. But of course, maybe putting aside some unusual cases of big Kickstarters or whatever, for the most part you can’t be completely customer-funded to start. That’s where professional investors can really help out. They want to give money to fledgling businesses for a chance at a return, and so that’s a good deal.
But the startup path tends to be one where there are many, many rounds of capital, and so, in some ways, it’s kind of a joke or a criticism or something, but they say that for startups, in many cases, their product is their stock. What they’re really trying to do is sell their stock, and sell it for ever-increasing prices, and the product that they give to users, and maybe even charge for but not enough to break even, that is secondary. And I really wanted it the other way around, which is: of course, we need to do our fiduciary duty to our investors and give them hopefully a solid return over time, but ultimately, the sooner we can be funded by customer money rather than investor money, the more that will shape the company and the product that I want to make, in a way that really is focused on serving customers. 00:22:48 - Speaker 1: Yeah, for sure. And one of the reasons that I like that approach is I basically prefer to serve paying customers versus free customers in general. This goes back to kind of the patio11 thing of, you get what you charge for, or something like that, where customers who pay serious money for tools tend to be invested in them and want them to succeed and understand their value and things like that. So it’s yet another reason to focus on paying customers. 00:23:11 - Speaker 2: Yeah, it’s a way to filter out the people who really find a lot of value in your product from those who just like free stuff. Everybody likes free stuff, that’s fine, but I think a business and a product work out best if you can have that real focus on: here are the people that get the most value from what I’m doing.
I’ll also note that I think it worked pretty well for us, this idea of: we’ll take this seed-ish round, and then we’ll try to use that to get to, if not profitability, at least kind of sustainability, at least not losing money. And that really did create a lot of urgency on the team, I feel, to charge sooner, and it was a challenge, actually, because I think as craftspeople you think, OK, I don’t feel ready to charge money for this yet. I think it can be better. It still has bugs in it. There are so many features to add. It’s a very natural thing when you hold yourself and your work to a high bar. But then you made this spreadsheet that basically mapped out cash and how we were spending it and what would happen if we started charging, and it showed that it really made a difference to start charging just a few months earlier, because it really takes time to build up your customer base, and since that revenue recurs over time, we could get to sustainability on a trajectory that would allow us to not need to go back to the well for more funds, or just go out of business. That was really focusing, and I think it pushed us to charge a little sooner than maybe we would have otherwise. And that in turn, I think, really changed our relationship with our users, who are now customers, because now we have a different obligation to them, and I think that further focused our ability to make a good product. So overall, that kind of charge-money-sooner, and then in turn try to grow into the price you’re offering or the product you claim to be offering: for me that was a really powerful focusing thing for the team and for the product. 00:25:03 - Speaker 1: Yeah, I think that was big. And by the way, it was made all the more challenging by our take on pricing on iOS. Part of the hypothesis about how this venture can work, with a small team and a relatively modest amount of funding, but still reaching self-sustainability,
is a prosumer price level, in the $10-a-month, $100-a-year range, versus almost all iOS apps, which are $0.99, $3, $5, maybe $9.99. That’s the wrong number of zeros to be able to make the physics of the business work. And so at the same time as we’re craftspeople, for whom it’s tough to charge for a product that isn’t where we want it to be eventually, we’re also dealing with the challenge of doing something quite different with iOS pricing, so we’re dealing with two things at once there. 00:25:47 - Speaker 2: Great. So we’ve got this partnership model, a small, talent-dense team, people who are all owners in the business. We’ve got a small bit of seed funding, so we can make a bigger investment than a pure bootstrap thing, but we’re trying to get to sustainability sooner, and not be on a long-term, multiple-rounds-of-investment path. And we’ve got prosumer pricing that potentially makes it possible to get to something sustainable for a 5-person team, within kind of the physics of how many people are out there that need a tool like this, and what they’re willing to pay, and that sort of thing. So that was, I think, roughly the picture we put together; we wrote an internal memo that outlined mostly everything we just talked about back in the summer of 2018. So now the question becomes, OK, we’re coming up on two years in: how’s it going? Is this working the way we thought it would? 00:26:39 - Speaker 1: Yeah, I think it’s working out great so far. Now, there is a huge question mark around the financial success and viability of the business. We haven’t fully demonstrated that yet, and that’s a question mark that’s going to be out there; until we have that information, it’s hard to fully evaluate this model, right? But in terms of how it feels to work day to day, and the staff that we’ve attracted, that feels
great to me, and I especially love this feeling, with the partnership model, that you have 5 people who are operating at the top of their game, and who you fully trust to make great decisions for the business independently. That feeling is awesome and really helps us, I think, move quickly and punch above our weight, even as a 5-person team. 00:27:19 - Speaker 2: You know, I’ve always kind of liked what I think of as the pirate ship model: a group of people who band together for a common purpose, but it’s not this top-down, classic command-and-control, where one person is in charge and everyone else just executes, and individuals can pursue their own decision making, as you said. But the reality is, I don’t know how it would be with even more than 5, but certainly, I don’t know, before this you were working at Stripe as part of a big team there, an amazing company, but there are hundreds or, I don’t know, even now thousands of people, and there has to be some coherence to the decision making. And so that in turn leads you into cascading OKRs and all the big-company stuff you think of, and you know, I think it’s necessary to do something at that scale. But for me personally, yeah, it is a lot more fun to make individual decisions for my own work, and for my teammates and me to be able to trust that we have enough shared vision, alignment around purpose, and sense of trust in each other’s capabilities as craftspeople, but also that we’re seeking a similar outcome in the business. And that people can have a lot of autonomy while, at the same time, we’re working together for a common purpose. We’re not making decisions that contradict each other or will make the whole thing feel incoherent. 00:28:33 - Speaker 1: Yeah, exactly. And furthermore, I think there’s a sort of talent arbitrage that we’ve been able to pull off here, in two respects.
First of all, I think people are stepping into a level of responsibility and impact and skill that they wouldn’t have stepped into so quickly, or to such an extent, if they were in a bigger organization where they had a more specialized, confined, limited, and structured role. And that’s the result of: you give people responsibility, you trust them with it, and you make them big owners in the business, and they take that very seriously, and they tend to step up to the challenge if you find the right people. And second of all, I do think that the model is very attractive to some people, and I won’t put anyone on the spot here, but I think people have found their way to the venture who otherwise are basically not hirable by general-purpose companies, right? But because the model is so unique and attractive, and because there is that mojo, I think you can bring people into the venture that otherwise you basically wouldn’t have been able to hire. 00:29:27 - Speaker 2: Yeah, for me, looking back at this almost 2 years we’ve been doing this slightly unusual model, I actually went to review the memo that we wrote back in the summer of 2018, just to kind of look at our original goals and see the degree to which we’ve executed on that versus it’s evolved. And one interesting thing in there was essentially what the risks or open questions were, and I’m happy to say that two of those we’ve already answered in that intervening time, just as we’ve discussed. One is just our ability to raise money. So we went out to look for seed funding from the kinds of investors who normally would invest in startups, and we had kind of a weird story where we basically said, look, this isn’t unicorn potential. We’re not trying to follow the standard startup model. We do think there’s something quite interesting here. We think there’s potentially a very good business here.
But, you know, we’re explicitly not on that path, and we’re looking for less money in exchange for less ownership, and we’re not gonna fit the normal model. And for many, actually most, investors, that was a “well, we like what you’re doing, it’s interesting, but this just doesn’t fit our model.” But we did manage to find some folks who liked what we were doing, and it certainly helped, I think, a lot that you and I and others on the team have a really nice CV in the tech world, and the amount of money we were asking for was so small that people felt they could take a risk. I think that would be tougher to do without the career capital that we have in this particular team, and I would like to see if there are more businesses that can work with a model like this. It would be nice if it was more possible for people who didn’t necessarily have the background of Stripe and Heroku and whatever else to be able to get this kind of funding. So that’s one risk item: the raising of money. The other one is the ability to hire, and I think I outlined that in the previous podcast episode on this, at which time we’d just been joined by our fourth partner, Leonard. But one can be an outlier, so I thought, OK, well, we got pretty lucky with that, and he really seemed interested in being not just a great designer, as he is, but also someone who would have broad ownership in the business and be interested in all pieces of it, not just his sort of specific discipline. Can we replicate that? And the addition of Adam Wulf to the team made me say, OK, yeah, it seems we can, right? We got not just the original three who wanted to do things this way, but then 2 more we were able to attract, as you said, maybe even people we wouldn’t have been able to hire if we were a slightly more conventional company, but this was appealing to them.
And I do think it’s not a highly scalable model, but it’s scalable enough to serve our purposes, and we have no plans to expand the team beyond 5 for the foreseeable future, but we also think that’s the right number of people to execute on this vision. So from the perspective of answering those two risks, I would say that is going well. 00:32:02 - Speaker 1: What are the other risks on the list? 00:32:05 - Speaker 2: Uh, the other big one is the one that you just mentioned, which is can we get to sustainability, right? Because, for the record, at the time of this recording we are not revenue sustainable. Let us say if we run out of our little nest egg in the bank here, we would not have enough to keep the business going, at least in its current form. But the graph is trending in the right direction, we have new customers every week, and if you look at the way that the lines meet in terms of, you know, bank account going down, revenue and new customers coming in, we do think it is viable to get there, but we won’t know until it happens. So I think that remains the biggest risk, and if we do start to get close to being in the red on the bank account, then we have to ask the question of, OK, you know, do we just sort of give up and close the business, do we try revenue-based financing, which could be interesting, but I think we might not be the right shape of business for that, or do we go back to Silicon Valley investors? But now we’re sort of breaking our model, right? We said we were just going to raise this one round and charge money right away and try to
get to sustainability based on that, but if we need to go and refresh from that well, that pretty naturally takes us onto the startup path of raising perpetual rounds of funding, where your eventual outcome is acquisition by a larger company or in some cases going public. But I just don’t think we have the right kind of business, nor is what we want to express the sort of thing that makes sense for a big public company, right? Yeah, and then addressing the more personal side of it, which is just creating this company, this vehicle, that is a place we want to work. I, like you, wanted to be a little less of a manager, a little more of a maker. And it is interesting because, you know, we do spend a lot of time, I spend a lot of time, tweaking CSS and manually typing expenses into QuickBooks, which is a perpetually rote and frustrating activity, and many other small things. Had we raised a little more money on the startup path, yeah, we would be hiring office managers and other kinds of people; we would have a bigger team, and that would mean that we could do less of that stuff. You get more leverage or something like that. But this is actually what I wanted. I’ve gone both directions, and I think I’m at my best when, well, I like being on a team, that’s really important to me. I want to do things that are big enough that they require a team, as opposed to just, you know, kind of a solo activity or even like a two person partnership. But I like to be on a very small team where you can be doing a lot, but most of what you’re doing is making, I would call it, rather than the management and leadership tasks that come naturally with the expansion of a team. 00:34:42 - Speaker 1: Yeah, totally.
And I think in addition to this maker versus manager axis and how that’s influenced by the size of the team, I also think that a smaller team gives you more degrees of freedom, which is great if you’re someone who just likes freedom, like me, but it’s also great if you want to do something unique that requires moving several variables at the same time. So for example, this local-first idea that we’re working on, this idea that you have all the data on your device and it’s very quick to access and it’s secure to you and things like that, that requires pulling levers on engineering, product, business strategy, the client side, the server side, interfacing with the research at the lab. There’s all this stuff that you’ve got to kind of pull together. And if you had to coordinate a bunch of people to do that with meetings and planning documents and all that, it would take forever. It might just not get done. Whereas if it’s a small number of people or even one person, you’re much more able to come up with these weird combinations of variables to produce novel results. And that goes back to this idea of making a statement or building something unique for the world. 00:35:43 - Speaker 2: Another element of degrees of freedom is outcomes. So outcomes could include: you have a profitable business, but it could also include something like an acquisition or an IPO. In the startup world, really the only outcomes that matter are acquisition, IPO, or going out of business, and a sustainable but moderately sized business is a non-goal. That’s actually a bad outcome from the perspective of investors, and the whole system is kind of built around that. You shared a nice article with me some years back called VC Math, which I’ll link in the show notes, but the way the person puts it is, you know, venture capitalists pushing these businesses to become a billion dollar company in 10 years, this is not because they’re jerks, it’s because the model demands it.
This is how it works. That’s where this money comes from. It’s only possible if you push for these polarized outcomes. And that’s well and good if you know what you’re getting into and you’re seeking that kind of go-big-or-bust result, but for the, I think, potentially large number of mid-size businesses, very solid mid-sized businesses, that’s of course not a fit. And so by keeping that amount of capital upfront smaller, keeping the team smaller, we leave more possibilities for what counts as a good outcome. And so, of course, we still can have a startup-style outcome, and that might be something we consider good, but there’s also other outcomes that I would consider extremely good. But that in turn leads into, OK, how do investors, as well as the partners who have this significant equity stake and in fact are taking lower salaries than they would in other places in order to get this equity stake, how does that equity become worth something? In the startup world, typically it’s through acquisition or IPO, and there’s no other outcome. So you did quite a bit of work on the financial pieces that could potentially make this work. So how do investors or partners over the long run, if Muse is able to be a successful and profitable business, how do they realize the results of their effort? 00:37:52 - Speaker 1: Yeah, this is a tricky one. So certainly if there’s a standard outcome in the startup world, like an acquisition or something, that’s straightforward and it’ll work like other places, just the percentages would be different because again, we’ve given much more ownership to the staff. But if you are profitable, it’s quite challenging. So I hope our listeners who have joined for discussions of gesture-based interfaces will forgive my digression into US tax law here, but it’s actually really important for how you compensate your staff.
So, tax and securities laws make it quite hard for individuals to get cash out of a company like this, and I can kind of play through the different scenarios that we thought about. So one thing we’ve considered is the idea of small-scale tender offers. This is where the company or someone else offers to buy shares from existing investors, and in that way, existing owners of the equity could get some liquidity and have cash to support their families or what have you. 00:38:46 - Speaker 2: A small digression there: when I first encountered the term tender offer, I just thought it was the sweetest thing. Here’s an offer for you, tenderly, for your shares. But I don’t think that’s what it is. It’s that they are tendering an offer, right? But it basically just refers to an internal stock purchase, right? A transaction where one person has some stock and they’re going to sell it to someone else, not in an open market transaction. And is that similar to, or the same thing as, stock buybacks in public companies? 00:39:13 - Speaker 1: Yeah, so a stock buyback would be buying the stock from the public, which I guess could conceivably be some of your staff if they own it on the public markets, whereas the tender offer, I associate that more with a more closely held private company, and it’s not a public transaction, it’s more of a private offer to specific individuals to buy the equity. 00:39:33 - Speaker 2: How does that relate to, we mentioned taking inspiration from the partnership model of law firms and so on, and I think it’s pretty standard there that when you’re going to leave the firm, they buy you out, right? Or maybe even with a restaurant, you know, you can imagine a couple of people owning a restaurant, one person decides they’ve had it with the business or they’re moving on to other things in life, and it’s normal for one person to buy out the other person’s stake. Would that be a tender offer or something else? 00:39:57 - Speaker 1: Hm, interesting.
I suspect that’s a little bit different, because those are probably LLCs or otherwise not C corps, and again I associate a tender offer basically with the Delaware C corp. And that could, for example, even be written into the contract: not only are they gonna offer to buy you out, but in fact you have to sell, perhaps at a formulaically determined price. That way they might specifically keep the ownership from escaping the currently active employees, for example. Basically, I think when you have LLCs or other non-C-corp structures, things can get a little bit weirder and different, just because they’re not as solidified and standardized in terms of how they operate. But there’s some similarity in spirit of, OK, you’ve completed this part of your journey and you want to get some liquidity for that, and the company has an interest in acquiring that equity, and so it makes mutual sense to do this transaction. 00:40:42 - Speaker 2: Yeah, I guess they all seem similar to me in that typically an ownership stake in a private firm of any size is just totally non-liquid. You cannot really do anything with it. You can say, OK, in theory, our last funding round valued us at this amount, or I could take a multiple of revenue, the company is worth a million dollars and I have 50% of it. Yay, I’m a half a millionaire. But that’s not really how it works, because you can’t actually sell those shares, versus public markets, which of course are very good for liquidity in that way, and then an acquisition scenario, where one company is buying 100% of the stock of another company and then you just divvy up that share price among the owners. And that’s why those two scenarios create exits, create ways to get liquidity, for the investors and the employees who have taken options. But if you say, as we have said, you know, we don’t plan to take either of those paths. We want to build a profitable business that goes in perpetuity, making good software.
OK, then how do I ever realize the outcome of my shares? And so the tender offer is one mechanism, as are these others we mentioned, for creating, well, liquidity isn’t quite the word for it, but just a mechanism for one person to sell their shares, get out and get some money, to someone else who’s maybe more active in the business. 00:41:58 - Speaker 1: Yep, yep. And another nice thing about tender offers is they don’t need to apply the same to every person, by which I mean if it’s just the case that you or someone else, because they’re leaving or whatever, wants to make this exchange, we could potentially set that up, versus having to do something equally on the basis of current ownership. An example of something that does apply equally would be a dividend, which we can talk about. Yeah, there’s a lot to like about tender offers, but it’s not something that we would do lightly. There’s a variety of reasons. One is that you need quite a bit of capital for it to actually make sense, for it to be material, and for you to have an appropriate amount of cash in the bank in the company even after the transaction. So in that sense, it’s definitely a ways out. But also, unfortunately, there’s all kinds of really weird tax consequences, which we don’t need to go into the details of here, but basically, by doing a tender offer, you could potentially impair the equity of the other owners, if you do it wrong or do it at the wrong time or do it too much. So it’s fairly fraught. But it’s a potential thing out there. Another thing that we thought about and liked was dividends, and dividends are nice because they’re very mechanically fair. 00:42:56 - Speaker 2: Big fan of dividends. Yeah. So just to define that, this is the idea, and in a way it feels like almost the purest expression of capitalism or how businesses are supposed to work, which is when a company turns a profit, they can choose to take some portion of that profit.
Some of it they’ll reinvest back in the business, retained earnings, I think that’s what that is usually called, but then the rest, they say, hey, we made some money, let’s share it with everyone who helped make this business happen. And that share is determined by your ownership in the company. And so for me, I had a, I guess, personal experience with this in my very first business, which was basically a bootstrapped business, a payment gateway called TrustCommerce. And we had been operating, I don’t know, founders, you know, living on their own savings and whatever, just trying to pay the basic business bills, servers and offices and phones and stuff like that, with whatever money came in. And I remember the first time we were left with $1000 in the bank account that was not accounted for. We asked, well, what should we do with this? Well, we could pay ourselves, that’d be great. And so we wrote dividend checks for $300 for each of us, because there were 3 people in the company, and it felt really great. It felt like we made a product that people valued enough that there was a little bit left over that we could then give to ourselves. And even though the absolute number was small, that feeling of kind of profit in its purest form is a really nice one. And so dividends are just the idea that the company is making money, so you share it with the owners. And that’s something that’s not really part of the startup world, and not really as much a part of, I feel like, public equities either, where I think they usually distinguish growth stocks from income stocks, I’m probably speaking out of my wheelhouse here, but the idea is just: you buy the stock in a company, so that when that company makes money, they send you a dividend. Those are usually a lower return type of stock versus ones that are based on the growth of the stock itself.
You buy it at a lower price, you sell it later for a higher price. But the income stocks, again, that is business at its most pure and fundamental, which is: the company made money, you own a piece of the company, therefore you get a proportional share of that. 00:45:04 - Speaker 1: Yeah, and it’s nice because it’s mechanically fair. If you have $100,000 to distribute in dividends, you look at the cap table, so-and-so has 5%, great, they get a $5000 check, and you know that everyone is being treated fairly, at least insofar as the equity in the company is owned fairly, and you don’t need to have a lot of discussions and machinations about how you actually split up the cash. But dividends are challenging for their own reasons, though. One reason, for example, that you don’t see a ton of dividends in the public markets is some companies don’t have cash to throw off, and a lot of the cash that is thrown off, instead of being dividended out, is being used to buy back stock, which is kind of equivalent actually, but buybacks get basically better tax treatment. So there’s those pesky tax laws again, causing weird distortions. But in our case, it’s hard because some staff have straight stock and some staff have options. And that again is because of tax law. Basically, the US government doesn’t want you giving straight stock to people. They view it as compensation that needs to be taxed immediately, even though it’s illiquid. So basically, to avoid bankrupting your staff, you have to give them options. But then with options in the C corp, when you dividend, you dividend to the straight-up stock owners, not the option holders, so that probably wouldn’t be fair to them.
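The mechanical fairness Speaker 1 describes is just a pro-rata split. As a rough sketch of that arithmetic, using the $100,000 pool and 5% stake from the conversation (the names and other percentages here are made up for illustration):

```python
# Pro-rata dividend split: each owner receives pool * (their stake / total stake).
# Cap table percentages and owner names are illustrative, not Muse's actual numbers.

def pro_rata_dividends(cap_table, pool):
    """Split a dividend pool according to ownership percentages."""
    total = sum(cap_table.values())
    return {owner: pool * stake / total for owner, stake in cap_table.items()}

cap_table = {"alice": 5.0, "bob": 35.0, "investors": 60.0}  # percent ownership
checks = pro_rata_dividends(cap_table, 100_000)
print(checks["alice"])  # a 5% stake of a $100,000 pool → 5000.0
```

Because the split is purely a function of the cap table, there is nothing to negotiate, which is the appeal; the complications come from the stock-versus-options distinction discussed next.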
00:46:13 - Speaker 2: And to be fair to the tax man here, trying to levy income taxes on stock earned for work is very challenging, because that stock has zero value when you get it, and it’s very likely to have zero value ever, but then in some cases it can be worth a lot, right? That initial stake that, I don’t know, you know, the Google founders had turned out to be worth a huge amount, but the vast majority of startups and even businesses will end up not being worth anything. So how do you tax something when you can’t know its value except extremely retroactively? Yeah. And I’ve had my own challenges with that, because I’ve basically built a career around starting companies or advising companies and taking equity, and I kind of have this, I don’t know, flywheel: I basically earned some money on past ventures, and then I can use that to pay my bills or whatever and earn pure equity in future ventures. Not all of those pan out, but I kind of have a portfolio strategy, you might say. I own stock in companies I’ve started over the last decade or decade and a half, as well as companies I’ve advised and in some cases invested in. And so all of that income, all of that stock, was worth $0 when I got it, and much of it turns out to be worth $0 forever, but then some of it turns out to be worth a good bit. And when I can cash that out, I can use that to pay my bills and continue my career. But how do you tax that? Because typically you tax things at the time they’re earned, but this can only be evaluated when it kind of resolves, which can often be 10 years later, when a piece of stock you earned pans out and has a value that can be attached to it. So it’s not an easy problem. I think it’s still an evolving area, certainly in US tax law, and I know Europe is grappling with this as well, because the standard models for how we think about income just don’t fit well with this. 00:48:04 - Speaker 1: Yeah, definitely an area that’s being worked on.
It’s just too bad that it hasn’t been figured out yet in a way that would be more advantageous to basically giving staff more compensation. 00:48:13 - Speaker 2: Yeah, it can be frustrating, because we’re basically trying to do something that’s as fair as possible for investors and for people earning what they call sweat equity, where they’re essentially earning stock in exchange for their work. We cannot treat those the same, because the tax law basically means that, as you said, the people earning equity through sweat get screwed, and so then you have to create these different classes of stock and do different things, but then that effectively means you have more and more divergence in the stakeholders, which is against the spirit of what we’re trying to do. We’re trying to create this thing where everyone’s in it together, we bring different things to the table. Some people bring their efforts, some people bring their money, some people bring both, but everyone can hopefully have a sense of fairness, in the sense of kind of knowing what you put in and knowing what you potentially get out, or how to share in the success long term. 00:49:05 - Speaker 1: Yeah. And there are ways you could potentially work around this for dividends. You could do a sort of phantom dividend, where you say there’s 100% of the cap table in straight stock and there’s an additional 40% in options. You dividend out 140 units, 40 to the option holders and 100 to the stockholders, and the stockholders would get straight dividends and the option holders would get, like, a bonus basically. You could even imagine doing more basically ad hoc type things like that, where you essentially make a formula and then do a bonus payout, but make it more formulaic and less just like, oh, I think you did a good job this year, here’s a check, and more: you have this sort of ownership in our
current cap structure, and based on that, according to this formula, we’re doing bonus payouts. But that also gets messy, because there is an element of discretion, and also when you’re dealing with investors, they don’t want to get a $17 check, and you’ve got 4 more employees, you’ve got to take down their addresses or whatever. There’s a lot of weird mechanical stuff there. So I think realistically we’ve got to wait a few years and see how this all plays out and what the shape of the business is, but what we’ve done is we’ve built up a lot of potential energy, a lot of ownership, a lot of equity with the staff members, and hopefully we can find a way to convert that into kinetic energy, to continue the analogy, in the future. And I’m pretty optimistic. It is asking the staff to trust us to a significant extent that we’ll be able to figure that out and treat it fairly, but I’m pretty hopeful that we would be able to do something that’s fair to everyone. 00:50:28 - Speaker 2: So, would you recommend a structure like this to someone else who wanted to start a company? And, or, do you imagine, you know, if you had to start a new company yourself today, would you reach for a structure like this? 00:50:42 - Speaker 1: Yeah, well, we thought about this for a very long time and we spoke with a lot of experts, and it was hard to come up with a better setup. So one way to think of this is, insofar as we’re talking about staff compensation and equity ownership, it’s kind of in the standard Silicon Valley model, but with the percentages dialed way in favor of the staff. So in that respect, it’s kind of strictly better, I would say, than a typical Silicon Valley model. And so it can’t be that wrong, strictly better at least for the staff, I would think.
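The phantom-dividend arithmetic Speaker 1 sketched a moment ago (100 units of straight stock plus 40 units of options, so a payout is split over 140 units, with the option holders' share paid as a cash bonus rather than a true dividend) can be written out as a short sketch. The unit counts match the example in the conversation; the pool amount is an illustrative assumption:

```python
# "Phantom dividend" sketch: stockholders hold 100 units and option holders a
# further 40, so any payout pool is split over 140 units. The option holders'
# portion is paid as a bonus, since a legal dividend only goes to stockholders.

def phantom_dividend(pool, stock_units=100.0, option_units=40.0):
    total_units = stock_units + option_units  # 140 units in the example
    per_unit = pool / total_units
    return {
        "dividend_to_stockholders": per_unit * stock_units,
        "bonus_to_option_holders": per_unit * option_units,
    }

split = phantom_dividend(140_000)
print(split)  # {'dividend_to_stockholders': 100000.0, 'bonus_to_option_holders': 40000.0}
```

Note the messiness Speaker 1 mentions lives outside this formula: the bonus leg is ordinary compensation for tax purposes, so the two legs are not treated identically even though the split is mechanically fair.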
And we didn’t talk about the other things that we do there in terms of very long exercise windows and more favorable vesting schedules and so on, but basically, we’ve taken the standard mechanisms that are used in standard Delaware C corps and turned the variables that we can so they’re as favorable as possible to staff. And I think at least that is a good thing if you would have otherwise considered a standard Silicon Valley model. The one other option that I do think is interesting, but that I couldn’t quite see ourselves going down, was using more of a phantom stock approach, where you have essentially an internal ledger that’s separate from the ledger that you have with Delaware in terms of the equity ownership in the company, and it’s on the basis of that internal ledger that you would make decisions about how you do payouts. And there are some companies that are exploring this, you know, it’s like every month you work at the company you earn a point, and then if we ever do dividends, you divide the dividend by the number of points, and that’s how much we send you a check for, something like that. That’s nice because it gives you a ton of flexibility, but it’s much less precedented, and it places even more trust in the company, because you have less of the legal guard rails to confine what they can do or not do. So I think that’s interesting because of the flexibility, and I would love to see people try that more, but I wasn’t ready to, you know, establish a whole bunch of new case law just for the sake of this venture. 00:52:21 - Speaker 2: Now, precedent is very important. The general business wisdom is: try not to innovate on the model, try to focus on your product, and don’t get too caught up in company mechanics.
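The points-ledger idea described above (one point per month worked, a dividend divided over total points) is simple enough to sketch too. Everything here, names, tenures, and pool size, is a hypothetical illustration of the scheme, not any company's actual plan:

```python
# Internal points-ledger ("phantom stock") sketch: one point per month worked,
# and any payout pool is divided per point. All figures are illustrative.

def point_ledger_payouts(months_worked, pool):
    """months_worked maps each person to their points (months at the company)."""
    total_points = sum(months_worked.values())
    per_point = pool / total_points
    return {person: per_point * points for person, points in months_worked.items()}

payouts = point_ledger_payouts({"designer": 24, "engineer": 12}, 36_000)
print(payouts)  # {'designer': 24000.0, 'engineer': 12000.0}
```

The flexibility comes from the ledger being purely internal, but as the conversation notes, that same informality is what removes the legal guard rails a Delaware cap table provides.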
It turned out that this was something that we were both passionate enough about in terms of the place we wanted to work, but also I honestly do think we needed a different type of container, right? We knew that, as we talked about towards the beginning, with an individual productivity tool, what you can sell it for even at a prosumer price, and what the mechanics of distribution and things look like there, versus other models, you know, there’s a reason why venture-funded stuff is either enterprise SaaS or, you know, ad-monetized consumer products. Those are models that work well with that funding style, and the thing we wanted to express in terms of the product and the thing we wanted to exist in the world, as well as the company that we wanted to work at, I think just demanded a different model. I don’t think it would have worked with another one, so I think that was a way to justify the ways in which we are deviating or innovating a little bit on the container side of it. But then at the same time, exactly as you said, I remember a lot of design choices we made and things like, you know, we’d love to give employees options or we’d love to give employees pure stock, but that’s just way too hard or even impossible without these punishing tax consequences. So, OK, we’ll kind of have these two classes of ownership in the company. That’s not the spirit of what we’re doing, but, like, at some point you’ve gotta bend a little bit to realities and what there’s precedent for and what attorneys and accountants are used to working with and all that sort of thing. 00:53:54 - Speaker 1: Yeah, and I think in all of this there’s also a very real morale element, where let’s suppose the company is very successful some years from now, all the current and former staff are going to remember that we worked very hard to try to do the best we possibly could by them.
They were basically on all the email chains with the lawyers, more or less literally, and we would debrief and talk about, OK, here are the options that we have. What do you all think? Does this work well for you? And things like that, versus a model where that was all opaque and there was not even an effort made to try to set things up as best as possible for the staff. I think that just helps people feel like they are being treated well. 00:54:28 - Speaker 2: Well, speaking for myself, I am sometimes in the position of offering advice, let’s say, to folks who are thinking about what kind of vehicle to use for their business, and kind of the Muse approach does come up. Certainly, it’s huge to ask, what are you actually trying to make? Because you need the right vehicle for what you’re doing. If you need huge upfront capital or a big staff, I’m not sure this model can work, to be honest. Or if it’s something that can be done with more of a small team, 1 person, 2 people, in a shorter period of time, then maybe this is also not the right way to do it. And there’s other kinds of vehicles as well. For example, I think the nonprofit is a little bit underutilized; it can be an incredible vehicle even for software and technology products. We know of big examples like Mozilla or the Apache Foundation, and there’s many smaller examples, such as the Processing Foundation, which does this kind of generative art coding tool language thing. There’s many others where I think if you think, OK, what we want to make is more open source, or it’s more of a long-term kind of benefit, maybe it’s more educational, or maybe the targ