Web-based source code repository
Got a minute? Check out today's episode of The Guy R Cook Report podcast. The Google Doc for this episode is "A Free Comparison of G2 to SourceForge for Comparing Computer Software". Contact Guy R Cook @ https://guyrcook.com, or follow on Twitter: @guyrcookreport. This episode of The Guy R Cook Report is on YouTube too.
Coming up in this episode:
* We do a little upgrade
* Firefox fixes a tooltip
* The History of W, V, X and CDE
* How it went
* And a new old desktop to explore

0:00 Cold Open
1:42 Lemmy's Upgraded!
10:56 A 22 Year Old Bug
15:50 Install Firefox Correctly
22:22 CDE History: Intro
24:04 CDE History: X
27:33 CDE History: OPEN LOOK
29:25 CDE History: COSE
31:28 CDE History: CDE & Others
34:24 CDE History: The Opening
36:14 CDE History: The Releases
43:02 How'd CDE Go?
1:16:00 Next Time
1:21:29 Stinger

Watch the video! https://youtu.be/-tycNQ-Ey9Q

Banter
The LUS Lemmy instance (https://lemmy.linuxuserspace.show) got an update (https://github.com/LemmyNet/lemmy-ansible/releases/tag/1.2.0). The ansible repo switched to tagged releases. There were ⚠️breaking changes⚠️ that needed to be prepared for (https://github.com/LemmyNet/lemmy-ansible/blob/main/README.md#upgrading). One of the issues Dan had is likely fixed now (https://github.com/LemmyNet/lemmy-ansible/commit/300a261b2a346dd6489f5eb43d6af632633f4059).

The Bug (https://arstechnica.com/gadgets/2023/10/22-year-old-firefox-tooltip-bug-fixed-in-a-few-lines-offering-hope-to-us-all/) that's old enough to drink and drive, but hopefully not at the same time!

Dan installed Firefox (https://support.mozilla.org/en-US/kb/install-firefox-linux#w_install-firefox-from-mozilla-builds) from the .tar.gz download. Spoiler: it updates just fine because my user is the owner of the /opt directory. A minimal sketch of that flow follows these notes.

Announcements
This program was made possible by:
* The letters W, V, X, C, D and E
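For anyone who wants to try the same thing, here is a minimal sketch of that tarball install flow. The tarball name, version, and paths are illustrative assumptions, not from the episode; the key idea is that your own user owns the install directory, so Firefox's built-in updater can write to it:

```python
import os
import tarfile

# Illustrative assumptions: the tarball name/version and target paths are
# hypothetical; adjust to whatever you actually downloaded from mozilla.org.
TARBALL = os.path.expanduser("~/Downloads/firefox-119.0.tar.bz2")
OPT = "/opt"

# Extract to /opt/firefox as your own user (with /opt writable by you), not
# as root: when your user owns /opt/firefox, Firefox's built-in updater can
# write to the install directory, so updates "just work".
with tarfile.open(TARBALL) as tar:
    tar.extractall(OPT)

# Put the binary on PATH with a symlink (a common convention, not required).
link = os.path.expanduser("~/.local/bin/firefox")
os.makedirs(os.path.dirname(link), exist_ok=True)
if not os.path.islink(link):
    os.symlink(os.path.join(OPT, "firefox", "firefox"), link)
```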
This episode features an interview with Larry Augustin, angel investor and advisor to early-stage technology companies. Larry previously served as the Vice President for Applications at AWS, where he was responsible for application services like Pinpoint, Chime, and WorkSpaces. Before joining AWS, Larry was the CEO of SugarCRM, an open source CRM vendor. He was also the founder and CEO of VA Linux, where he launched SourceForge. Among the group who coined the term “open source”, Larry has sat on the boards of several open source and Linux organizations.

In this episode, Sam and Larry discuss who owns the rights to data, the data-in-to-data-out ratio, and why Larry is an open source titan.

“People are willing to give up so much of their personal information because they get an awful lot back. And privacy experts come along and say, ‘Well, you're taking all this personal information.’ But then most people look at that and say, ‘But I get a lot of value back out of that.’ And it's this data ratio value question, which is: for a little in, I get a lot back. That becomes a key element in this. And I think there has to be some kind of similar thought process around open source data in general, which is: if I contribute some data into this, I'm going to get a lot of value back. So this data-in-to-data-out ratio, I think, is an incredibly important one. It's a principle that I drive into application development. If you put a user in front of an app and they start using the app, you're going to ask them for things. And my principle is always, ‘How do you figure out how to never ask them and only give them?’ And you can't get 100% of the way there, but every time it's like, ‘Why did you ask them for that? Couldn't you figure it out?’ And it gets everyone in the mindset of, ‘How do I provide more and more and take less and less?’ It's a principle of application development that I like a lot. And I think there's a similar concept here around open-source data. Are there models or structures that we can come up with where people can contribute small amounts of data and, as a result of that, get back a lot of value?” – Larry Augustin

Episode Timestamps:
(02:14): How Larry is spending his time after AWS
(06:01): What drove Larry to open source
(18:04): What is the GPL for data?
(23:51): Areas of progress in open source data
(28:37): The data-in-to-data-out ratio
(36:02): Larry's advice for folks in open source

Links:
LinkedIn - Connect with Larry
Twitter - Follow Larry
Everyone's back again.
About Julia
Julia Ferraioli calls herself an Open Source Archaeologist, focusing on sustainability, tooling, and research. Her background includes research in machine learning, robotics, HCI, and accessibility. Julia finds energy in developing creative demos, creating beautiful documents, and rainbow sprinkles. She's also a fierce supporter of LaTeX, the Oxford comma, and small pull requests.

Links:
Open Source Stories: https://www.opensourcestories.org

Transcript
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey: It seems like there is a new security breach every day. Are you confident that an old SSH key, or a shared admin account, isn't going to come back and bite you? If not, check out Teleport. Teleport is the easiest, most secure way to access all of your infrastructure. The open source Teleport Access Plane consolidates everything you need for secure access to your Linux and Windows servers—and I assure you there is no third option there. Kubernetes clusters, databases, and internal applications like AWS Management Console, Jenkins, GitLab, Grafana, Jupyter Notebooks, and more. Teleport's unique approach is not only more secure, it also improves developer productivity. To learn more visit: goteleport.com. And no, that is not me telling you to go away; it is goteleport.com.

Corey: This episode is sponsored in part by our friends at Redis, the company behind the incredibly popular open source database that is not the BIND DNS server. If you're tired of managing open source Redis on your own, or you're using one of the vanilla cloud caching services, these folks have you covered with the go-to managed Redis service for global caching and primary database capabilities: Redis Enterprise. To learn more and deploy not only a cache but a single operational data platform for one Redis experience, visit redis.com/hero. That's r-e-d-i-s.com/hero. And my thanks to my friends at Redis for sponsoring my ridiculous nonsense.

Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. My guest today is someone I have been very politely badgering to come on the show for a while, ever since I saw her speak a couple years ago in the Before Times, at Monktoberfest. As I've said before, anytime the RedMonk folks are involved in something, it is something you probably want to be involved in. That is my new guiding star philosophy when it comes to conferences, Twitter threads, opinions, breakfast cereals, you name it. Please welcome Julia Ferraioli, the co-founder of Open Source Stories. Julia, thank you for joining me today.

Julia: Thank you for having me. And I definitely agree on the RedMonk side of things. They are fantastic folk.

Corey: They're a small company, which is sort of interesting to me from a perspective of just how outsized their impact on this entire industry is. I've had as many of them as they will let me have on the show, and they are welcome to come back whenever they want, just because they—every single one of them, though they're very different from one another—make everyone around them better with their presence. And that's just a hard thing to see. I didn't mean to turn this into a love letter to RedMonk, but here we are.

Julia: I don't mind it.
They have the ability to amplify the goodness that they see, anything from their survey designs to just how they interact online. It's wonderful to see.

Corey: Speaking of amplifications, you are the co-founder of Open Source Stories, the idea of telling—to my understanding—the stories behind open source. Like, this is sort of like—what is it, Behind the Music, only in this case it's Behind the Code? I mean, how do you envision this?

Julia: Oh, I like that framing. So, Open Source Stories is a project that myself and Amanda Casari founded not that terribly long ago because when we were doing research about how to model open source and open source ecosystems, we realized that a lot of the research papers that have been published about open source are pulled mostly from GitHub Archive, which is this repository of GitHub data. It could be the actual Git commit history as well as the activity streams from GitHub, but that doesn't capture a lot of the nuances behind open source, things like the narratives, how communities interact, where communication is happening, et cetera. All of these things can happen outside of the hosting platform. So, we launched this project to help tell these stories of the people and events and scenarios behind the open source projects that really power our industry.

Corey: I'm going to get letters for this one, I'm sure of it, but I've been involved in the open source ecosystem for a while and I've noticed that there's been a recurring theme among various projects, particularly the more passionate folks working on them, where they talk an awful lot but they aren't very good at telling stories at the same time. And nowhere is this more evident than when we look at what passes for a lot of these projects' documentation. One of the transformative talks that I went to was Jordan Sissel's, years and years ago, at the Southern California Linux Expo. And it was a talk about LogStash, which doesn't actually matter because the part that he said that really resonated with me, that the whole theme of his talk was built around, was: if a new user has a bad time, it's a bug. And the idea that, “Oh, you didn't read the documentation properly.”

When I started working with Linux, in some IRC chat rooms, the standard response to someone asking for help was to assume that they're an idiot, begin immediately accosting them with RTFM, for Read the Frickin' Manual, and then look for ways that you could turn this back around on them and make it their fault. And I looked at this and at the time, it's like, “Wow, these are people that are mean to other people,” and I was a small, angry teenager; it's like, “This is my jam. Here I am.” And yeah, many decades later, I'm looking at this and I feel a sense of shame because that's not the energy I want to put into the world. A lot of those communities have evolved and grown, and what used to be the area and arena for hobbyists is now powering trillion-dollar companies.

Julia: Absolutely. I like the whole, “If the user has a bad experience, that's a bug,” because it absolutely is. And I feel like a lot of these projects haven't invested nearly as much into the user experience as they have into polishing the code.
And the attitude that that kind of perpetuates throughout the project about how you treat your users, it's pervasive, and it really sets up the types of features that you develop, the contributors that you encourage to commit to the project, and it just creates a—to put it minorly—less than welcoming environment for users, contributors, and maintainers alike. And we don't really need that sort of hostility, especially when we're talking about projects that underpin the foundations, in some cases, of the internet.

Corey: When we look at what open source is, I mean, I shortcut to thinking in terms of the context through which I've always approached it, which was generally code, or in my sad, particular story, back in the olden days on good freenode, when that was where a lot of this discourse happened. I was network staff and helping a bunch of different communities get channels set up through a Byzantine process. Because of course there was a Byzantine process; it was an open source community, and if there's one thing we love in open source, it is pretending to be lawyers when we're not. And we're sort of cargo-culting what we think process and procedure often look like. So yeah, there was a bunch of nonsensical paperwork happening there, but it was mostly about helping folks collaborate and communicate. But I first and foremost think in terms of code and in terms of community. What is open source to you?

Julia: Well, I entered open source in the SourceForge days, when all you had to do was go and download some code from the internet and hit the right download button, making sure not to hit one of the extraneous ones. And all you need for that is for the code to be under the right license. And to an extent that's what's true today for open source. At the heart of it, the minimum criterion for what constitutes open source is, “Okay, does it comply with the open source definition that the Open Source Initiative puts forth?” Now, I understand that not everybody necessarily agrees with the Open Source Definition, but it's useful as a shortcut for how we think about the basic requirements. But what I find when people are talking about open source online is that they have these very different models. You'll hear from people that, “Okay, well, if it doesn't have a standard governance model, it's not really open source.”

Corey: The ‘No True Scotsman’ argument.

Julia: Yeah. So, I find that we've got these different expectations for what open source is, and that leads to us talking past each other or discounting different types of open source, when what we really need to do is come up with better language, a better vocabulary, for how to talk about these things. So, for example, I used to work in developer relations, and in developer relations one of the big things that you do is release sample code. Now, oftentimes, I'm not looking for that sample code to be picked up by a bunch of different developers and incorporated as a library into their project—

Corey: [laugh]. Well, that's your error in that case because congratulations, that's running in production at a bank somewhere, now.

Julia: Oh, I know. And that has definitely happened with my code, and I'm ashamed to say that. [laugh]. But generally speaking, you're not looking to build a huge community around sample code, right?

Corey: You say that, but then again, Stack Overflow, it was—

Julia: Okay.

Corey: —[unintelligible 00:09:22] done rather well.
So, there's that.

Julia: Well yes, that is true, but when you release code on Stack Overflow, or GitHub, or in a Gist, or just on your blog, the thing that allows the bank to come in and incorporate that into their own application, or to even just learn from it, is the fact that it is open source. Now, it doesn't have a lot of the things that a community like Python or Kubernetes has, but it is still open source; it just has a different purpose than those communities and those ecosystems.

Corey: So, I think it is challenging right now to talk about open source as if it were the same type of thing that it was back in the '90s, and the naughts—and even the teens—where it's a bunch of, more or less, either hobbyists or people perceived to be hobbyists. Sure, an awful lot of them are making commits from their redhat.com email address, but okay. And some of these people are increasingly being paid to work at places, but then you see almost—I don't necessarily agree with the framing of The New York Times article by Daisuke Wakabayashi—who's a previous guest on the show—of Amazon strip-mining open source, but they definitely are in there—and other companies as well—sort of appropriating it, or subverting it, or turning it into something that it was not previously, for lack of a better term. What's your take on that?

Julia: Oh, that's a hard one. From a fundamentals perspective, that is absolutely within their rights under the definition of open source, and in some cases, the spirit of open source as well.

Corey: Oh, and I would argue with someone who said that they should be constrained from doing this as a matter of legalities, or rights, or ridiculous Looney Tunes license changes.

Julia: Well, there are definitely folks who are trying to make that the case.

Corey: Yeah. Oh, yeah. I'm of the position that they're within their rights to do it, but it's time for a good old-fashioned public shunning as a result.

Julia: I'm not sure I agree. I think that it is a natural consequence of how open source has gained in popularity and, in some cases, it's a testament to open source's success. Now, does it pose some serious challenges for the open source community and open source ecosystem? Absolutely, because this is a new way of using open source that was unanticipated, and in fact, could be characterized as a Black Swan event in [open source-ware 00:12:18].

Corey: The fundamental attribution error that I see, back at the very beginning, was, “We wrote the software, therefore, we are the best in the world at running it, therefore, if there's going to be a managed service, clearly ours will be the best.” Amazon's core strength has apparently been operational excellence, as they like to call it; my position on that is a little bit less of tying into the mystery, a little bit more of they're really fast at getting paged and fixing things in a hurry before customers notice. So okay, great, but it's column A, column B, whatever. The bigger concern I have with Amazon is that its product strategy is, “Yes.” If it were just a way to run EC2 instances or virtual machines, then sure, that's great.

And every open source project should, on some level, see some validation of its market through a lens of, “Oh, we're getting some competition. That's great.” The challenge I see is that in the line of competitors, Amazon is at or near the front all the time on basically everything. And it's, if they would pick a lane to stay in, great.

Google is a good example of this.
There are things that Google very strongly considers in its wheelhouse, but for other things, they partner with the open source-based company in question to create a managed service partner offering, and that's great. Amazon pulls a, “Nope. We're just going to build this out as first-party. The end.”

And they compete with everyone, including themselves, on almost every axis. And that's where it just gets into a, “Leave some oxygen for the rest of us.” I mean, it feels like they lie awake at night worrying that someone who isn't them is somehow making money somewhere. That is, I think, on some level, more of the Black Swan event than someone else deciding that they can host a particular open source project more effectively. But that's where I stand. And again, this is just me as an enthusiastic and obnoxious observer. You're operating in this space. What do you think? That's the important part of the story.

Julia: Well, I mean, you definitely have a point. Amazon—or AWS, maybe not necessarily Amazon—takes on different technologies far and wide, so they're not limiting themselves to a space. But that said, I think it comes down less to what is possible with open source and what is okay under the guise of open source, and more to what is good for the open source ecosystem. And when you fork a project, you do have to understand that you are bifurcating the open source ecosystem. And that can lead to sustainability problems down the road. So, I think the jury is still out on whether forking a project and running it as a managed service—as Amazon is doing with some of the open source projects—is going to come back to bite them, just from a developer community standpoint, because you're going to have people committing to one or the other, but possibly not both.

Corey: I think this is why Amazon—I know, they're very annoyed by their perception in the open source ecosystem—but you take a look at other large tech companies, and almost all of them have a few notable open source projects that started life there. For example, we have—I think Cassandra came out of Facebook, but don't quote me on that; Kubernetes came out of Google, a fact for which they steadfastly refused to apologize, so far; and so on, and so forth. But Amazon's open source initiatives have been, “We've open sourced this thing that is basically only used at Amazon.” Or, my personal favorite, “We've put all of our documentation up on GitHub so that you can write corrections to it yourself from the community,” which I'm hearing as, “Please, volunteer for a $1.6 trillion company so that they don't have to improve their documentation by hiring expensive people internally.”

You can sort of guess my position on that. It seems like they have not launched anything that has a deep heart within Amazon that is broadly adopted outside of their walls. My question for you is, do you believe that having that level of adoption externally is required for a healthy open source project?

Julia: Again, I think it goes back to the goals of why you're open-sourcing something. I don't believe that it's necessarily required for the open source project to be quality and be usable, but if your goal is adoption or if your goal is to get ideas and best practices out there, then yeah, you do need that engagement by the broader community, you do need the contributors. But there are a lot of cases where open-sourcing technology is more for the validation, rather than the adoption, of the tech.
So, it really depends.

Corey: I'd say the most cynical reason I've seen to open source things comes from Netflix, where they have a recurring pattern of open-sourcing something, there are two or three commits, and then it basically sits there unattended. What I firmly believe is happening is that a senior engineer at Netflix is working on the thing and they're about to change jobs, so they open source the project so that they can change jobs and then pick up where they left off with an internal fork. I view it as a game of, basically, passing themselves a football as they run across the street. And people laugh when I say that, but I've also had people over drinks say, “You are closer than you might think, sometimes.” Which on some level is terrifying. Feels like life is imitating art, but here we go.

Julia: That definitely happens, and I have seen it [laugh] as well. People want to essentially use open source to exfiltrate IP.

Corey: Yeah. Only doing it the legitimate way, as opposed to the, “Please don't—hope they don't find that USB stick I've hidden in my sock on my last day.”

Julia: Yes. And this is why open source offices have a challenging job in helping facilitate the release of open source software. So, it is hard to ascertain when that is happening.

Corey: Yeah, no company is ever going to have a big statement that is going to be anything other than, honestly, marketing speak when it comes time to explain why they're doing a certain thing. It's, “Oh, yeah, we're open-sourcing this so we don't get sued in three years by this other company that might prove to be a competitive threat.” Or, “We're open-sourcing this as a hiring and recruiting technique.” I mean, I would argue it wasn't open source, but one of the best approaches that I've seen from that perspective came out of Google. I'm firmly convinced to this day that App Engine was run not by their SRE team, but by their recruiting arm: “Because if you can build a great app on App Engine, well, this is, kind of like, how we think about things inside of Google; come and work here,” either via acqui-hiring or just an outright interview funnel. Maybe that's too cynical, too, but again, that leads to the question of: is it really open source when it has these deep ties to specific platforms?

Here's an open source tool that presumes you're running on top of AWS. Well, great, sure it's built by the community and anyone can access these things, but without paying per second to a cloud provider—probably the referenced cloud provider they're developing this against—it's not going to get very far. So, it's a nuanced argument, and there are shades of that nuance to every aspect of it. And if there's one thing that Twitter is terrible at, it's capturing nuance in 280 characters. And even in the, “All right, this is my nuanced take on open source in this thread, I will tweet, one of 5,712.” Great. That's not really the forum for that either. And people lose sight of nuance. It's a sticky, delicate thing, and it feels like a lot of the open source community has been enthusiastically agreeing with each other—sometimes violently so—but they're not sharing a common language in which to do it.

Julia: Yeah. And in terms of the purposes of open source projects, it is okay for them to have different ones as long as they're telegraphing those purposes to their users and the people who are looking at the projects for their own use. But whether it's open source?
I think it's okay for that to be the baseline and then build out the vocabulary of the types of projects that you want from there, based on those expectations. Yes, this particular technology only works with this cloud provider. That's open source that facilitates and accelerates development with that cloud provider.

Corey: This episode is sponsored by our friends at Oracle Cloud. Counting the pennies, but still dreaming of deploying apps instead of "Hello, World" demos? Allow me to introduce you to Oracle's Always Free tier. It provides over 20 free services: infrastructure, networking, databases, observability, management, and security. And—let me be clear here—it's actually free. There's no surprise billing until you intentionally and proactively upgrade your account. This means you can provision a virtual machine instance or spin up an autonomous database that manages itself, all while gaining the networking, load balancing, and storage resources that somehow never quite make it into most free tiers, needed to support the application that you want to build. With Always Free, you can do things like run small-scale applications or do proof-of-concept testing without spending a dime. You know that I always like to put asterisks next to the word free. This is actually free, no asterisk. Start now. Visit snark.cloud/oci-free. That's snark.cloud/oci-free.

Corey: I always try and stay away from explicit value judgments on a lot of these things because it's nuanced, and no one who doesn't work at Facebook wakes up expecting to do terrible things today. We're all trying to do the best we can with the constraints we're operating within. The challenge is that when you're at a company like an AWS, or a Google, or a Microsoft, or one of these giant companies, the same pressures that the rest of the quote-unquote “mere mortals” in the ecosystem have to contend with are very different. But talking to people who work at these big companies, they have meetings and review processes that here at my twelve-person company, I don't even have to consider.

Easy example of that: never once have I put something out into the world and had a single discussion about whether it's going to get us in trouble with respect to antitrust. That has never been on my radar as far as things I have to care about. Even at my previous job at a highly regulated financial company, where you could argue that they are approaching monopoly status in some areas of the market organically, with passive investing being what it is, great, their open source discussions were always much more aligned with: what licenses are we willing to accept legal risk for using internally? Because there are things that are—like, IP is why we have a business in many respects, so anything that touches that theoretically means we'd have to disclose how the entire system, how the rest of it, works, and that is not allowed to be used here. And there are reviews and processes and compliance requirements for that.

I get that concern, and at a certain point of scale, you're negligent if you don't have a function that looks at it through that lens. But I look back to the early days of just puttering around with, “I want to do a thing and I found this project somewhere that people are excited about,” in the pre-GitHub days, when I could download it off of SourceForge or whatnot and I could make it work. But it doesn't do this one thing I want to do. “Hey, the code's available. Can I fix it myself? Absolutely not. I'm crap at writing code.
But I can talk to people and piece it together from wisdom that they offer.” And it turns into something awful until finally it gets enough traction that someone who knows what they're doing looks at it, refactors it, and makes it good.

And that's the open source community I recognize and that I see from my early developmental period. I don't recognize what we see in the ecosystem today through that same lens of, “Okay, go online. Be nice to people”—well, that's new—“See how this thing works. And oh, if I'm having a problem, I'm probably not the only person who's having a problem like this.” You have to get really good at using Google, more than you do at writing code, in some respects. But at that point, it's almost entirely a copy-and-paste, except that's not technical enough for the open source world. So instead, we have to learn the 500 arcane subcommands to Git in order to get it out there. But it works. Ish.

Julia: I think that community is still out there. I really do. I think that it is harder to find and it's not necessarily where you might tend to look, but those projects are still there. They're still running. They might be a little less high-profile than a lot of the ones that are getting a lot of attention right now, but they are still there.

Corey: On some level, it feels like the blame for this lies—at least partially—at the feet of Slack and its success, because it used to be that you had IRC; that was how folks communicated. And I remember the early days of that and things like Jabber, or internal IRC servers at companies—great, you'd have engineering all talking on that, and oh, you want to have someone in finance or marketing join that thing? Yeah, the short answer is, that won't be happening. But you can try and delude yourself and set it up with a special client and the rest.

Slack removed all of that friction, but it's balkanized to the point where every once in a while, I have to go through and remove a bunch of Slack channels slash workspaces slash whatever we're calling them this week from my desktop client because it's basically eating all the RAM like it's trying to be Google Chrome. And then it's great, but there's no universal federated thing the way that there was with IRC, where I could just pop into a different channel for a different project. And IRC is still there, and it comes back to life whenever Slack takes an outage. And then Slack gets fixed, and it sort of bleeds off again. But I don't want to be in 500 different Slack workspaces, one for every open source project that I'm using, and there's no coherent sense of identity and community anymore the way there once was. And I feel like I'm an old man yelling at the passing of time with this. But you're right, open source to me was always much more about community than it was about code.

Julia: Yeah, and I think that we do not talk about the impact of the tools for open source that we use. Because you're right; with IRC, it was unified. You could pretty much guarantee that projects of a certain size were present there. And with Slack, you have to sign up for yet another account, and you're never quite sure why you can't find the right channels that you need to join. So, there's a lot of navigation and a lot of prerequisite knowledge that you need to have in order to be productive.

And then you've got other tools being used for communication by other communities; like, I believe Gitter is a major one as well. Then you have to make sure that you're up-to-date with all of these different interfaces, Discord, everything.
And the sociological implication of that shouldn't be underestimated. What are you going to do if you find a project that uses a communication tool that you just really don't want to use, or don't want to sign up for yet another account? Maybe you pass on by and you find one that works within your existing set of tools. There's no lack of open source projects to join right now. You can be choosy. And we don't yet know what the impact of that is.

Corey: It's challenging. There's no good answer that I've found that solves all of these things. It's become so balkanized, on some level, that every project out there that I see—and there are some small ones that are incredibly foundational to, basically, civilization as we know it—isn't working right, because you have to figure out where they are and what the community norms are, because they change from project to project, and there are so many different things. And, like, you can go into NPM and install some relatively trivial thing that does command-line string processing, or whatnot, and it installs 40 different dependencies. And there's a problem and you want to figure out exactly how that works, and et cetera, et cetera, et cetera.

Julia: Absolutely. With NPM specifically, or Node specifically, it is interesting that the development model kind of encourages this obscurity, an obfuscation of functionality. So, it is hard to go in, debug an issue, go to the specific community, understand how they work, contribute a patch, just to fix something that is, you know, five levels up. It gets confusing for developers. It can contribute to longer-term bugs that we see propagate throughout the system. It is not an easy problem to solve, and I have a lot of sympathy for newcomers to the open source ecosystem because it is so hard to navigate. And I think that's an as-yet unsolved problem that we need to address.

Corey: So, what was it that inspired you to create Open Source Stories? I mean, I love the direction you're taking this in; I love the way you're thinking about [audio break 00:29:38]. Where did it come from? What started this?

Julia: Well, when Amanda and I were going back and doing research around—you know, aside from the code for an open source project, where are the different entry points? Where are the different interaction points between projects, ecosystems, and the industry? And we did a couple of interviews, just very organic interviews, with some subject matter experts in Node, in Python, in Go. And there was a point where we stopped—or at least I stopped—taking notes because I was just so fascinated by the narrative that our interviewee was putting forth and was talking about. And what we wanted was for it to not just be this meeting between a few people; we wanted to be able to share that with anyone. And so one of the things that really inspired us was StoryCorps, which allows you to record, much like we're doing today, 40 minutes' worth of interactions between one to three people.

Corey: Oh, we're going to cut it down to five minutes at most. Like, one question; one answer. Boom, we're done.

Julia: [laugh].

Corey: I kid, I kid.

Julia: But it's really about facilitating the sharing of knowledge and the sharing of these oral histories. Because as you're doing research into interactions in specific open source communities, you'll get articles, you'll get changelogs, all of that good stuff, but you won't get the nuance that we've been talking about over the course of this podcast. You lose the story behind the story, right?
How are decisions made? How are people thinking about the interactions with their users? What are the turning points for a project? What are those conversations between the maintainers that changed the entire game?

Those are the sorts of stories that we're hoping to capture because they're important for history, for knowledge sharing, for learning from our past, and making decisions for the future. And so that's really what we wanted to capture. And we wanted to capture the narratives behind the people that don't necessarily show up in the codebase, too: the designers, the product managers, the marketers behind open source that make it successful. Because there's so much more than code.

Corey: Oh, my God, yes. It's… how do I put this politely without getting letters? Well, I guess I'll take a stab at it and see how it plays out. I look at so much of the brilliant code that has been written, and the documentation is abhorrent, and the design of the site, and the icon, and the interface, it looks like a joke that I put on Twitter trying to be funny. The code is important, don't get me wrong, but there's so much more to it than that.

And we see this in the industry, too, where companies have gone out of business trying to get their codebase just right. Yeah, you can launch code that is really, really bad, but if you have product-market fit, it is survivable. I've heard stories that in the early days of Twitter we saw the fail whale all the time because it was an abhorrent monstrosity, to the point where it became a running joke. But it turns out, when you hit product-market fit, you can afford really good engineers to come in and fix a lot of that stuff. That stuff is more important than the quality of the code, and that is something that I think we have a collective industry-wide delusion about. And it's a blind spot for us.

Julia: Yeah. I think we get wrapped up in the cleverness of the tech, and I've fallen prey to this, too. I get so involved in how I'm solving the problem that I forget about the actual problem that I'm trying to solve, right? It's not necessarily about the how, but about the what. And without your fantastic tech writers, designers, and usability experts, your open source project is going to be your open source project. It's not going to necessarily get that wide adoption, if that is indeed your goal for the technology that you're releasing.

So, it really is about making sure that as we're launching and working on these open source projects and ecosystems, we are inviting people to the table that have these other unique skills that go beyond the code and speak to what makes the project different and unique.

Corey: I really want to say how much I appreciate your taking the time to talk to me about this. If people want to get involved themselves, how do they do that? Because I have a hard time accepting that you're doing something called Open Source Stories that eschews community involvement.

Julia: Yeah. So, we absolutely would love more folks to get involved. I have been primarily the person working on the site, so we can always use contributors to the site itself, but we also want more storytellers and facilitators. And so if you go to opensourcestories.org, we've got a page specifically designed to facilitate contributions. So, check that out, and we look forward to hearing from anyone who wants to participate.

Corey: And we will, of course, include links to that in the show notes. Thank you so much for taking the time to speak with me today.
I really appreciate it.

Julia: Thanks for having me.

Corey: Julia Ferraioli, co-founder of Open Source Stories. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with an angry comment, calling me a fool because I did not bother to RTFM first.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.

Announcer: This has been a HumblePod production. Stay humble.
About Sam
A 25-year veteran of the Silicon Valley and Seattle technology scenes, Sam Ramji led Kubernetes and DevOps product management for Google Cloud, founded the Cloud Foundry Foundation, has helped build two multi-billion dollar markets (API Management at Apigee and Enterprise Service Bus at BEA Systems), and redefined Microsoft's open source and Linux strategy from “extinguish” to “embrace”.

He is nerdy about open source, platform economics, middleware, and cloud computing, with emphasis on developer experience and enterprise software. He is an advisor to multiple companies including Dell Technologies, Accenture, Observable, Fletch, Orbit, OSS Capital, and the Linux Foundation.

Sam received his B.S. in Cognitive Science from UC San Diego, the home of transdisciplinary innovation, in 1994 and is still excited about artificial intelligence, neuroscience, and cognitive psychology.

Links:
DataStax: https://www.datastax.com
Sam Ramji Twitter: https://twitter.com/sramji
Open||Source||Data: https://www.datastax.com/resources/podcast/open-source-data
Screaming in the Cloud Episode 243 with Craig McLuckie: https://www.lastweekinaws.com/podcast/screaming-in-the-cloud/innovating-in-the-cloud-with-craig-mcluckie/
Screaming in the Cloud Episode 261 with Jason Warner: https://www.lastweekinaws.com/podcast/screaming-in-the-cloud/what-github-can-give-to-microsoft-with-jason-warner/

Transcript
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey: This episode is sponsored in part by our friends at Redis, the company behind the incredibly popular open source database that is not the BIND DNS server. If you're tired of managing open source Redis on your own, or you're using one of the vanilla cloud caching services, these folks have you covered with the go-to managed Redis service for global caching and primary database capabilities: Redis Enterprise. Set up a meeting with a Redis expert during re:Invent, and you'll not only learn how you can become a Redis hero, but also have a chance to win some fun and exciting prizes. To learn more and deploy not only a cache but a single operational data platform for one Redis experience, visit redis.com/hero. That's r-e-d-i-s.com/hero. And my thanks to my friends at Redis for sponsoring my ridiculous nonsense.

Corey: Are you building cloud applications with a distributed team? Check out Teleport, an open source identity-aware access proxy for cloud resources. Teleport provides secure access to anything running somewhere behind NAT: SSH servers, Kubernetes clusters, internal web apps, and databases. Teleport gives engineers superpowers! Get access to everything via single sign-on with multi-factor. List and see all SSH servers, Kubernetes clusters, or databases available to you. Get instant access to them all using tools you already have. Teleport ensures best security practices like role-based access, preventing data exfiltration, providing visibility, and ensuring compliance. And best of all, Teleport is open source and a pleasure to use.

Download Teleport at https://goteleport.com.
That's goteleport.com.

Corey: Welcome to Screaming in the Cloud, I'm Cloud Economist Corey Quinn, and a recurring effort that this show goes to is to showcase people in their best light. Today's guest has done an awful lot: he led Kubernetes and DevOps product management for Google Cloud; he founded the Cloud Foundry Foundation; he set open-source strategy for Microsoft in the naughts; he advises companies including Dell, Accenture, and the Linux Foundation; and tying all of that together, it's hard to present a lot of that in a great light because, given my own proclivities, that sounds an awful lot like a personal attack. Sam Ramji is the Chief Strategy Officer at DataStax. Sam, thank you for joining me, and it's weird when your resume starts to read like, “Oh, I hate all of these things.”

Sam: [laugh]. It's weird, but it's true. And it's the only life I could have lived, apparently, because here I am. Corey, it's a thrill to meet you. I've been an admirer of your public speaking, and public tweeting, and your writing for a long time.

Corey: Well, thank you. The hard part is getting over the voice saying don't do it because it turns out that there's no real other side of public shutting up, which is something that I was never good at anyway, so I figured I'd lean into it. And again, I mean that in the sense of where you have been historically in terms of your career, not, “Look what you've done,” which is a subtext that I could be accused of throwing in sometimes.

Sam: I used to hear that a lot from my parents, actually.

Corey: Oh, yeah. That was my name growing up. But you've done a lot of things, and you've transitioned from notable company making significant impact on the industry, to the next one, to the next one. And you've been in high-flying roles, doing lots of really interesting stuff. What's the common thread between all those things?

Sam: I'm an intensely curious person, and the thing that I'm most curious about is distributed cognition. And that might not be obvious from what you see is kind of the… Lego blocks of my career, but I studied cognitive science in college when that was not really something that was super well known. So, I graduated from UC San Diego in '94 doing neuroscience, artificial intelligence, and psychology, because I just couldn't stop thinking about thinking; I was just fascinated with how it worked.

So, then I wanted to build software systems that would help people learn. And then I wanted to build distributed software systems. And then I wanted to learn how to work with people who were thinking about building the distributed software systems. So, you end up kind of going up this curve of, like, complexity about how do we think? How do we think alone? How do we learn to think? How do we think together?

And that's the directed path through my software engineering career, into management, into middleware at BEA, into open-source at Microsoft, because that's an amazing demonstration of distributed cognition. How, you know, at the time in 2007, I think, SourceForge had 100,000 open-source projects, which was, like, mind-boggling. Some of them even worked together, but all of them represented these groups of people, flung around the world, collaborating on something that was just fundamentally useful, that they were curious about. Kind of did the same thing into APIs—because APIs are an even better way to reuse for some cases than having the source code—at Apigee.
And kept growing up through that into, how are we building larger-scale thinking systems like Cloud Foundry, which took me into Google and Kubernetes, and then some applications of that in Autodesk and now DataStax. So, I love building companies. I love helping people build companies because I think business is distributed cognition. So, those businesses that build distributed systems, for me, are the most fascinating.

Corey: You were basically handed a heck of a challenge as far as, “Well, help set open-source strategy,” back at Microsoft, in the days where that was a punchline. And credit where due, I have to look at the Microsoft of today, and it's not a joke. You can have your arguments about them, but again, in those days, a lot of us built our entire personality on hating Microsoft. Some folks never quite evolved beyond that, but it's a new ballgame, and it's very clear that the Microsoft of yesteryear and the Microsoft of today are not completely congruent. What was it like at that point, understanding that as you're working with open-source communities, you're doing that from a place of employment with a company that was widely reviled in the space?

Sam: It was not lost on me. The irony, of course, was that—

Corey: Well, thank God, because otherwise the question would have been, “What do you mean they didn't like us?”

Sam: [laugh].

Corey: Which, on some levels, like, yeah, that's about the level of awareness I would have expected in that era, but contrary to popular opinion, execs at these companies are not generally oblivious.

Sam: Yeah, well, if I'd been clever as a creative humorist, I would have given you that answer instead of my serious answer, but for some reason, my role in life is always to be the straight guy. I used to have Slashdot as my homepage, right? I loved when I'd see some conspiracy theory about, you know, Bill Gates dressed up as the Borg, taking over the world. My first startup, actually in '97, was crushed by Microsoft. They copied our product, copied the marketing, and bundled it into Office, so I had lots of reasons to dislike Microsoft.

But in 2004, I was recruited into their venture capital team, which I couldn't believe. It was really a place where they were like, “Hey, we could do better at helping startups succeed, so we're going to evangelize their success—if they're building with Microsoft technologies—to VCs, to enterprises; we'll help you get your first big enterprise deal.” I was like, “Man, if I had this a few years ago, I might not be working.” So, let's go try to pay it forward.

I ended up in open-source by accident. I started going to these conferences on Software as a Service. This is back in 2005, when people were just starting to light up, like, Silicon Valley Forum, where, you know, the CEO of Demandware would talk, right? We'd hear all these different ways of building a new business, and they all kept talking about their tech stack being Linux, Apache, MySQL, and PHP. I went to one eight-hour conference, and Microsoft technologies were mentioned for about 12 seconds in two separate chunks. So, for six seconds, he was like, “Oh, and also we really like Microsoft SQL Server for our data layer.”

Corey: Oh, Microsoft SQL Server was fantastic. And I know that's a weird thing for people to hear me say, just because I've been renowned recently for using Route 53 as the primary data store for everything that I can. But there was nothing quite like that as far as having multiple write nodes, being able to handle sharding effectively.
It was expensive, and you would take a bath on the price come audit time, but people were not rolling it out unaware of those things. This was a trade-off that they were making.

Oracle has a similar story with databases. It's, yeah, people love to talk smack about Oracle and its business practices for a variety of excellent reasons—at least in the database space; that hasn't quite made it to cloud yet, knock on wood—but people weren't deploying it because they thought Oracle was warm and cuddly as a vendor; they did it because they could tolerate the rest of it because their stuff works.

Sam: That's so well said, and people don't give them the credit that's due. Like, when they built hypergrowth in their business, like… they had a great product; it really worked. They made it expensive, and they made a lot of money on it, and I think that was why you saw MySQL so successful, and why, if you were looking for a spec that worked, that you could talk through an open driver like ODBC or JDBC or whatever, you could swap to Microsoft SQL Server. But I walked out of that and came back to the VC team and said, “Microsoft has a huge problem. This is a massive market wave that's coming. We're not doing anything in it. They use a little bit of SQL Server, but there's nothing else in your tech stack that they want, or like, or can afford, because they don't know if their businesses are going to succeed or not. And they're going to go out of business trying to figure out how much licensing cost they would pay to you in order to consider using your software. They can't even start there. They have to start with open-source. So, if you're going to deal with SaaS, you're going to have to have open-source, and get it right.”

So, I worked with some folks in the industry, wrote a ten-page paper, sent it up to Bill Gates for Think Week. Didn't hear much back. Brought a new strategy to the head of developer platform evangelism, Sanjay Parthasarathy, who suggested that the idea of discounting software to zero for startups—with the hope that they would end up doing really well with it in the future as Software as a Service companies—was dead on arrival. Dumb idea; bring it back. That actually became BizSpark, the most popular program in Microsoft partner history.

And then about three months later, I got a call from this guy, Bill Hilf. And he said, “Hey, this is Bill Hilf. I do open-source at Microsoft. I work with Bill Gates. He sent me your paper. I really like it. Would you consider coming up and having a conversation with me? Because I want you to think about running open-source technology strategy for the company.” And at this time I'm, like, 33 or 34. And I'm like, “Who, me? You've got to be joking.” And he goes, “Oh, and also, you'll be responsible for doing quarterly deep technical briefings with Bill… Gates.” I was like, “You must be kidding.” And so of course I had to check it out. One thing led to another and all of a sudden, with not a lot of history in the open-source community but coming into it with a strategist's eye and with a technologist's eye, saying, “This is a problem we've got to solve. How do we get after this pragmatically?” And the rest is history, as they say.

Corey: I have to say that you are the Chief Strategy Officer at DataStax, and I pull up your website quickly here, and a lot of what I tell earlier-stage companies is effectively more or less what you have already done.
Corey: I have to say that you are the Chief Strategy Officer at DataStax, and I pull up your website quickly here, and a lot of what I tell earlier-stage companies is effectively more or less what you have already done. You haven't named yourself after the open-source project that underlies the bones of what you have built, so you're not going to wind up in the same glorious challenges that, for example, Elastic or MongoDB have in some ways. You have a pricing page that speaks both to the reality of, “It's two in the morning. I'm trying to get something up and running and I want you the hell out of my way. Just give me something that I can work with, a reasonable free tier, and don't make me talk to a salesperson.” But also, your enterprise tier is, “Click here to talk to a human being,” which is speaking enterprise slash procurement slash, oh, there will be contract negotiation on these things.

It's being able to serve different ends of your market depending upon who it is that encounters you, without being off-putting to any of those. And it's deceptively challenging for companies to pull off or get right. So clearly, you've learned lessons by doing this. That was the big problem with Microsoft for the longest time. It's, if I want to use some Microsoft stuff—once you were able to download things from the internet, it changed slightly—but even then it was one of those, “What exactly am I committing to here as far as signing up for this? And am I giving them audit rights into my environment? Is the BSA about to come out of nowhere and hit me with a surprise audit and find out that various folks throughout the company have installed this somewhere and now I owe more than the company's worth?” That was always the haunting fear that companies had back then.

These days, I like the approach that companies are taking with the SaaS offering: you pay for usage. On some level, I'd prefer it slightly differently, in a pay-per-seat model, because at least then you can predict the pricing, but no one is getting surprise-submarined with this type of thing on an audit basis, and then they owe damages and payment in arrears and someone has them over a barrel. It's just, “Oh. The bill this month was higher than we expected.” I like that model. I think the industry does, too.

Sam: I think that's super well said. As I used to joke at BEA Systems, nothing says ‘I love you' to a customer like an audit, right? That's kind of a one-time-use strategy. If you're going to go audit licenses to get your revenue in place, you might be inducing some churn there. It's a huge fix for the structural problem in pricing that I think packaged software had, right?

When we looked at Microsoft software versus open-source software, and particularly Windows versus Linux, you would have a structure where sales reps were really compensated to sell as much as possible upfront so they could get the best possible commission on what might be used perpetually. But then if you think about it, like, the boxes in a curve, right—if you do that calculus approximation of a smooth curve—a perpetual software license is a huge box, and there's an enormous amount of waste in there. And customers figured that out, so as soon as you can go to pay-per-use or pay-as-you-go, you start to smooth that curve, and now what you get is what you deserve, right, as opposed to getting billed for way more cost than you expect. So, I think this model is really super well understood now. Kind of the long-run high point of open-source meets cloud meets Software as a Service: you look at what companies like MongoDB, and Confluent, and Elastic, and Databricks are doing. And they've really established a very good path through the jungle of how to succeed as a software company.
So, it's still difficult to implement, but there are really world-class guides right now.

Corey: Moving beyond where Microsoft was back in the aughts, you were then hired as a VP over at Google. And in that era, the fact that you were hired as a VP at Google is fascinating. They preferred to grow those internally, generally from engineering. So, first question: when you were being hired as a VP in the product org, did they make you solve algorithms on a whiteboard to get there?

Sam: [laugh]. They did not. I did have somewhat of an advantage [because they 00:13:36] could see me working pretty closely as the CEO of the Cloud Foundry Foundation. I'd worked closely with Craig McLuckie, who notably brought Kubernetes to the world along with Joe Beda, and with Eric Brewer, and a number of others.

And he was my champion at Google. He was like, “Look, you know, we need him doing Kubernetes. Let's bring Sam in to do that.” So, that was helpful. I also wrote a [laugh] 2000-word strategy document, just to get some thoughts out of my head. And I said, “Hey, if you like this, great. If you don't, throw it away.” So, the interviews were actually very much not solving problems on a whiteboard. They were super collaborative, really excellent conversations. It was slow—

Corey: Let's be clear, Craig McLuckie's most notable achievement was being a guest on this podcast back in Episode 243. But I'll say that this is a close second.

Sam: [laugh]. You're not wrong. And of course now with Heptio and their acquisition by VMware.

Corey: Ehh, they're making money beyond the wildest dreams of avarice, that's all well and good, but an invite to this podcast, that's where it's at.

Sam: Well, he should really come on again; he can double down and beat everybody. That can be his landmark achievement: a two-timer on Screaming in [the] Cloud.

Corey: You were at Google; you were at Microsoft. These are the big titans of their era, in some respects—not to imply that they're has-beens; they're bigger than ever—but it's also a more crowded field in some ways. I guess completing the trifecta would be Amazon, but you've had the good judgment never to work there, directly of course. Now they're clearly in your market. You're at DataStax, which is, among other things, built on Apache Cassandra, and they launched their own Cassandra service named Keyspaces because no one really knows why or how they name things.

And of course, looking under the hood at the pricing model, it's pretty clear that it really is just DynamoDB wearing some Groucho Marx glasses with a slight upcharge for API-level compatibility. Great. So, I don't see it a lot in the real world and that's fine, but I'm curious as to your take on looking at all three of those companies at different eras. There was always the threat in the open-source world that they are going to come in and crush you. You said earlier that Microsoft crushed your first startup.

Google is an interesting competitor in some respects; people don't really have that concern about them. And your job as a Chief Strategy Officer at Amazon is taken over by a Post-it Note that simply says ‘yes' on it because there's nothing they're not going to do, or try, and experiment with.
So, from your perspective, if you look at the titans, who is it that you see as the largest competitive threat these days, if that's even a thing?

Sam: If you think about Sun Tzu and The Art of War, right—a lot of strategy comes from what we've learned from military environments—fighting a symmetric war, right, using the same weapons and the same army against a symmetric opponent, but having 1/100th of the personnel and 1/100th of the money, is not a good plan.

Corey: “We're going to lose money, going to be outcompeted; we'll make it up in volume. Oh, by the way, we're also slower than they are.”

Sam: [laugh]. So, you know, trying to come after AWS, or Microsoft, or Google as an independent software company, pound-for-pound, face-to-face, right, full-frontal assault, is psychotic. What you have to do, I think, at this point is to understand that these are each companies that are much like we thought about Linux, and you know, Macintosh, and Windows as operating systems. They're now the operating systems of the planet. So, that creates some economies of scale, some efficiencies for them. And for us. Look at how cheap object storage is now, right? So, there's never been a better time in human history to create a database company because we can take the storage out of the database and hand it over to Amazon, or Google, or Microsoft to handle it with 13 nines of durability on a constantly falling cost basis.

So, that's super interesting. So, you have to prosecute the structure of the world as it is, based on where the giants are and where they'll be in the future. Then you have to turn around and say, like, “What can they never sell?”

So, Amazon can never sell something that is standalone, right? They're a parts factory, and if you buy into the Amazon-first strategy of cloud computing—which we did at Autodesk when I was VP of cloud platform there—everything is a primitive that works inside Amazon, but they're not going to build things that don't work outside of the Amazon primitives. So, your company has to be built on the idea that there's a set of people who value something that is purpose-built for a particular use case, that you can start to broaden out. It's really helpful if it's something that can help them move a really valuable asset away from the center of gravity that is a cloud. And that's why data is super interesting. Nobody wakes up in the morning and says, “Boy, I had such a great conversation with Oracle over the last 20 years beating me up on licensing. Let me go find a cloud vendor and dump all of my data in that so they can beat me up for the next 20 years.” Nobody says that.

Corey: It's the idea of data portability that drives decision-making, which makes people, of course, feel better about not actually moving anywhere. But the fact that they're not locked in strategically, in a way that requires a full software re-architecture and data model rewrite, is compelling. I'm a big believer in convincing people to make decisions that look a lot like that.

Sam: Right. And so that's the key, right? So, when I was at Autodesk, we went from our hundred-million-dollar, you know, committed spend with a 19% discount on the big three services to, like—we started to realize, when we were going to burn through that, that we were spending $60 million or so a year with 20% annual growth as the cloud part of the business grew. Thought, “Okay, let's renegotiate. Let's go and do a $250 million deal.
I'm sure they'll give us a much better discount than 19%.” Short story is, they came back and said, “You know, we're going to take you from an already generous 19% to an outstanding 22%.” We thought, “Wait a minute, we already talked to Intuit. They're getting a 40% discount on a $400 million spend.”

So, you know, math is hard, but, like, 40% minus 22% is 18%, and 18% of $250 million—about $45 million—is a lot of money. So, we thought, “What is going on here?” And we realized we just had no credible threat of leaving, and Intuit did, because they had built a cross-cloud-capable architecture. And we had not. So, now stepping back into the kind of world that we're living in in 2021, if you're an independent software company, especially if you have the unreasonable advantage of being an open-source software company, you have got to be doing your customers good by giving them cross-cloud capability. It could be simply like the Amdahl coffee cup that Amdahl reps used to put as landmines for the IBM reps, later—I can tell you that story if you want—even if it's only a way to save money for your customer by using your software, when it gets up to tens and hundreds of millions of dollars, that's a really big deal.

But they also know that data is super important, so the option value of being able to move if they have to—being able to pick up that stick instead of saying, “Nice doggy”—means we have to be on their side, right? So, there's almost a detente that we have to create now, as cloud vendors, working in a world that's invented and operated by the giants.

Corey: This episode is sponsored by our friends at Oracle. HeatWave is a new high-performance accelerator for the Oracle MySQL Database Service, although I insist on calling it “my squirrel.” While MySQL has long been the world's most popular open source database, shifting from transacting to analytics required way too much overhead and, ya know, work. With HeatWave you can run your OLTP and OLAP—don't ask me to ever say those acronyms again—workloads directly from your MySQL database and eliminate the time-consuming data movement and integration work, while also performing 1100X faster than Amazon Aurora and 2.5X faster than Amazon Redshift, at a third of the cost. My thanks again to Oracle Cloud for sponsoring this ridiculous nonsense.

Corey: When we look across the, I guess, the ecosystem as it's currently unfolding, a recurring challenge that I have to the existing incumbent cloud providers is they're great at offering the bricks that you can use to build things, but if I'm starting a company today, I'm not going to look at building it myself out of, “Ooh, I'm going to take a bunch of EC2 instances, or Lambda functions, or popsicles and string and turn it into this thing.” I'm going to want to tie together things that are way higher-level. In my own case, now I wind up paying for Retool, which is, effectively—yeah, it runs on some containers somewhere, presumably, I think in Azure, but don't quote me on that. And that's great. Could I build my own thing like that?

Absolutely not. I would rather pay someone to tie it together. Same story. Instead of building my own CRM by running some open-source software on an EC2 instance, I wind up paying for Salesforce or Pipedrive or something in that space. And so on, and so forth.

And a lot of these companies that I'm doing business with aren't themselves running on top of AWS. But for web hosting, for example: if I look at the reference architecture for a WordPress site, AWS's diagram looks like a punchline.
It is incredibly overcomplicated. And I say this as someone who ran large WordPress installations at Media Temple many years ago. Now, I have the good sense to pay WP Engine. And on a monthly basis, I give them money and they make the website work.

Sure, under the hood, it's running on top of GCP or AWS somewhere. But I don't have to think about it; I don't have to build this stuff together and think about the backups and the failover strategy and the rest. The website just works. And that is increasingly the direction that business is going; things commoditize over time. And AWS in particular has done a terrible job, in my experience, of differentiating what it is they're doing in the language that their customers speak.

They're great at selling things to existing infrastructure engineers, but folks who are building something from scratch aren't usually in that cohort. It's a longer story with time and, “Well, we're great at being able to sell EC2 instances by the gallon.” Great. Are you capable of going to a small doctor's office somewhere in the American Midwest and offering them an end-to-end solution for managing patient data? Of course not. You can offer them a bunch of things they can tie together into something that will suffice if they all happen to be software engineers, but that's not the opportunity.

So instead, other companies are building those solutions on top of AWS, capturing the margin. And if there's one thing guaranteed to keep Amazon execs awake at night, it's the idea of someone who isn't them making money somehow somewhere, so I know that's got to rankle them, but they do not speak that language. At all. Longer-term, I only see that as a more and more significant crutch. On a long enough timeframe here, we're talking about them becoming the CenturyLinks of the world, the tier-one backbone provider that everyone uses, but no one really thinks about because they're not a household name.

Sam: That is a really thoughtful perspective. I think the diseconomies of scale that you're pointing to start to creep in, right? Because when you have to sell compute units by the gallon, right, you can't care if it's a gallon of milk, [laugh] or a gallon of oil, or, you know, a gallon of poison. You just have to keep moving it through. So, the shift that I think they're going to end up having to make pragmatically—and you start to see some signs of it; like, you know, they hired but could not retain Matt [Acey 00:23:48]. He did an amazing job of bringing them to some pragmatic realization that they need to partner with open-source. But more broadly, when I think about Microsoft in the 2000s as they were starting to learn their open-source lessons, we were also able to pull on Microsoft's deep competency in partners. So, most people didn't do the math on this. I was part of the field governance council so I understood exactly how the Microsoft business worked, to the level that I was capable. When they had $65 billion in revenue, they produced $24 billion in profit through an ecosystem that generated $450 billion in revenue. So, for every dollar Microsoft made, it was $8 to partners. It was a fundamentally platform-shaped business, and that was how they were able to get into doctors' offices in the Midwest, and kind of fit the curve that you're describing of all of those longtail opportunities that require so much care and that are complex to prosecute. They solved for their diseconomies of scale by having 1.2 million partner companies.
So, will Amazon figure that out, and will they hire, right, enough people who've done this before from Microsoft to become world-class in partnering? That's kind of an exercise left to the [laugh] reader, right? Where will that go over time? But I don't see another better mathematical model for dealing with the diseconomies of scale you have when you're one of the very largest providers on the planet.

Corey: The hardest problem as I look at this is, at some point, you hit a point of scale where smaller things look a lot less interesting. I get that all the time when people say, “Oh, you fix AWS bills, aren't you missing out by not targeting Google bills and Azure bills as well?” And it's, yeah. I'm not VC-backed. It turns out that if I limit the customer base that I can effectively service to only AWS customers, yeah, turns out I'm not going to starve anytime soon. Who knew? I don't need to conquer the world, and that feels increasingly antiquated, at least going by the stories everyone loves to tell.

Sam: Yeah, it's interesting to see how cloud makes strange bedfellows, right? We started seeing this in, like, 2014, 2015: weird partnerships that you're like, “There's no way this would happen.” But the cloud economics, which go back to utilization rather than what it used to be, which was software lock-in, just changed who people were willing to hang out with. And now you see companies like Databricks going, you know, we do an amazing amount of business, effectively competing with Amazon, selling Spark services on top of predominantly Amazon infrastructure, and everybody seems happy with it. So, there's some hint of a new sensibility of what the future of partnering will be. We used to call it coopetition a long time ago, which is kind of a terrible word, but at least it shows that there's some nuance in you can't compete with everybody because it's just too hard.

Corey: I wish there were better ways of articulating these things because it seems, from the outside world, you have companies like Amazon and Microsoft and Google who go and build out partner networks because they need that external accessibility into various customer profiles that they can't speak to super well themselves, but they're also coming out with things that wind up competing directly or indirectly with all of those partners at the same time. And I don't get it. I wish that there were smarter ways to do it.

Sam: It is hard to even talk about it, right? One of the things that I think we've learned from philosophy is, if we don't have a word for it, we can't be intelligent about it. So, there's a missing semantics here for being able to describe the complexity of: where are you partnering? Where are you competing? Where are you differentiating? In an ecosystem which is moving and changing.

I tend to look at the tools of game theory for this, which is to look at things as either, you know, nonzero-sum games or zero-sum games. And if it's a nonzero-sum game, which I think are the most interesting ones, can you make it a positive-sum game? And who can you play positive-sum games with? An organization as big as Amazon, or as big as Microsoft, or even as big as Google isn't ever completely coherent with itself.
So, thinking about this as an independent software company, it doesn't matter if one of these hyperscalers has a part of their business that competes with your entire business, because your business probably drives utilization of a completely different resource in their company—one that you can partner with, within them, against them, effectively. Right?

For example, Cassandra is an amazingly powerful but demanding workload on Kubernetes. So, there's a lot of Cassandra on EKS. You grow a lot of workload, and the EKS business does super well. Does that prevent us from working with Amazon because they have Dynamo or because they have Keyspaces? Absolutely not, right?

So, this is when those companies get so big that they are almost their own forest, right, of complexity: you can kind of get in, hang out, do well, and pretty much never see the competitive product, unless you're explicitly looking for it, which I think is a huge danger for us as independent software companies. And I would say this to anybody doing strategy for an organization like this, which is: don't obsess over the tiny part of their business that competes with yours, and do not pay attention to any of the marketing that they put out that looks competitive with what you have. Because if you can't figure out how to make a better product and sell it better to your customers as a single-purpose corporation, you have bigger problems.

Corey: I want to change gears slightly to something that's probably a fair bit more insulting, but that's okay. We're going to roll with it. That seems to be the theme of this episode. You have been, in effect, a CIO a number of times at different companies. And if we take a look at the typical CIO tenure, industry-wide, it's not long; it approaches the territory, from an executive perspective, of, “Be sure not to buy green bananas. You might not be here by the time they ripen.” And I'm wondering what it is that drives that, and how you make a mark in a relatively short time frame when you're providing inputs and deciding on strategy, and those decisions may not bear fruit for years.

Sam: CIO used to—we used to say it stood for ‘Career Is Over' because the tenure is so short. I think there's a couple of reasons why it's so short. And I think there's a way I believe you can have impact in a short amount of time. I think the reason that it's been short is because people aren't sure what they want the CIO role to be.

Do they want it to be a glorified finance person who's got a lot of data processing experience, but now really has got, you know, maybe even an MBA in finance, but is not focusing on value creation? Do they want it to be somebody who's an all-singing, all-dancing Chief Data Officer with a CTO background who did something amazing and solved a really hard problem? The definition of success is difficult. Often CIOs now also have security under them, which is literally a job I would never ever want to have. Do security for a public corporation? Good Lord, that's a way to lose most of your life. You're the only executive other than the CEO that the board wants to hear from. Every sing—

Corey: You don't sleep; you wait, in those scenarios. And oh, yeah, people joke about ablative CSOs in those scenarios. Yeah, after SolarWinds, you try and get an ablative intern instead, but those don't work as well. It's a matter of waiting for an inevitability. One of the things I think is misunderstood about management broadly, is that you are delegating work, but not the responsibility.
The responsibility rests with you.

So, when companies have these statements blaming some third-party contractor, it's no, no, no. I'm dealing with you. You were the one that gave my data to some sketchy randos. It is your responsibility that data has now been compromised. And people don't want to hear that, but it's true.

Sam: I think that's absolutely right. So, you have this high-risk, medium-reward, very fungible job definition, right? If you ask all of the CIO's peers what their job is, they'll probably all tell you something different that represents their wish list. The thing that I learned at Autodesk—I was only there for 15 months, but we established a fundamental transformation of how the work of cloud platform is done at the company, and it's still in place a couple of years later.

You have to realize that you're a change agent, right? You're actually being hired to bring in the bulk of all the different biases and experiences you have to solve a problem that is not working, right? So, when I got to Autodesk, they didn't even know what their uptime was. It took three months to teach the team how to measure the uptime. Turned out the uptime was 97.7% for the cloud, for the world's largest engineering software company.

That is 200 hours a year of unplanned downtime, right? That is not good. So, a complete overhaul [laugh] was needed. Understanding that as a change agent, your half-life is 12 to 18 months, you have to measure success not on tenure, but on your ability to take good care of the patient, right? It's going to be a lot of pain, you're going to work super hard, you're going to have to build trust with everyone, and then people are still going to hate you at the end. That is something you just have to kind of take on.

A friend of mine, Jason Warner, who joined Redpoint Ventures recently, said this when he was the CTO of GitHub: “No one is a villain in their own story.” So, you realize, going into a big organization, people are going to make you a villain, but you still have to do incredibly thoughtful, careful work that's going to take care of them for a long time to come. And those are the kinds of CIOs that I can relate to very well.

Corey: Jason is great. You're name-dropping all the guests we've had. My God, keep going. It's a hard thing to rationalize and wrap heads around. It's one of those areas where you will not be measured during your tenure in the role, in some respects. And, of course, that leads to the cynical perspective as well, where, well, someone's not going to be here long, and if they say, “Yeah, we're just going to keep being stewards of the change that's already underway,” well, that doesn't look great, so quick, time to do a cloud migration, or a cloud repatriation, or time to roll something else out. A bit of a different story.

Sam: One of the biggest challenges is how do you get the hearts and the minds of the people who are in the organization when they are no fools, and their expectation is, like, “Hey, this company's been around for decades, and we go through cloud leaders or CIOs like Wendy's goes through hamburgers.” They could just cloud-wash, right, or change-wash all their language. They could use the new language to describe the old thing because all they have to do is get through the performance review and outwait you.
So, there's always going to be a level of defection because it's hard to change; it's hard to think about new things.

So, the most important thing is: how do you get into people's hearts and minds and enable them to believe that the best thing they could do for their career is to come along with the change? And I think that was what we ended up getting right in the Autodesk cloud transformation. And that requires endless optimism, and there's no room for cynicism, because the cynicism is going to creep in around the edges. So, what I found on the job is, you just have to get up every morning and believe everything is possible and transmit that belief to everybody.

So, if it seems naive or ingenuous, I think that doesn't matter, as long as you can move people's hearts in each conversation towards, like, “Oh, this person cares about me. They care about a good outcome for me. I should listen a little bit more and maybe make a 1% change in what I'm doing.” Because with 1% compounded daily for a year, you can actually get something done in the lifetime of a CIO.

Corey: And I think that's probably a great place to leave it. If people want to learn more about what you're up to, how you think about these things, how you view the world, where can they find you?

Sam: You can find me on Twitter, I'm @sramji, S-R-A-M-J-I, and I have a podcast that I host called Open||Source||Data, where I invite innovators, data nerds, computational networking nerds to hang out and explain to me, a software programmer, what is the big world of open-source data all about, what's happening with machine learning, and what would it be like if you could put data in a container, just like you could put code in a container, and how might the world change? So, that's the Open||Source||Data podcast.

Corey: And we'll of course include links to that in the [show notes 00:35:58]. Thanks so much for your time. I appreciate it.

Sam: Corey, it's been a privilege. Thank you so much for having me.

Corey: Likewise. Sam Ramji, Chief Strategy Officer at DataStax. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with a comment telling me exactly which item in Sam's background that I made fun of is the place that you work at.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.

Announcer: This has been a HumblePod production. Stay humble.
In the previous episodes, we looked at the rise of patents and copyrights in software and their impact on the nascent computer industry. But a copyright is a right. And that right can be given to others in whole or in part. We have all benefited from software where the right to copy was waived, and it's shaped the computing industry as much, if not more, than proprietary software. Free and Open Source Software (FOSS for short) is a blanket term to describe software that's free and/or whose source code is distributed for varying degrees of tinkeration. It's a movement and a choice. Programmers can commercialize our software. But we can also distribute it free of copy protections. And there are about as many licenses as there are opinions about what is unique, types of software, underlying components, etc. But given that many choose to commercialize their work products, how did a movement arise that specifically didn't?

The early computers were custom-built to perform various tasks. Then computers and software were bought as a bundle and organizations could edit the source code. But as operating systems and languages evolved and businesses wanted their own custom logic, a cottage industry for software started to emerge. We see this in every industry: as an innovation becomes more mainstream, the expectations and needs of customers progress at an accelerated rate. That evolution took about 20 years to happen following World War II, and by 1969, the software industry had evolved to the point that IBM faced antitrust charges for bundling software with hardware. And after that, the world of software would never be the same.

The knock-on effect was that in the 1970s, Bell Labs pushed away from MULTICS and developed Unix, which AT&T then gave away as compiled code to researchers. And so proprietary software became a growing industry, with AT&T charging for commercial licenses as the bushy hair and sideburns of the '70s were traded for the yuppie culture of the '80s. In the meantime, software had become copyrightable due to the findings of CONTU and the codifying of the Copyright Act of 1976. Bill Gates sent his infamous “Open Letter to Hobbyists” in 1976 as well, defending the right to charge for software in an exploding hobbyist market. And then Apple v Franklin led to the ability to copyright compiled code in 1983. There was a growing divide between those who'd been accustomed to being able to copy software freely and edit source code, and those who, in an up-market sense, just needed supported software that worked—and were willing to pay for it, seeing the benefits that automation was having on the capabilities to scale an organization.

And yet there were plenty who considered copyrighting software immoral. One of the best remembered is Richard Stallman, or RMS for short. Steven Levy described Stallman as “The Last of the True Hackers” in his epic book “Hackers: Heroes of the Computer Revolution.” In the book, he describes the MIT that Stallman joined, where there weren't passwords and people didn't yet pay for software, and then goes through the emergence of the LISP language and the divide that formed between Richard Greenblatt, who wanted to keep The Hacker Ethic alive, and those who wanted to commercialize LISP. The Hacker Ethic was born from the young MIT students who freely shared information and ideas with one another and helped push forward computing in an era they thought was purer in a way, as though it hadn't yet been commercialized.
The schism saw the death of the hacker culture, and two projects came out of Stallman's technical work: emacs, a text editor that is still included freely in most modern Unix variants, and the GNU project. Here's the thing: MIT was sitting on patents for things like core memory and thrived in part due to the commercialization, or weaponization, of the technology they were producing. The industry was maturing, and since the days when kings granted patents, maturing technology would be commercialized using that system. And so Stallman's nostalgia gave us the GNU project, born from an idea that the industry moved faster in the days when information was freely shared and that knowledge was meant to be set free. For example, he wanted the source code for a printer driver so he could fix it and was told it was protected by an NDA and so he couldn't have it. A couple of years later he announced GNU, a recursive acronym for GNU's Not Unix. The next year he built a compiler called GCC, and the next year released the GNU Manifesto, launching the Free Software Foundation, often considered the charter of the free and open source software movement.

Over the next few years as he worked on GNU, he found emacs had a license, GCC had a license, and the rising tide of free software was all distributed with unique licenses. And so the GNU General Public License was born in 1989, allowing organizations and individuals to copy, distribute, and modify software covered under the license, but with one catch: if someone modified the source, they had to release those changes with any binaries they distributed as well.

The University of California, Berkeley had benefited from a lot of research grants over the years, and many of their works could be put into the public domain. They had brought Unix in from Bell Labs in the '70s, and Sun cofounder and Java author Bill Joy worked under professor Fabry, who brought Unix in. After working on a Pascal compiler that Unix coauthor Ken Thompson left behind at Berkeley, Joy and others started working on what would become BSD, not exactly a clone of Unix but with interchangeable parts. They bolted on the OSI model to get networking, and through the '80s, as Joy left for Sun and DEC got ahold of that source code, there were variants and derivatives like FreeBSD, NetBSD, Darwin, and others. The licensing was pretty permissive and simple to understand:

Copyright (c) <year> <copyright holder>. All rights reserved. Redistribution and use in source and binary forms are permitted provided that the above copyright notice and this paragraph are duplicated in all such forms and that any documentation, advertising materials, and other materials related to such distribution and use acknowledge that the software was developed by the <organization>. The name of the <organization> may not be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.

By 1990, the Board of Regents at Berkeley accepted a four-clause BSD license that spawned a class of licenses. While it's matured into other forms, like the 0-clause license, it's one of my favorites, as it is truest to the FOSS cause. And the '90s gave us the Apache License, from the Apache Group, loosely based on the BSD License, and then in 2004 leaning away from that with the release of the Apache License 2.0, which was more compatible with the GPL license.
Given the modding nature of Apache, they didn't require derivative works to also be open-sourced, but did require leaving the license in place for unmodified parts of the original work. GNU never really caught on as an OS in the mainstream, although a collection of tools did. The main reason the OS didn't go far is probably because Linus Torvalds started releasing prototypes of his Linux operating system in 1991. Torvalds used the GNU General Public License v2, or GPLv2, to license his kernel, having been inspired by a talk given by Stallman. GPL 2 had been released in 1991, and something else was happening as we turned into the 1990s: the Internet. Suddenly the software projects being worked on weren't just distributed on paper tape or floppy disks; they could be downloaded. The rise of Linux and Apache coincided, and so many a web server and site ran that LAMP stack, with MySQL and PHP added in there. All open source in varying flavors of what open source was at the time. And collaboration in the industry was at an all-time high.

We got the rise of teams of developers who would edit and contribute to projects. One of these was a tool for another aspect of the Internet: email. It was called popclient. Here Eric S. Raymond, or ESR for short, picked it up and renamed it to fetchmail, releasing it as an open source project. Raymond presented on his work at the Linux Congress in 1997, expanded that work into an essay, and then the essay into “The Cathedral and the Bazaar,” where bazaar is meant to be like an open market. That inspired many to open source their own works, including the Netscape team, which resulted in Mozilla and so Firefox—and another book called “Freeing the Source: The Story of Mozilla” from O'Reilly. By then, Tim O'Reilly was a huge proponent of this free or source-code-available type of software, as it was known. And companies like VA Linux were growing fast. And many wanted to congeal around some common themes. So in 1998, Christine Peterson came up with the term “open source” in a meeting with Raymond, Todd Anderson, Larry Augustin, Sam Ockman, and Jon “Maddog” Hall, author of the first book I read on Linux. Free software it may or may not be, but open source as a term quickly proliferated throughout the lands.

By 1998 there was this funny little company called TiVo that was doing a public beta of a little box with a Linux kernel running on it that bootstrapped a pretty GUI to record TV shows on a hard drive on the box and play them back. You remember when we had to wait for a TV show, right? Or back when some super-fancy VCRs could record a show at a specific time to VHS (but mostly failed for one reason or another)? Well, TiVo meant to fix that. We did an episode on them a couple of years ago, but we skipped the term Tivoization and the impact they had on the GPL. As the '90s came to a close, VA Linux and Red Hat went through great IPOs, bringing about an era where open source could mean big business. And true to the cause, they shared enough stock with Linus Torvalds to make him a millionaire as well. And IBM pumped a billion dollars into open source, with Sun moving to open-source OpenOffice.org. Now, what really happened there might be that by then Microsoft had become too big for anyone to effectively compete with, and so they all tried to pivot around to find a niche, but it still benefited the world and open source in general.

By Y2K there was a rapidly growing number of vendors out there putting Linux kernels onto embedded devices. TiVo happened to be one of the most visible.
Some in the Linux community felt like they were being taken advantage of, because suddenly you had a vendor making changes to the kernel, but their changes only worked on their hardware, and they blocked users from modifying the software. So the Free Software Foundation updated the GPL, bundling in some other minor changes, and we got the GNU General Public License, Version 3, in 2007. There was a lot more in GPL 3, given that so many organizations were involved in open source software by then. Here, the full license text and original copyright notice had to be included, along with a statement of significant changes, and source code had to be made available with binaries. And commercial Unix variants struggled, with SGI going bankrupt in 2006 and the use of AIX and HP-UX waning.

Many of these open source projects flourished because of version control systems and the web. SourceForge was created by VA Software in 1999 and is a free service that can be used to host open source projects. Concurrent Versions System, or CVS, had been written by Dick Grune back in 1986 and quickly became a popular way to have multiple developers work on projects, merging diffs of code repositories. That gave way to git in the hearts of many a programmer after Linus Torvalds wrote that new versioning system in 2005. GitHub came along in 2008 and was bought by Microsoft in 2018 for $7.5 billion. Seeing a need for people to ask questions about coding, Stack Overflow was created by Jeff Atwood and Joel Spolsky in 2008. Now, we could trade projects on one of the versioning tools, get help with projects or find smaller snippets of sample code on Stack Overflow, or even Google random things (and often find answers on Stack Overflow). And so social coding became a large part of many a programmer's day. As did dependency management, given how many tools are used to compile a modern web app or app. I often wonder how much of the code in many of our favorite tools is actually original.

Another thought is that in an industry dominated by white males, it's no surprise that we often gloss over previous contributions. It was actually Grace Hopper's A-2 compiler that was the first software that was released freely with source for all the world to adapt. Sure, you needed a UNIVAC to run it, and so it might fall into the mainframe era. And with the emergence of minicomputers, we got Digital Equipment's DECUS for sharing software, leading in part to the PDP-inspired need for source that Stallman was so adamant about. General Motors developed the SHARE Operating System for the IBM 701 and made it available through the IBM user group called SHARE. The ARPAnet was free if you could get to it. TeX from Donald Knuth was free. The BASIC distribution from Dartmouth was academic, and yet Microsoft sold it for up to $100,000 a license (see Commodore). So it's no surprise that people avoided paying upstarts like Microsoft for their software, or that it took until the late '70s to get copyright legislation and common law.

But Hopper's contributions were kinda like open source v1, the work from RMS to Linux was kinda like open source v2, and once the term was coined and we got the rise of a name and more social coding platforms from SourceForge to GitHub, we moved into a third version of the FOSS movement. Today, some tools are free, some are open source, some are free as in beer (as you find in many a gist), some are proprietary. All are valid. Today there are also about as many licenses as there are programmers putting software out there. And here's the thing: they're all valid.
You see, every creator has the right to restrict the ability to copy their software. After all, it's their intellectual property. Anyone who chooses to charge for their software is well within their rights. Anyone choosing to eschew commercialization also has that right. And every derivative in between. I wouldn't judge anyone based on any model they choose. Just as those who distribute proprietary software shouldn't be judged for retaining their rights to do so.

Why not just post things we want to make free? Patents, copyrights, and trademarks are all a part of intellectual property—but as developers of tools, we also need to limit our liability, as we're probably not out there buying large errors-and-omissions insurance policies for every script or project we make freely available. Also, we might want to limit the abuse of our marks. For example, Linus Torvalds monitors the use of the Linux mark through the Linux Mark Institute. Apparently some William Dell Croce Jr. tried to register the Linux trademark in 1995, and Torvalds had to sue to get it back. He provides use of the mark through a free and perpetual global sublicense. Given that his wife won the Finnish karate championship six times, I wouldn't be messing with his trademarks.

Thank you to all the creators out there. Thank you for your contributions. And thank you for tuning in to this episode of the History of Computing Podcast. Have a great day.
About Jason
Jason is now the Managing Director at Redpoint Ventures.

Links:
GitHub: https://github.com/
@jasoncwarner: https://twitter.com/jasoncwarner
GitHub: https://github.com/jasoncwarner
Jasoncwarner/ama: https://github.com/jasoncwarner/ama

Transcript

Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey: This episode is sponsored in part by Honeycomb. When production is running slow, it's hard to know where problems originate: is it your application code, users, or the underlying systems? I've got five bucks on DNS, personally. Why scroll through endless dashboards while dealing with alert floods, going from tool to tool to tool that you employ, guessing at which puzzle pieces matter? Context switching and tool sprawl are slowly killing both your team and your business. You should care more about one of those than the other; which one is up to you. Drop the separate pillars and enter a world of getting one unified understanding of the one thing driving your business: production. With Honeycomb, you guess less and know more. Try it for free at Honeycomb.io/screaminginthecloud. Observability: it's more than just hipster monitoring.

Corey: This episode is sponsored in part by Liquibase. If you're anything like me, you've screwed up the database part of a deployment so severely that you've been banned from touching anything that remotely sounds like SQL at at least three different companies. We've mostly got code deployments solved for, but when it comes to databases, we basically rely on desperate hope, with a rollback plan of keeping our resumes up to date. It doesn't have to be that way. Meet Liquibase. It is both an open source project and a commercial offering. Liquibase lets you track, modify, and automate database schema changes across almost any database, with guardrails to ensure you'll still have a company left after you deploy the change. No matter where your database lives, Liquibase can help you solve your database deployment issues. Check them out today at liquibase.com. Offer does not apply to Route 53.

Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. I'm joined this week by Jason Warner, the Chief Technology Officer at GifHub, although he pronounces it differently. Jason, welcome to the show.

Jason: Thanks, Corey. Good to be here.

Corey: So, GitHub—as you insist on pronouncing it—is one of those companies that's been around for a long time. In fact, I went to a training conducted by one of your early folks, Scott Chacon, who taught how Git works over the course of a couple of days, and honestly, I left more confused than I did when I entered. It's like, “Oh, this is super awful. Good thing I'll never need to know this because I'm not really a developer.” And I'm still not really a developer and I still don't really know how Git works, but here we are.

And it's now over a decade later; you folks have been acquired by Microsoft, and you are sort of the one-stop shop, from the de facto perspective of, “I'm going to go share some code with people on the internet. I'll use GitHub to do it.” Because, you know, copying and pasting and emailing Microsoft Word documents around isn't ideal.

Jason: That is right.
And I think that a bunch of things that you mentioned there played into, you know, GitHub's early and sustained success. But my God, do you remember the old days when people had to email tar files around or drop them in weird spots?

Corey: What the hell do you mean by, “Old days?” It still blows my mind that the Linux kernel is managed by—they use Git, obviously. Linus Torvalds did write Git once upon a time—and it has the user interface you would expect for that. And the way that they collaborate is not through GitHub or anything like that. No, they use Git to generate patches, which they then email to the mailing list. Which sounds like I'm making it up, like, “Oh, well, yeah, tell another one, but maybe involve a fax machine this time.” But no, that is actually what they do.

Jason: It blew my mind when I saw that, too, by the way. And you realize, too, that workflows are workflows, and people will build interesting workflows to solve their use case. Now, obviously, anyone that you would be talking to in 2021, if you walked in and said, “Yeah, install Git. Let's set up an email server and start mailing patches to each other and we're going to do it this way,” they would just kind of politely—or maybe impolitely—show you out of the room, and rightfully [laugh] so. But it works for one of the most important software projects in history: Linux.
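(For the curious, the email-driven workflow they're describing looks roughly like the sketch below. The branch name, commit message, and list address are purely illustrative; git format-patch and git send-email are the real commands, though send-email often ships separately, e.g. as a git-email package.

# Make a change on a topic branch and commit it.
git checkout -b fix-typo
git commit -am "docs: fix typo in README"

# Turn the latest commit into a mailable patch file;
# this writes something like 0001-docs-fix-typo-in-README.patch.
git format-patch -1 HEAD

# Email the patch to the list for review; a maintainer
# can apply it to their own tree with 'git am'.
git send-email --to=some-list@example.org 0001-docs-fix-typo-in-README.patch

No central server required: the repository of record is simply whichever tree the maintainer chooses to apply patches to.)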
Corey: Yeah, and it works almost in spite of itself to some extent. You've come a long way as a company because initially, it was, “Oh, there's this amazing, decentralized version control system. How do we make it better? I know, we're going to take off the decentralized part of it and give it a central point that everything can go through.” And collaboratively, it works well, but I think that viewing GitHub as a system that is used to sell free Git repositories to people is rather dramatically missing the point. It feels like it's grown significantly beyond just code repository hosting. Tell me more about that.

Jason: Absolutely. I remember talking to a bunch of folks right around when I was joining GitHub, and you know, there was still talk about GitHub as, you know, GitHub for lawyers, or GitHub for doctors, or what could you do in a different way? And, you know, social coding as an aspect, and maybe turning into a social network with a resume. And all those things are true to a certain percentage. But what GitHub should be in the world is the world's most important software development platform—an end-to-end software development platform.

We obviously have grown a bunch since I joined, in that we launched dependency management and packages, Actions with built-in CI, we've got some deployment mechanisms, we've got advanced security underneath it, and we've got Codespaces in beta and alpha on top of it now. But if you think about GitHub as join, share, and see other people's code, that's evolution one. If you see it as the world's largest, maybe most developed software development platform, that's evolution two. And in my mind, its natural place—where it should be, given what it has done already in the world—is to become the world's most important software company. I don't mean the most profitable. I just mean the most important.

Corey: I would agree. I had a blog post that went up somewhat recently about the future of cloud being Microsoft's to lose. And it's not because Azure is the best cloud platform out there, with respect, and I don't need you to argue the point. It is very clearly not. It is not like other clouds, but I can see a path to where it could become far better than it is.

But if I'm out there and I'm just learning how to write code—because I make terrible life choices—and I go to a boot camp or I follow a tutorial online or I take a course somewhere, I'm going to be writing code probably using VS Code, the open-source editor that you folks launched after the acquisition. And it was pretty clear that Atom wasn't quite where the world was going. Great. Then I'm going to host it on GitHub, which is a natural evolution. Then you take a look at things like GitHub Actions that build in CI/CD pipelines natively.

All that's missing is a ‘Deploy to Azure' button that is the next logical step, and you're mostly there for an awful lot of use cases. But you can't add that button until Azure itself gets better. Done right, this has the potential to leave, effectively, every other cloud provider in the dust because no one can touch this.

Jason: One hundred percent. I mean, the obvious thing that any other cloud should be looking at with us—or should have been, before the acquisition, looking at us—was, “Oh, no, they could jump over us. They could stop our funnel.” And I used internal metrics when I was talking to them about the partnership that led to the sale, which was: I showed them more about their running business than they knew about themselves. I could tell them where they were stack-ranked against each other, based on the ingress and egress of all the data on GitHub, and, you know, the various reactions to that in those meetings were pretty astounding.

And just with that data alone, it should tell you what GitHub would be capable of and what Azure would be capable of in the combination of those two things. I mean, you did mention the ‘Deploy to Azure' button; this has been a topic, obviously, pre- and post-acquisition, which is, “When is that coming?” And the one hard rule I set during the acquisition was: there will be no ‘Deploy to Azure' button. Azure has to earn the right to get things deployed to it, in my opinion. And I think that goes to what you're saying: if we put a ‘Deploy to Azure' button on top of this and Azure is not ready for that, or is going to fail, ultimately, that looks bad for all of us. But if it earns the right and it gets better, and it becomes one of those, then, you know, people will choose it, and that is, to me, what we're after.
Corey: You have to choose the moment because if you do it too soon, you'll set the entire initiative back five years. Do it too late, and you get leapfrogged. There's a golden window somewhere, and finding it is going to be hard. And I think it's pretty clear that the other hyperscalers in this space are learning, or have learned, that the next 10 years of cloud or 15 years of cloud or whatever they want to call it, and the new customers that are going to come, are not the same as the customers that have built the first half of the business. And they're trying to wrap their heads around that because a lot of where the growth is going to come from is established blue chips that are used to thinking in very enterprise terms.

And people think I'm making fun of them when I say this, but Microsoft has 40 years' experience apologizing to enterprises for computer failures. And that is fundamentally what cloud is. It's about talking computers to business executives, because as much as we talk about builders, that is not the person at an established company with an existing IT estate who gets to determine where $50 million a year in cloud spend is going to go.

Jason: It's [laugh] very, [laugh] very true. I mean, we've entered a different spot with cloud computing in the bell curve of adoption, and if you think that they will choose the best technology every time, well, the history of computing is littered with better technologies that have failed because the distribution was better on one side. As you mentioned, Microsoft has 40 years, and I wager that Microsoft has the best sales organizations and the best enterprise accounts and, you know, all that sort of stuff, blah, blah, blah, on that side of the world of anyone in the industry. They can sell to enterprises better than almost anyone in the industry. And the other hyperscalers—there's a reason why [TK 00:08:34] is running Google Cloud right now. And Amazon, classically, has been very, very bad at selling to the enterprises. They just happened to be the first mover.

Corey: In the early days, it was easy. You'd have an Amazon salesperson roll up to a company, and the exec would say, “Great, why should we consider running things on AWS?” And the answer was, “Oh, I'm sorry, wrong conversation. Right now you have 80 different accounts scattered throughout your org. I'm just here to help you unify them, get some visibility into it, and possibly give you a discount along the way.” And it was a different conversation. Shadow IT was the sole driver of cloud adoption for a long time. That is no longer true. It has to go in the front door, and that is a fundamental shift in how you go to market.

Jason: One hundred percent true, and it's why I think that Microsoft has been so successful with Azure in the last, let's call it, five years: the early adopters in the second wave are doing that; they're all enterprise IT, enterprise dev shops who are buying from the top down. Now, there is still bottom-up adoption happening, and obviously, bottom-up adoption will still happen going forward, but we've entered the phase where that's not the primary or, I should say, the sole mechanism of buying in. We have top-down selling still—or now.

Corey: When Microsoft announced it was acquiring GitHub, there was a universal reaction of, “Oh, shit.” Because it's Microsoft; of course they're going to ruin GitHub. Is there a second option? No, unless they find a way to ruin it twice. And none of it came to pass.

It is uniformly excellent, and there's a strong argument that could be made by folks who are unaware of what happened—I'm one of them, so maybe I'm right, maybe I'm wrong—that GitHub had a positive effect on Microsoft more than Microsoft had an effect on GitHub. I don't know if that's true or not, but I could believe it based upon what I've seen.

Jason: Obviously, the skepticism was well deserved at the time of acquisition, let's just be honest with it, particularly given what Microsoft's history had been for about 15—well, 20 years before, previous to Satya joining. And I was one of those people in the late '90s who would write ‘M$' in various forums. I was 18 or 19 years old, and just got into—

Corey: Oh, hating Microsoft was my entire personality.

Jason: [laugh]. And it was, honestly, well-deserved, right? Like, they had anti-competitive practices and they did some nefarious things.
And you know, I talked about Bill Gates as an example. Bill Gates is, I mean, I don't actually know how old he is, but I'm going to guess late '50s, early '60s, and he's basically in the redemption phase of his life for his early years.

And Microsoft is making up for the Ballmer years, and the later Gates years, and things of that nature. So, it was well-deserved skepticism, particularly for a mid-career to older-career crowd who had really grown to hate Microsoft over that time. But what I would say is, obviously, it's different under Satya, and Scott, and Amy Hood, and people like that. And all we were really telling people was: give us a chance on this one. And I mean all of us, the people who were running GitHub at the time, including myself—you know, let Scott and Satya prove that they are who they say they are.

Corey: It's one of those things where there's nothing you could have said that would have changed the opinion of the world. It was: just wait and see. And I think we have. It's now, I daresay, gotten to a point where if Microsoft announced that they were acquiring some other beloved company, people, I think, would extend a lot more credit than they did back then.

Jason: I have to give Microsoft a ton of credit, too, on this one for the way in which they handled acquisitions like us and others. And the reason why I think it's been so successful is also the reason why I think so many others die post-acquisition, which is that Microsoft has basically—I'll say this, and I know I won't get fired because it feels like it's true. Microsoft is essentially a PE holding company at this point. It has acquired a whole bunch of companies and lets them run independently. You know, you've got LinkedIn, you've got Minecraft; Xbox is its own division, but it's effectively its own company inside of it.

Azure is run that way. GitHub's got a CEO still. I call it the archipelago model. Microsoft's the landmass underneath the water that binds them all, and finance, and HR, and a couple of other things, but for the most part, we manage our own product roadmap still. We're not told what to go do. And I think that's why it's successful. If we had functionally integrated GitHub into Microsoft, it would have died very quickly.

Corey: You clearly don't mix the streams. I mean, your gaming division writes a lot of interesting games and a lot of interesting gaming platforms. And, like, one of the most popularly played puzzle games in the world is a Microsoft property, and that is, of course, logging into a Microsoft account correctly. And I keep waiting for that to bleed into GitHub, but it doesn't. GitHub is a terrific SAML provider, it is stupidly easy to log in, it's great.

And at some level, I wish that would bleed into other aspects, but you can't have everything. Tell me what it's like to go through an acquisition from a C-level position. Because having been through an acquisition before, the process looks a lot like a surprise all-hands meeting one day after the markets close and, "Listen up, idiots." And [laugh] there we go. I have to imagine with someone in your position, it's a slightly different experience.

Jason: It's definitely very different for all C-levels. And for myself in particular, as the primary driver of the acquisition, obviously, I was privy to inside knowledge. And so, from my position, I knew what was happening the entire time as the primary driver from the inside.
But even so, it's still disconcerting to a degree because, in many ways, you don't think you're going to be able to pull it off. Like, you know, I remember the months, and the nights, and the weekends, and the weekend nights, and all the weeks I spent on the road trying to get all the puzzle pieces lined up for the Googles, or the Microsofts, or eventually the AWSs, the VMwares, the IBMs of the world to take us seriously, just from a product perspective, which I knew would lead, obviously, to acquisition conversations.

And then, once you get the call from the board that says, "It's done. We signed the letter of intent," you basically are like, "Oh. Oh, crap. Okay, hang on a second. I didn't actually believe in my heart of hearts that I was going to be able to pull that off." And so now, you probably didn't plan out what comes next—or at least I didn't. I was like, "Shit, if we actually pulled this off, what comes next?" And I didn't have that ‘what comes next,' which is odd for me. I usually have some sort of a loose plan in place. I just didn't. I wasn't really ready for that.

Corey: It's got to be a weird discussion, too, when you start looking at shopping a company around to be sold, especially one at the scale of GitHub, because you're at such a high level of visibility in the entire environment, where—it's the idea of, would anyone even want to buy us? And then, duh, of course they would. And you look at the hyperscalers, for example. You could sell it to Amazon and they could pull another Cloud9, where they shove it behind the IAM login process and fail to update the thing meaningfully over a period of years, to a point where even now, a significant portion of the audience listening to this is going to wonder if it's a service I just made up; it sounds like something they might have done, but Cloud9 sounds way too inspired for an AWS service name, so maybe not. And—it is real. You could go sell to Google, which is going to be awesome until some executive changes roles, and then it's going to be deprecated in short order.

Or then there's Microsoft, which is the wild card. It's, well, it's Microsoft. I mean, people aren't really excited about it, but okay. And I don't think that's true anymore at all. And maybe I'm not being fair to all the hyperscalers there. I mean, I'm basically insulting everyone, which is kind of my shtick, but it really does seem that Microsoft was far and away the best acquirer possible because it has been transformative. My question—if you can answer it—is, how the hell did you see that beforehand? It's only obvious—even knowing what I know now—in hindsight.

Jason: So, Microsoft was a target for me going into it, and the reason why was I thought that they were in the best overall position. There was enough humility on one side, enough hubris on another, enough market awareness and, probably, organizational awareness to kind of pull it off. There's too much hubris on one side of the fence with some of the other acquirers, and they would try to hug us too deeply, or integrate us too quickly, or things of that nature. And I think it just takes a deep understanding of who the players are and who the egos involved are. And I think egos have actually played more into acquisitions than people will ever admit.

What I saw was based on the initial partnership conversations: we were developing something that we never launched before GitHub Actions, called GitHub Launch.
The primary reason we were building that was that GitHub Launch is a five-, six-year journey, and it's got many, many different phases, which we'll keep launching over the next couple of years. The first one, which we never brought to market, was a partnership between all of the clouds. And it served a specific purpose. One, it allowed me to get into the room with the highest-level executive at every one of those companies.

Two, it allowed me to have a deep economic conversation with them at a partnership level. And three, it allowed me to show those executives that we knew what GitHub's value was in the world, and really flip the tables around and say, "We know what we're worth. We know what our value is in the world. We know where we sit from a product influence perspective. If you want to be part of this, we'll allow it." Not, "Please come work with us." It was more of a, "We'll allow you to be part of this conversation."

And I wanted to see how people reacted to that. You know, how Amazon reacted to that told me a lot about how they view the world, and how Google reacted showed me exactly where they viewed it. And I remember walking out of the Google conversation feeling a very specific way based upon the reaction. And you know, when I talked to Microsoft, I got a very different feel, and it kind of confirmed a couple of things. And then when I had my very first conversation with Nat, whom I had known for a while before that, I realized, like, yep, okay, this is the one. Drive hard at this.

Corey: If you could do it all again, would you change anything meaningful about how you approached it?

Jason: You know, I think I got very lucky doing a couple of things. I was very intentional about aspects of it—you know, I tried to serendipitously show up where Diane Greene was at one point, or serendipitously show up where Satya or Scott Guthrie was, and obviously, that was all intentional. But I had never sold a company like this before. The partnership and the product that we were building were obviously very intentional. I think if I were to go through the sale again, I would probably have tried to orchestrate at least one more year independent.

And for no other reason than that what we were building was very special. And the world sees it now, but I wish that the people who built it inside GitHub got full credit for it. And I think that part of that credit gets diffused into saying, "Microsoft fixed GitHub," and I want the people inside GitHub to have gotten a lot more of that credit. Microsoft obviously made us much better, but that was not specific to Microsoft, because we're run independently; it was bringing Nat in and helping us that got a lot of that stuff done. Nat did a great job at those things. But a lot of that was already in play with some incredible engineers, product people, and in particular our sales team and finance team inside of GitHub already.

Corey: When you take a look across the landscape at the fact that GitHub has become, for a certain subset of relatively sad types of which I'm definitely one, a household name, what do you think the biggest misconception about the company is?

Jason: I still think the biggest misconception of us is that we're a code host. Every time I talk to the RedMonk folks, they get what we're building and what we're trying to be in the world, but people still think of us as SourceForge-plus-plus in many ways. And obviously, that may have been our past, but that's definitely not where we are now and, for certain, obviously, not our future. So, I think that's one.
I do think that people still, to this day, think of GitLab as one of our main competitors, and I never ever saw GitLab as a competitor.

I think it just has an unfortunate naming convention, as well as, you know, PRs, and MRs, and Git and all that sort of stuff. But we take very different views of the world in how we're approaching things. And then maybe the last thing would be the idea that what we're doing, at the scale we're doing it, is kind of easy. When you're serving almost every developer in the world at this point, at the scale at which we're doing it, we've got some scale issues that most people, thankfully, will probably never encounter for themselves.

Corey: Well, everyone on Hacker News believes that they will, as soon as they put up their hello world blog, so Kubernetes is the only way to do anything now. So, I'm told.

Jason: It's quite interesting because I think that everything breaks at scale, as we all know from the [hyperclouds 00:20:54]. As we've learned, things are breaking every day. And I think that when you get advice, either operational, technical, or managerial, from people who are running 10-person or 50-person companies, or X-size sophisticated systems, it doesn't apply. But for whatever reason, I don't know why, people feel inclined to give that feedback to engineers at GitHub directly, saying, "If you just…" and in many [laugh] ways, you're just like, "Well, I think that we'll have that conversation at some point, you know, but we've got 100-plus-million repos and 65 million developers using us on a daily basis." It's a very different world.

Corey: This episode is sponsored by our friends at Oracle. HeatWave is a new high-performance accelerator for the Oracle MySQL Database Service, although I insist on calling it "my squirrel." While MySQL has long been the world's most popular open source database, shifting from transactions to analytics required way too much overhead and, ya know, work. With HeatWave you can run your OLTP and OLAP, don't ask me to ever say those acronyms again, workloads directly from your MySQL database and eliminate the time-consuming data movement and integration work, while also performing 1100X faster than Amazon Aurora, and 2.5X faster than Amazon Redshift, at a third of the cost. My thanks again to Oracle Cloud for sponsoring this ridiculous nonsense.

Corey: One of the things that I really appreciate personally, because, you know, when you see something that a company does, it's nice to just thank people from time to time, so I'm inviting the entire company on the podcast one by one, at some point, to wind up thanking them all individually for it, but Codespaces is one of those things that I think is transformative for me. Back in the before times, and ideally the after times, whenever I traveled, the only computer I brought with me for a few years now has been an iPad or an iPad Pro. And trying to get an editor on that thing that works reasonably well has been like pulling teeth; my default answer has just been to remote into an EC2 instance and use vim like I have for the last 20 years. But Code is really winning me over. Having to play with code-server and other things like that for a while was obnoxious, fraught, and difficult.

And finally, we got to a point where Codespaces was launched, and oh, it works on an iPad. This is actually really slick. I like this. And it was the thing that I was looking for but had been trying to monkey-patch together myself from components.
And that's transformative.

It feels like we're going back in many ways—at least in my model—to the days of thin clients, where all the heavy lifting was done centrally on big computers, and the things that sat on people's desks were mostly just, effectively, a relatively simple keyboard, mouse, and screen. Things go back and forth, and I'm sure we'll have super powerful things in our pockets again soon, but I like the interaction model; it solves for an awful lot of problems, and that's one of the things that, at least from my perspective, the world may not have fully wrapped its head around yet.

Jason: Great observation. Before the acquisition, we were experimenting with a couple of different editors; we wanted to do online editors. And same thing; we were experimenting with some Actions CI stuff, and it just didn't make sense for us to build it; it would have been too hard, there were too many moving parts. And then post-acquisition, we really loved what the VS Code team was building over there, and you could see it; it was just going to work. And we had this one person. Well, not one person; there were a bunch of people inside of GitHub who did this, but this one person at the highest level was just obsessed with "make this work on my iPad."

He's the head of product design, his name's Max, he's an ex-Heroku person as well, and he was just obsessed with it. And he said, "If it works on my iPad, it's got a chance to succeed. If it doesn't work on my iPad, I'm never going to use this thing." And the first time we booted up Codespaces—or rather, he booted it up on the weekend, working on it—he came back and just said, "Yep. This is going to be the one. Now, we've got to work on sanding the stones and those fine edges and stuff."

But it really does unlock a lot for us because, you know, again, we want to become the software developer platform for everyone in the world; you've got to go end-to-end, and you've got to have an opinion on certain things, and you've got to enable certain functionality. You mentioned Cloud9 before, with Amazon. It was one of the most confounding acquisitions I've ever seen. When they bought it, I was at Heroku, and I thought at that moment that Amazon was going to own the next 50 years of development, because I thought they saw the same thing a lot of us at Heroku saw, and that with the Cloud9 acquisition, they were just going to stomp on all of us in the space. And then when it didn't happen, we just thought, maybe, you know, okay, maybe something else changed. Maybe we were wrong about that assumption, too. But I think that we're on to it still. I think that it just has to do with the way you approach it and, you know, how you design it.

Corey: Sorry, you just said something that took me aback for a second. Wait, you mean software can be designed? It's not this emergent property of people building thing on top of thing? There's actually a grand plan behind all these things? I'm only half kidding, on some level, where if you take a look at any modern software product that is deployed into the world, it seems impossible for even small aspects of it to have been part of the initial founding design. But as a counterargument, it would almost have to be for a lot of these things. How do you square that circle?

Jason: I think, just like anything on spectrums and timelines, you have to flex at various times for various things. So, if you think about it from a very, very simple construct of time, you just have to think of time horizons.
So, I have an opinion about what GitHub should look like in 10 years—vaguely—in five years much more firmly, and then very, very concretely for the next year, as an example. So, a lot of the features you might see might be more emergent, but a lot of the long-term work has to be loosely tied together with some string. Now, that string will be tightened over time, but it has to loosely see its way through.

And the way I describe this to folks is that you don't wake up one day and say, "I'm going on vacation," and literally just throw a finger on the map. You have to have some sort of vague idea, like, "Hey, I want to have a beach vacation," or, "I want to have an adventure vacation." And then you can kind of pick a destination and say, "I'm going to Hawaii," or, "I'm going to San Diego." And if you're standing on the East Coast knowing you're going to San Diego, you basically know that you have to just start marching west, or driving west, or whatever. Now, you don't have to have the route mapped out just yet, but you know that, hey, if I'm going due southeast, I'm off course, so how do I reorient to make sure I'm still going in the right direction?

That's basically how I think about high-level design at scale. And it's not unfair to say that a lot of the stuff is not designed today. Amazon is very famous for not designing anything; they design a singular service. But there's no cohesiveness to what Amazon—or AWS specifically, I should say, in this case—has put out there. And maybe that's not what their strategy is. I don't know the internal workings of them, but it's very clear.

Corey: Well, oh, yeah. When I first started working in the AWS space and looking through the console, it was like, "What is this? It feels like every service's interface was designed by a different team, but that would—oh…" and then the light bulb went on. Yeah. You ship your culture.

Jason: It's exactly it. It works for them, but I think if you're going to try to do something very, very, very different, you know, it's going to look a certain way. So, intentional design, I think, is part of what makes GitHub and other products like it special. And if you think about it, you have to have an end-to-end view, and then you can build verticals up and down inside of that. But it has to work on the horizontal, still.

And then if you hire really smart people to build the verticals, you get those done. So, a good example of this is that I have a very strong opinion about what the horizontal workflow nature of GitHub should look like in five years. I have a very loose opinion about what the matrix build system of Actions looks like, because we have very, very smart people who are working on that specific problem, so long as that maps back and snaps into the horizontal workflows. And that's how it can work together.

Corey: So, when you look at someone who is, I don't know, the CTO of a wildly renowned company that is basically still catering primarily to developers slash engineers, but let's be honest, geeks, it's natural to think that, oh, they must be the alpha geek. That doesn't really apply to you, from everything I've been able to uncover. Am I just not digging deeply enough, or are you, in fact, a terrible nerd?

Jason: [laugh]. I am. I'm a terrible nerd. I am a very terrible nerd.
I feel very lucky, obviously, to be in the position I'm in right now, in many ways, and people call me up and say exactly that.

It's like, "Hey, you must be king of the geeks." And I'm like, "[laugh], ah, funny story here." But, um, you know, I joke that I'm not actually supposed to be in tech in the first place; the way I grew up, and where I did, and how, I wasn't supposed to be here. And so, it's serendipitous that I am in tech. And it turns out I had an aptitude for distributed systems, and complex, you know, human systems as well. But when people dig in and they start talking about topics, I'm confounded. I never liked Star Wars, I never liked Star Trek. Never got into anime, or board games; I don't play video games—

Corey: You are going to get letters.

Jason: [laugh]. When I was at Canonical, oh, my goodness, the stuff I tried to hide about myself, and, like, learn, like, who's this Boba Fett dude? And, you know, at some point, obviously, you don't have to pretend anymore, but you know, people still assume a bunch of stuff because of, quote, "nerd," quote, "geek" culture type of stuff. But you know, some interesting facts that people end up being surprised by with me are that, you know, I was very short in high school and I grew in college, so I decided that I wanted to take advantage of my newfound height and athleticism as you grow into your body. So, I started playing basketball, but I obsessed over it.

I love getting good at something. So, I'd wake up at four o'clock in the morning, and go shoot baskets, and do drills for hours. Well, I got really good at it at one point, and I ended up playing in a Pro-Am basketball game with ex-NBA Harlem Globetrotter legends. And that's just not something you hear about in most engineering circles. You might expect that out of a salesperson or a marketing person who played pro ball—or amateur ball somewhere, or college ball or something like that. But not someone who ends up running the most important software company—from a technical perspective—in the world.

Corey: It's weird. People counterintuitively think, on some level, that code is the answer to all things. And that, oh, all this human interaction stuff, all the discussions, all the systems thinking, you have to fit a certain profile to do that, and anyone outside of that is, eh, they're not as valuable. They can get ignored. And we see that manifesting itself in different ways.

And even if we take a look at people who profess otherwise, we take a look at folks who are fresh out of a boot camp and don't understand much about the business world yet; they have transformed their lives—maybe they're fresh out of college, maybe they didn't even go to college—and 18 weeks later, they are signing up for six-figure jobs. Meanwhile, you take a look at virtually any other business function; in order to have a relatively comparable degree of earning potential, it takes years of experience and being very focused on a whole bunch of other things. There's a massive distortion around technical roles, and that's a strange and difficult thing to wrap my head around. But as you're talking about it, it goes both ways, too. It's the idea of, "Oh, I'll become technical, then branch into other things." It sounded like you started off instead with a non-technical direction and then sort of adopted that from other sides. Is that right, or am I misremembering exactly how the story unfolds?

Jason: No, that's about right.
People say, "Hey, when did you start programming?" And it's very in vogue, I think, for a lot of people to say, "I started programming at three years old," or five years old, or whatever, and got my first computer. I literally didn't get my first computer until I was 18 years old. And I started programming when I got to a high school co-op with IBM at 17.

It was Lotus Notes programming at the time. I'd had no exposure to it before. What I did, though, in college—IBM told me at the time, they said, "If you get a computer science degree, we'll guarantee you a job." Which, for a kid who grew up the way I grew up, is a manna-from-heaven type of deal. Like, "You'll guarantee me a job inside, where I don't have to dig ditches all day or lay asphalt? Oh, my goodness. What's computer science? I'll go figure it out."

And when I got to school, what I realized was I was really far behind. Everyone was that ubergeek type of thing. So, what I did is I tried to hack the system, and what I said was, "What is a topic that nobody else has an advantage on over me?" And so I basically picked the internet, because the internet was so new in the mid-'90s that most people were still not fully up to speed on it. And then the underpinnings of the internet, which basically became distributed systems; that's where I started to focus.

And because no one had a real advantage, I could, you know, catch up pretty quickly. But once I got into computers, it turned out that I was probably a very average developer, maybe even below average, but it was the systems thinking that I stood out on. And you know, large-scale distributed systems or architectures were very good for me. And then, you know, that applies not, like, directly, but it applies decently well to human systems. It's just, you know, different types of inputs and outputs. But if you think about organizations at scale, they're basically just really, really, really complex and kind of irrational distributed systems.

Corey: Jason, thank you so much for taking the time to speak with me today. If people want to learn more about who you are, what you're up to, how you think about the world, where can they find you?

Jason: Twitter's probably the best place at this point. Just @jasoncwarner on Twitter. I'm very unimaginative. My name is my GitHub handle. It's my Twitter username. And that's the best place that I, kind of, interact with folks these days. I do an AMA on GitHub. So, if you ever want to ask me anything, just kind of go to jasoncwarner/ama on GitHub and drop a question in one of the issues, and I'll get to answering it. Yeah, those are the best spots.

Corey: And we will, of course, include links to those things in the [show notes 00:33:52]. Thank you so much for taking the time to speak with me today. I really appreciate it.

Jason: Thanks, Corey. It's been fun.

Corey: Jason Warner, Chief Technology Officer at GitHub. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice anyway, along with a comment that includes a patch.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point.
Visit duckbillgroup.com to get started.

Announcer: This has been a HumblePod production. Stay humble.
Today's episode on spam is read by the illustrious Joel Rennich. Spam is irrelevant or inappropriate and unsolicited messages, usually sent to a large number of recipients through electronic means. And while we probably think of spam as something new today, it's worth noting that the first documented piece of spam was sent in 1864 - through the telegraph. With the advent of new technologies like the fax machine and telephone, messages and unsolicited calls were quick to show up.

Ray Tomlinson is widely accepted as the inventor of email, developing the first mail application in 1971 for the ARPANET. It took longer than one might expect for it to get abused, likely because it was mostly researchers and people from the military-industrial research community. Then in 1978, Gary Thuerk at Digital Equipment Corporation decided to send out a message about the new VAX computer being released by Digital. At the time, there were 2,600 email accounts on ARPANET and his message found its way to 400 of them. That's a little over 15% of the Internet at the time. Can you imagine sending a message to 15% of the Internet today? That would be nearly 600 million people. But it worked. Supposedly he closed $12 million in deals despite rampant complaints back to the Defense Department. But it was too late; the damage was done. He proved that unsolicited junk mail would be a way to sell products. Others caught on. Like Dave Rhodes, who popularized MAKE MONEY FAST chains in 1988. Maybe not a real name, but pyramid schemes probably go back to the pyramids, so we might as well have them on the Internets.

By 1993, unsolicited email was enough of an issue that we started calling it spam. That came from the Monty Python skit where Vikings in a cafe sing about Spam, which is on everything on the menu. That Spam was a reference to the canned meat made of pork, sugar, water, salt, potato starch, and sodium nitrate that was originally developed by Jay Hormel in 1937, and due to how cheap and easy it was, it found itself part of a cultural shift in America.

Spam came out of Austin, Minnesota. Jay's dad George incorporated Hormel in 1901 to process hogs and beef, and developed canned lunchmeat that evolved into what we think of as Spam today. It was spiced ham, thus Spam. During World War II, Spam would find its way to GIs fighting the war, and to England and the countries the war was being fought in. It was durable and could sit on a shelf for months. From there it ended up in school lunches, and after fishing sanctions on Japanese-Americans in Hawaii restricted the foods they could haul in, Spam found its way there, and some countries grew to rely on it due to displaced residents following the war. And yet, it remains a point of scorn in some cases. As the Monty Python sketch mentions, Spam was ubiquitous, unavoidable, and repetitive. Same with spam in our email.

We rely on email. We need it. Email was the first real killer app for the Internet. We communicate through it constantly. Despite the gelatinous meat we sometimes get when we expect we're about to land that big deal when we hear the chime that our email client got a new message. It's just unavoidable. That's why a repetitive poster on a list had his messages called spam, and the use just grew from there.

Spam isn't exclusive to email. Laurence Canter and Martha Siegel sent the first commercial Usenet spam, the "Green Card" spam, just after the NSF allowed commercial activities on the Internet.
It was a simple Perl script to sell people on the idea of paying a fee to have them enroll people into the green card lottery. They made over $100,000 and even went so far as to publish a book on guerrilla marketing on the Internet. Canter got disbarred for illegal advertising in 1997.

Over the years, new ways have come about to try and combat spam. RBLs, or DNS blacklists used to mark hosts as spam sources so that mail from them could be blocked at port 25, emerged in 1996 from the Mail Abuse Prevention System, or MAPS. Developed by Dave Rand and Paul Vixie, the list of IP addresses helped for a bit. That is, until spammers realized they could just send from a different IP. Vixie also mentioned the idea of matching a sender claim to the mail server a message came from as a means of limiting spam, a concept that would later come up again and evolve into the Sender Policy Framework, or SPF for short. That's around the same time Steve Linford founded Spamhaus to block anyone that knowingly spams or provides services to spammers. If you have a cable modem and try to set up an email server on it, you've probably had to first get them to unblock your address from their Don't Route list.

The next year, Mark Jeftovic created a tool called filter.plx to help filter out spam, and that project got picked up by Justin Mason, who uploaded his new filter to SourceForge in 2001. A filter he called SpamAssassin. Because ninjas are cooler than pirates.

Paul Graham, the co-creator of Y Combinator (and author of a LISP-like programming language), wrote a paper he called "A Plan for Spam" in 2002. He proposed using a Bayesian filter to combat spam, much as antivirus software vendors had done to combat viruses. That would be embraced and is one of the more common methods still used to block spam. In the paper, he goes into detail on how the scoring of various words would work, and the probabilities by which, compared to the rest of one's email, a message would get flagged as spam. That Bayesian filter would be added to SpamAssassin and others the next year.

Dana Valerie Reese came up with the idea of matching sender claims independently, and she and Vixie sparked a conversation and the creation of the Anti-Spam Research Group in the IETF. The European Parliament released the Directive on Privacy and Electronic Communications in the EU, criminalizing spam. Australia and Canada followed suit. 2003 also saw the first laws in the US regarding spam: the CAN-SPAM Act was signed by President George Bush in 2003 and allowed the FTC to regulate unsolicited commercial emails. Here we got the double opt-in to receive commercial messages, and it didn't take long before the new law was used to prosecute spammers, with Nicholas Tombros getting the dubious honor of being the first spammer convicted. What was his spam selling? Porn. He got a $10,000 fine and six months of house arrest.

Fighting spam with laws turned international. Christopher Pierson was charged with malicious communication after he sent hoax emails. And even though spammers were getting fined and put in jail all the time, the amount of spam continued to increase. We had pattern filters, Bayesian filters, and even the threat of legal action. But the IETF Anti-Spam Research Group specifications were merged by Meng Weng Wong, and by 2006, W. Schlitt had joined the paper to form a new Internet standard called the Sender Policy Framework, which lives on in RFC 7208.
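Graham's scoring idea is simple enough to sketch. Here is a toy version in Python; the two training corpora are invented stand-ins for a real mail archive, and combining per-word probabilities by a plain product is a simplification of the full formula in "A Plan for Spam":

```python
from collections import Counter

# Toy corpora: invented stand-ins for a real mail archive.
spam_docs = ["win money now", "cheap money fast", "win big win now"]
ham_docs = ["meeting notes attached", "lunch tomorrow", "project notes attached"]

def word_probs(docs):
    """Relative frequency of each word within one corpus."""
    counts = Counter(word for doc in docs for word in doc.split())
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}

p_spam = word_probs(spam_docs)
p_ham = word_probs(ham_docs)

def spam_score(message, floor=0.01):
    """Combine per-word spam probabilities, Graham-style (simplified)."""
    score = 1.0
    for word in message.lower().split():
        ps = p_spam.get(word, floor)  # how spammy this word looks
        ph = p_ham.get(word, floor)   # how hammy this word looks
        score *= ps / (ps + ph)
    return score

print(spam_score("win money"))        # close to 1: likely spam
print(spam_score("meeting tomorrow")) # close to 0: likely ham
```

A real filter would smooth rare words, cap the per-word probabilities, and train on thousands of messages per class, but the shape of the calculation is the same.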
SPF has a lot of moving parts, but at the heart of it is Simple Mail Transfer Protocol, or SMTP, which allows sending mail from any connection over port 25 (or other ports if it's SSL-enabled) and allows a message to pass with very little information required - although the sender, or sending claim, is a requirement. A common troubleshooting technique used to be simply telnetting into port 25 and sending a message from an address to a mailbox on a mail server. Theoretically, one could take the MX record, or the DNS record that lists the mail server that mail bound for a domain should be delivered to, and force all outgoing mail to match it. However, due to so much spam, some companies have dedicated outbound mail servers that are different from their MX record, and they block outgoing mail like what people might send if they're using personal mail at work. In order not to disrupt a lot of valid use cases for mail, SPF had administrators create TXT records in DNS that listed which servers could send mail on their behalf. Now a filter could check the header for the SMTP server of a given message and know that it didn't match a server that was allowed to send mail. And so a large chunk of spam was blocked.

Yet people still get spam for a variety of reasons. One is that new servers go up all the time just to send junk mail. Another is that email accounts get compromised and used to send mail. Another is that mail servers get compromised. We have filters and even Bayesian and more advanced forms of machine learning. Heck, sometimes we even sign up for a list by giving our email out when buying something from a reputable site or retail vendor. Spam accounts for over 90% of the total email traffic on the Internet. This is despite blacklists, SPF, and filters. And despite the laws and threats, spam continues. And it pays well.

We mentioned Canter & Siegel. Shane Atkinson was sending 100 million emails per day in 2003. That doesn't happen for free. Nathan Blecharczyk, a co-founder of Airbnb, paid his way through Harvard on the back of spam. Some spam sells legitimate products in illegitimate ways, as we saw with early IoT standard X10. Some is used to spread hate and disinformation, going back to Serdar Argic, known for denying the Armenian genocide through newsgroups in 1994. Long before Infowars existed. Peter Francis-Macrae sent spam to solicit buying domains he didn't own. He was convicted after resorting to blackmail and threats. Jody Michael Smith sold replica watches and served almost a year in prison after he got caught.

Some spam is sent to get hosts loaded with malware so they could be controlled, as happened with Peter Levashov, the Russian czar of the Kelihos botnet. Oleg Nikolaenko was arrested by the FBI in 2010 for spamming to get hosts in his Mega-D botnet. The Russians are good at this; they even registered the Russian Business Network as a website in 2006 to promote running an ISP for phishing, spam, and the Storm botnet. Maybe Flyman is connected to the Russian oligarchs and so continues to be allowed to operate under the radar. They remain one of the more prolific spammers.

Much is sent by a small number of spammers. Khan C. Smith sent a quarter of the spam in the world until he got caught in 2001 and fined $25 million. Again, spam isn't limited to just email. It showed up on Usenet in the early days. And AOL sued Chris "Rizler" Smith for over $5M for his spam on their network. Adam Guerbuez was fined over $800 million for spamming Facebook.
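To make the SPF mechanism concrete, here is a small sketch of the TXT-record lookup a receiving filter performs. It assumes the third-party dnspython package is available; the domain is just an example:

```python
import dns.resolver  # third-party: pip install dnspython

def get_spf(domain):
    """Return the SPF policy published in a domain's TXT records, if any."""
    try:
        answers = dns.resolver.resolve(domain, "TXT")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return None
    for record in answers:
        # TXT rdata arrives as one or more byte strings; join and decode.
        text = b"".join(record.strings).decode()
        if text.startswith("v=spf1"):
            return text  # e.g. "v=spf1 include:_spf.example.com ~all"
    return None

# Any domain that publishes SPF works here; example.com is a placeholder.
print(get_spf("example.com"))
```

A receiving filter compares the IP address of the connecting server against the mechanisms in that policy, and mail from a host the policy does not authorize can be scored down or rejected outright.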
LinkedIn, for its part, allows people to send me unsolicited messages if they pay extra, probably why Microsoft paid $26 billion for the social network. Spam has been with us since the telegraph; it isn't going anywhere. But we can't allow it to run unchecked. The legitimate organizations that use unsolicited messages to drive business help obfuscate the illegitimate acts where people are looking to steal identities or worse.

Gary Thuerk opened a Pandora's box that would have been opened even if he hadn't done so. The rise of the commercial Internet and the co-opting of the emerging cyberspace as a place where privacy, and so anonymity, trump verification hit a global audience of people who are not equal. Inequality breeds crime. And so we continually have to rethink the answers to the question of sovereignty versus the common good. Think about that next time an IRS agent with a thick foreign accent calls asking for your social security number - and remember (if you're old enough) that we used to show our social security cards to grocery store clerks when we wrote checks. Can you imagine?!?!
In this video, I explain what Git is and what GitHub is: the difference between these two terms and what each one is for. Beyond that, you need to know how to use tools like GitHub, Bitbucket, GitLab, and SourceForge, among others, to strengthen your resume and position yourself better in the job market. We'll talk about the importance of keeping your code and its versions on servers that offer redundancy, and how that can save your day if you need to do a rollback in production in just a few minutes. It's worth noting that Git is a platform created by Linus Torvalds, the same person responsible for creating the Linux kernel. It's a solution developed, tested, and used by millions of developers around the planet, on projects of all sizes. Git link: https://git-scm.com GitHub link: https://github.com/ Bitbucket link: https://bitbucket.org/product/ Watch this content on YouTube: https://youtu.be/960fsfBcovE Watch more videos from the channel:
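As a minimal sketch of what "rolling back in production in a few minutes" can look like when deploys track a Git branch: the commit hash and branch name below are placeholders, and the deploy pipeline is assumed to redeploy whatever lands on main.

```python
import subprocess

def git(*args):
    """Run a git command and return its output."""
    result = subprocess.run(["git", *args], check=True,
                            capture_output=True, text=True)
    return result.stdout

# Inspect recent history to find the commit that broke production.
print(git("log", "--oneline", "-5"))

# Revert the offending commit (the hash is a placeholder) and push;
# the hosted repo keeps the full history, so nothing is lost.
git("revert", "--no-edit", "abc1234")
git("push", "origin", "main")
```

Because the history lives on a redundant hosted remote (GitHub, Bitbucket, GitLab, SourceForge), the same rollback works from any machine with a clone.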
Timestamps: 1:06 - Being obsessed with running businesses 6:28 - Creating the MuleSoft open source project 12:15 - Should you solve your own problem? 21:00 - Open sourcing software 1:01:30 - Getting prospective clients to follow through 1:04:08 - Letting go of your CTO position 1:07:51 - Falling into a depression 1:11:14 - Ringing the bell at the NY Stock Exchange 1:35:10 - Adapting to a slower-paced life

About Ross Mason

Ross Mason is the founder of MuleSoft Inc. and Dig Ventures. He is also a board member at Stackin' and Syncari. Ross has always had an entrepreneurial spirit, and jokes that at the ripe age of seven, he was already running his own bootleg Lego club! His parents were business owners, and taught Ross two important lessons for his future projects: you should never lower your bar, and working hard isn't necessarily a bad thing if you enjoy what you do. In 1997, Ross graduated in Computer Science from the University of the West of England and started his career in corporate work. However, he knew he wanted to build his own path, and open source software was his prime choice, since it gives you widespread distribution and insight into potential product/market fit. That's exactly how The Mule Project was born. The platform started out as a SourceForge project in April 2003, and its aim was to make programmers' lives simpler by making it easy to send data between their SaaS (software as a service), on-premise software, and legacy systems, which was previously hard and tedious to do. This philosophy of avoiding "donkey work" became a guiding star for his endeavors. Born in 2006, MuleSource, later MuleSoft, got the timing just right: many open source companies were becoming commercial, and investors were starting to realize it's actually a viable distribution model. Ross had a strong drive for building the next game changer in a fragmented market. Though integration had taken a big hit after the rise of web services, MuleSoft kept the ball rolling by uniting many different approaches in one platform, with all the proper testing, monitoring, debugging, and a graphical environment.
ANTIC Episode 76 - The Bill Kendrick Show In this episode of ANTIC The Atari 8-Bit Computer Podcast… Bill Kendrick gets more mentions than when he’s on the show, Kay discovers he owns more Atari disk drives than the rest of the Atari community combined, and we discuss all the news rocking the Atari 8-bit world. READY! Recurring Links Floppy Days Podcast AtariArchives.org AtariMagazines.com Kevin’s Book “Terrible Nerd” New Atari books scans at archive.org ANTIC feedback at AtariAge Atari interview discussion thread on AtariAge Interview index: here ANTIC Facebook Page AHCS Eaten By a Grue Next Without For What We’ve Been Up To Worms? Source code archiving - https://github.com/savetz/worms Atari Speed Reading Receipts - https://archive.org/details/atari-speed-reading-receipts News 800XL PCB remake: https://ezcontents.org/atari-800xl-pcb-soldering-and-troubleshooting https://ezcontents.org/atari-800xl-bill-materials-bom https://ezcontents.org/atari-800xl-pcb-remake ATasm, a command-line based 6502 cross-assembler that's compatible with OSS's 1982 "Mac/65" macroassembler: SourceForge page - https://sourceforge.net/projects/atasm/ The documentation - https://sourceforge.net/p/atasm/code/HEAD/tree/trunk/atasm.txt#l54 Atari Projects - Jason Moore does it again! - http://atariprojects.org/ Learn about Vertical Blank Interrupts in BASIC for Atari 8-Bit Computers (30-60 mins) Read “How Atari took on Apple in the 1980s home PC wars” by Benj Edwards (5-10 mins) Atari Flashback X with Atari Computer Games - https://www.atariteca.net.pe/2021/03/pack-con-mas-de-130-juegos-para-consola.html Paul Nicholls’ Coded Snippets Cookbook - 6502 edition - https://syntaxerrorsoftware.itch.io/code-snippets-cookbook-6502-edition Atari Giant - http://atarigiant.com/ - Web site store that caters to Atari 8-bit Pro(c) issue 15 - https://proc-atari.de/en/proc-atari-magazine/proc-atari-issue-15-softcover-book-edition USB Keyboard Interface available from Lotharek - https://lotharek.pl/productdetail.php?id=311 Belts for 1050 - https://console5.com/store/fabric-reinforced-belt-for-atari-1050-tandon-tm100-4p-floppy-drive.html Atari Compendium Website - Mostly 2600, with a smattering of computer - http://www.ataricompendium.com/game_library/controllers/controllers.html Gem Drop Deluxe - Bill Kendrick - http://www.newbreedsoftware.com/gemdrop_deluxe/ Shows Upcoming Shows where you might see Atari computers (or Atari people): VCFSE August 20-22 http://southernfriedgameroomexpo.com/ KansasFest July 23-24 https://www.kansasfest.org ; virtual event PRGE - cancelled August 7 & 8, 2021: Vintage Computer Festival West 2021 (VCF West) October 8, 9, 10, 2021: Vintage Computer Festival East 2021 (VCF East) Event page created by Chicago Classic Computing - http://chiclassiccomp.org/events.html?fbclid=IwAR3Fm5hf7PCQj0yXBxXvj9J8Mp8GDwD2w1bfD_qktpPOnNYNoQUmN_EpgB8 Event page created by Floppy Days - https://www.facebook.com/VintageComputerShows/ Event page on Vintage Is The New Old - https://vintageisthenewold.com/vintage-is-the-new-old-releases-new-events-calendar/ YouTube videos this month The real fight Atari versus Commodore - IT Guy in Action - https://www.youtube.com/watch?v=YFhAX9gijXY Atari 800 - Part 2 - Replacing Electrolytic Capacitors - ShadowTron Blog - https://www.youtube.com/watch?v=e-dgDZ4MJYM NEW IMPROVED VERSION EN Atari 8-bit emulator (Atari800 emulator) - IT Guy in Action - https://www.youtube.com/watch?v=uoONYg8Yehs Gem Drop Deluxe! 
(Atari 800) - ArcadeUSA (William Culver) - Programmed by Bill Kendrick - https://www.youtube.com/watch?v=3SNvh88SiW4 Also Gem Drop Deluxe! video by Atari 8 Bits For Ever - https://www.youtube.com/watch?v=3HBQOjnBKu8 Gem Drop Deluxe! blog by Bill Kendrick - http://newbreedsoftware.com/gemdrop_deluxe/?fbclid=IwAR3VrwTV4-XAVd-S1exD5EiDdMhy0CQtRZIWBH8oqkfGqTVUJzWva3aE94M Quarter Express - 256 bytes intro for Atari XL/XE by Ilmenit / Agenda - For Lovebyte party 2021, "Low-End 256 byte intro compo" - https://www.youtube.com/watch?v=6UKnPHhKaFg Atari 800 XL Lite Rally Motorcycle racing game - The Modern Atari 8bit Computer (Nir Dary) - https://www.youtube.com/watch?v=JnG43ooEHtE New at Archive.org Pigeons at Internet Archive Scholar. Several researchers tested pigeons' perception and visual ability using Atari 800 computers. A dozen papers dated 1983-1993 - https://scholar.archive.org/search?q=%22atari+800%22+pigeons&sort_order=time_desc Atari HQ Archive #1 - https://archive.org/details/atari-hq-archive-1 Allan Bushman: Your First Atari Program by Rodnay Zaks https://archive.org/details/your-first-atari-program-rodnay-zaks Software Merchandising magazine, January, 1983 https://archive.org/details/software-merchandising-january-1983/ Current Notes magazines 1994-1995 https://archive.org/details/current-notes-volume-15-number-1-january-february-1995 Portland Atari Club newsletters 1994-1995 https://archive.org/details/portland-atari-club-january-1985 Adventure International's Airline manual https://archive.org/details/airline-adventure-international/page/n19/mode/2up Commercial Touch Me By Atari (Commercial, 1979) - https://archive.org/details/touch-me-by-atari-commercial-1979 New at Github Atari 800 Soundbox https://github.com/zbyti/atari800-soundbox ATARI XE Replacement Keyboard https://github.com/gianlucarenzi/A130KB_MX XEGS-DS https://github.com/wavemotion-dave/XEGS-DS Also A5200DS https://github.com/wavemotion-dave/A5200DS Atari800-Display-Lists https://github.com/pedromagician/Atari800-Display-Lists Atari 1090XL expansion box remake https://github.com/kenames99/1090 Atari800-benchmarks https://github.com/pedromagician/Atari800-benchmarks Micview https://github.com/tschak909/micview Turbo Decoder https://github.com/baktragh/turbodecoder MidiJoy https://github.com/fredlcore/MidiJoy USB_to_RS232 Connector https://github.com/pjones1063/USB_to_RS232#usb_to_rs232-connector-usbmodem Listener Feedback Vegas 1998 World of Atari show - https://archive.org/details/WorldOfAtariConventionLasVegas1998/ Closing END OF SHOW MUSIC: Donnie Iris and the Cruisers - Do You Compute? (1983) - music video featuring an Atari 1200XL - https://www.youtube.com/watch?v=Y2Rjyu_4HzI
@thefluffy007 A Bay Area Native (Berkeley) I always tell people my computer journey started at 14, but it really started at 5th grade (have a good story to tell about this) Was a bad student in my ninth grade year - almost kicked out of high school due to cutting. Had a 1.7 GPA. After my summer internship turned it around to a 4.0. Once I graduated from high school, I knew I wanted to continue on the path of computers. Majored in Computer Science Graduated with Bachelors and Masters in Computer Science. Graduate Certificate in Information Security and Privacy. Minor in Math. Interested in security from a Yahoo! Group on Cryptography. Liked how you can turn text into gibberish and back again. Became interested in penetration testing after moving to Charlotte, and moonlighted as a QA while a full-stack developer. Co-workers did not want me to test their code because I would always find bugs. Moved into penetration testing space. Always had an interest in mobile, but never did mobile development and decided it wasn't for me Became interested in bug bounties and noticed that mobile payouts were higher. At this time also completed SANS 575 - Mobile Device Security and Ethical Hacking. Realized the barrier to entry was VERY low (almost non-existent) in Android as it's open source. Started to learn/expand mobile hacking on my own time The threat exposure is VERY high with mobile hacking. As you have a web app component, network component, and phone component. I always reference a slide from Secure Works. Link to YouTube Channel → thefluffy007 - YouTube thefluffy007 – A security researcher's thoughts on all things security – web, mobile, and cloud The Mobile App Security Company | NowSecure owasp-mstg/Crackmes at master · OWASP/owasp-mstg · GitHub Rana Android Malware (reversinglabs.com) These 21 Android Apps Contain Malware | PCMag Android Tamer - Android Tamer The Diary of an (Inexperienced) Bug Hunter - Intro to Android Hacking | Bugcrowd Android Debug Bridge (adb) | Android Developers Goal: discussing best practices and methods to reverse engineer Android applications Introduction to Java (w3schools.com) JavaScript Introduction (w3schools.com) Introduction to Python (w3schools.com) Frida • A world-class dynamic instrumentation framework | Inject JavaScript to explore native apps on Windows, macOS, GNU/Linux, iOS, Android, and QNX (Frida can be used with JavaScript, and Python, along with other languages) GitHub - dweinstein/awesome-frida: Awesome Frida - A curated list of Frida resources http://www.frida.re/ (https://github.com/frida/frida) Android APK crackme: owasp-mstg/0x05c-Reverse-Engineering-and-Tampering.md at master · OWASP/owasp-mstg · GitHub Reverse-Engineering - YobiWiki Apktool - A tool for reverse engineering 3rd party, closed, binary Android apps. (ibotpeaches.github.io) GitHub - MobSF/Mobile-Security-Framework-MobSF: Mobile Security Framework (MobSF) is an automated, all-in-one mobile application (Android/iOS/Windows) pen-testing, malware analysis and security assessment framework capable of performing static and dynamic analysis. IntroAndroidSecurity download | SourceForge.net ←- link to my virtual machine and Androidx86 emulator Background: **consider this a primer for any class you might teach, a teaser, if you will** Why do we want to be able to reverse engineer APKs and IPKs? Android APKs (Android Packages) hold the source code to the application. If you can reverse this you will essentially have the keys to the kingdom.
Developers and companies (if they're proprietary) will add obfuscation - a technique to make the code unreadable to thwart reverse engineers from understanding their code. What are some of the structures and files contained in APKs that are useful for ppl analyzing binaries? Android applications have to have a MainActivity (written in Java). This activity is the entry point to the application. Android applications also have an AndroidManifest.xml file, which is the skeleton of the application. This describes the main activity, intents, service providers, permissions, and which Android operating system versions can run the application. When testing apps for security, how easy is it to emulate security and physical controls if you're not on a handset? Pretty easy. You can use an emulator. I must forewarn though - you will need A LOT of memory for it to work effectively. Are there ever any times you HAVE to use a handset? An app that tests something like Android's SafetyNet and won't run without it? Do they ever want perf testing on their apps? Was thinking about how you check events in logs, battery drain, using apps on older Android/iOS versions? When organizations or developers ask you to test an app, is there anything in particular in scope? Out of scope? How do progressive web apps differ from a more traditional app? Lab setup IntroToAndroidSecurity VM Android Emulator Tools to use Why use them? (free, full-featured) Setup and installation OS-specific tools? Tools used - Frida, Jadx-GUI (or command line), text editor. All of these items are free. No setup required if using my virtual machine :-) These apps are OS specific if you choose Linux or Windows. Callbacks Methodology Decompile the application - can use a tool titled Apktool (free) Look "under the hood" of the application - Jadx-GUI (Graphical User Interface) or Jadx-CLI (command line) Connect your emulator/device using Android Debug Bridge (adb) Get version of Frida on device Look online to find correct version of Frida **this is important** Start to play around with the tool and see if you receive error messages/prompts. Can then go back to code that was reverse engineered and see where it's located. Best practices Leave no stones unturned! Meaning you might see something that seems too rudimentary to work - and yet it does. Cert pinning. Typical issues seen: hard-coded passwords, data that is not being encrypted at rest or in transit. Check out our Store on Teepub! https://brakesec.com/store Join us on our #Slack Channel! Send a request to @brakesec on Twitter or email bds.podcast@gmail.com #AmazonMusic: https://brakesec.com/amazonmusic #Spotify: https://brakesec.com/spotifyBDS #Pandora: https://brakesec.com/pandora #RSS: https://brakesec.com/BrakesecRSS #Youtube Channel: http://www.youtube.com/c/BDSPodcast #iTunes Store Link: https://brakesec.com/BDSiTunes #Google Play Store: https://brakesec.com/BDS-GooglePlay Our main site: https://brakesec.com/bdswebsite #iHeartRadio App: https://brakesec.com/iHeartBrakesec #SoundCloud: https://brakesec.com/SoundcloudBrakesec Comments, Questions, Feedback: bds.podcast@gmail.com Support Brakeing Down Security Podcast by using our #Paypal: https://brakesec.com/PaypalBDS OR our #Patreon https://brakesec.com/BDSPatreon #Twitter: @brakesec @boettcherpwned @bryanbrake @infosystir #Player.FM : https://brakesec.com/BDS-PlayerFM #Stitcher Network: https://brakesec.com/BrakeSecStitcher #TuneIn Radio App: https://brakesec.com/TuneInBrakesec
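As a rough illustration of the instrumentation step in the methodology above, here is a minimal Frida sketch in Python. The package, class, and method names are hypothetical placeholders, and it assumes the frida package is installed on the host with a matching frida-server running on the device (the version-matching caveat noted above):

```python
import frida  # third-party: pip install frida frida-tools

# Attach to the running app over USB; the package name is a placeholder.
device = frida.get_usb_device()
session = device.attach("com.example.targetapp")

# JavaScript injected into the process. It hooks a hypothetical method
# so we can watch its arguments at runtime, e.g. a cert-pinning check.
script = session.create_script("""
Java.perform(function () {
    var Activity = Java.use("com.example.targetapp.MainActivity");
    Activity.checkPin.implementation = function (arg) {
        console.log("checkPin called with: " + arg);
        return this.checkPin(arg);  // fall through to the original
    };
});
""")
script.on("message", lambda message, data: print(message))
script.load()
input("Hooks installed; press Enter to detach...")
```

Class and method names to hook come from the decompilation step (Apktool plus Jadx), which is why the methodology runs decompile first and instrument second.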
@thefluffy007 A Bay Area Native (Berkeley) I always tell people my computer journey started at 14, but it really started at 5th grade (have a good story to tell about this) Was a bad student in my ninth grade year - almost kicked out of high school due to cutting. Had a 1.7 GPA. After my summer internship turned it around to a 4.0. Once I graduated from high school, I knew I wanted to continue on the path of computers. Majored in Computer Science Graduated with Bachelors and Masters in Computer Science. Graduate Certificate in Information Security and Privacy. Minor in Math. Interested in security from a Yahoo! Group on Cryptography. Liked how you can turn text into gibberish and back again. Became interested in penetration testing after moving to Charlotte, and moonlighted as a QA while a full-stack developer. Co-workers did not want me to test their code because I would always find bugs. Moved into penetration testing space. Always had an interest in mobile, but never did mobile development and decided it wasn’t for me Became interested in bug bounties and noticed that mobile payouts were higher. At this time also completed SANS 575 - Mobile Device Security and Ethical Hacking. Realized the barrier to entry was VERY (almost non-existent) low in Android as it’s open source. Started to learn/expand mobile hacking on my own time The threat exposure is VERY high with mobile hacking. As you have a web app component, network component, and phone component. I always reference a slide from Secure Works. Link to YouTube Channel → thefluffy007 - YouTube thefluffy007 – A security researchers thoughts on all things security – web, mobile, and cloud The Mobile App Security Company | NowSecure owasp-mstg/Crackmes at master · OWASP/owasp-mstg · GitHub Rana Android Malware (reversinglabs.com) These 21 Android Apps Contain Malware | PCMag Android Tamer -Android Tamer The Diary of an (Inexperienced) Bug Hunter - Intro to Android Hacking | Bugcrowd Android Debug Bridge (adb) | Android Developers Goal: discussing best practices and methods to reverse engineer Android applications Introduction to Java (w3schools.com) JavaScript Introduction (w3schools.com) Introduction to Python (w3schools.com) Frida • A world-class dynamic instrumentation framework | Inject JavaScript to explore native apps on Windows, macOS, GNU/Linux, iOS, Android, and QNX (Frida can be used with JavaScript, and Python, along with other languages) GitHub - dweinstein/awesome-frida: Awesome Frida - A curated list of Frida resources http://www.frida.re/ (https://github.com/frida/frida) Android APK crackme: owasp-mstg/0x05c-Reverse-Engineering-and-Tampering.md at master · OWASP/owasp-mstg · GitHub Reverse-Engineering - YobiWiki Apktool - A tool for reverse engineering 3rd party, closed, binary Android apps. (ibotpeaches.github.io) GitHub - MobSF/Mobile-Security-Framework-MobSF: Mobile Security Framework (MobSF) is an automated, all-in-one mobile application (Android/iOS/Windows) pen-testing, malware analysis and security assessment framework capable of performing static and dynamic analysis. IntroAndroidSecurity download | SourceForge.net ←- link to my virtual machine and Androidx86 emulator Background: **consider this a primer for any class you might teach, a teaser, if you will** Why do we want to be able to reverse engineer APKs and IPKs? Android APKS (Android Packages) holds the source code to the application. If you can reverse this you will essentially have the keys to the kingdom. 
Over the last 25 years, Scott Collison has held CEO and General Management positions at some of the world’s leading technology companies. Most recently he was CEO at Anaconda, a market-leading data science and AI company, where he rapidly grew product revenue and helped build a great company culture that was recognized as a Top Ten Employer in Austin by local media. He was also named one of the Top Ten CEOs among companies in the Data Science and AI space in 2019. He has led organizations at large software companies and startups, including Microsoft, VMware, Salesforce and SourceForge. He has been a founder at three startups. One of them, Signio, was acquired by VeriSign in 1999 in a transaction exceeding $1 billion. He is a Fulbright scholar and holds a Master of Arts and a Ph.D. from the University of California, Berkeley, and a Bachelor of Arts with special honors from the University of Texas at Austin. Scott continues to serve as a board member, investor, and advisor for a number of early-stage startups. In his free time, he enjoys spending time with his family and cycling the Texas Hill Country on weekends.
Today's episode is an overview of the different types of learn-to-code resources I've found out there. From apps to use when you're bored to full-time study programs, there are a lot of options! Before deciding which option to go for, it's important to think about your own personal learning style: are you self-motivated or do you need more structure? Is there something specific you want to build straight away? Do you already have some coding experience? Do you like learning on your own or in a group? Once you have an idea of how you learn best, it's time to look at the different tools. I'll go into these in much more depth in future episodes, but in the meantime here is the overview:
Free online resources (YouTube, MOOCs, dedicated learning sites, etc.)
Online games & coding challenges (example: Code Wars)
Paid online resources (bootcamp prep courses, Udemy, Treehouse, etc.)
Mobile apps (examples: Mimo, Lrn)
Bootcamps
Community programs (Meetup groups, Facebook, etc.)
University / school courses
Programming books
Figuring it out from scratch (via GitHub, SourceForge, etc.)
Bonus method! Kids' apps (example: Tynker, or this blog post)
Regardless of the method you choose, the one piece of advice I have is to code every day! There is a lot to learn and you want to make sure it sticks. This episode was originally published 23 March, 2017.
Origins of the youtube-dl project. Posted on 2020-11-07T13:52Z. Updated on 2020-11-10T16:28Z. As you may know, as of the time this text is being writte... https://rg3.name/202011071352.html
Link anchors from the post: SourceForge; the web space that my Internet provider gave me; Wireshark; Adobe Flash; Flashblock; was already downloading a copy of those videos to your hard drive
Coming up in this episode we cover 1. We try on 2 boots 2. We'll discuss distributions from afar 3. and we have an app focus that will help you find your way. Welcome to the Linux User Space Dual (multi) Booting Why? Who? How? Should you? Maybe WSL2? How about a VM? Risk vs. Reward Joe raves about rEFInd (see app focus) Windows first - Linux next Not a beginner thing What works for you? It is not the same for everyone. Housekeeping Email us (mailto:contact@linuxuserspace.show) Ubuntu Podcast (https://ubuntupodcast.org) Support us at Patreon (https://patreon.com/linuxuserspace) Join us on Telegram (https://linuxuserspace.show/telegram) Follow us on twitter (https://twitter.com/LinuxUserSpace) Check out Linux User Space (https://linuxuserspace.show) on the web App Focus rEFInd This episode's app: rEFInd (https://www.rodsbooks.com/refind/) The code is hosted on Sourceforge (https://sourceforge.net/projects/refind/) Next Time Do we have concerns about distributions (desktop environments) from certain places? e.g. UKUI, Deepin, etc. Do some research for yourself from trusted sources. Next episode we discuss October's distro of the month Deepin (https://www.deepin.org/en/) Join us in two weeks when we return to the Linux User Space
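For anyone wanting to try the rEFInd setup from this episode's app focus: on a typical Linux install with a mounted EFI System Partition, the project's install script does the heavy lifting. A minimal sketch (assumes the refind-install script from your distro's package or the rodsbooks download; Secure Boot setups need the extra steps in the docs):
$ sudo refind-install    # copies rEFInd to the ESP and adds a firmware boot entry
$ efibootmgr             # optional check: a rEFInd entry should now appear in the boot order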
Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2020.10.09.333898v1?rss=1 Authors: Yones, C. A., Macchiaroli, N., Kamenetzky, L., Stegmayer, G., Milone, D. Abstract: Extracting stem-loop sequences (hairpins) from genome-wide data is very important nowadays for some data mining tasks in bioinformatics. Genome preprocessing is very important because it has a strong influence on the later steps and the final results. For example, for novel miRNA prediction, all well-known hairpins must be properly located. Although there are some scripts that can be adapted and put together to achieve this task, they are outdated, none of them guarantees finding correspondence to well-known structures in the genome under analysis, and they do not take advantage of the latest advances in secondary structure prediction. We present here HextractoR, an R package for automatic extraction of hairpins from genome-wide data. HextractoR makes an exhaustive and smart analysis of the genome in order to obtain a very good set of short sequences for further processing. Moreover, genomes can be processed in parallel and with low memory requirements. Results obtained showed that HextractoR effectively outperformed other methods. HextractoR is freely available at CRAN and Sourceforge. Copyright belongs to the original authors. Visit the link for more info.
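Since the abstract says HextractoR is on CRAN, installation should be the usual one-liner from a shell (no function names beyond the package name are assumed here; see the package manual for the extraction functions):
$ Rscript -e 'install.packages("HextractoR", repos="https://cloud.r-project.org")'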
Fighting the Coronavirus with FreeBSD, Wireguard VPN Howto in OPNsense, NomadBSD 1.3.1 available, fresh GhostBSD 20.02, New FuryBSD XFCE and KDE images, pf-badhost 0.3 released, and more.
Headlines
Fighting the Coronavirus with FreeBSD (https://www.leidinger.net/blog/2020/03/19/fighting-the-coronavirus-with-freebsd-foldinghome/)
Here is a quick HOWTO for those who want to provide some FreeBSD-based compute resources to help find vaccines. UPDATE 2020-03-22: 0mp@ made a port out of this; it is in “biology/linux-foldingathome”. By default it will now pick up some SARS-CoV‑2 (COVID-19) related folding tasks. There are some more config options (e.g. how much of the system resources are used). Please refer to the official Folding@Home site for more information about that. Also be aware that there has been a big rise in compute resources donated to Folding@Home, so the pool of available work units may be empty from time to time, but they are working on adding more work units. Be patient.
How to configure the Wireguard VPN in OPNsense (https://homenetworkguy.com/how-to/configure-wireguard-opnsense/)
WireGuard is a VPN with a modern design that uses the latest cryptography for stronger security, is very lightweight, and is relatively easy to set up (mostly). I say ‘mostly’ because I found setting up WireGuard in OPNsense to be more difficult than I anticipated. The basic setup of the WireGuard VPN itself was as easy as the authors claim on their website, but I came across a few gotchas. The gotchas occur with functionality that is beyond the scope of the WireGuard protocol, so I cannot fault them for that. My greatest struggle was configuring WireGuard to function similarly to my OpenVPN server. I want the ability to connect remotely to my home network from my iPhone or iPad, tunnel all traffic through the VPN, have access to certain devices and services on my network, and have the VPN devices use my home's Internet connection. WireGuard behaves more like an SSH server than a typical VPN server. With WireGuard, devices which have shared their cryptographic keys with each other are able to connect via an encrypted tunnel (like an SSH server configured to use keys instead of passwords). The devices that are connecting to one another are referred to as “peer” devices. When the peer device is an OPNsense router with WireGuard installed, for instance, it can be configured to allow access to various resources on your network. It becomes a tunnel into your network similar to OpenVPN (with the appropriate firewall rules enabled). I will refer to the WireGuard installation on OPNsense as the server rather than a “peer” to make it clearer which device I am configuring, unless I am describing the user interface, since “peer” is the terminology used by WireGuard itself. The documentation I found on WireGuard in OPNsense is straightforward and relatively easy to understand, but I had to wrestle with it for a little while to gain a better understanding of how it should be configured. I believe this was partially due to differing end goals – I was trying to achieve something a little different than the authors of other wiki/blog/forum posts. Piecing together various sources of information, I finally ended up with a configuration that met the goals stated above.
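For reference, the peer relationship described above is a symmetric key exchange; what the OPNsense GUI stores corresponds roughly to this standard wg-quick style setup (interface name, addresses, and port are examples, not the article's exact values):
$ wg genkey | tee server.key | wg pubkey > server.pub    # run on each peer; exchange only the .pub files
# wg0.conf sketch on the OPNsense/"server" side; the phone's config mirrors it
[Interface]
PrivateKey = (contents of server.key)
Address = 10.10.10.1/24
ListenPort = 51820
[Peer]
PublicKey = (contents of phone.pub)
AllowedIPs = 10.10.10.2/32
The "tunnel all traffic" behavior the author wanted is layered on top of this by setting AllowedIPs = 0.0.0.0/0 on the phone's side, plus the appropriate OPNsense firewall rules.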
News Roundup
NomadBSD 1.3.1 (https://nomadbsd.org/index.html#1.3.1)
NomadBSD 1.3.1 has recently been made available. NomadBSD is a lightweight and portable FreeBSD distribution, designed to run live from a USB flash drive, allowing you to plug it in, test, and play on different hardware. They have also started a forum as of yesterday, where you can ask questions and mingle with the NomadBSD community. Notable changes in 1.3.1: the base system has been upgraded to FreeBSD 12.1-p2, automatic network interface setup has been improved, the image size has increased to over 4GB, and Thunderbird, Zeroconf, and some more items listed below have been added.
GhostBSD 20.02 (https://ghostbsd.org/20.02_release_announcement)
Eric Turgeon, main developer of GhostBSD, has announced version 20.02 of the FreeBSD-based operating system. Notable changes include ZFS partitioning in the installer's custom partition editor, allowing you to install alongside Windows, Linux, or macOS. Other changes are a forced upgrade of all packages on system upgrade, an improved Update Station, and powerd enabled by default for better laptop battery life.
New FuryBSD XFCE and KDE images (https://www.furybsd.org/new-furybsd-12-1-based-images-are-available-for-xfce-and-kde/)
This new release is now based on FreeBSD 12.1 with the latest FreeBSD quarterly packages. This brings XFCE up to 4.14, and KDE up to 5.17. In addition to updates, this new ISO mostly addresses community bugs, community enhancement requests, and community pull requests. Due to the overwhelming number of reports of problems with GitHub hosting, all new releases are now being pushed to SourceForge only for the time being. Previous releases will still be kept for archive purposes.
pf-badhost 0.3 Released (https://www.geoghegan.ca/pfbadhost.html)
pf-badhost is a simple, easy-to-use badhost blocker that uses the power of the pf firewall to block many of the internet's biggest irritants. Annoyances such as SSH and SMTP bruteforcers are largely eliminated. Shodan scans and bots looking for webservers to abuse are stopped dead in their tracks. When used to filter outbound traffic, pf-badhost blocks many seedy, spooky, malware-containing and/or compromised webhosts. (A pf.conf sketch of the underlying idiom follows this entry.)
Beastie Bits
DragonFly i915 drm update (https://www.dragonflydigest.com/2020/03/23/24324.html)
CShell is punk rock (http://blog.snailtext.com/posts/cshell-is-punk-rock.html)
The most surprising Unix programs (https://minnie.tuhs.org/pipermail/tuhs/2020-March/020664.html)
Feedback/Questions
Master One - Torn between OpenBSD and FreeBSD (http://dpaste.com/102HKF5#wrap)
Brad - Follow up to Linus ZFS story (http://dpaste.com/1VXQA2Y#wrap)
Filipe Carvalho - Call for Portuguese BSD User Groups (http://dpaste.com/2H7S8YP)
Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv (mailto:feedback@bsdnow.tv)
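As a pf.conf sketch of the table-plus-block idiom pf-badhost builds on (the table name and list path here are placeholders - use whatever the pf-badhost docs specify; the address list itself is generated and refreshed by the pf-badhost script):
table <pfbadhost> persist file "/etc/pf-badhost.txt"   # list maintained by the pf-badhost script
block in quick from <pfbadhost>                        # drop inbound bruteforcers, scanners, etc.
block out quick to <pfbadhost>                         # the outbound filtering mode mentioned above
Reloading the list after a refresh is then just:
$ pfctl -t pfbadhost -T replace -f /etc/pf-badhost.txt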
An airhacks.fm conversation with Bela Ban belaban.blogspot.com about: C64 wasn't real, Atari was the way to go, Atari ST vs. Amiga wars, Pascal, Modula-2 and Modula 3, Atari had a nice IDE with 1MB RAM, War Games movie, contact list application as "hello, world", fixing Epson printer hexcodes, chess and tennis over programming, learning C was a step down from Modula, system programming and the fascination with immediate feedback, writing CORBA to CMIP bridges in GDMO, C++ templates are a language of their own, "C++ is crap", Java at the first World Wide Web conference in 1995 in ...Darmstadt, starting with oak, applets and NCSA Mosaic, Netscape server, extracting data from mainframes with Java over JNI, Cornell University research with Sun's Java 1.0, working with Ken Birman, Robbert van Renesse, Werner Vogels, Ensemble in Ocaml, replacing Ocaml with Java - the "Java Groups", Jim Waldo was leading the JINI project, Sun Microsystems and Cornell worked together to make Java Intelligent Network Infrastructure (JINI) reliable using Java Groups, leasing in JINI was revolutionary, the JINI message was changed several times, there was no elevator pitch for JINI, Sun tried to keep the JINI / Java Groups cooperation secret, A Note on Distributed Computing by Jim Waldo, the Eight Fallacies of Distributed Computing, JGroups on Sourceforge in 2000 (and still available), revival of JGroups at Fujitsu's Network Management System, the Sacha Labourey and Marc Fleury contact, writing JBoss Cache on unpaid vacation in 6 weeks, the Blue and Red Papers from Marc Fleury, the EJB Open Source System, Marc Fleury and paratroopers, JBoss Cache started as a tree and became a distributed map, meeting Manik Surtani in a Taxi, JBoss Cache became Infinispan, JGroups is the communication layer of Infinispan, interest in the CP side of CAP resulted in RAFT, JGroups RAFT is used in production, there are many Paxos implementations; RAFT is a simplification of Paxos, RAFT for kids in JBoss Distributed Singletons, useless but consistent systems, vector clocks are an inconvenient reconciliation mechanism, JGroups is using RocksDB and MapDB, JGroups makes UDP and other protocols like RDMA reliable, JGroups is particularly efficient with many nodes, JGroups and the Sun Cluster Lab in Switzerland, running JGroups on 2000+ nodes at Gcloud, Project Loom and Fibers, mini sabbaticals for hype chasing, back to easy request-response with Java's Project Loom and Fibers, injecting JChannel in Quarkus, JGroups runs on Quarkus in native mode, KISS and JGroups - No Dependencies in JGroups, Bela's blog: belaban.blogspot.com
The Epson HX-20, Part 2, With Earl Evans Hello everyone, and welcome to episode 95 of the Floppy Days Podcast for November, 2019, where once again this month (in Part 2) we will continue talking about one of the world’s first portable computers: The Epson HX-20. I’m extremely happy to again have my good friend, and vintage computer podcast legend, Mr. Earl Evans, as my co-host for this episode. It turned out we had so much material to cover that I ended up breaking this topic into 2 parts. Last month was part 1, where Earl and I covered HX-20 history, tech specs, and peripherals. This month will be part 2, in which Earl and I will continue coverage by discussing how to use the machine, emulators, software, ads and appearances, modern upgrades, Web sites and more. In addition, I will include an interview with a gentleman who has done recent work around the HX-20 in the area of emulation, Mr. Pontus Rodling. First, however, I will spend a few minutes talking about my recent acquisitions in the vintage computing space and what I’ve been up to, then I’ll cover upcoming shows. Links Mentioned in the Show: Commercial https://www.youtube.com/watch?v=pnX7d0Ty9A4 New Acquisitions book "The BBC Micro an expert guide" by Mike James - https://www.amazon.com/B-C-Micro-Expert-Guide/dp/0003831175 SDrive Max from Vintage Computer Center (810) for Atari 8-bit - https://www.vintagecomputercenter.com/product/atari-810-sdrive-max Incognito for Atari 800 - https://lotharek.pl/productdetail.php?id=275 CPC 464 RGB-to-SCART cable - https://www.ebay.co.uk/itm/Amstrad-CPC-464-6128-High-Quality-POWERED-RGB-Scart-Cable-TV-Video-Lead/261898189777 - Retro Computer Shack CPC M4 upgrade - http://www.cpcwiki.eu/index.php/M4_Board Upcoming Shows Dec 7, 2019, World of Commodore 2019, hosted by the Toronto PET Users Group (TPUG) at the Admiral Inn Mississauga - https://www.tpug.ca/…/09/announcement-world-of-commodore-2…/ March 21-22, 2020, Vintage Computer Festival Pacific Northwest, Living Computers:Museum+Labs in Seattle,Washington - http://vcfed.org/…/vintage-computer-festival-pacific-north…/ April 18-19, 2020, CoCoFest, Elk Grove Village, IL - http://www.glensideccc.com/cocofest/ April 24-26, 2020, Vintage Computer Festival East, InfoAge Science Center, Wall, NJ - http://vcfed.org/…/festivals/vintage-computer-festival-east/ October 30 - November 1, 2020, Tandy Assembly, Springfield, OH - http://www.tandyassembly.com Ads and Appearances http://starringthecomputer.com/computer.html?c=171 Epson announced the HX-20 to the world in this famous double page ad spread from 1982. Its compact size is obviously the big selling point here - http://www.ganjatron.net/retrocomputing/epson-hx20/creativecomp0183-2a.jpg another four page ad from Epson UK. Epson apparently wanted to make it clear that only they had the foresight to devise such a compact computing platform. As depicted in the ad, the HX-20 is equally at home in the office, on the plane, in a power station and at home. - http://www.ganjatron.net/retrocomputing/epson-hx20/practicalcomputing128202a.jpg Modern Upgrades MH-20 by Martin Hepperle - https://www.mh-aerotools.de/hp/hx-20/MH-20-Display-Controller.zip Flashx20 by Norbert Kehrer - http://members.aon.at/nkehrer/flashx20.html Emulators "eHC-20"-EPSON HC-20 / HX-20 emulator for Win32 (Toshiya Takeda) - http://takeda-toshiya.my.coocan.jp/ M.A.M.E. 
- Multiple Arcade Machine Emulator / MAME 0.184 ROMs - https://edgeemu.net/details-25542.htm HXEmu by Pontus Rodling - https://frigolit.net/projects/hxemu/ Community Facebook https://www.facebook.com/groups/405903130000150/ Forums AtariAge - https://atariage.com/forums/search/?q=epson%20hx-20 https://stardot.org.uk/forums/index.php Web Sites An HX-20 Enthusiast Site - Julian Ward - https://web.archive.org/web/20161023223930/http://classway.com/hx20/index.html - contains FAQ, memory map, description of utilities (on WayBackMachine) Epson HX-20 Programs - Courtney McFarren - http://www.geocities.ws/abcmcfarren/hx20/hx20.htm - software downloads (BASIC listings), memory map, memory dump, information on interfacing with the HX-20 Review in Byte Magazine Volume 8, Number 9 - https://archive.org/stream/byte-magazine-1983-09/1983_09_BYTE_08-09_Portable_Computers_in_Depth#page/n201/mode/2up - compares HX-20 and TI CC-40 GanjaTron’s site - http://www.ganjatron.net/retrocomputing/epson-hx20/index.html - nice description of the HX-20 Epson HX-20 Review in Creative Computing by David Ahl - https://www.atarimagazines.com/creative/v9n3/101_Epson_HX20_computer.php Epson HX-20 Tips and Tricks by Martin Hepperle - https://www.mh-aerotools.de/hp/hx-20/Epson%20HX-20%20-%20Tips-Tricks.pdf TV ad for the HX-20 (1982) - https://www.youtube.com/watch?v=pnX7d0Ty9A4 Wickensonline - information from sites now only on the Wayback Machine (archive.org) - https://web.archive.org/web/20160323052226/http://www.wickensonline.co.uk/hx-20/index.html Epson HX-20 Operations Manual (from the Epson support site) - https://files.support.epson.com/pdf/hx20__/hx20__u1.pdf HX-20 page at Epson - https://global.epson.com/company/corporate_history/milestone_products/13_hx20.html PDF documentation for the Epson HX-20 (Tech Ref, BASIC Ref, etc.) - http://electrickery.xs4all.nl/comp/hx20/doc/index.html HXTAPE software at SourceForge - http://hxtape.sourceforge.net/ - HXTape is a suite of small programs designed to read and write Epson HX-20 tapes using a sound card under Linux. Additionally, it can be used as a simple means of direct data transfer between a PC and an HX-20, by connecting the two using an ordinary audio cable. The Epson HX-20: As seen in Tezza's classic computer collection - https://www.youtube.com/watch?v=q-Esopw_KK8&app=desktop The World's First Laptop - Epson HX-20 / HC-20 by RetroManCave - https://www.youtube.com/watch?v=o-F_hL1bZsw “Japanese portable has real staying power” From: PRACTICAL COMPUTING October 1982 page 40 - http://electrickery.xs4all.nl/comp/hx20/pc198210.html Yet another computer museum - The Epson HX-20 - http://electrickery.xs4all.nl/comp/hx20/ References Wikipedia - https://en.wikipedia.org/wiki/Epson_HX-20
The Thanksgiving holiday is upon us in the United States. That allows us a week that is a little lighter than our normal workload and time to reflect. In this time of reflection, one of the things to consider is the effective tools we have for doing our jobs.
Avoid Software Piracy
The first thing to consider is the people and companies that provide all of those effective tools. Therefore, we should be respectful of their efforts and ensure we are compliant with licenses. I have heard all of the excuses for software piracy, and none of them hold water. In particular, we need to set an excellent example for those who do not make a living creating software. If you need a little extra support for this concept, you can check out the link below.
A Summary Of Software Piracy
Along with being compliant with the software you use, go ahead and clean up what you do not use. This includes deleting unused applications and canceling subscriptions that are no longer needed. You want to be in a comfortable place as far as licenses go. Thus, you should pay for what you use and not for the things that you leave on a shelf.
Lower Cost, Effective Tools
The rise in popularity of software-as-a-service has made many applications affordable. There are free demos, trial periods, and even some free-tier (or entry-level) solutions that small organizations may find very useful. Take advantage of these options and check back regularly via searches or popular sites (like Sourceforge) that apply to your needs.
Jon Sobel is Co-founder and CEO of Sight Machine, a provider of software solutions that help manufacturing firms gain visibility and insights into their operations to drive efficiencies, cost savings and better outcomes. Our conversation covered a range of topics including his experience working with open source at SourceForge, working around the energy industry in the early days of Tesla, and the experiences of visiting dozens of manufacturing plants at the genesis of Sight Machine. He discusses the unique challenges that manufacturing data presents, along with some of the best practices and considerations for success. He relates some of the organizational obstacles that impede adoption of analytic technologies in manufacturing and discusses the role that AI plays in next generation vision.
Bradley and Karen discuss and critique the new initiative by the Linux Foundation called CommunityBridge. The podcast includes various analyses that expand upon their blog post about Linux Foundation's CommunityBridge. Show Notes:
Segment 0 (00:36)
Conservancy helped Free Software Foundation and GNOME Foundation begin fiscal sponsorship work. (07:50)
Conservancy has always been very coordinated with Software in the Public Interest, which is a FOSS fiscal sponsor that predates Conservancy. (08:26)
Conservancy helped NumFocus get started as a fiscal sponsor by providing advice. (08:53)
The above are all 501(c)(3) charities, but there are also 501(c)(6) fiscal sponsors, such as Linux Foundation and Eclipse Foundation. (10:00)
Bradley mentioned that projects that are forks can end up in different fiscal sponsors, such as Hudson being in Eclipse Foundation, and Jenkins being associated with a Linux Foundation sub-org. (10:30)
Bradley mentioned that any project — be it SourceForge, GitHub, or Community Bridge — that attempts to convince FOSS developers to use proprietary software for their projects is immediately suspect (12:00)
Open Collective, a for-profit company seeking to do fiscal sponsorship (but attempting to release their code for it), is likely under the worst “competitive” threat from this initiative. (19:50)
Segment 1 (21:23)
Projects that use CommunityBridge are required to act in the common business interest of the Linux Foundation members. (27:30)
Board of Directors seats at the Linux Foundation are for sale, according to their by-laws. (28:50)
Bradley advises that you should not put anything copylefted into CommunityBridge — given Linux Foundation's position on copyleft and citing the ArduPilot/DroneCode example. (29:50)
CommunityBridge appears to only allow governance based on the “benevolent dictator for life” model (31:40), at least with regard to who controls the money (34:30)
Bradley mentioned the LWN article about Community Bridge. (33:22)
Segment 2 (36:54)
Karen mentioned that CommunityBridge also purports to address diversity and security issues for FOSS projects. (37:00)
Bradley mentioned the code hosted on k.sfconservancy.org and also the Reimbursenator project that PSU students wrote. (42:00)
Segment 3 (42:44)
Bradley and Karen discuss (or, possibly, don't discuss) what's coming up on the next episode. Fact of the matter is that this announcement wasn't written yet when we recorded this episode and we weren't sure if 0x65 would be released before or after that announcement was released. We'll be discussing that topic on 0x66.
Send feedback and comments on the cast to . You can keep in touch with Free as in Freedom on our IRC channel, #faif on irc.freenode.net, and by following Conservancy on identi.ca and Twitter. Free as in Freedom is produced by Dan Lynch of danlynch.org. Theme music written and performed by Mike Tarantino with Charlie Paxson on drums. The content of this audcast, and the accompanying show notes and music are licensed under the Creative Commons Attribution-Share-Alike 4.0 license (CC BY-SA 4.0).
The essential free software applications
Together we will look at the most popular free applications that you should be using. These days, you can very easily buy a brand-new computer and install all the software you need for free, using applications released under free software licenses. Free software is software whose use, study, modification, and duplication for redistribution are permitted, technically and legally, in order to guarantee certain resulting freedoms, including the user's control of the program and the possibility of sharing between individuals.
Definition of free software
Software is considered free, in the sense of the Free Software Foundation, if it gives its user four freedoms:
The freedom to run the program, for any purpose;
The freedom to study how the program works and to adapt it to your needs;
The freedom to redistribute copies of the program (which implies the possibility of giving copies away as well as selling them);
The freedom to improve the program and to distribute those improvements to the public, so that the whole community benefits.
You can get a free image editor, a free audio editor, a free word processor, a media player, a file archiver, a PDF creator... the list goes on and on. While some of these free applications do not offer the same level of sophisticated features as their commercial rivals, others far exceed the capabilities of everything else on the market. We take a close look at the cream of the crop of essential free applications that you really should be using, if you are not already. The vast majority of them are cross-platform and absolutely 100% free. You will surely find several that perfectly fit your needs.
The cream of the crop of essential free applications
1- WordPress: WordPress is the most popular blogging platform in the world, used by 126 million websites. In 2017, 50,000 sites were launched daily. As simple or as complex as you want it to be, WordPress is supported by a wide range of plug-ins that can be used to turn a standard blog into anything you could desire.
2- Magento: Magento, used by thousands of merchants including big names like Samsung, Nespresso, and several others, is the fastest-growing e-commerce platform in the world. Magento Community Edition is offered for free under a free software license. There is an Enterprise edition, which you have to pay for, that offers features such as out-of-the-box gift cards and other interesting options.
3- Mozilla Thunderbird: With its fast search, built-in RSS feeds, enhanced security, and superb add-ons, Thunderbird must be the best free email application available. If you are willing to spend time tailoring your email environment with add-ons, you will love it, but it is probably not ideal for novices.
4- FileZilla: FileZilla is an extremely successful cross-platform FTP client. It is also available as a server, for Windows only. It supports not only FTP, but also FTP over TLS (FTPS) and SFTP.
Created in January 2001 as an educational project, FileZilla is the 5th most-downloaded project of all time on SourceForge.net.
5- Audacity: Music software such as Cubase and Logic Pro can be extremely expensive, which is why more and more people are turning to Audacity, a free cross-platform audio editor. Users can record and edit live audio; cut, copy,
In their second episode, Serge and Chris return from Thanksgiving thinking about malware in Free Software, specifically the NPM bitcoin attack found in event-stream.
Show links:
Software Freedom Conservancy (conservancy)
Backdoor in event-stream library dependency (hacker news)
The event-stream bug report (github)
Statement about the event-stream vulnerability (bitpay)
npm's statement on the event-stream incident
Bug Report on ESLint (github)
Malware in Linux kernel (lwn)
Don't Download Software from Sourceforge (howtogeek.com)
Let's Package jQuery: A Javascript Packaging Dystopian Novella (dustycloud.org)
Reflections on Trusting Trust - aka the "Thompson attack" mentioned in the episode, a way of embedding malicious code in a compiler that embeds it into the next compiled version of the compiler
Zooko's Tweet (twitter)
Linus's Law (wikipedia)
Ka-Ping Yee's dissertation (zesty.ca)
Securing EcmaScript, presentation to Node Security (youtube)
Mandatory Access Control (wikipedia)
SE Linux Project (github)
AppArmor (ubuntu)
Docker For Development (medium)
The Qubes Operating System (qubes)
Android Application Sandboxing
Chris's talk at Northeastern on December 5th - Chris gave the wrong date in the episode, it's on Wednesday... oops!
Chris mentioned that they changed their org-mode configuration, inspired by the chat from our first episode, to incorporate a priorities-based workflow. Maybe you want to look at Chris's updated org-mode configuration! It looks like so:
;; (c) 2018 by Christopher Lemmer Webber
;; Under GPLv3 or later as published by the FSF
;; We want the lowest and "default" priority to be D. That way
;; when we calculate the agenda, any task that isn't specifically
;; marked with a priority or SCHEDULED/DEADLINE won't show up.
(setq org-default-priority ?D)
(setq org-lowest-priority ?D)
;; Custom agenda dispatch commands which allow you to look at
;; priorities while still being able to see when deadlines, appointments
;; are coming up. Very often you'll just be looking at the A or B tasks,
;; and when you clear off enough of those or have some time you might
;; look also at the C tasks
;;
;; Hit "C-c a" then one of the following key sequences...
;; - a for the A priority items, plus the agenda below it
;; - b for A-B priority items, plus the agenda below it
;; - c for A-C priority items, plus the agenda below it
;; - A for just the agenda
;; - t for just the A-C priority TODOs
(setq org-agenda-custom-commands
      '(("a" "Agenda plus A items"
         ((tags-todo "+PRIORITY=\"A\""
                     ((org-agenda-sorting-strategy '(priority-down))))
          (agenda "")))
        ("b" "Agenda plus A+B items"
         ((tags-todo "+PRIORITY=\"A\"|+PRIORITY=\"B\""
                     ((org-agenda-sorting-strategy '(priority-down))))
          (agenda "")))
        ("c" "Agenda plus A+B+C items"
         ((tags-todo "+PRIORITY=\"A\"|+PRIORITY=\"B\"|+PRIORITY=\"C\""
                     ((org-agenda-sorting-strategy '(priority-down))))
          (agenda "")))
        ("A" "Agenda" ((agenda "")))
        ("t" "Just TODO items"
         ((tags-todo "+PRIORITY=\"A\"|+PRIORITY=\"B\"|+PRIORITY=\"C\""
                     ((org-agenda-sorting-strategy '(priority-down))))))))
Financial markets, as said in my last post, took the heaviest losses in United States history. However, somewhere in the world was this man Satoshi, formalizing what was going to take the world by storm. A day after publishing that white paper, Satoshi sent an email to "The Cryptography Mailing List". He later wrote: "you will not find a solution to political problems in cryptography....but we can win a major battle in the arms race and gain a new territory of freedom for several years. Governments are good at cutting off heads of centrally controlled networks like Napster, but pure P2P networks like Gnutella and Tor seem to be holding their own." - Satoshi
On the 9th of November in 2008, the Bitcoin project was registered on SourceForge.net. Wall Street continued crumbling, Satoshi laid low, and then nine days after that, the first ever transaction using bitcoin took place.
Podcast - https://www.spreaker.com/show/arsenio... Podcast on iTunes - https://itunes.apple.com/us/podcast/t... Podcast on Stitcher - https://www.stitcher.com/podcast/arse... Podcast on SoundCloud - https://soundcloud.com/arsenio-buck/g... YouTube - https://www.youtube.com/channel/UCIzp... Facebook - The Arsenio Buck Show - Home | Facebook Twitter - https://twitter.com/arseniobuckshow?l... Instagram - https://www.instagram.com/thearseniob... Website - https://thearseniobuckshow.com/ Q & A - ArsenioBuck@icloud.com LinkedIn - https://www.linkedin.com/in/arsenio-b... Instagram - https://www.instagram.com/thearseniobuckshow/?hl=en
FreeBSD internship learnings, exciting developments coming to FreeBSD, running FreeNAS on DigitalOcean, Network Manager control for OpenBSD, OpenZFS User Conference videos are here, and batch editing files with ed.
Headlines
What I learned during my FreeBSD internship
Hi, my name is Mitchell Horne. I am a computer engineering student at the University of Waterloo, currently in my third year of studies, and fortunate to have been one of the FreeBSD Foundation’s co-op students this past term (January to April). During this time I worked under Ed Maste, in the Foundation’s small Kitchener office, along with another co-op student, Arshan Khanifar. My term has now come to an end, and so I’d like to share a little bit about my experience as a newcomer to FreeBSD and open-source development. I’ll begin with some quick background — and a small admission of guilt. I have been an open-source user for a large part of my life. When I was a teenager I started playing around with Linux, which opened my eyes to the wider world of free software. Other than some small contributions to GNOME, my experience has been mostly as an end user; however, the value of these projects and the open-source philosophy was not lost on me, and is most of what motivated my interest in this position. Before beginning this term I had no personal experience with any of the BSDs, although I knew of their existence and was extremely excited to receive the position. I knew it would be a great opportunity for growth, but I must confess that my naivety about FreeBSD caused me to make the silent assumption that this would be a form of compromise — a stepping stone that would eventually allow me to work on open-source projects that are somehow “greater” or more “legitimate”. After four months spent immersed in this project I have learned how it operates, witnessed its community, and learned about its history. I am happy to admit that I was completely mistaken. Saying it now seems obvious, but FreeBSD is a project with its own distinct uses, goals, and identity. For many there may exist no greater opportunity than to work on FreeBSD full time, and with what I know now I would have a hard time coming up with a project that is more “legitimate”.
What I Liked
In all cases, the work I submitted this term was reviewed by no fewer than two people before being committed. The feedback and criticism I received was always both constructive and to the point, and it commented on everything from high-level ideas to small style issues. I appreciate having these thorough reviews in place, since I believe it ultimately encourages people to accept only their best work. It is indicative of the high quality that already exists within every aspect of this project, and this commitment to quality is something that should continue to be honored as a core value. As I’ve discovered in some of my previous work terms, it is all too easy to cut corners in the name of a deadline or changing priorities, but the fact that FreeBSD doesn’t need to make these types of compromises is a testament to the power of free software. It’s a small thing, but the quality and completeness of the FreeBSD documentation was hugely helpful throughout my term. Everything you might need to know about utilities, library functions, the kernel, and more can be found in a man page; and the handbook is a great resource as both an introduction to the operating system and a reference.
I only wish I had taken some time earlier in the term to explore the different documents more thoroughly, as they cover a wide range of interesting and useful topics. The effort people put into writing and maintaining FreeBSD’s documentation is easy to overlook, but its value cannot be overstated.
What I Learned
Although there was a lot I enjoyed, there were certainly many struggles I faced throughout the term, and lessons to be learned from them. I expect that some of the issues I faced may be specific to FreeBSD, while others may be common to open-source projects in general. I don’t have enough experience to speculate on which is which, so I will leave this to the reader. The first lesson can be summed up simply: you have to advocate for your own work. FreeBSD is made up in large part by volunteer efforts, and in many cases there is more work to go around than people available to do it. A consequence of this is that there will not be anybody there to check up on you. Even in my position, where I actually had a direct supervisor, Ed often had his plate full with so many other things that the responsibility to find someone to look at my work fell to me. Admittedly, a couple of smaller changes I worked on got left behind or stuck in review simply because there wasn’t a clear person/place to reach out to. I think this is both a barrier to entry for FreeBSD and a mental hurdle that I needed to get over. If there’s a change you want to see included or reviewed, then you may have to be the one to push for it, and there’s nothing wrong with that. Perhaps this process should be easier for newcomers or infrequent contributors (the disconnect between Bugzilla and Phabricator definitely leaves a lot to be desired), but we also have to be aware that this simply isn’t the reality right now. Getting your work looked at may require a little bit more self-motivation, but I’d argue that there are much worse problems a project like FreeBSD could have than this. I understand this a lot better now, but it is still something I struggle with. I’m not naturally the type of person who easily connects with others or asks for help, so I see this as an area for future growth rather than simply a struggle I encountered and overcame over the course of this work term. Certainly it is an important skill to understand the value of your own work, and equally important is the ability to communicate that value to others. I also learned the importance of starting small. My first week or two on the job mainly involved getting set up and comfortable with the workflow. After this initial stage, I began exploring the project and found myself overwhelmed by its scale. With so many possible areas to investigate, and so much work happening at once, I felt quite lost on where to begin. Many of the potential projects I found were too far beyond my experience level, and most small bugs were picked up and fixed quickly by more experienced contributors before I could even get to them. It’s easy to make the mistake of thinking that FreeBSD is made up solely of a few rock-star committers who do everything. This is how it appears at face value, as reading through commits, bug reports, and mailing lists yields a few of the same names over and over. The reality is that just as important are the hundreds of users and infrequent contributors who take the time to submit bug reports, patches, or feedback. Even though there are some people who would fall under the umbrella of a rock-star committer, they didn’t get there overnight.
Rather, they have built their skills and knowledge through many years of involvement in FreeBSD and similar projects. As a student coming into this project and having high expectations of myself, it was easy to set the bar too high by comparing myself against those big committers, and to feel that my work was insignificant, inadequate, and simply too infrequent. In reality, there is no reason I should have felt this way. In a way, this comparison is disrespectful to those who have reached this level, as it took them a long time to get there, and it’s a humbling reminder that any skill worth learning requires time, patience, and dedication. It is easy to focus on an end product and simply wish to be there, but in order to be truly successful one must start small, and find satisfaction in the struggle of learning something new. I take pride in the many small successes I’ve had throughout my term here, and appreciate the fact that my journey into FreeBSD and open-source software is only just beginning.
Closing Thoughts
I would like to close with some brief thank-yous. First, to everyone at the Foundation for being so helpful, and for allowing this position to exist in the first place. I am extremely grateful to have been given this unique opportunity to learn about and give back to the open-source world. I’d also like to thank my office mates. Ed: for being an excellent mentor, who offered an endless wealth of knowledge and willingness to share it. My classmate and fellow intern Arshan: for giving me a sense of camaraderie and the comforting reminder that at many moments he was as lost as I was. Finally, a quick thanks to everyone else I crossed paths with who offered reviews and advice. I appreciate your help and look forward to working with you all further. I am walking away from this co-op with a much greater appreciation for this project, and have made it a goal to remain involved in some capacity. I feel that I’ve gained a little bit of a wider perspective on my place in the software world, something I never really got from my previous co-ops. Whether it ends up being just a stepping stone, or the beginning of much larger involvement, I thoroughly enjoyed my time here.
Recent Developments in FreeBSD
Support for encrypted, compressed (gzip and zstd), and network crash dumps, enabled by default on most platforms
Intel Microcode Splitter
Intel Spec Store Bypass Disable control
Raspberry Pi 3B+ Ethernet Driver
IBRS for i386
Upcoming: a microcode updater for AMD CPUs, and the RACK TCP/IP stack from Netflix
Voting in the FreeBSD Core Election begins today
DigitalOcean: Digital Ocean Promo Link for BSD Now Listeners
Running FreeNAS on a DigitalOcean Droplet
Need to back up your FreeNAS offsite? Run a locked-down instance in the cloud, and replicate to it. The tutorial walks through the steps of converting a fresh FreeBSD-based droplet into a FreeNAS:
Create a droplet, and add a small secondary block-storage device
Boot the droplet, log in, and download FreeNAS
Disable swap, enable ‘foot shooting’ mode in GEOM
Use dd to write the FreeNAS installer to the boot disk (see the sketch after this list)
Reboot the droplet, and use the FreeNAS installer to install FreeNAS to the secondary block-storage device
Now, reimage the droplet with FreeBSD again, to replace the FreeNAS installer
Boot, and dd FreeNAS from the secondary block-storage device back to the boot disk
You can now destroy the secondary block device
Now you have a FreeNAS, and can take it from there.
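A hedged sketch of the two destructive dd steps in that list (vtbd0/vtbd1 are the device names a DigitalOcean droplet typically exposes for the boot disk and attached block storage - verify with geom disk list before writing anything, and the installer filename is a placeholder):
$ swapoff -a                                       # stop using swap on the boot disk
$ sysctl kern.geom.debugflags=0x10                 # the 'foot shooting' mode: allow writes to an in-use disk
$ dd if=FreeNAS-installer.iso of=/dev/vtbd0 bs=1m  # step one: installer over the boot disk
# ...install FreeNAS to the block device, reimage the droplet with FreeBSD, then:
$ dd if=/dev/vtbd1 of=/dev/vtbd0 bs=1m             # step two: copy the installed FreeNAS back to the boot disk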
Use the FreeNAS replication wizard to configure sending snapshots from your home NAS to your cloud NAS Note: You might consider creating a new block storage device to create a larger pool that you can more easily grow over time, rather than using the boot device in the droplet as your main pool. News Roundup Network Manager Control for OpenBSD (Updated) Generalities Just a reminder of the scope of this small tool: allow you to pre-define several cable or wifi connections let nmctl connect automatically to the first available one allow you to easily switch from one network connection to another create dynamic openbox menus Enhancements in this version This is my second development version: 0.2. I've made several changes in the code: a code style cleanup, to better match the Python recommendations; adapting the tool to allow connecting to an open wifi network with blanks in its name, which happens in some hotels; and implementing a retry loop as a workaround for the arp table issue. The source code is still on the git of Sourceforge.net. You can see the files here And you can download the latest version here Feedback after a few months I have been using this script on my OpenBSD laptop for about 5 months. In my case, I'm mainly using the openbox menus and the --restart option. The Openbox menus The openbox menus are working fine. As explained in my previous blog post, I just have to create 2 entries in my openbox menu.xml file, and all the rest comes automatically from nmctl itself thanks to the --list and --scan options. I've not changed this part of nmctl since it works as expected (for me :-) ). The --restart option Because I'm very lazy, and because OpenBSD is very simple to use, I've added the command "nmctl --restart" to the /etc/apm/resume script. Thanks to apmd, this script runs each time I open the lid of my laptop. In other words, each time I open my laptop, nmctl searches out the optimal network connection for me. But I had several issues in this scenario. Most of the problems were linked to the arp table issues. Indeed, in some circumstances, my proxy's IP address was associated with the cable interface instead of the wifi interface, or vice versa. As a consequence I was not able to connect to the proxy, and thus not able to connect to the internet, so the ping to google (the final test nmctl performs) was failing. Given that I do a full arp cleanup anyhow, it's not clear to me where this problem comes from. To solve this situation I've implemented a "retry" concept: before testing another possible network connection (as listed in my /etc/nmctl.conf file), the script tries the current connection's parameters 3 times. If you want to reduce or increase this figure, you can do so via the --retry parameter. Results of my experience with this small tool Wherever I'm located, my laptop now connects automatically to the wifi or cable connection previously identified for that location. Currently I have 3 places where I have wifi credentials and 2 office locations where I just have to plug in the network cable. Since the /etc/apm/resume script is triggered when I open the lid of the laptop, I just have to make sure that I plug in the RJ45 before opening the laptop. For the rest, I do not have to type any commands; OpenBSD does everything that is needed ;-). In hotels or restaurants, I can just connect to the open wifi thanks to the openbox menu created by "nmctl --scan". Next steps Documentation The tool is missing a lot of documentation. 
I appreciate OpenBSD for its great documentation, so I have to do the same. I plan to write a README and a man page in the first instance. But given my laziness, I will do it as soon as I see some interest in this tool from other people. Tests I now have to travel and see how the script reacts in different situations. Interested persons are welcome to share the outcomes of their tests with me; I'm curious how it works for others. OpenBSD 6.3 on EdgeRouter Lite simple upgrade method TL;DR The OpenBSD 6.3 octeon upgrade instructions may not factor in that your ERL is running from the very USB key they want wiped and loaded with the miniroot63.fs image. Place the bsd.rd for OpenBSD 6.3 on the sd0i slice used by U-Boot for the kernel, and then edit the boot command to run it. a tiny upgrade The OpenBSD documentation is comprehensive, but there might be rough corners around what are probably edge cases in their user base. People running EdgeRouter Lite hardware, for example, who are looking to upgrade from 6.2 to 6.3. The documentation, which gave us everything we needed last time, left me with some questions about how to upgrade. In INSTALL.octeon, the Upgrading section does mention: The best solution, whenever possible, is to backup your data and reinstall from scratch I had to check if that directive existed in the documentation for other architectures. I wondered if octeon users were getting singled out. We were not. Just simplicity and pragmatism. Reading on: To upgrade OpenBSD 6.3 from a previous version, start with the general instructions in the section "Installing OpenBSD". But that section requires us to boot off of TFTP or NFS. Which I don't want to do right now. Could also use a USB stick with the miniroot63.fs installed on it. But as the ERL only has a single USB port, we would have to remove the USB stick with the current install on it. Once we get to the Install or Upgrade prompt, there would be nothing to upgrade. Well, I guess I could use a USB hub. But the ERL's USB port is inside the case. With all the screws in. And the tools are neatly put away. And I'd have to pull the USB hub from behind a workstation. And it's two am. And I cleaned up the cabling in the lab this past weekend. Looks nice for once. So I don't want to futz around with all that. There must be an almost imperceptibly easier way of doing this than setting up a TFTP server or NFS share in five minutes… Right? iXsystems Boise Technology Show 2018 Recap OpenZFS User Conference Slides & Videos Thank you ZFS ZSTD Compression Pool Layout Considerations ZFS Releases Helping Developers Help You ZFS and MySQL on Linux Micron OSNEXUS ZFS at Six Feet Up Flexible Disk Use with OpenZFS Batch editing files with ed what's 'ed'? ed is this sort of terrifying text editor. A typical interaction with ed for me in the past has gone something like this: $ ed help ? h ? asdfasdfasdfsadf ? Basically if you do something wrong, ed will just print out a single, unhelpful, ?. So I'd basically dismissed ed as an old arcane Unix tool that had no practical use today. vi is a successor to ed, except with a visual interface instead of this ? surprise: Ed is actually sort of cool and fun So if Ed is a terrifying thing that only prints ? at you, why am I writing a blog post about it? WELL!!!! On April 1 this year, Michael W Lucas published a new short book called Ed Mastery. 
I like his writing, and even though it was sort of an April Fools' joke, it was ALSO a legitimate actual real book, and so I bought it and read it to see if his claims that Ed is actually interesting were true. And it was so cool!!!! I found out: how to get Ed to give you better error messages than just ?; that the name of the grep command comes from ed syntax (g/re/p); and the basics of how to navigate and edit files using ed. All of that was a cool Unix history lesson, but did not make me want to actually use Ed in real life. But!!! The other neat thing about Ed (that did make me want to use it!) is that any Ed session corresponds to a script that you can replay! So if I know Ed, then I can use Ed basically as a way to easily apply vim-macro-like programs to my files. (There's a small sketch of this after these show notes.) Beastie Bits FreeBSD Mastery: Jails -- Help make it happen Video: OpenZFS Basics presented by George Wilson and Matt Ahrens at Scale 16x back in March 2018 DragonFlyBSD's IPFW gets high-speed lockless in-kernel NAT A Love Letter to OpenBSD New talks, and the F-bomb Practical UNIX Manuals: mdoc BSD Meetup in Zurich: May 24th BSD Meetup in Warsaw: May 24th MeetBSD 2018 Tarsnap Feedback/Questions Seth - First time poudriere Builder Farhan - Why we didn't go FreeBSD architech - Encryption Feedback Dave - Handy Tip on setting up automated coredump handling for FreeBSD Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv
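As a small illustration of the replayable-session idea from the Ed Mastery segment above: since an ed session is just lines of text, you can feed the same session to many files from the shell. A minimal sketch, with a made-up typo fix as the payload:

```
# Replay one ed "session" over many files; -s suppresses ed's byte counts.
# g/teh/s//the/g substitutes on every matching line, w writes, q quits.
for f in *.txt; do
  ed -s "$f" <<'EOF'
g/teh/s//the/g
w
q
EOF
done
```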
We talked with hikalium about the low-level programming he has been doing ever since picking up the book 「30日でできる!OS自作入門」 ("Make Your Own Operating System in 30 Days") in the fifth grade. Guests: hikalium (@hikalium), Rui Ueyama (@rui314) https://turingcomplete.fm/8 The hashtag is #tcfm. TCFM is funded by listener donations; if you are happy to pay for this content, please sign up and support us on the creator-support site Patreon. Intro (0:00) A fifth grader picks up the OS-development primer (0:50) Assembly programming (7:40) Booting an OS from a floppy (9:45) Using virtual 8086 mode to call the BIOS from 32-bit mode (1) (13:25) The printed Intel software developer manuals (17:55) Using virtual 8086 mode to call the BIOS from 32-bit mode (2) (20:12) Debugging an OS by setting breakpoints with CPU features (24:56) Paging and segmentation (28:07) Discovering a bug in Virtual PC (31:30) Accidentally installing malware from SourceForge (38:04) What you find when you boot your own OS (41:52) The problem of being too far ahead at university (44:58) Using RISC-V processors is still painful (49:20) University microcomputer lab courses (55:10) Whether to go to graduate school abroad (1:00:38) An internship on Chrome (1:05:56) 30日でできる!OS自作入門 Cellular automata Conway's Game of Life hikalium's homemade OS VESA BIOS Extensions Intel Software Developer Manuals A site where printed copies of the Intel Software Developer Manuals can be ordered RISC-V SiFive The story of running a homemade shell on a homemade OS on a homemade CPU core in the University of Tokyo CPU lab course A C preprocessor for Goma Selection.addRange() The Naglfar browser, written in Rust
We read the FreeBSD Q3 status report, explore good and bad syscalls, list GOG Games for OpenBSD, and show you what devmatch can do. This episode was brought to you by Headlines FreeBSD Q3 Status Report 2017 (https://lists.freebsd.org/pipermail/freebsd-announce/2017-December/001818.html) FreeBSD Team Reports FreeBSD Release Engineering Team Ports Collection The FreeBSD Core Team The FreeBSD Foundation Projects FreeBSD CI Kernel Intel 10G iflib Driver Update Intel iWARP Support pNFS Server Plan B Architectures AMD Zen (family 17h) support Userland Programs Updates to GDB Ports FreeBSDDesktop OpenJFX 8 Puppet Documentation Absolute FreeBSD, 3rd Edition Manual Pages Third-Party Projects The nosh Project ####FreeBSD Foundation Q4 Update (https://www.freebsdfoundation.org/wp-content/uploads/2017/12/FreeBSD-Foundation-Q4-Update.pdf) *** ###11 syscalls that rock the world (https://www.cloudatomiclab.com/prosyscall/) 0. read > You cannot go wrong with a read. You can barely EFAULT it! On Linux amd64 it is syscall zero. If all its arguments are zero it returns zero. Cool! 1. pipe > The society for the preservation of historic calling conventions is very fond of pipe, as in many operating systems and architectures it preserves the fun feature of returning both of the file descriptors as return values. At least Linux MIPS does, and NetBSD does even on x86 and amd64. Multiple return values are making a comeback in languages like Lua and Go; C has always had a bit of a funny thing about them, but they have long been supported in many calling conventions, so let us use them in syscalls! Well, one syscall. 2. kqueue > When the world went all C10K on our ass, and scalable polling was a thing, Linux went epoll, the BSDs went kqueue and Solaris went /dev/poll. The nicest interface was kqueue, while epoll is some mix of edge and level triggered semantics and design errors so bugs are still being found. 3. unshare > Sounds like a selfish syscall, but this generous syscall is the basis of Linux namespaces, allowing a process to isolate its resources. Containers are built from unshares. 4. setns > If you liked unshare, its younger but cooler friend takes file descriptors for namespaces. Pass it down a unix socket to another process, or stash it for later, and do that namespace switching. All the best system calls take file descriptors. 5. execveat > Despite its somewhat confusing name (FreeBSD has the saner fexecve, but other BSDs do not have support last time I checked), this syscall finally lets you execute a program just given a file descriptor for the file. I say finally, as Linux only implemented this in 3.19, which means it is hard to rely on it (yeah, stop using those stupid old kernels folks). Before that Glibc had a terrible userspace implementation that is basically useless. Perfect for creating sandboxes, as you can sandbox a program into a filesystem with nothing at all in, or with a totally controlled tree, by opening the file to execute before chroot or changing the namespace. 6. pdfork > Too cool for Linux, you have to head out to FreeBSD for this one. Like fork, but you get a file descriptor for the process, not a pid. Then you can throw it in the kqueue or send it to another process. Once you have tried process descriptors you will never go back. 7. signalfd > You might detect a theme here, but if you have ever written traditional 1980s style signal handlers you know how much they suck. How about turning your signals into messages that you can read on, you guessed it, file descriptors. 
Like, usable. 8. wstat > This one is from Plan 9. It does the opposite of stat and writes the same structure. Simples. Avoids having chmod, chown, rename, utime and so on, by the simple expedient of making the syscall symmetric. Why not? 9. clonefile > The only cool syscall on OSX, and only supported on the new APFS filesystem. Copies whole files or directories on a single syscall using copy on write for all the data. Look on my works, copy_file_range, and despair. 10. pledge > The little sandbox that worked. OpenBSD only here, they managed to make a simple sandbox that was practical for real programs, like the base OpenBSD system. Capsicum from FreeBSD (and promised for Linux for years but no sign) is a lovely design, and gave us pdfork, but it's still kind of difficult and intrusive to implement. Linux has, well, seccomp, LSMs, and still nothing that usable for the average program. ###Eleven syscalls that suck (https://www.cloudatomiclab.com/antisyscall/) 0. ioctl > It can't decide if its arguments are integers, strings, or some struct that is lost in the mists of time. Make up your mind! Plan 9 was invented to get rid of this. 1. fcntl > Just like ioctl but for some different miscellaneous operations, because one miscellany is not enough. 2. tuxcall > Linux put a web server in the kernel! To win a benchmark contest with Microsoft! It had its own syscall! My enum tux_reactions are YUK! Don't worry though, it was a distro patch (thanks Red Hat!) and never made it upstream, so only the man page and reserved number survive to taunt you and remind you that the path of the righteous is beset by premature optimization! 3. io_setup > The Linux asynchronous IO syscalls are almost entirely useless! Almost nothing works! You have to use O_DIRECT for a start. And then they still barely work! They have one use, benchmarking SSDs, to show what speed you could get if only there was a usable API. Want async IO in kernel? Use Windows! 4. stat, and its friends and relatives > Yes this one is useful, but can you find the data structure it uses? We have oldstat, oldfstat, ustat, oldlstat, statfs, fstatfs, stat, lstat, fstat, stat64, lstat64, fstat64, statfs64, fstatfs64, fstatat64 for stating files and links and filesystems in Linux. A new bunch will be along soon for Y2038. Simplify your life, use a BSD, where they cleaned up the mess as they did the cooking! Linux on 32 bit platforms is just sucky in comparison, and will get worse. And don't even look at MIPS, where the padding is wrong. 5. Linux on MIPS > Not a syscall, a whole implementation of the Linux ABI. Unlike the lovely clean BSDs, Linux is different on each architecture: system calls randomly take arguments in different orders, constants have different values, and there are special syscalls. But MIPS takes the biscuit, the whole packet of biscuits. It was made to be binary compatible with old SGI machines that don't even exist, and has more syscall ABIs than I have had hot dinners. Clean it up! Make a new sane MIPS ABI and deprecate the old ones, nothing like adding another variant. So annoying I think I threw out all my MIPS machines, each different. 6. inotify, fanotify and friends > Linux has no fewer than three file system change notification protocols. The first, dnotify, hopped on ioctl's sidekick fcntl, while the two later ones, inotify and fanotify, added a bunch more syscalls. You can use any of them, and they still will not provide the notification API you want for most applications. Most people use the second one, inotify, and curse it. 
Did you know kqueue can do this on the BSDs? 7. personality > Oozing in personality, but we just don't get along. Basically obsolete, as the kernel can decide what kind of system emulation to do from binaries directly, it stays around with some use cases in persuading ./configure it is running on a 32 bit system. But it can turn off ASLR, and let the CVEs right into your system. We need less personality! 8. gettimeofday > Still has an obsolete timezone value from the old times when people thought timezones should go all the way to the kernel. Now we know that your computer should not know. Set its clock to UTC. Do the timezones in the UI based on where the user is, not the computer. You should use clock_gettime now. Don't even talk to me about locales. This syscall is fast though, don't use it for benchmarking, it's in the VDSO. 9. splice and tee > These, back in 2005, were quite a nice idea, although Linux said then “it is incomplete, the interfaces are ugly, and it will oops the system if anything goes wrong”. It won't oops your system now, but usage has not taken off. The nice idea from Linus was that a pipe is just a ring buffer in the kernel, that can have a more general API and use cases for performant code, but a decade on it hasn't really worked out. It was also supposed to be a more general sendfile, which in many ways was the successor of that Tux web server, but I think sendfile is still more widely used. 10. userfaultfd > Yes, I like file descriptors. Yes CRIU is kind of cool. But userspace handling page faults? Is nothing sacred? I get that you can do this badly with a SIGSEGV handler, but talk about lipstick on a pig. *** ###OpenBSD 6.0 on an iMac G3 from 1999 (http://www.increasinglyadequate.com/macppc.html) > A while ago I spent $50 for an iMac G3 (aka the iMac,1). This iconic model restored Apple's fortunes in the late '90s. Since the iMac G3 can still boot Mac OSes 8 and 9, I mostly use the machine to indulge a nostalgia for childhood schooldays spent poking at the operating system and playing Escape Velocity. But before I got around to that, I decided to try out the software that the previous owner had left on the machine. The antiquated OSX 10.2 install and 12 year old versions of Safari and Internet Explorer were too slow and old to use for anything. Updating to newer software was almost impossible; a later OSX is required to run the little PowerPC-compatible software still languishing in forgotten corners of the Internet. This got me thinking: could this machine be used, really used, nowadays? Lacking a newer OSX disc, I decided to try the most recent OpenBSD release. (And, since then, to re-try with each new OpenBSD release.) Below are the results of this experiment (plus a working xorg.conf file) and a few background notes. Background > This iMac is a Revision D iMac G3 in grape. It's part of the iMac,1 family of computers. This family includes all tray-loading iMac G3s. (Later iMac G3s had a slot-loading CD drive and different components.) Save for a slightly faster processor, a dedicated graphics card, and cosmetic tweaks to the case, my iMac is identical to the prior year's line-launching Bondi Blue iMac. My machine has had its memory upgraded from 32 MB to 320 MB. Thank Goodness. > The Revision D iMac G3 shipped with Mac OS 8.5. It can run up to Mac OS 9.2.2 or OSX 10.3.9. Other operating systems that tout support for the iMac,1 include NetBSD, OpenBSD, and a shrinking number of Linux distributions. > OpenBSD is simple (by design) and well-maintained. 
In contrast, NetBSD seems rather more complex and featureful, and I have heard grumbling that despite its reputation for portability, NetBSD really only works well on amd64. I'd test that assertion if OpenBSD's macppc installation instructions didn't seem much simpler than NetBSD's. Linux is even more complicated, although most distros are put together in a way that you can mostly ignore that complexity (until you can't). In the end I went with OpenBSD because I am familiar with it and because I like it. Installing OpenBSD on the iMac,1 > Installing OpenBSD on this iMac was simple. It's the same procedure as installing OpenBSD on an amd64 rig. You put in the installation disc; you tell the machine to boot from it; and then you answer a few prompts, most of which simply ask you to press enter. In this case, OpenBSD recognizes all the machine's hardware just fine, including sound and networking, though I had a little trouble with video. > The OpenBSD documentation says video should just work and that an xorg.conf file isn't necessary. As such, it no longer ships with an xorg.conf file. Though that's never posed a problem on my other OpenBSD machines, it does here. Video doesn't work out of the box on my iMac,1. startx just blanks the screen. Fortunately, because the BSDs use a centralized development model where each operating system is stored in one repository, OpenBSD's website provides a web interface to the source code going back to the early days. I was able to find the last version of the sample xorg.conf that used to ship on macppc. With a little tweaking, I transformed that file into this one (https://www.increasinglyadequate.com/files/xorg.conf), with which video works just fine. Just drop it into your iMac's /etc/X11 directory. You'll also need to remember to set the machdep.allowaperture sysctl to 2 (e.g., as root run sysctl machdep.allowaperture=2), although the installer will do that automatically if you answer yes to the question about whether you plan to run X. > All that being said, video performance is pretty poor. I am either doing something wrong, or OpenBSD doesn't have accelerated video for this iMac, or this machine is just really old! I will discuss performance below. Running OpenBSD on the iMac,1 > The machine performs okay under OpenBSD. You can expect to ably run minimalistic software under minimalistic window managers. I tried dillo, mrxvt, and cmus under cwm and fvwm. Performance here was just fine. I also tried Firefox 26, 33, and 34 under fvwm and cwm. Firefox ran, but "modern," JavaScript-heavy sites were an exercise in frustration; the 2015 version of CNN.com basically froze Firefox for 30 seconds or more. A lighter browser like dillo is doable. > You'll notice that I used the past tense to talk about Firefox. Firefox currently doesn't build on PowerPC on OpenBSD. Neither does Chromium. Neither do a fair number of applications. But whatever -- there's still a lot of lighter applications available, and it's these you'll use day-to-day on a decades-old machine. > Lightweight window managers work okay, as you'd expect. You can even run heavier desktop environments, such as xfce, though you'll give up a lot of performance. > I ran the Ubench benchmark on this iMac and two more modern machines also running OpenBSD. The benchmark seems like an old one; I don't know how (if at all) it accounts for hardware changes in the past 13 years. That is, I don't know if the difference in score accurately measures the difference in real-world performance. 
Here are the results anyway: Conclusion > Except for when I check to see if OpenBSD still works, I run Mac OS 9 on this rig. I have faster and better machines for running OpenBSD. If I didn't -- if this rig were, improbably, all I had left, and I was waiting on the rush delivery of something modern -- then I would use OpenBSD on my iMac,1. I'd have to stick to lightweight applications, but at least they'd be up-to-date and running on a simple, stable OS. *** ##News Roundup ###34th Chaos Communication Congress Schedule (https://events.ccc.de/congress/2017/Fahrplan/index.html) Many talks are streamed live (http://streaming.media.ccc.de/34c3), a good mixture of English and German talks May contain DTraces of FreeBSD (https://events.ccc.de/congress/2017/Fahrplan/events/9196.html) Are all BSDs created equally? (https://events.ccc.de/congress/2017/Fahrplan/events/8968.html) library operating systems (https://events.ccc.de/congress/2017/Fahrplan/events/8949.html) Hardening Open Source Development (https://events.ccc.de/congress/2017/Fahrplan/events/9249.html) *** ###OpenBSD 6.2 + CDE (https://jamesdeagle.blogspot.co.uk/2017/12/openbsd-62-cde.html) > If you've noticed a disruption in the time-space continuum recently, it is likely because I have finally been able to compile and install the Common Desktop Environment (CDE) in a current and actively-developed operating system (OpenBSD 6.2 in this case). > This comes after so many attempts (across multiple platforms) that ended up with the build process prematurely stopping itself in its own tracks for a variety of infinitesimal reasons that were beyond my comprehension as a non-programmer, or when there was success it was not without some broken parts. As for the latter, I've been able to build CDE on OpenIndiana Hipster, but with an end product where I'm unable to change the color scheme in dtstyle (because "useColorObj" is set to "False"), with a default color scheme that is low-res and unpleasant. As for changing "useColorObj" to "True", I tried every recommended trick I could find online, but nothing worked. > My recent attempts at installing CDE on OpenBSD (version 6.1) saw the process stop due to a number of errors that are pure gibberish to these naive eyes. While disappointing, it was par for the course within my miserable experience with trying to build this particular desktop environment. As I wrote in this space in November 2015, in the course of explaining part of my imperative for installing Solaris 10: > And so I have come to think of building the recently open-sourced CDE as being akin to a coffee mug I saw many years ago. One side of the mug read "Turn the mug to see how to keep an idiot busy." On the other side, it read "Turn the mug to see how to keep an idiot busy." I'm through feeling like an idiot, which is partially why I'm on this one-week journey with Solaris 10. > While I thoroughly enjoyed running Solaris 10 on my ThinkPad T61p, and felt a devilish thrill at using it out in the open at my local MacBook- and iPhone-infested Starbucks and causing general befuddlement and consternation among the occasional prying yoga mom, I never felt like I could do much with it beyond explore the SunOS 5.10 command line and watch YouTube videos. While still supported by its current corporate owner (whose name I don't even want to type), it is no longer actively developed and is thus little more than a retro toy. 
I hated the idea of installing anything else over it, but productivity beckoned and it was time to tearfully and reluctantly drag myself off the dance floor. > In any case, just last week I noticed that the Sourceforge page for the OpenBSD build had some 6.2-specific notes by way of a series of four patches, and so I decided 'what the heck, let's give this puppy another whirl'. After an initial abortive attempt at a build, I surmised that I hadn't applied the four patches correctly. A day or two later, I took a deep breath and tried again, this time resolving to not proceed with the time make World build command until I could see some sign of a successful patch process. (This time around, I downloaded the patches and moved them into the directory containing the CDE makefiles, and issued each patch command as "patch < patchfile".) Once I had the thing up and running, and with a mind bursting with fruit flavor, I started messing about. The first order of business was to create a custom color scheme modelled after the default color scheme in UnixWare. (Despite any baggage that system carries from its previous ownership under SCO, I adored the aesthetics of UnixWare 7.1.4 two years ago when I installed the free one month trial version on my ThinkPad. For reasons that escape me now, I named my newly-created color scheme in honor of UnixWare 7.1.3.) > Like a proud papa, I immediately tweeted the above screenshot and risked irritating a Linux kid or two in the process, given SCO's anti-climactic anti-Linux patent trolling from way back when. (I'm not out to irritate penguinistas, I just sure like this color scheme.) Final Thoughts > It may look a little clunky at first, and may be a little bling-challenged, but the more I use CDE and adapt to it, the more it feels like an extension of my brain. Perhaps this is because it has a lot of zip and behaves in a consistent and coherent manner. (I don't want to go too much further down that road here, as OSnews's Thom Holwerda already gave a good rundown about ten years ago.) > Now that I have successfully paired my absolute favorite operating system with a desktop environment that has exerted an intense gravitational hold on me for many, many years, I don't anticipate distrohopping any time soon. And as I attain a more advanced knowledge of CDE, I'll be chronicling any new discoveries here for the sake of anyone following me from behind as I feel my way around this darkened room. *** ###devmatch(8) added to FreeBSD HEAD (https://www.mail-archive.com/svn-src-all@freebsd.org/msg154719.html) ``` Log: Match unattached devices on the system to potential kernel modules. devmatch(8) matches up devices in the system device tree with drivers that may match them. For each unattached device in the system, it tries to find matching PNP info in the linker hints and prints modules to load to claim the devices. In --unbound mode, devmatch can look for drivers that have attached to devices in the device tree and have plug and play information, but for which no PNP info exists. This helps find drivers that haven't been converted yet that are in use on this system. In addition, the ability to dump out linker.hints is provided. Future commits will add hooks to devd.conf and rc.d to fully automate using this information. 
Added: head/usr.sbin/devmatch/ head/usr.sbin/devmatch/Makefile (contents, props changed) head/usr.sbin/devmatch/devmatch.8 (contents, props changed) head/usr.sbin/devmatch/devmatch.c (contents, props changed) Modified: head/usr.sbin/Makefile ``` + Oh, you naughty committers: :-) https://www.mail-archive.com/svn-src-all@freebsd.org/msg154720.html (A small devmatch usage sketch follows these show notes.) Beastie Bits New FreeBSD Journal issue: Monitoring and Metrics (https://www.freebsdfoundation.org/journal/) OpenBSD Engine Mix available on GOG.com (https://www.gog.com/mix/openbsd_engine_available) OpenBSD Foundation reached their 2017 fundraising goal (http://www.openbsdfoundation.org/campaign2017.html) TrueOS 17.12 Review – An Easy BSD (https://www.youtube.com/watch?v=nKr1GCsV-gA) LibreSSL 2.6.4 Released (https://bsdsec.net/articles/libressl-2-6-4-released-fixed) *** ##Feedback/Questions Mike - BSD 217 & Winning over Linux Users (http://dpaste.com/3AB7J4P#wrap) JLR - Boot Environments Broken? (http://dpaste.com/2K0ZDH9#wrap) Kevr - ZFS question and suggestion (http://dpaste.com/04MXA5P#wrap) Ivan - FreeBSD read cache - ZFS (http://dpaste.com/1P9ETGQ#wrap) ***
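As a footnote to the devmatch(8) commit above, usage would presumably look something like this sketch; the plain and --unbound invocations follow the commit message, while piping into kldload is just one plausible way to act on the output:

```
devmatch                       # print modules matching unattached devices
devmatch | xargs -n1 kldload   # one plausible way to load what it suggests
devmatch --unbound             # attached drivers with PNP info but no hints
```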
In this episode we touch on several topics that we would like to hear your opinions on. Follow us at: Blog: https://www.neositelinux.com Twitter: @NeoSiteLinux Telegram: https://t.me/neositelinux Facebook: https://www.facebook.com/neositelinux Youtube: https://www.youtube.com/user/neositelinux
Fredrik sounds a bit choppy at times; that is entirely his own fault. Jocke quotes the wrong person; that is entirely his own fault. 0: Örnsköldsvik, supercomputers, fiber and SANs 21:47: Datormagazin has a BBS again! 30:17: IKEA Trådfri 35:58: Apple has changed the icon for Maps 36:25: Possible new CMSes for Macpro 44:11: An important email, and ways to sponsor the podcast. If you don't want to use Patreon but still want to donate money, you can contact us for Swish details 47:32: Fredrik has finally seen Mr Robot! Spoiler warning from 48:49. 52:59: Fredrik listens to talk about blockchains and sees a chance of bubbles 1:01:50: Discord throws out Nazis, Trump is horrible 1:10:19: Chris Lattner goes to Google Brain, and app sizes are ridiculous 1:15:05: Jocke tries to build a new web cluster 1:21:29: Jocke reviews his new USB hub Links Nationellt superdatorcentrum (the National Supercomputer Centre) SGI Origin 3200 Silicon Graphics Cray Seymour Cray "The first computer worth criticizing" appears to be a quote from Alan Kay Be and BeOS Infiniband Fibre Channel The R12000 processor Ernie Bayonne Jocke's supercomputer loot. Craylink - also known as NUMAlink Promise Thunderbolt-to-Fibre Channel adapter Ali - AliExpress Datormagazin BBS is back! Fabbes BBS SUGA A590 Terrible Fire The Vampire accelerators FPGA Plipbox SD2IEC - the 1541 emulator Jocke ordered from Poland. Satandisk IKEA Trådfri The article on Macrumors AAPL's newsletter Apple has changed the icon for Maps Grav Jekyll Bloxsom Sourceforge - where some people used to put their code Ilir - many thanks, dear OnePlus 5 sponsor You can support the podcast on Patreon, but only if you want to Mr Robot The Incomparable episode about Mr Robot We talked a little about blockchains in episode 67 Discord throws out Nazis Cloudflare too Tim Cook's letter to the employees The video clip where Anderson Cooper matter-of-factly takes Trump apart Chris Lattner starts working at Google Brain App sizes are still ridiculous Code is a depressingly large part of the Facebook app's file size Acorn Alpine Linux PHP-FPM Nginx WP super cache Varnish Docker Openbsd Ballmer peak Jocke reviews his USB hub Henge dock Jocke's USB graphics card Full episode information is available here: https://www.bjoremanmelin.se/podcast/avsnitt-90-superdatorer-med-inbyggda-soffor.html.
YouTube is starting to censor content that doesn't break its content rules, preparing for the eclipse may be preparing for disaster, and researchers hack signs to confuse autonomous vehicles. Links from this episode: - YouTube will suppress some controversial content — even if it doesn’t violate policies - Steven Crowder - Young Turks - ADL: Hate on Display™ Hate Symbols Database - Cornell Law School: First Amendment - What is open source? - GitHub - SourceForge - Freakonomics Radio: Trust Me (Rebroadcast) - Book of Mormon Geography - Book of Mormon Map - This woman built a house using YouTube tutorials - Vimeo - Google Fined Record $2.7 Billion in E.U. Antitrust Ruling - DuckDuckGo - Tor Browser - SRWare Iron Browser - KSL.com: Eclipse crowds may overwhelm some modern technology - Newsweek: Authorities are Treating August's Solar Eclipse, a First in 99 Years, Like it's the End of the World - Travel John 5 Pack * - Biffy Bag 3 Pack * - Goplus 500LBS Steel Cargo Carrier Luggage Basket 2" Receiver Hitch Hauler * - Which Is Greener: Idle, or Stop and Restart? - Concealed Carry Permit Reciprocity Maps - NASA: Sunspots Today - Reddit: Cell phones and ATMs to go down during the eclipse!! Hype or real issue? What about enough restrooms? - How simple sticker graffiti on road signs can easily 'confuse' driverless cars and cause deadly accidents - chAIR -Manned multirotor Part 20 -First Flight! Axel Borg - Volocopter
FreeBSD 11.1-Beta1 is out, we discuss Kernel address randomized link (KARL), and explore the benefits of daily OpenBSD source code reading This episode was brought to you by Headlines FreeBSD 11.1-Beta1 now available (https://lists.freebsd.org/pipermail/freebsd-stable/2017-June/087242.html) Glen Barber, of the FreeBSD release engineering team, has announced that FreeBSD 11.1-Beta1 is now available for the following architectures: 11.1-BETA1 amd64 GENERIC 11.1-BETA1 i386 GENERIC 11.1-BETA1 powerpc GENERIC 11.1-BETA1 powerpc64 GENERIC64 11.1-BETA1 sparc64 GENERIC 11.1-BETA1 armv6 BANANAPI 11.1-BETA1 armv6 BEAGLEBONE 11.1-BETA1 armv6 CUBIEBOARD 11.1-BETA1 armv6 CUBIEBOARD2 11.1-BETA1 armv6 CUBOX-HUMMINGBOARD 11.1-BETA1 armv6 GUMSTIX 11.1-BETA1 armv6 RPI-B 11.1-BETA1 armv6 RPI2 11.1-BETA1 armv6 PANDABOARD 11.1-BETA1 armv6 WANDBOARD 11.1-BETA1 aarch64 GENERIC Note regarding arm/armv6 images: For convenience for those without console access to the system, a freebsd user with a password of freebsd is available by default for ssh(1) access. Additionally, the root user password is set to root. It is strongly recommended to change the password for both users after gaining access to the system. The full schedule (https://www.freebsd.org/releases/11.1R/schedule.html) for 11.1-RELEASE is here; the final release is expected at the end of July. It was also announced that there will be a 10.4-RELEASE scheduled for October (https://www.freebsd.org/releases/10.4R/schedule.html) *** KARL – kernel address randomized link (https://marc.info/?l=openbsd-tech&m=149732026405941&w=2) Over the last three weeks I've been working on a new randomization feature which will protect the kernel. The situation today is that many people install a kernel binary from OpenBSD, and then run that same kernel binary for 6 months or more. We have substantial randomization for the memory allocations made by the kernel, and for userland also of course. Previously, the kernel assembly language bootstrap/runtime locore.S was compiled and linked with all the other .c files of the kernel in a deterministic fashion. locore.o was always first, then the .c files in the order specified by our config(8) utility and some helper files. In the new world order, locore is split into two files: one chunk is bootstrap, which is left at the beginning. The assembly language runtime and all other files are linked in random fashion. There are some other pieces to try to improve the randomness of the layout. As a result, every new kernel is unique. The relative offsets between functions and data are unique. It still loads at the same location in KVA. This is not kernel ASLR! ASLR is a concept where the base address of a module is biased to a random location, for position-independent execution. In this case, the module itself is perturbed but it lands at the same location, and does not need to use position-independent execution modes. LLDB: Sanitizing the debugger's runtime (https://blog.netbsd.org/tnf/entry/lldb_sanitizing_the_debugger_s) The good Besides the larger enhancements this month, I performed a cleanup in the ATF ptrace(2) tests again. Additionally I have managed to unbreak the LLDB Debug build and to eliminate compiler warnings in the NetBSD Native Process Plugin. It is worth noting that LLVM can run tests on NetBSD again; the patch in gtest/LLVM was installed by Joerg Sonnenberger and a more generic one has been submitted to the upstream googletest repository. There was also an improvement in ftruncate(2) on the LLVM side (authored by Joerg). 
Since LLD (the LLVM linker) is advancing rapidly, it has improved its support for NetBSD and can now link a functional executable on NetBSD. I submitted a patch to stop it from crashing on startup. It was nearly used for linking LLDB/NetBSD and it spotted a real linking error... however there are further issues that need to be addressed in the future. Currently LLD is not part of the mainline LLDB tasks - it's part of improving the work environment. This linker should reduce the linking time - compared to GNU linkers - of LLDB by a factor of 3x-10x and save precious developer time. As of now, LLDB linking can take minutes on a modern amd64 machine designed for performance. Kernel correctness I have researched (in pkgsrc-wip) initial support for multiple threads in the NetBSD Native Process Plugin. This code revealed - when running the LLDB regression test suite - new kernel bugs. This unfortunately affects the usability of a debugger in a multithreaded environment in general, and explains why GDB was never doing its job properly in such circumstances. One of the first errors was an assertion-triggered kernel panic with PT*STEP when a debuggee has more than a single thread. I have narrowed it down to misuse of lock primitives in the doptrace() kernel code. The fix has been committed. The bad Unfortunately this is not the full story and there is further mandatory work. LLDB acceleration The EV_SET() bug broke upstream LLDB over a month ago, and during this period the debugger was significantly accelerated and parallelized. It is difficult to say definitively, but that might be the reason why the tracer's runtime broke due to threading desynchronization. LLDB behaves differently when run standalone, under ktruss(1) and under gdb(1) - the shared bug is that it always fails in one way or another, which isn't trivial to debug. The ugly There are also unpleasant issues at the core of the operating system. Kernel troubles Another bug with single-step functions that affects another aspect of correctness - this time the reliable execution of a program - is that processes die in non-deterministic ways when single-stepped. My current impression is that there is no appropriate translation between process and thread (LWP) states under a debugger. These issues are sibling problems to the unreliable PT_RESUME and PT_SUSPEND. In order to be able to appropriately address this, this month I have diligently studied the Solaris Internals book to get a better picture of the design of NetBSD kernel multiprocessing, which was modeled after this commercial UNIX. Plan for the next milestone The current troubles can be summarized as data races in the kernel and at the same time in LLDB. I have decided to port the LLVM sanitizers, as I require the Thread Sanitizer (tsan). Temporarily I have removed the code for tracing processes with multiple threads to hide the known kernel bugs and focus on the LLDB races. Unfortunately LLDB is not easily bisectable (build time of the LLVM+Clang+LLDB stack, number of revisions), therefore the debugging has to be performed on the most recent code from upstream trunk. 
d2K17 Hackathon Reports d2k17 Hackathon Report: Ken Westerback on XSNOCCB removal and dhclient link detection (http://undeadly.org/cgi?action=article&sid=20170605225415) d2k17 Hackathon Report: Antoine Jacoutot on rc.d, syspatch, and more (http://undeadly.org/cgi?action=article&sid=20170608074033) d2k17 Hackathon Report: Florian Obser on slaacd(8) (http://undeadly.org/cgi?action=article&sid=20170609013548) d2k17 Hackathon Report: Stefan Sperling on USB audio, WiFi Progress (http://undeadly.org/cgi?action=article&sid=20170602014048) News Roundup Multi-tenant router or firewall with FreeBSD (https://bsdrp.net/documentation/examples/multi-tenant_router_and_firewall) Setting up a virtual lab Downloading BSD Router Project images Download the BSDRP serial image (to avoid needing an X display) from Sourceforge. Download the lab scripts. More information on these BSDRP lab scripts is available in How to build a BSDRP router lab (https://bsdrp.net/documentation/examples/how_to_build_a_bsdrp_router_lab). Start the lab with 5 full-meshed routers and one shared LAN, in this example using the bhyve lab script on FreeBSD: [root@FreeBSD]~# tools/BSDRP-lab-bhyve.sh -i BSDRP-1.71-full-amd64-serial.img.xz -n 5 -l 1 Configuration Router 4 (R4) hosts the 3 routers/firewalls, one for each of the 3 customers. Router 1 (R1) belongs to customer 1, router 2 (R2) to customer 2 and router 3 (R3) to customer 3. Router 5 (R5) simulates a simple Internet host. Using the pf firewall in place of ipfw pf needs a little more configuration because by default /dev/pf is hidden from jails. So, on the host we need to: In place of loading the ipfw/ipfw-nat modules, load the pf module (while still disabling pf on our host for this example) Modify the default devd rules to allow jails to see /dev/pf (if you want to use tcpdump inside your jail, you should expose the bpf device too) Replace the nojail tag with the nojailvnet tag in /etc/rc.d/pf (already done in BSDRP (https://github.com/ocochard/BSDRP/blob/master/BSDRP/patches/freebsd.pf.rc.jail.patch)) Under the hood: jails-on-nanobsd BSDRP's tenant shell script (https://github.com/ocochard/BSDRP/blob/master/BSDRP/Files/usr/local/sbin/tenant) creates a jail configuration compliant with a host running nanobsd. These jails then need to be configured for nanobsd: Being nullfs-based so they can be hosted on a read-only root filesystem Having their /etc and /var on tmpfs disks (so we need to populate these directories before each start) Configuration changes need to be saved with the nanobsd configuration tools, like “config save” on BSDRP And on the host: the autosave daemon (https://github.com/ocochard/BSDRP/blob/master/BSDRP/Files/usr/local/sbin/autosave) needs to be enabled: each time a customer issues a “config save” inside a jail, their configuration diffs are saved into the host's /etc/jails/. And since this directory is a RAM disk too, we need to automatically save the host's configuration on changes. *** OpenBSD Daily Source Reading (https://blog.tintagel.pl/2017/06/09/openbsd-daily.html) Adam Wołk writes: I made a new year's resolution to read at least one C source file from OpenBSD daily. The goal was to both get better at C and to contribute more to the base system and userland development. I have to admit that initially I wasn't consistent with it at all. In the first quarter of the year I read the code of a few small base utilities and nothing else. Still, every bit counts and it's never too late to get better. Around the end of May, I really started reading code daily - no days skipped. 
It usually takes anywhere between ten minutes (for small base utilities) and an hour and a half (for targeted reads). I'm pretty happy with the results so far. Exploring the system on a daily basis, looking up things in the code that I don't understand and digging as deep as possible made me learn a lot more both about C and the system than I initially expected. There's also one more side effect of reading code daily - diffs. It's easy to spot inconsistencies, outdated code or an incorrect man page. This results in opportunities for contributing to the project. With time it also becomes less opportunistic and more goal-oriented. You might start with a drive-by diff (https://marc.info/?l=openbsd-tech&m=149591302814638&w=2) to kill optional compilation of an old compatibility option in chown that has been compiled in by default since 1995. Soon the contributions become more targeted, for example using a new API for encrypting passwords in the htpasswd utility after reading the code of the utility and the code for htpasswd handling in httpd. Similarly it can take you from discussing a doas feature idea with a friend to implementing it after reading the code. I was having a lot of fun reading code daily and started to recommend it to people in general discussions. There was one particular twitter thread that ended up starting something new. This is still a new thing and the format is not yet solidified. Generally I make a lot of notes reading code; instead of slapping them inside a local file I drop the notes on the IRC channel as I go. Everyone on the channel is encouraged to do the same or to share their notes in whatever way they see fit. Check out the logs from the IRC discussions. Start reading code from other BSD projects and see whether you can replicate their results! *** Become FreeBSD User: Find Useful Tools (https://bsdmag.org/become-freebsd-user-find-useful-tools/) BSD Mag has the following article by David Carlier: If you're usually programming on Linux and you are considering a potential switch to FreeBSD, this article will give you an overview of the possibilities. How to Install the Dependencies FreeBSD comes with applications either from binary packages or compiled from sources (ports). They are arranged according to software type (programming languages mainly in lang (or java specifically for Java), libraries in devel, web servers in www …) and the main tool for modern FreeBSD versions is pkg, similar to the Debian apt tool suite. Hence, most of the time, if you are looking for a specific application or library, a simple pkg search, without necessarily knowing the fully qualified name of the package, is sufficient. For example, pkg search php7 will display php7 itself and its modules, as well as specific versions like php70, and so on. Web Development Basically, this is the easiest area to migrate to. Most web languages do not use specific platform features. Thus, most of the time, your existing projects might just be “drop-in” use cases. If your language of choice is PHP, you are lucky, as this scripting language is workable on various operating systems, on most Unixes and Windows. In the case of FreeBSD, you even have many different port or binary package versions (5.6 to 7.1). In this case, you may need some specific PHP modules enabled; luckily, they are available individually, or, if ports are the way you chose, via the www/php70-extensions port. 
Of course, developing with Apache (both the 2.2 and 2.4 series are available, as the www/apache22 and www/apache24 packages respectively), or even better with Nginx (the latest stable or development versions can be used, via the www/nginx and www/nginx-devel packages respectively), through php-fpm is possible. In terms of databases, we have the regular RDBMS like MySQL and PostgreSQL (client and server are distinct packages … databases/(mysql/postgresql)-client, and databases/(mysql/postgresql)-server). Additionally, the more modern NoSQL approach is covered by CouchDB, for example (databases/couchdb), MongoDB (databases/mongodb), and Cassandra (databases/cassandra), to name but a few. Low-level Development The BSDs ship with C and C++ compilers in the base system. In the case of FreeBSD 11.0, it is clang 3.8.0 (on x86 architectures); otherwise, modern versions of gcc exist for developing with C++11 (lang/gcc … up to the gcc 7.0 development version). Numerous libraries for various topics are also present, from SOAP web services with gsoap, through user interfaces with GTK (x11-toolkits/gtk) and Qt 4 or Qt 5 (devel/qt), to malware analysis with Yara (security/yara), etc. Android / Mobile Development To be able to do Android development, to a certain degree, the Linux compatibility layer (aka the linuxulator) needs to be enabled. Also, the x11-toolkits/swt and linux-f10-gtk2 ports/packages need to be installed (note that libswt-gtk-3550.so and libswt-pi-gtk-3550.so are necessary; the current package is versioned as 3557, which can be worked around using symlinks). In the worst-case scenario, remember that bhyve (or VirtualBox) is available and can run any Linux distribution efficiently. Source Control Management FreeBSD comes with a version of Subversion in the base system. As the FreeBSD source is in a Subversion repository, the prefixed svnlite command prevents conflicts with the package/port. Additionally, Git is present, but via the package/port system, with various options (with or without a user interface, Subversion support). Conclusion FreeBSD has made tremendous improvements over the years to fill the gap created by Linux. FreeBSD still maintains its interesting specificities; hence there will not be too many blockers if your projects are reasonably sized to allow a migration to FreeBSD. Notes from project Aeronix, part 10 (https://martin.kopta.eu/blog/#2017-06-11-16-07-26) Prologue It is almost two years since I finished building Aeronix and it has served me well during that time. The only thing that ever broke was the Noctua CPU fan, which I have replaced with the same model. However, for a long time, I have wanted to run Aeronix on OpenBSD instead of GNU/Linux Debian. Preparation I first experimented with an OpenBSD RAID1 setup in VirtualBox, plugging and unplugging drives, and learned that OpenBSD RAID1 is really smooth. When I finally got the courage, I copied all the data onto two drives outside of Aeronix: the external HDD I regularly use to back up Aeronix, and a second internal drive in my desktop computer. Copying the data took about two afternoons. Aeronix usually runs at higher temperatures (somewhere around 55°C or 65°C depending on the time of year), and when stressed, it can go really high (around 75°C). During the full-speed copy over NFS and to the external drive it went as high as 85°C, which made me a bit nervous. After the data were copied, I temporarily un-configured the computers on the local network to not touch Aeronix, and plugged in a keyboard, a display and the OpenBSD 6.1 thumb drive. Installing OpenBSD 6.1 on full-disk RAID1 was super easy. 
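For the curious, a rough sketch of the softraid(4) RAID1 assembly that the OpenBSD installer drives, assuming two disks sd0 and sd1 whose 'a' partitions have already been given the RAID fstype in disklabel:

```
# Create a RAID1 volume from two RAID-type partitions; the new volume
# shows up as an extra sd device that the installer can then target.
bioctl -c 1 -l /dev/sd0a,/dev/sd1a softraid0
```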
Configuring NFS Aeronix serves primarily as a NAS, which means NFS and SMB. NFS is used by computers on the local network with persistent connections (via Ethernet). SMB is used by other devices on the local network with volatile connections (via WiFi). When configuring NFS, I expected a configuration similar to what I had in Debian, but on OpenBSD it is very different. However, after reading through exports(5), it was really easy to put together (a small sketch follows these notes). Putting the data back Copying from the external drive took a few days, since the transfer speed was somewhere around 5MB/s. I didn't really mind; it was sort of a good thing, because Aeronix wasn't overheating that way. I guess I need to figure out a new backup strategy though. One interesting thing happened with one of my local desktops. It was connecting to Aeronix with the default NFS mount options (on Arch Linux) and had real trouble reading anything. Basically it behaved as if the network drive had horrible access times. After changing the default mount options, it started working perfectly. Conclusion Migrating to OpenBSD was way easier than I anticipated. There are various benefits like more security, a reliable RAID1 setup (where I know how it will behave when a drive dies), better documentation and much more. However, the true benefit for me is simply that I like OpenBSD, and it makes me happy to have one more OpenBSD machine. On to the next two years of service! Beastie Bits Running OpenBSD on Azure (http://undeadly.org/cgi?action=article&sid=20170609121413&mode=expanded&count=0) Mondieu - portable alternative for freebsd-update (https://github.com/skoef/mondieu) Plan9-9k: 64-bit Plan 9 (https://bitbucket.org/forsyth/plan9-9k) Installing OpenBSD 6.1 on your laptop is really hard (not) (http://sohcahtoa.org.uk/openbsd.html) UbuntuBSD is dead (http://www.ubuntubsd.org/) OPNsense 17.1.8 released (https://opnsense.org/opnsense-17-1-8-released/) *** Feedback/Questions Patrick - Operating System Textbooks (http://dpaste.com/2DKXA0T#wrap) Brian - snapshot retention (http://dpaste.com/3CJGW22#wrap) Randy - FreeNAS to FreeBSD (http://dpaste.com/2X3X6NR#wrap) Florian - Bootloader Resolution (http://dpaste.com/1AE2SPS#wrap) ***
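As a footnote to the exports(5) mention in the Aeronix notes above, a hedged sketch of what the OpenBSD NFS server setup might look like; the path and network values are made up:

```
# /etc/exports: share a directory with the local Ethernet segment
/tank -alldirs -network=192.168.1.0 -mask=255.255.255.0

# enable and start the NFS-related services
rcctl enable portmap mountd nfsd
rcctl start portmap mountd nfsd
```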
This week, the crack team discusses a week heavy with news about the Java language - yeah. We also talk about new and old frameworks around microservices; you have to keep the buzz going. Recorded on March 14, 2016 Episode download: LesCastCodeurs-Episode–143.mp3 News Devoxx Discussion about Devoxx Languages Java: a proposal for collection factories Proposal for var/val in Java 9? Poll on the var/val proposal An update on Jigsaw Putting Java in your Docker container is like spitting in your Yop… according to Oracle Reza makes a flamboyant exit A Rust/Java comparison A security attack on JavaScript thanks to its lax rules The WAT JavaScript talk Xamarin joining Microsoft Middleware Lightbend Lagom: a framework for microservices Reactor 2.5 Improvements at the core of Spring in 4.3 Play 2.5 Hibernate Search and Elasticsearch Ratpack 1.2 Infrastructure SQL Server on Linux Google's experience report on deploying containers Tooling RedPen, the checkstyle of documentation Big Data Kafka Streams Debezium Design One API, many facades Security The DROWN attack Security bugs in Apache Tomcat Methodology Group chat: a scourge? My majestic monolith Community The community takes a stick to GitHub GitHub's response in the form of a pull request Issue templates SourceForge and Slashdot acquired Miscellaneous 19 laws of software development Startup as a Service Debate Even board games, as open source GED and GEX (electronic document management): what are they, and what do we use? PlantUML DITA Beginners' corner Google Summer of Code Pass by value vs pass by reference. Tool of the episode Git submodules Noizio Conferences Breizhcamp March 23-26 Devoxx France April 20-22 Mix-IT April 21 and 22 EclipseCon June 7-9 in Toulouse, the CFP is open Riviera DEV will be held June 16-17 in Sophia Antipolis. The CFP is open Tech2days June 15-17 in Nantes. CFP open until the end of March. Contact us Reach us via Twitter https://twitter.com/lescastcodeurs, on the Google group https://groups.google.com/group/lescastcodeurs or on the website https://lescastcodeurs.com/ Flattr us (donations) at https://lescastcodeurs.com/ Want to know more about sponsoring? sponsors@lescastcodeurs.com
Clinton Parker, Action! Welcome to this special interview edition of Antic, the Atari 8-bit computer podcast. All of our interviews are special in some way and we appreciate the time that the interviewees donate to the Atari 8-bit community at large. This interview is a much-anticipated one due to the beloved nature of the software provided by the interviewee and due to the fact that he has been away from the Atari 8-bit community for some time. The software I'm talking about is the Action! programming language and the author is Clinton Parker. Action! was released in 1983 by Optimized Systems Software (better known as OSS). It quickly became one of the favorite programming languages ever produced for the Atari 8-bits and was used in the development of some commercial products. The 6502 source code for Action! was made available under the GNU General Public License by the author in 2015. This interview took place on September 6, 2015 via Skype. Teaser Quotes “It was an opportunity for me having a platform, which is what the Atari was to me. It provided a platform where I could sit down and literally design a language that I liked and that had the features I liked.” “It was selling well enough that I was able to for several years to pretty much make a living off the royalties of the sales of it.” Links Action! Review in ANALOG - http://www.cyberroach.com/analog/an16/action.htm HI-RES VOL. 1, NO. 4 / MAY/JUNE 1984 / PAGE 72 - http://www.atarimagazines.com/hi-res/v1n4/action.php Action! at SourceForge - http://sourceforge.net/projects/atari-action/ Action! Source at Archive.org - https://archive.org/details/ActionVersion36_SourceCode
This episode is part 3 of a 3-part series on the Tandy Color Computer, also known as the CoCo. I have special guest hosts to help me again this month: John Linville and Neil Blanchard of the “CoCo Crew Podcast”. Join us as we discuss Coco magazines, books, software, modern upgrades, emulation, Web sites and much more. I also go over my new acquisitions, tell you about upcoming vintage computer shows and cover some podcast feedback. Finally, we also have audio segments from no less than 4 different CoCo fans who share with us their memories and thoughts about the Tandy Color Computer. Thank you to Michael Moore, Rick Adams, Jon Day, and Tony Cappellini for your contributions. Note that Floppy Days now has a Facebook page where you can discuss the show or vintage computers in general. Search for “Floppy Days” on Facebook. Links Mentioned in the Show: Intro Bill Degnan’s Historical Computers and Vintage Computer Restoration - http://www.vintagecomputer.net “History of Commodore Computers” poster by Bill Degnan - http://vintagecomputer.net/poster_detail.cfm New Acquisitions/What I’ve been Up To Floppy Days Podcast Facebook Page - https://www.facebook.com/groups/1556099237981844/ Floppy Days Discussion Thread on AtariAge - http://atariage.com/forums/topic/238436-floppy-days-podcast-discussion-thread/ Upcoming Floppy Days Show Schedule - http://floppydays.libsyn.com/webpage/category/Upcoming%20Show%20Schedule Intellivision Entertainment Computer System (ECS) - http://www.intellivisionlives.com/bluesky/games/credits/ecs.shtml Upcoming Shows AmiWest 2015 - http://www.amiwest.net/ Oct. 14-18, Sacramento, CA Chicago TI International World Faire - http://www.chicagotiug.com/tiki-index.php?page=Faire October 31, 2015, Evanston, IL World of Commodore - http://www.tpug.ca/category/woc/ Dec. 4-6, 2015, Toronto, Ontario, Canada VCF SE 4.0 - April 2nd and 3rd, 2016, Roswell, GA Feedback Bill Degnan’s GE television for vintage computers - http://vintagecomputer.net/ge/GE_Computer-Monitor-Television.jpg George Phillips’ Doctor Who Video done on a TRS-80 4P - http://members.shaw.ca/gp2000/drwho.html Vintage Computer Forum for the TRS-80 - http://www.vintage-computer.com/vcforum/forumdisplay.php?46-Tandy-Radio-Shack Magazines/Newsletters Chromasette - https://en.wikipedia.org/wiki/Chromasette Compute! - http://www.atarimagazines.com/compute/ Hot CoCo - http://www.colorcomputerarchive.com/coco/Documents/Magazines/Hot%20CoCo%20(Searchable%20image)/ Micro 80 - http://www.classic-computers.org.nz/system-80/literature_magazines_micro-80_archive.htm TRS-80 Microcomputer News - http://www.os9projects.com/MAGAZINES/MicroNews/MicroNews.html 80 Micro - https://archive.org/details/80-microcomputing-magazine The Color Computer News The Color Computer Magazine - https://archive.org/details/color-computer-magazine CoCo Nutz! Magazine - http://www.os9projects.com/MAGAZINES/NUTZ/Nutz.html Rainbow Magazine - https://archive.org/details/rainbowmagazine Books book archive with a lot of the mentioned books - https://sites.google.com/a/aaronwolfe.com/cococoding/home/docs Getting Started with Color BASIC (Tandy) Going Ahead with Extended Color BASIC (Tandy) TRS-80 Color Computer Assembly Language Programming by William Barden, Jr. Color Computer Graphics by William Barden, Jr. TRS-80 Color Computer Graphics by Don Inman “Unravelled” series by Spectral Associates: Color BASIC Disk BASIC Extended BASIC Super Extended BASIC The Complete Rainbow Guide to OS-9 by Dale L. 
Puckett and Peter Dibble The Complete Rainbow Guide to OS-9 Level II by Dale L. Puckett and Peter Dibble Tandy’s Little Wonder by F.G. Swygert - http://www.cocopedia.com/wiki/index.php/Tandy's_Little_Wonder “CoCo: The Colorful History of Tandy's Underdog Computer” by Boisy G Pitre, Bill Loguidice - http://www.amazon.com/dp/1466592478/?tag=flodaypod-20 Software Dungeons of Daggorath - https://www.facebook.com/DungeonsOfDaggorathForum Castle of Tharoggad - http://www.cocopedia.com/wiki/index.php/Castle_of_Tharoggad Downland - http://www.cocopedia.com/wiki/index.php/Downland Cave Walker - http://www.mobygames.com/game/cave-walker Rescue on Fractalis! - http://www.cocopedia.com/wiki/index.php/Rescue_on_Fractalus Koronis Rift - http://www.cocopedia.com/wiki/index.php/Koronis_Rift FahrFall - https://archive.org/details/softwarelibrary_coco_fahrfall Editor/Assembler - http://www.cocopedia.com/wiki/index.php/EDTASM%2B Logo - http://www.colorcomputerarchive.com/coco/Documents/Manuals/Programming/TRS-80%20Color%20Logo%20(Tandy).pdf Super Logo - http://www.colorcomputerarchive.com/coco/Disks/Programming/Super%20Logo%20(Tandy).zip eForth - http://www.colorcomputerarchive.com/coco/Disks/Programming/eFORTH%20(Frank%20Hogg%20Laboratory).zip Color File - https://archive.org/details/Color_File_1981_Tandy Color Scripsit - http://www.colorcomputerarchive.com/coco/Documents/Manuals/Applications/Color%20Scripsit%20(Tandy).pdf Spectaculator - https://archive.org/details/coco2cart_Spectaculator_1983_26-3104_Tandy Robot Odyssey - https://en.wikipedia.org/wiki/Robot_Odyssey Deskmate - https://www.youtube.com/watch?v=ptiEv_Sh-NI Ads Isaac Asimov - “Get an out of this world deal on my favorite color computer” - http://www.vintagecomputing.com/index.php/archives/300/retro-scan-of-the-week-isaac-asimovs-favorite-color-computer User Groups and Shows RainbowFest - https://www.youtube.com/watch?v=oEnPg1qPegI Glenside CoCo Club - http://www.glensideccc.com/ Last Chicago CoCoFest - April 23 & 24, 2016, Lombard, IL - http://www.glensideccc.com/cocofest/index.shtml Modern Upgrades Triad 512K RAM Upgrade - http://www.cloud9tech.com/ CoCo SDC - http://cocosdc.blogspot.com/ 6309 upgrade - http://richg42.blogspot.com/2014/02/coco-3-upgrades-hitachi-6309-cpu-512kb.html NitrOS9 - http://sourceforge.net/projects/nitros9/ Donkey Kong modified for the 6309 and 512K (Sock Master) - http://users.axess.com/twilight/sock/dk/ PS/2 keyboard adapter - http://www.cloud9tech.com/ mini-Flash - 4 16K banks of Flash - http://www.cloud9tech.com/ CoCo3FPGA - https://groups.yahoo.com/neo/groups/CoCo3FPGA/info Connectivity to Modern Computers Drivewire - https://sites.google.com/site/drivewire4/ Emulation MESS (Mac/Windows) - Multi-Emulator Super System - http://www.mess.org/ Jeff Vavasour’s CoCo 2 and CoCo 3 emulators (DOS) - http://www.vavasour.ca/jeff/trs80.html#coco3 XROAR (Dragon and CoCo 1/2) - http://www.6809.org.uk/xroar/ VCC (Windows) - http://www.coco4.com/vcc/index.shtml by Joseph Forgione VCC at SourceForge - http://sourceforge.net/projects/vcce/ Version 2.0 8/26/15 Mocha - http://www.haplessgenius.com/mocha/mocha_original.html Community Facebook - https://www.facebook.com/groups/2359462640/ Mailing List - Coco@maltedmedia.com , https://pairlist5.pair.net/mailman/listinfo/coco AtariAge CoCo Forum - http://atariage.com/forums/forum/174-tandy-computers/ Forum at tandycoco.com - http://www.tandycoco.com/forum/index.php The CoCo Crew Podcast - http://cococrew.org/ Coordinated CoCo Conference - http://www.coco4.com/podcasts/cococo.shtml 
Google+ (with links to YouTube videos) - https://plus.google.com/+Tandycoco6809/videos Current Web Sites TandyCoCo.com - http://www.tandycoco.com/ The TRS-80/Tandy Color Computer COCO SuperSite! - http://www.coco3.com/community/ CoCoCoding Website (Aaron Wolfe) - https://sites.google.com/a/aaronwolfe.com/cococoding/home/ The NitrOS9 Project - http://sourceforge.net/projects/nitros9/ OpenCoCo.net - http://opencoco.net/gf/ A set of Coco 1/2/3 web pages by L. Curtis Boyle - http://www.lcurtisboyle.com/nitros9/index.html CoCo4 Website (Steve Bjork) - http://www.coco4.com/index.shtml CoCo/OS9 Archive - http://os9projects.com/index.html Cocopedia - http://www.cocopedia.com/wiki/index.php/Main_Page Tandy Color Computer Games - http://www.lcurtisboyle.com/nitros9/coco_game_list.html Ira Goldklang's TRS-80 Revived Site - http://www.trs-80.com/wordpress/ TRS-80 Color Computer Archive - http://www.colorcomputerarchive.com/ TOSEC (The Old School Emulation Center) software collection at archive.org - https://archive.org/details/Tandy_TRS80_Color_Computer_TOSEC_2012_04_23 CoCo Central - http://coco-central.com/ Sock Master - http://users.axess.com/twilight/sock/ Cloud9 - http://www.cloud9tech.com/ Wikipedia - https://en.wikipedia.org/wiki/TRS-80_Color_Computer
We continue talking about the Raspberry Pi and its possibilities; in today's episode we look at the Raspberry Pi as a web server. It is a very interesting use case that will also serve as a tool for learning about deploying web projects and about systems in general. If you have any questions about the Raspberry Pi or any other technology topic, you can reach us through the contact form, on Twitter, and on Facebook. We also have a mailing list at your disposal. As we discussed in previous episodes, the Raspberry Pi can act as an exceptional, versatile home server. Here we are going to talk about using the device as a web server. Obviously the performance a Raspberry Pi can deliver is not what you would expect in a business environment with heavy usage, but as a server for home applications or for training purposes it is genuinely interesting, and it will also help us understand how the technologies involved can help in any project. We propose several possible scenarios covering different technologies; thanks to Linux there is plenty to choose from, so here we go: Scenario 1: Apache + PHP web server. Why use this scenario? When it comes to PHP, the better question is why not use it. This platform lets you run the best-known CMSs, such as WordPress, Joomla, and Prestashop. You can of course also develop advanced applications using the latest technologies; in fact, internet giants such as Yahoo, Facebook, SourceForge, and Flickr have relied on it. What do I need? Anyone who knows these technologies will know the LAMP stack (Linux, Apache, MySQL, and PHP), and although we are not aware of a single LAMP package for the Raspberry Pi, each of its components can be installed separately. For example, on Raspbian, using the well-known apt-get package manager you only need to install the apache2, mysql-server, php, and php-mysql packages to get a complete PHP and MySQL web server. Scenario 2: Node.JS web server. Why use this scenario? The momentum behind Node.JS is considerable; it has been embraced by the developer community and looks set to gain a significant presence in the web world. We can recommend Express, a framework for building pages and REST web services very easily (we say a bit more about this technology in this article). What do I need? Simply install Node on your Raspberry Pi. Keep in mind that the process we write in Node.JS must itself serve its output as a web page by binding the web service to a port, as we explained in the article mentioned above, where we also cover how to install Node.JS on a Raspberry Pi running Raspbian. Scenario 3: Java web server. Why use this scenario? We cannot overlook Java, since it remains one of the best-known and most widely used programming languages in the world. Java also offers a world of possibilities on the web; as an example we will use a very popular MVC framework: Spring MVC. Anyone familiar with MVC will feel at home with this framework, which supports building web applications around that software design pattern. What do I need?
In this case we have to install the Java JDK (on any Debian-derived distribution we can use the aforementioned apt-get tool to install the oracle-java7-jdk package) and also install the Tomcat server (again with apt-get, installing the tomcat7 package). Once everything is installed and a few configuration tweaks are in place, we can deploy our Java web applications (a minimal servlet sketch follows these notes). Scenario 4: ASP.NET with Mono, and soon with .NET Core! As many of you already know, more than one member of the programarfácil team works daily with .NET environments, and thanks to Microsoft's change of direction we will soon be able to enjoy its well-known framework for building ASP.NET MVC web applications on non-Windows platforms, such as a Raspberry Pi running one of the Linux distributions available for the device. Right now we can do this with Mono, and soon we will be able to do it officially. Wait a moment... what is Mono? Mono is a free implementation that supports .NET applications on Linux environments. It is not an official Microsoft project but an open source community one. As a bonus, this means you can enjoy tinkering with both technologies. Why use ASP.NET with Mono? If you like the .NET platform and absolutely refuse to give it up, this is the option available today, like it or not. You can find all the information about the project here. In the near future... ASP.NET with .NET Core. .NET Core? We already talked in this post about .NET Core, a subset of .NET whose goal is to let these developments run natively. ASP.NET 5 beta 7 is already available, including the .NET Execution Environment for Linux and Mac. Do you dare to try it? Here is the official information from Microsoft. As we often say: these are our proposals, but there can be many more. Do you think there is a more interesting one? Please do share it through the various channels we have. Below are the links mentioned in this podcast: the Mono project; the .NET Execution Environment for Linux and Mac. Resource of the day: Fiddler. A free web debugging proxy for any browser, system, or platform. It lets you debug the traffic generated by web applications on Windows, Linux, Mac, and mobile devices, run performance tests, analyze traffic by logging every HTTP transaction, and check the security of your applications so you don't get any nasty surprises. Many thanks to everyone for the comments and ratings you leave us on iVoox, iTunes, and Spreaker; they give us great encouragement to keep this project going.
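To make Scenario 3 concrete, here is a minimal sketch of a servlet that could be deployed to the Tomcat 7 instance described above. The class name and the /hello mapping are illustrative only; the sketch assumes the Servlet 3.0 API (javax.servlet) that Tomcat 7 provides:

```java
import java.io.IOException;

import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Annotation-based mapping (Servlet 3.0+), so no web.xml entry is needed.
// Package this class in a WAR and copy it into Tomcat's webapps/ directory.
@WebServlet("/hello")
public class HelloServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        resp.setContentType("text/plain;charset=UTF-8");
        resp.getWriter().println("Hello from the Raspberry Pi!");
    }
}
```

In a Spring MVC project a controller would take the place of the raw servlet, but the deployment model, packaging a WAR and dropping it into Tomcat, is the same.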
Portable apps let you bring your favorite programs with you wherever you go. Find out how to use portable apps on Windows, Mac OS X, and Linux. Materials List PC or Mac USB Key Internet connection Time & patience Portable Apps for Windows PortableApps.com is a very user-friendly way to put portable apps on a USB key. Installation of applications does take some time, so while it might look like your computer has locked up, it may still be installing. Be patient. Chrome with Flash in your portable version Instructions: Download NPSWF32.DLL Open your Portable Apps drive in Windows Explorer Go to GoogleChromePortable\App\Chrome-bin Create a new subdirectory with the name "plugins" (without the quotation marks) Copy the DLL to GoogleChromePortable\App\Chrome-bin\plugins Portable Apps for Mac OS X Freesmug.org/download is a great resource for picking up portable apps. After downloading the DMG, open it and copy the app to your USB Key. If you are using OS X Lion or higher, you'll have to download PAppsLionPatch. Portable Apps for Linux Your best source for portable apps for Linux is Sourceforge.com/projects/portable Download your file and put it on a drive Viewer Challenge! If you can figure out how to create a portable version of Chrome with Adobe Flash, let us know. Make a video, upload it to YouTube or your favorite video-sharing site, and send us the link! Hosts: Leo Laporte and Iyaz Akhtar Download or subscribe to this show at https://twit.tv/shows/know-how. Contribute to our show! Send us an email at knowhow@twit.tv or leave us a voicemail at 408-800-KNOW (408-800-5669). Sponsor: audible.com
Karen and Bradley discuss recent debates about the value of non-profit organizations for Free Software. Show Notes: Segment 0 (00:34) Fontana (and other Red Hat employees) pointed out some imprecision in what Bradley said in Episode 0x1D about Debian non-free. (01:07) A call for participation has been announced for the Legal and Policy Issues DevRoom at FOSDEM 2012. Please submit a proposal by 30 December 2011 (04:30) A recent debate about non-profits started, initiated by a blog post called Apache Considered Harmful. (12:55) Karen and Bradley briefly mentioned that some now believe that Considered Harmful Considered Harmful (13:16) A long thread on this issue occurred on the FLOSS Foundations mailing list (13:45) Bradley made an official Conservancy Blog post about the value of non-profits for Free Software (14:17) Sourceforge became proprietary software in 2001; the proprietarization debacle is well described in an article by Loïc Dachary. (19:19) Bradley mentioned FaiFCast Episode 0x11, which discussed the OpenOffice.org/Apache/LibreOffice situation. (44:35) Bradley pointed out that this debate conflates a lot of different issues, and tried to list all the conflated questions here: Should a non-profit home decide what technical infrastructure is used for a software freedom project? And if so, what should it be? If the project doesn't provide technological services, should non-profits allow their projects to rely on for-profits for technological or other services? Should a non-profit home set political and social positions that must be followed by the projects? If so, how strictly should they be enforced? Should copyrights be held by the non-profit home of the project, or by the developers, or a mix of the two? Should the non-profit dictate licensing requirements for the project? If so, how many licenses are OK? Should a non-profit dictate strict copyright provenance requirements on its projects? If not, should the non-profit at least provide guidelines and recommendations? Send feedback and comments on the cast to . You can keep in touch with Free as in Freedom on our IRC channel, #faif on irc.freenode.net, and by following Conservancy on identi.ca and Twitter. Free as in Freedom is produced by Dan Lynch of danlynch.org. Theme music written and performed by Mike Tarantino with Charlie Paxson on drums. The content of this audcast, and the accompanying show notes and music, are licensed under the Creative Commons Attribution-Share-Alike 4.0 license (CC BY-SA 4.0).
Episode 21: OLF recap and there will be rants! Intro Events - [1DevDayDetroit Nov 4/5th](# "1DevDayDetroit") OhioLinuxfest Recap: Thanks for stopping by the booth everyone! [Sourceforge](http://sourceforge.net) Interview - [Elizabeth Naramore](http://www.naramore.net/blog/), Community Developer Manager Links of the week - [A Gnome OS? Would you use it?](http://bobthegnome.blogspot.com/2011/09/gnome-os.html) - [Ubuntu with a rolling monthly release, what might work/what will fail](http://arstechnica.com/open-source/news/2011/09/ubuntu-technical-board-member-proposes-monthly-ubuntu-release-cycle.ars) - [Mark Shuttleworth on Cloud APIs are like HTTP, don't screw it up.](http://www.markshuttleworth.com/archives/765) - [Application design, Launchpad thinks smaller services](https://dev.launchpad.net/ArchitectureGuide/Services) Rick's Mini Rant: Your anti-pattern is called a tool - [Original Article: ORM is an anti-pattern](http://seldo.com/weblog/2011/08/11/orm_is_an_antipattern) - [SqlAlchemy](http://sqlalchemy.org) Books - Rick: [Different: Escaping the Competitive Herd](http://www.amazon.com/gp/product/B0036S4CNE/ref=as_li_ss_tl?ie=UTF8&tag=mitechie-20&linkCode=as2&camp=217145&creative=399373&creativeASIN=B0036S4CNE) - Craig: [Mouse Guard](http://mouseguard.net) by David Petersen Music - [Горсти талого снега by Калевала from Кукушкины дети](http://www.jamendo.com/en/album/94940) - [Human Core by HiHate from Against All](http://www.jamendo.com/en/album/97774) - [Natural 20s by Dual Core from Next Level](http://dualcoremusic.com/nerdcore/) - [Eighties Dance Music by The West Exit from Unearthed](http://magnatune.com/artists/albums/westexit-unearthed/) - [Imported by Irate Architect from Visitors](http://www.jamendo.com/en/album/80765) - [The Four Seasons: Concerto in E Major, Op. 8/1, RV 269 - 'Spring': III. Allegro by Lara St. John from Vivaldi - The Four Seasons](http://magnatune.com/artists/albums/lara-fourseasons/) - [Never Happy by Drop Alive from Drop Alive](http://www.jamendo.com/en/album/6357)
PyCon 2011 Sprints Interview: Mark Ramm from SourceForge and Allura, the new forge Lococast got a chance to sit down and ask Mark Ramm about the new open source project forge, Allura. They managed to get it out to the public right before PyCon and were working hard on it during the sprints at PyCon 2011. Find out what makes it unique and what they're up to.
C:\>WIN Welcome to Show #107! This week's topic: Windows 3.0! If you're a Spectrum enthusiast, then be sure to check out the update of the Free Unix Spectrum Emulator (FUSE), hosted at SourceForge! Be sure to send any comments, questions or feedback to retrobits@gmail.com. For online discussions on Retrobits Podcast topics, check out the Retrobits Podcast forum on the PETSCII Forums page! Our Theme Song is "Sweet" from the "Re-Think" album by Galigan. Thanks for listening! - Earl This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 2.5 License.
Add mixed retro. Stir and Serve. Welcome to Show 055! This week's Topic: Retro Mix (A Variety Episode)! Topics and links discussed in the podcast... Check out A2Central for info on the recent KansasFest 2006, and all your Apple II needs! Also check out the official KFest site - it's never too early to plan for 2007... Here's a presentation on SymbOS by the creator (Google Video - note that the intro is in Spanish, but the presentation is in English). The History Of Computing project - from 30,000 years ago to today, computing is exciting! ADTPro - the latest incarnation of Apple Disk Transfer - check it out today on their SourceForge project page... Be sure to send any comments, questions or feedback to retrobits@gmail.com. For online discussions on Retrobits Podcast topics, check out the Retrobits Podcast forum on the PETSCII Forums page! Our Theme Song is "Sweet" from the "Re-Think" album by Galigan. Thanks for listening! - Earl