Podcasts about Joyent

  • 40 PODCASTS
  • 65 EPISODES
  • 1h 7m AVG DURATION
  • 1 MONTHLY NEW EPISODE
  • Apr 11, 2025 LATEST


Latest podcast episodes about Joyent

Software Lifecycle Stories
Conversation with a Cloud Computing Pioneer, David Young

Software Lifecycle Stories

Apr 11, 2025 · 48:12


My guest today is David Young, Founder at Federated Computer. He has been a pioneer in cloud computing since even before the term was invented. In fact, he and his team helped invent the term. In this conversation we recall many of the developments of a couple of decades ago, as well as what keeps him engaged now at Federated Computer. During this conversation, David touches upon:
  • Introduction and Early Career Journey
  • Transition to Silicon Valley and Startups
  • Contrast Between Corporate and Startup Life
  • Finding Problems to Solve
  • Customer Empathy and Team Dynamics
  • Challenges with Open Source Adoption
  • AI, Open Source, and Future Potential
  • Career Tips for IT and Open Source Enthusiasts
  • David's practices and tips to stay grounded
About David:
CEO and founder of Joyent — the folks who invented node.js, helped stand up Twitter and the Facebook developer platform, and brought containers to market, leading to the Kubernetes revolution. Seven patents. Very experienced raising and deploying venture capital. Sold Joyent to Samsung in 2016.
Started an ultra-premium ice cream company (Honeymoon Brands). Invented unique manufacturing products and processes to put ice cream in glass jars, got into 700 grocery stores in the West, and learned grocery was a blast. Sold the company in 2018.
I've recently started an agency (Endurancy: https://www.endurancy.com) to take the marketing success I developed at Honeymoon and offer small and medium-sized brands AI-based marketing capabilities. I'd love to work with a promising company as it develops and grows. I can bring a wealth of experience, mistakes, and learnings in fundraising, product and marketing strategy, business and corporate development, and engineering development to the company to help it go faster and smarter.
You can reach him at https://www.linkedin.com/in/davidpaulyoung/

DealMakers
David Young On Selling A Company For $170 Million To Samsung And Now Providing Global Businesses With Economical Open Source Software

DealMakers

Mar 14, 2025 · 32:34


David Young's journey to becoming a tech entrepreneur is anything but conventional. His career path took him from studying ancient Greek at Indiana University to Wall Street, and eventually to Silicon Valley, where he founded and scaled successful technology startups, including Joyent. David's latest venture, Federated Computer, has attracted funding from top-tier investor Lightning Ventures.

Oxide and Friends
Rebooting a datacenter: A decade later

Oxide and Friends

May 30, 2024 · 100:34 · Transcription Available


Back in May 2014, Joyent accidentally rebooted an entire datacenter (not just the handful of nodes as intended!). That incident--traumatic as it was--informed many aspects of the Oxide product. Bryan and Adam were joined by members of that former Joyent team to discuss, commiserate, and--perhaps--get some things off their chests. We've been holding a live show weekly on Mondays at 5p for about an hour, and recording them all; here is the recording. In addition to Bryan Cantrill and Adam Leventhal, speakers included Josh Clulow, Brian Bennett, Robert Mustacchi, and Steve Tuck.
Some of the topics we hit on, in the order that we hit them:
  • The Register: Fat-fingered admin downs entire Joyent data center
  • Bryan's talk: Debugging Under Fire
  • Oxide and Friends on the Oakland Ballers
  • The Ur Agent
  • Joyent post-mortem
PRs needed! If we got something wrong or missed something, please file a PR! Our next show will likely be on Monday at 5p Pacific Time on our Discord server; stay tuned to our Mastodon feeds for details, or subscribe to this calendar. We'd love to have you join us, as we always love to hear from new speakers!

The Changelog
From Sun to Oxide (Interview)

The Changelog

May 22, 2024 · 152:33


Bryan Cantrill, Co-founder and CTO of Oxide Computer Company, joins Adam to share his journey from Sun to Oxide – from Sun and Fishworks, to DTrace, to ZFS, to Joyent and Node.js, and now working to build on-prem cloud servers as they should be at Oxide.

Changelog Master Feed
From Sun to Oxide (Changelog Interviews #592)

Changelog Master Feed

May 22, 2024 · 152:33


Bryan Cantrill, Co-founder and CTO of Oxide Computer Company, joins Adam to share his journey from Sun to Oxide – from Sun and Fishworks, to DTrace, to ZFS, to Joyent and Node.js, and now working to build on-prem cloud servers as they should be at Oxide.

Screaming in the Cloud
Building Computers for the Cloud with Steve Tuck

Screaming in the Cloud

Sep 21, 2023 · 42:18


Steve Tuck, Co-Founder & CEO of Oxide Computer Company, joins Corey on Screaming in the Cloud to discuss his work to make modern computers cloud-friendly. Steve describes what it was like going through early investment rounds, and the difficult but important decision he and his co-founder made to build their own switch. Corey and Steve discuss the demand for on-prem computers that are built for cloud capability, and Steve reveals how Oxide approaches their product builds to ensure the masses can adopt their technology wherever they are. About SteveSteve is the Co-founder & CEO of Oxide Computer Company.  He previously was President & COO of Joyent, a cloud computing company acquired by Samsung.  Before that, he spent 10 years at Dell in a number of different roles. Links Referenced: Oxide Computer Company: https://oxide.computer/ On The Metal Podcast: https://oxide.computer/podcasts/on-the-metal TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: This episode is brought to us in part by our friends at RedHat. As your organization grows, so does the complexity of your IT resources. You need a flexible solution that lets you deploy, manage, and scale workloads throughout your entire ecosystem. The Red Hat Ansible Automation Platform simplifies the management of applications and services across your hybrid infrastructure with one platform. Look for it on the AWS Marketplace.Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. You know, I often say it—but not usually on the show—that Screaming in the Cloud is a podcast about the business of cloud, which is intentionally overbroad so that I can talk about basically whatever the hell I want to with whoever the hell I'd like. Today's guest is, in some ways of thinking, about as far in the opposite direction from Cloud as it's possible to go and still be involved in the digital world. Steve Tuck is the CEO at Oxide Computer Company. You know, computers, the things we all pretend aren't underpinning those clouds out there that we all use and pay by the hour, gigabyte, second-month-pound or whatever it works out to. Steve, thank you for agreeing to come back on the show after a couple years, and once again suffer my slings and arrows.Steve: Much appreciated. Great to be here. It has been a while. I was looking back, I think three years. This was like, pre-pandemic, pre-interest rates, pre… Twitter going totally sideways.Corey: And I have to ask to start with that, it feels, on some level, like toward the start of the pandemic, when everything was flying high and we'd had low interest rates for a decade, that there was a lot of… well, lunacy lurking around in the industry, my own business saw it, too. It turns out that not giving a shit about the AWS bill is in fact a zero interest rate phenomenon. And with all that money or concentrated capital sloshing around, people decided to do ridiculous things with it. I would have thought, on some level, that, “We're going to start a computer company in the Bay Area making computers,” would have been one of those, but given that we are a year into the correction, and things seem to be heading up into the right for you folks, that take was wrong. 
How'd I get it wrong?Steve: Well, I mean, first of all, you got part of it right, which is there were just a litany of ridiculous companies and projects and money being thrown in all directions at that time.Corey: An NFT of a computer. We're going to have one of those. That's what you're selling, right? Then you had to actually hard pivot to making the real thing.Steve: That's it. So, we might as well cut right to it, you know. This is—we went through the crypto phase. But you know, our—when we started the company, it was yes, a computer company. It's on the tin. It's definitely kind of the foundation of what we're building. But you know, we think about what a modern computer looks like through the lens of cloud.I was at a cloud computing company for ten years prior to us founding Oxide, so was Bryan Cantrill, CTO, co-founder. And, you know, we are huge, huge fans of cloud computing, which was an interesting kind of dichotomy. Instead of conversations when we were raising for Oxide—because of course, Sand Hill is terrified of hardware. And when we think about what modern computers need to look like, they need to be in support of the characteristics of cloud, and cloud computing being not that you're renting someone else's computers, but that you have fully programmable infrastructure that allows you to slice and dice, you know, compute and storage and networking however software needs. And so, what we set out to go build was a way for the companies that are running on-premises infrastructure—which, by the way, is almost everyone and will continue to be so for a very long time—access to the benefits of cloud computing. And to do that, you need to build a different kind of computing infrastructure and architecture, and you need to plumb the whole thing with software.Corey: There are a number of different ways to view cloud computing. And I think that a lot of the, shall we say, incumbent vendors over in the computer manufacturing world tend to sound kind of like dinosaurs, on some level, where they're always talking in terms of, you're a giant company and you already have a whole bunch of data centers out there. But one of the magical pieces of cloud is you can have a ridiculous idea at nine o'clock tonight and by morning, you'll have a prototype, if you're of that bent. And if it turns out it doesn't work, you're out, you know, 27 cents. And if it does work, you can keep going and not have to stop and rebuild on something enterprise-grade.So, for the small-scale stuff and rapid iteration, cloud providers are terrific. Conversely, when you wind up in the giant fleets of millions of computers, in some cases, there begin to be economic factors that weigh in, and for some workloads—yes, I know it's true—going to a data center is the economical choice. But my question is, is starting a new company in the direction of building these things, is it purely about economics or is there a capability story tied in there somewhere, too?Steve: Yeah, it's actually economics ends up being a distant third, fourth, in the list of needs and priorities from the companies that we're working with. 
When we talk about—and just to be clear we're—our demographic, that kind of the part of the market that we are focused on are large enterprises, like, folks that are spending, you know, half a billion, billion dollars a year in IT infrastructure, they, over the last five years, have moved a lot of the use cases that are great for public cloud out to the public cloud, and who still have this very, very large need, be it for latency reasons or cost reasons, security reasons, regulatory reasons, where they need on-premises infrastructure in their own data centers and colo facilities, et cetera. And it is for those workloads in that part of their infrastructure that they are forced to live with enterprise technologies that are 10, 20, 30 years old, you know, that haven't evolved much since I left Dell in 2009. And, you know, when you think about, like, what are the capabilities that are so compelling about cloud computing, one of them is yes, what you mentioned, which is you have an idea at nine o'clock at night and swipe a credit card, and you're off and running. And that is not the case for an idea that someone has who is going to use the on-premises infrastructure of their company. And this is where you get shadow IT and 16 digits to freedom and all the like.Corey: Yeah, everyone with a corporate credit card winds up being a shadow IT source in many cases. If your processes as a company don't make it easier to proceed rather than doing it the wrong way, people are going to be fighting against you every step of the way. Sometimes the only stick you've got is that of regulation, which in some industries, great, but in other cases, no, you get to play Whack-a-Mole. I've talked to too many companies that have specific scanners built into their mail system every month looking for things that look like AWS invoices.Steve: [laugh]. Right, exactly. And so, you know, but if you flip it around, and you say, well, what if the experience for all of my infrastructure that I am running, or that I want to provide to my software development teams, be it rented through AWS, GCP, Azure, or owned for economic reasons or latency reasons, I had a similar set of characteristics where my development team could hit an API endpoint and provision instances in a matter of seconds when they had an idea and only pay for what they use, back to kind of corporate IT. And what if they were able to use the same kind of developer tools they've become accustomed to using, be it Terraform scripts and the kinds of access that they are accustomed to using? How do you make those developers just as productive across the business, instead of just through public cloud infrastructure?At that point, then you are in a much stronger position where you can say, you know, for a portion of things that are, as you pointed out, you know, more unpredictable, and where I want to leverage a bunch of additional services that a particular cloud provider has, I can rent that. And where I've got more persistent workloads or where I want a different economic profile or I need to have something in a very low latency manner to another set of services, I can own it. And that's where I think the real chasm is because today, you just don't—we take for granted the basic plumbing of cloud computing, you know? Elastic Compute, Elastic Storage, you know, networking and security services. And us in the cloud industry end up wanting to talk a lot more about exotic services and, sort of, higher-up stack capabilities. 
None of that basic plumbing is accessible on-prem.Corey: I also am curious as to where exactly Oxide lives in the stack because I used to build computers for myself in 2000, and it seems like having gone down that path a bit recently, yeah, that process hasn't really improved all that much. The same off-the-shelf components still exist and that's great. We always used to disparagingly call spinning hard drives as spinning rust in racks. You named the company Oxide; you're talking an awful lot about the Rust programming language in public a fair bit of the time, and I'm starting to wonder if maybe words don't mean what I thought they meant anymore. Where do you folks start and stop, exactly?Steve: Yeah, that's a good question. And when we started, we sort of thought the scope of what we were going to do and then what we were going to leverage was smaller than it has turned out to be. And by that I mean, man, over the last three years, we have hit a bunch of forks in the road where we had questions about do we take something off the shelf or do we build it ourselves. And we did not try to build everything ourselves. So, to give you a sense of kind of where the dotted line is, around the Oxide product, what we're delivering to customers is a rack-level computer. So, the minimum size comes in rack form. And I think your listeners are probably pretty familiar with this. But, you know, a rack is—Corey: You would be surprised. It's basically, what are they about seven feet tall?Steve: Yeah, about eight feet tall.Corey: Yeah, yeah. Seven, eight feet, weighs a couple 1000 pounds, you know, make an insulting joke about—Steve: Two feet wide.Corey: —NBA players here. Yeah, all kinds of these things.Steve: Yeah. And big hunk of metal. And in the cases of on-premises infrastructure, it's kind of a big hunk of metal hole, and then a bunch of 1U and 2U boxes crammed into it. What the hyperscalers have done is something very different. They started looking at, you know, at the rack level, how can you get much more dense, power-efficient designs, doing things like using a DC bus bar down the back, instead of having 64 power supplies with cables hanging all over the place in a rack, which I'm sure is what you're more familiar with.Corey: Tremendous amount of weight as well because you have the metal chassis for all of those 1U things, which in some cases, you wind up with, what, 46U in a rack, assuming you can even handle the cooling needs of all that.Steve: That's right.Corey: You have so much duplication, and so much of the weight is just metal separating one thing from the next thing down below it. And there are opportunities for massive improvement, but you need to be at a certain point of scale to get there.Steve: You do. You do. And you also have to be taking on the entire problem. You can't pick at parts of these things. And that's really what we found. So, we started at this sort of—the rack level as sort of the design principle for the product itself and found that that gave us the ability to get to the right geometry, to get as much CPU horsepower and storage and throughput and networking into that kind of chassis for the least amount of wattage required, kind of the most power-efficient design possible.So, it ships at the rack level and it ships complete with both our server sled systems in Oxide, a pair of Oxide switches. This is—when I talk about, like, design decisions, you know, do we build our own switch, it was a big, big, big question early on. 
We were fortunate even though we were leaning towards thinking we needed to go do that, we had this prospective early investor who was early at AWS and he had asked a very tough question that none of our other investors had asked to this point, which is, “What are you going to do about the switch?”And we knew that the right answer to an investor is like, “No. We're already taking on too much.” We're redesigning a server from scratch in, kind of, the mold of what some of the hyperscalers have learned, doing our own Root of Trust, we're doing our own operating system, hypervisor control plane, et cetera. Taking on the switch could be seen as too much, but we told them, you know, we think that to be able to pull through all of the value of the security benefits and the performance and observability benefits, we can't have then this [laugh], like, obscure third-party switch rammed into this rack.Corey: It's one of those things that people don't think about, but it's the magic of cloud with AWS's network, for example, it's magic. You can get line rate—or damn near it—between any two points, sustained.Steve: That's right.Corey: Try that in the data center, you wind into massive congestion with top-of-rack switches, where, okay, we're going to parallelize this stuff out over, you know, two dozen racks and we're all going to have them seamlessly transfer information between each other at line rate. It's like, “[laugh] no, you're not because those top-of-rack switches will melt and become side-of-rack switches, and then bottom-puddle-of-rack switches. It doesn't work that way.”Steve: That's right.Corey: And you have to put a lot of thought and planning into it. That is something that I've not heard a traditional networking vendor addressing because everyone loves to hand-wave over it.Steve: Well so, and this particular prospective investor, we told him, “We think we have to go build our own switch.” And he said, “Great.” And we said, “You know, we think we're going to lose you as an investor as a result, but this is what we're doing.” And he said, “If you're building your own switch, I want to invest.” And his comment really stuck with us, which is AWS did not stand on their own two feet until they threw out their proprietary switch vendor and built their own.And that really unlocked, like you've just mentioned, like, their ability, both in hardware and software to tune and optimize to deliver that kind of line rate capability. And that is one of the big findings for us as we got into it. Yes, it was really, really hard, but based on a couple of design decisions, P4 being the programming language that we are using as the surround for our silicon, tons of opportunities opened up for us to be able to do similar kinds of optimization and observability. And that has been a big, big win.But to your question of, like, where does it stop? So, we are delivering this complete with a baked-in operating system, hypervisor, control plane. And so, the endpoint of the system, where the customer meets is either hitting an API or a CLI or a console that delivers and kind of gives you the ability to spin up projects. And, you know, if one is familiar with EC2 and EBS and VPC, that VM level of abstraction is where we stop.Corey: That, I think, is a fair way of thinking about it. And a lot of cloud folks are going to pooh-pooh it as far as saying, “Oh well, just virtual machines. That's old cloud. 
That just treats the cloud like a data center.” And in many cases, yes, it does because there are ways to build modern architectures that are event-driven on top of things like Lambda, and API Gateway, and the rest, but you take a look at what my customers are doing and what drives the spend, it is invariably virtual machines that are largely persistent.Sometimes they scale up, sometimes they scale down, but there's always a baseline level of load that people like to hand-wave away the fact that what they're fundamentally doing in a lot of these cases, is paying the cloud provider to handle the care and feeding of those systems, which can be expensive, yes, but also delivers significant innovation beyond what almost any company is going to be able to deliver in-house. There is no way around it. AWS is better than you are—whoever you happen to be—at replacing failed hard drives. That is a simple fact. They have teams of people who are the best in the world of replacing failed hard drives. You generally do not. They are going to be better at that than you. But that's not the only axis. There's not one calculus that leads to, is cloud a scam or is cloud a great value proposition for us? The answer is always a deeply nuanced, “It depends.”Steve: Yeah, I mean, I think cloud is a great value proposition for most and a growing amount of software that's being developed and deployed and operated. And I think, you know, one of the myths that is out there is, hey, turn over your IT to AWS because we have or you know, a cloud provider—because we have such higher caliber personnel that are really good at swapping hard drives and dealing with networks and operationally keeping this thing running in a highly available manner that delivers good performance. That is certainly true, but a lot of the operational value in an AWS has been delivered via software, the automation, the observability, and not actual people putting hands on things. And it's an important point because that's been a big part of what we're building into the product. You know, just because you're running infrastructure in your own data center, it does not mean that you should have to spend, you know, 1000 hours a month across a big team to maintain and operate it. And so, part of that, kind of, cloud, hyperscaler innovation that we're baking into this product is so that it is easier to operate with much, much, much lower overhead in a highly available, resilient manner.Corey: So, I've worked in a number of data center facilities, but the companies I was working with, were always at a scale where these were co-locations, where they would, in some cases, rent out a rack or two, in other cases, they'd rent out a cage and fill it with their own racks. They didn't own the facilities themselves. Those were always handled by other companies. So, my question for you is, if I want to get a pile of Oxide racks into my environment in a data center, what has to change? What are the expectations?I mean, yes, there's obviously going to be power and requirements at the data center colocation is very conversant with, but Open Compute, for example, had very specific requirements—to my understanding—around things like the airflow construction of the environment that they're placed within. How prescriptive is what you've built, in terms of doing a building retrofit to start using you folks?Steve: Yeah, definitely not. And this was one of the tensions that we had to balance as we were designing the product. 
For all of the benefits of hyperscaler computing, some of the design center for you know, the kinds of racks that run in Google and Amazon and elsewhere are hyperscaler-focused, which is unlimited power, in some cases, data centers designed around the equipment itself. And where we were headed, which was basically making hyperscaler infrastructure available to, kind of, the masses, the rest of the market, these folks don't have unlimited power and they aren't going to go be able to go redesign data centers. And so no, the experience should be—with exceptions for folks maybe that have very, very limited access to power—that you roll this rack into your existing data center. It's on standard floor tile, that you give it power, and give it networking and go.And we've spent a lot of time thinking about how we can operate in the wide-ranging environmental characteristics that are commonplace in data centers that focus on themselves, colo facilities, and the like. So, that's really on us so that the customer is not having to go to much work at all to kind of prepare and be ready for it.Corey: One of the challenges I have is how to think about what you've done because you are rack-sized. But what that means is that my own experimentation at home recently with on-prem stuff for smart home stuff involves a bunch of Raspberries Pi and a [unintelligible 00:19:42], but I tend to more or less categorize you the same way that I do AWS Outposts, as well as mythical creatures, like unicorns or giraffes, where I don't believe that all these things actually exist because I haven't seen them. And in fact, to get them in my house, all four of those things would theoretically require a loading dock if they existed, and that's a hard thing to fake on a demo signup form, as it turns out. How vaporware is what you've built? Is this all on paper and you're telling amazing stories or do they exist in the wild?Steve: So, last time we were on, it was all vaporware. It was a couple of napkin drawings and a seed round of funding.Corey: I do recall you not using that description at the time, for what it's worth. Good job.Steve: [laugh]. Yeah, well, at least we were transparent where we were going through the race. We had some napkin drawings and we had some good ideas—we thought—and—Corey: You formalize those and that's called Microsoft PowerPoint.Steve: That's it. A hundred percent.Corey: The next generative AI play is take the scrunched-up, stained napkin drawing, take a picture of it, and convert it to a slide.Steve: Google Docs, you know, one of those. But no, it's got a lot of scars from the build and it is real. In fact, next week, we are going to be shipping our first commercial systems. So, we have got a line of racks out in our manufacturing facility in lovely Rochester, Minnesota. Fun fact: Rochester, Minnesota, is where the IBM AS/400s were built.Corey: I used to work in that market, of all things.Steve: Really?Corey: Selling tape drives in the AS/400. I mean, I still maintain there's no real mainframe migration to the cloud play because there's no AWS/400. A joke that tends to sail over an awful lot of people's heads because, you know, most people aren't as miserable in their career choices as I am.Steve: Okay, that reminds me. So, when we were originally pitching Oxide and we were fundraising, we [laugh]—in a particular investor meeting, they asked, you know, “What would be a good comp? 
Like how should we think about what you are doing?” And fortunately, we had about 20 investor meetings to go through, so burning one on this was probably okay, but we may have used the AS/400 as a comp, talking about how [laugh] mainframe systems did such a good job of building hardware and software together. And as you can imagine, there were some blank stares in that room.But you know, there are some good analogs to historically in the computing industry, when you know, the industry, the major players in the industry, were thinking about how to deliver holistic systems to support end customers. And, you know, we see this in the what Apple has done with the iPhone, and you're seeing this as a lot of stuff in the automotive industry is being pulled in-house. I was listening to a good podcast. Jim Farley from Ford was talking about how the automotive industry historically outsourced all of the software that controls cars, right? So, like, Bosch would write the software for the controls for your seats.And they had all these suppliers that were writing the software, and what it meant was that innovation was not possible because you'd have to go out to suppliers to get software changes for any little change you wanted to make. And in the computing industry, in the 80s, you saw this blow apart where, like, firmware got outsourced. In the IBM and the clones, kind of, race, everyone started outsourcing firmware and outsourcing software. Microsoft started taking over operating systems. And then VMware emerged and was doing a virtualization layer.And this, kind of, fragmented ecosystem is the landscape today that every single on-premises infrastructure operator has to struggle with. It's a kit car. And so, pulling it back together, designing things in a vertically integrated manner is what the hyperscalers have done. And so, you mentioned Outposts. And, like, it's a good example of—I mean, the most public cloud of public cloud companies created a way for folks to get their system on-prem.I mean, if you need anything to underscore the draw and the demand for cloud computing-like, infrastructure on-prem, just the fact that that emerged at all tells you that there is this big need. Because you've got, you know, I don't know, a trillion dollars worth of IT infrastructure out there and you have maybe 10% of it in the public cloud. And that's up from 5% when Jassy was on stage in '21, talking about 95% of stuff living outside of AWS, but there's going to be a giant market of customers that need to own and operate infrastructure. And again, things have not improved much in the last 10 or 20 years for them.Corey: They have taken a tone onstage about how, “Oh, those workloads that aren't in the cloud, yet, yeah, those people are legacy idiots.” And I don't buy that for a second because believe it or not—I know that this cuts against what people commonly believe in public—but company execs are generally not morons, and they make decisions with context and constraints that we don't see. Things are the way they are for a reason. And I promise that 90% of corporate IT workloads that still live on-prem are not being managed or run by people who've never heard of the cloud. There was a decision made when some other things were migrating of, do we move this thing to the cloud or don't we? And the answer at the time was no, we're going to keep this thing on-prem where it is now for a variety of reasons of varying validity. But I don't view that as a bug. 
I also, frankly, don't want to live in a world where all the computers are basically run by three different companies.Steve: You're spot on, which is, like, it does a total disservice to these smart and forward-thinking teams in every one of the Fortune 1000-plus companies who are taking the constraints that they have—and some of those constraints are not monetary or entirely workload-based. If you want to flip it around, we were talking to a large cloud SaaS company and their reason for wanting to extend it beyond the public cloud is because they want to improve latency for their e-commerce platform. And navigating their way through the complex layers of the networking stack at GCP to get to where the customer assets are that are in colo facilities, adds lag time on the platform that can cost them hundreds of millions of dollars. And so, we need to get beyond this notion of, like, “Oh, well, the dark ages are for software that can't run in the cloud, and that's on-prem. And it's just a matter of time until everything moves to the cloud.”In the forward-thinking models of public cloud, it should be both. I mean, you should have a consistent experience, from a certain level of the stack down, everywhere. And then it's like, do I want to rent or do I want to own for this particular use case? In my vast set of infrastructure needs, do I want this to run in a data center that Amazon runs or do I want this to run in a facility that is close to this other provider of mine? And I think that's best for all. And then it's not this kind of false dichotomy of quality infrastructure or ownership.Corey: I find that there are also workloads where people will come to me and say, “Well, we don't think this is going to be economical in the cloud”—because again, I focus on AWS bills. That is the lens I view things through, and—“The AWS sales rep says it will be. What do you think?” And I look at what they're doing and especially if it involves high volumes of data transfer, I laugh a good hearty laugh and say, “Yeah, keep that thing in the data center where it is right now. You will thank me for it later.”It's, “Well, can we run this in an economical way in AWS?” As long as you're okay with economical meaning six times what you're paying a year right now for the same thing, yeah, you can. I wouldn't recommend it. And the numbers sort of speak for themselves. But it's not just an economic play.There's also the story of, does this increase their capability? Does it let them move faster toward their business goals? And in a lot of cases, the answer is no, it doesn't. It's one of those business process things that has to exist for a variety of reasons. You don't get to reimagine it for funsies and even if you did, it doesn't advance the company in what they're trying to do any, so focus on something that differentiates as opposed to this thing that you're stuck on.Steve: That's right. And what we see today is, it is easy to be in that mindset of running things on-premises is kind of backwards-facing because the experience of it is today still very, very difficult. I mean, talking to folks and they're sharing with us that it takes a hundred days from the time all the different boxes land in their warehouse to actually having usable infrastructure that developers can use. And our goal, and what we intend to go hit with Oxide, is you can roll in this complete rack-level system, plug it in, and within an hour you have developers that are accessing cloud-like services out of the infrastructure. 
And that—God, countless stories of firmware bugs that would send all the fans in the data center nonlinear and soak up 100 kW of power.Corey: Oh, God. And the problems that you had with the out-of-band management systems. For a long time, I thought DRAC stood for, “Dell, RMA Another Computer.” It was awful having to deal with those things. There was so much room for innovation in that space, which no one really grabbed onto.Steve: There was a really, really interesting talk at DEFCON that we just stumbled upon yesterday. The NVIDIA folks are giving a talk on BMC exploits… and like, a very, very serious BMC exploit. And again, it's what most people don't know is, like, first of all, the BMC, the Baseboard Management Controller, is like the brainstem of the computer. It has access to—it's a backdoor into all of your infrastructure. It's a computer inside a computer and it's got software and hardware that your server OEM didn't build and doesn't understand very well.And firmware is even worse because you know, firmware written by you know, an American Megatrends or other is a big blob of software that gets loaded into these systems that is very hard to audit and very hard to ascertain what's happening. And it's no surprise when, you know, back when we were running all the data centers at a cloud computing company, that you'd run into these issues, and you'd go to the server OEM and they'd kind of throw their hands up. Well, first they'd gaslight you and say, “We've never seen this problem before,” but when you thought you've root-caused something down to firmware, it was anyone's guess. And this is kind of the current condition today. And back to, like, the journey to get here, we kind of realized that you had to blow away that old extant firmware layer, and we rewrote our own firmware in Rust. Yes [laugh], I've done a lot in Rust.Corey: No, it was in Rust, but, on some level, that's what Nitro is, as best I can tell, on the AWS side. But it turns out that you don't tend to have the same resources as a one-and-a-quarter—at the moment—trillion-dollar company. That keeps [valuing 00:30:53]. At one point, they lost a comma and that was sad and broke all my logic for that and I haven't fixed it since. Unfortunate stuff.Steve: Totally. I think that was another, kind of, question early on from certainly a lot of investors was like, “Hey, how are you going to pull this off with a smaller team and there's a lot of surface area here?” Certainly a reasonable question. Definitely was hard. The one advantage—among others—is, when you are designing something kind of in a vertical holistic manner, those design integration points are narrowed down to just your equipment.And when someone's writing firmware, when AMI is writing firmware, they're trying to do it to cover hundreds and hundreds of components across dozens and dozens of vendors. And we have the advantage of having this, like, purpose-built system, kind of, end-to-end from the lowest level from first boot instruction, all the way up through the control plane and from rack to switch to server. That definitely helped narrow the scope.Corey: This episode has been fake sponsored by our friends at AWS with the following message: Graviton Graviton, Graviton, Graviton, Graviton, Graviton, Graviton, Graviton, Graviton. Thank you for your l-, lack of support for this show. Now, AWS has been talking about Graviton an awful lot, which is their custom in-house ARM processor. 
Apple moved over to ARM and instead of talking about benchmarks they won't publish and marketing campaigns with words that don't mean anything, they've let the results speak for themselves. In time, I found that almost all of my workloads have moved over to ARM architecture for a variety of reasons, and my laptop now gets 15 hours of battery life when all is said and done. You're building these things on top of x86. What is the deal there? I do not accept that you hadn't heard of ARM until just now because, as mentioned, Graviton, Graviton, Graviton.Steve: That's right. Well, so why x86, to start? And I say to start because we have just launched our first generation products. And our first-generation or second-generation products that we are now underway working on are going to be x86 as well. We've built this system on AMD Milan silicon; we are going to be launching a Genoa sled.But when you're thinking about what silicon to use, obviously, there's a bunch of parts that go into the decision. You're looking at the kind of applicability to workload, performance, power management, for sure, and if you carve up what you are trying to achieve, x86 is still a terrific fit for the broadest set of workloads that our customers are trying to solve for. And choosing which x86 architecture was certainly an easier choice, come 2019. At this point, AMD had made a bunch of improvements in performance and energy efficiency in the chip itself. We've looked at other architectures and I think as we are incorporating those in the future roadmap, it's just going to be a question of what are you trying to solve for.You mentioned power management, and that has kind of commonly been a, you know, low power systems is where folks have gone beyond x86. As we're looking forward to hardware acceleration products and future products, we'll certainly look beyond x86, but x86 has a long, long road to go. It still is kind of the foundation for what, again, is a general-purpose cloud infrastructure for being able to slice and dice for a variety of workloads.Corey: True. I have to look around my environment and realize that Intel is not going anywhere. And that's not just an insult to their lack of progress on committed roadmaps that they consistently miss. But—Steve: [sigh].Corey: Enough on that particular topic because we want to keep this, you know, polite.Steve: Intel has definitely had some struggles for sure. They're very public ones, I think. We were really excited and continue to be very excited about their Tofino silicon line. And this came by way of the Barefoot Networks acquisition. I don't know how much you had paid attention to Tofino, but what was really, really compelling about Tofino is the focus on both hardware and software and programmability.So, great chip. And P4 is the programming language that surrounds that. And we have gotten very, very deep on P4, and that is some of the best tech to come out of Intel lately. But from a core silicon perspective for the rack, we went with AMD. And again, that was a pretty straightforward decision at the time. 
And we're planning on having this anchored around AMD silicon for a while now.Corey: One last question I have before we wind up calling it an episode, it seems—at least as of this recording, it's still embargoed, but we're not releasing this until that winds up changing—you folks have just raised another round, which means that your napkin doodles have apparently drawn more folks in, and now that you're shipping, you're also not just bringing in customers, but also additional investor money. Tell me about that.Steve: Yes, we just completed our Series A. So, when we last spoke three years ago, we had just raised our seed and had raised $20 million at the time, and we had expected that it was going to take about that to be able to build the team and build the product and be able to get to market, and [unintelligible 00:36:14] tons of technical risk along the way. I mean, there was technical risk up and down the stack around this [De Novo 00:36:21] server design, this the switch design. And software is still the kind of disproportionate majority of what this product is, from hypervisor up through kind of control plane, the cloud services, et cetera. So—Corey: We just view it as software with a really, really confusing hardware dongle.Steve: [laugh]. Yeah. Yes.Corey: Super heavy. We're talking enterprise and government-grade here.Steve: That's right. There's a lot of software to write. And so, we had a bunch of milestones that as we got through them, one of the big ones was getting Milan silicon booting on our firmware. It was funny it was—this was the thing that clearly, like, the industry was most suspicious of, us doing our own firmware, and you could see it when we demonstrated booting this, like, a year-and-a-half ago, and AMD all of a sudden just lit up, from kind of arm's length to, like, “How can we help? This is amazing.” You know? And they could start to see the benefits of when you can tie low-level silicon intelligence up through a hypervisor there's just—Corey: No I love the existing firmware I have. Looks like it was written in 1984 and winds up having terrible user ergonomics that hasn't been updated at all, and every time something comes through, it's a 50/50 shot as whether it fries the box or not. Yeah. No, I want that.Steve: That's right. And you look at these hyperscale data centers, and it's like, no. I mean, you've got intelligence from that first boot instruction through a Root of Trust, up through the software of the hyperscaler, and up to the user level. And so, as we were going through and kind of knocking down each one of these layers of the stack, doing our own firmware, doing our own hardware Root of Trust, getting that all the way plumbed up into the hypervisor and the control plane, number one on the customer side, folks moved from, “This is really interesting. We need to figure out how we can bring cloud capabilities to our data centers. Talk to us when you have something,” to, “Okay. We actually”—back to the earlier question on vaporware, you know, it was great having customers out here to Emeryville where they can put their hands on the rack and they can, you know, put your hands on software, but being able to, like, look at real running software and that end cloud experience.And that led to getting our first couple of commercial contracts. So, we've got some great first customers, including a large department of the government, of the federal government, and a leading firm on Wall Street that we're going to be shipping systems to in a matter of weeks. 
And as you can imagine, along with that, that drew a bunch of renewed interest from the investor community. Certainly, a different climate today than it was back in 2019, but what was great to see is, you still have great investors that understand the importance of making bets in the hard tech space and in companies that are looking to reinvent certain industries. And so, we added—our existing investors all participated. We added a bunch of terrific new investors, both strategic and institutional.And you know, this capital is going to be super important now that we are headed into market and we are beginning to scale up the business and make sure that we have a long road to go. And of course, maybe as importantly, this was a real confidence boost for our customers. They're excited to see that Oxide is going to be around for a long time and that they can invest in this technology as an important part of their infrastructure strategy.Corey: I really want to thank you for taking the time to speak with me about, well, how far you've come in a few years. If people want to learn more and have the requisite loading dock, where should they go to find you?Steve: So, we try to put everything up on the site. So, oxidecomputer.com or oxide.computer. We also, if you remember, we did [On the Metal 00:40:07]. So, we had a Tales from the Hardware-Software Interface podcast that we did when we started. We have shifted that to Oxide and Friends, which the shift there is we're spending a little bit more time talking about the guts of what we built and why. So, if folks are interested in, like, why the heck did you build a switch and what does it look like to build a switch, we actually go to depth on that. And you know, what does bring-up on a new server motherboard look like? And it's got some episodes out there that might be worth checking out.Corey: We will definitely include a link to that in the [show notes 00:40:36]. Thank you so much for your time. I really appreciate it.Steve: Yeah, Corey. Thanks for having me on.Corey: Steve Tuck, CEO at Oxide Computer Company. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this episode, please leave a five-star review on your podcast platform of choice, along with an angry ranting comment because you are in fact a zoology major, and you're telling me that some animals do in fact exist. But I'm pretty sure of the two of them, it's the unicorn.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.

Oxide and Friends
Shipping the first Oxide rack: Your questions answered!

Oxide and Friends

Jul 4, 2023 · 122:53


On this week's show, Adam Leventhal posed questions from Hacker News (mostly) to Oxide founders Bryan Cantrill and Steve Tuck. Stick around until the end to hear about the hardest parts of building Oxide--great, surprising answers from both Bryan and Steve. They were also joined by Steve Klabnik.
Questions for Steve and Bryan:

[@6:38] Q: Congrats to the team, but after hearing about Oxide for literal years since the beginning of the company and repeatedly reading different iterations of their landing page, I still don't know what their product actually is. It's a hypervisor host? Maybe? So I can host VMs on it? And a network switch? So I can....switch stuff? (*)
A: Steve: A rack-scale computer; "A product that allows the rest of the market that runs on-premises IT access to cloud computing."
Bryan: agrees

[@8:46] Q: It's like an on prem AWS for devs. I don't understand the use case but the hardware is cool. (*) I didn't understand the business opportunity of Oxide at all. Didn't make sense to me. However if they're aiming at the companies parachuting out of the cloud back to data centers and on prem then it makes a lot of sense. It's possible that the price comparison is not with comparable computing devices, but simply with the 9 cents per gigabyte egress fee from major clouds. (*)
A: Bryan: "Elastic infrastructure is great and shouldn't be cloistered to the public cloud"; good reasons to run on-prem: compliance, security, risk management, latency, economics; "Once you get to a certain size, it really makes sense to own"
Steve: As more things move onto the internet, need for on-prem is going to grow; you should have the freedom to own

[@13:31] Q: Somebody help me understand the business value. All the tech is cool but I don't get the business model, it seems deeply impractical. You buy your own servers instead of renting, which is what most people are doing now. They argue there's a case for this, but it seems like a shrinking market. Everything has gone cloud. Even if there are lots of people who want to leave the cloud, all their data is there. That's how they get you -- it costs nothing to bring data in and a lot to transfer it out. So high cost to switch. AWS and others provide tons of other services in their clouds, which if you depend on you'll have to build out on top of Oxide. So even higher cost to switch. Even though you bought your own servers, you still have to run everything inside VMs, which introduce the sort of issues you would hope to avoid by buying your own servers! Why is this? Because they're building everything on Illumos (Solaris) which is for all practical purposes is dead outside Oxide and delivering questionable value here. Based on blogs/twitter/mastodon they have put a lot of effort into perfecting these weird EE side quests, but they're not making real new hardware (no new CPU, no new fabric, etc). I am skeptical any customers will notice or care and would have not noticed had they used off the shelf hardware/power setups. So you have to be this ultra-bizarre customer, somebody who wants their own servers, but doesn't mind VMs, doesn't need to migrate out of the cloud but wants this instead of whatever hardware they manage themselves now, who will buy a rack at a time, who doesn't need any custom hardware, and is willing to put up with whatever off-the-beaten path difficulties are going to occur because of the custom stuff they've done that's AFAICT is very low value for the customer. Who is this? Even the poster child for needing on prem, the CIA is on AWS now. I don't get it, it just seems like a bunch of geeks playing with VC money? (*)
A: Bryan: "EE side quests" rant; you can't build robust, elastic infrastructure on commodity hardware at scale; "The minimum viable product is really, really big"; example: monitoring fan power draw, tweaking reference designs doesn't cut it; example: eliminating redundant AC power supplies
Steve: "Feels like I'm dealing with my divorced parents" post

[@32:24] Q (Chat): It would be nice to see what this thing is like before having to write a big check
A: Steve: We are striving to have lab infrastructure available for test drives

[@32:56] Q (Chat): I want to know about shipping insurance, logistics, who does the install, ...
A: Bryan: "Next week we'll be joined by the operations team"; we want to have an in-depth conversation about those topics

[@34:40] Q: Seems like Oxide is aiming to be the Apple of the enterprise hardware (which isn't too surprising given the background of the people involved - Sun used to be something like that as were other fully-integrated providers, though granted that Sun didn't write Unix from scratch). Almost like coming to a full circle from the days where the hardware and the software was all done in an integrated fashion before Linux turned-up and started to run on your toaster. (*)
A: Bryan: We find things to emulate in both Apple and Sun, e.g., integrated hard- and software; AS/400
Steve: "It's not hardware and software together for integration sake", it's required to deliver what the customer wants; "You can't control that experience when you only do half the equation"

[@42:38] Q: I truly and honestly hope you succeed. I know for certain that the market for on-prem will remain large for certain sectors for the foreseeable future. However. The kind of customer who spends this type of money can be conservative. They already have to go with an unknown vendor, and rely on unknown hardware. Then they end up with a hypervisor virtually no one else in the same market segment uses. Would you say that KVM or ESXi would be an easier or harder sell here? Innovation budget can be a useful concept. And I'm afraid it's being stretched a lot. (*)
A: Bryan: We can deliver more value with our own hypervisor; we've had a lot of experience in that domain from Joyent. There are a lot of reasons that VMware et al. are not popular with their own customers; Intel vs. AMD
Steve: "We think it's super important that we're very transparent with what we're building"

[@56:05] Q: what is the interface I get when I turn this $$$ computer on? What is the zero to first value when I buy this hardware? (*)
A: Steve: "You roll the rack in, you have to give it power, and you have to give it networking [...] and you are then off on starting the software experience"; large pool of infrastructure resources for customers/devs/SREs/... in a day or less; similar experience to public cloud providers

[@01:02:06] Q: One of my concerns when buying a complete solution like an iPhone (or an Oxide rack

Oxide and Friends
Get You a State Machine for Great Good

Oxide and Friends

Mar 28, 2023 · 68:22


Andrew Stone of Oxide Engineering joined Bryan, Adam, and the Oxide Friends to talk about his purpose-built replay debugger for the Oxide setup textual UI. Andrew borrowed a technique from his extensive work with distributed systems to build a UI that was well-structured... and highly amenable to debuggability. He built a custom debugger "in a weekend"!
Some of the topics we hit on, in the order that we hit them:
  • tui-rs
  • Crossterm
  • The reedline crate
  • Episode about the "Sidecar" switch
  • Elm time-travel debugging
  • Replay.io
  • Devtools.fm episode on Replay.io
  • AADEBUG conference
  • California horse meat law
The (lightly) edited live chat from the show:
MattCampbell: I'm gathering that this is more like the fancy pseudo-GUI style of TUI, which is possibly bad for accessibility
ahl: we are also building with accessibility in mind, stripping away some of the non-textual elements optionally
MattCampbell: oh, cool
ahl: Episode about the "Sidecar" switch: https://github.com/oxidecomputer/oxide-and-friends/blob/master/2021_11_29.md
MattCampbell: ooh! That kind of recording is definitely better for accessibility than a video.
uwaces: Were you inspired by Elm? (The programming language for web browsers?)
bcantrill: Here's Andrew's PR for this, FWIW: oxidecomputer/omicron#2682
uwaces: Elm has a very similar model. They have even had a debugger that let you run events in reverse: https://elm-lang.org/news/time-travel-made-easy
bch: I'm joining late - 1) does this state-machine replay model have a name 2) expand on (describe) the I/O logic separation distinction?
ahl: http://dtrace.org/blogs/ahl/2015/06/22/first-rust-program-pain/
zk: RE: logic separation in consensus protocols: the benefit of separating out the state machine into a side-effect free function allows you to write a formally verified implementation in a pure FP lang or theorem prover, and then extract a reference program from the proof.
we're going to the zoo: lol i'm a web dev && we do UI tests via StorybookJS + snapshots of each story + snapshots of the end state of an interaction
ig: At that point you could turn the recording into an "expect test". https://blog.janestreet.com/the-joy-of-expect-tests/
we're going to the zoo: TOFU but for tests
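The chat thread above circles the core of the technique: keep the UI's logic in a side-effect-free state machine, record every input as an event, and replay the event log to reconstruct the state at any point. Here is a minimal sketch of that pattern in Rust; the types and names are invented for illustration and are not Oxide's actual code (which lives in the omicron PR linked above):

```rust
// A minimal sketch of a replay-debuggable UI core: all logic lives in a
// pure state machine, and all I/O stays outside it. Every type and name
// here is hypothetical, for illustration only.

/// Every input the UI can receive is reified as an event.
#[derive(Debug, Clone)]
enum Event {
    KeyPress(char),
    Tick,                     // periodic timer, e.g. for redraws
    TaskCompleted { id: u32 },
}

/// The entire UI state. No file handles, sockets, or terminals in here.
#[derive(Debug, Default)]
struct State {
    input_buffer: String,
    completed_tasks: Vec<u32>,
    ticks: u64,
}

impl State {
    /// The pure transition function: state + event -> state.
    /// Because it performs no I/O, replaying the same events from the
    /// same initial state always reproduces the same final state.
    fn step(mut self, event: &Event) -> State {
        match event {
            Event::KeyPress(c) => self.input_buffer.push(*c),
            Event::Tick => self.ticks += 1,
            Event::TaskCompleted { id } => self.completed_tasks.push(*id),
        }
        self
    }
}

/// Replay a recorded event log, stopping after `n` events. A debugger
/// built on this can "time travel" by replaying to any prefix.
fn replay(log: &[Event], n: usize) -> State {
    log.iter().take(n).fold(State::default(), State::step)
}

fn main() {
    // In a real system the event loop would record events as they arrive
    // from the terminal; here we just hard-code a log.
    let log = vec![
        Event::KeyPress('l'),
        Event::KeyPress('s'),
        Event::Tick,
        Event::TaskCompleted { id: 7 },
    ];

    // Inspect the state as it was after the third event...
    println!("after 3 events: {:?}", replay(&log, 3));
    // ...and after the whole log.
    println!("final: {:?}", replay(&log, log.len()));
}
```

Serializing the replayed state and diffing it against a checked-in snapshot is also the "expect test" idea raised at the end of the chat.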

The Changelog
Oxide builds servers (as they should be)

The Changelog

Play Episode Listen Later Jul 8, 2022 92:54 Transcription Available


Today we have a special treat: Bryan Cantrill, co-founder and CTO of Oxide Computer! You may know Bryan from his work on DTrace. He worked at Sun for many years, then Oracle, and finally Joyent before starting Oxide. We dig deep into their company's mission/principles/values, hear how it all started with a VC's blank check that turned out to be anything but, and learn how Oxide's integrated approach to hardware & software sets them up to compete with the established players by building servers as they should be.

Oxide and Friends
Engineering Culture

Oxide and Friends

Play Episode Listen Later Feb 22, 2022 104:08


Oxide and Friends Twitter Space: February 21st, 2022: Engineering Culture. We've been holding a Twitter Space weekly on Mondays at 5p for about an hour. Even though it's not (yet?) a feature of Twitter Spaces, we have been recording them all; here is the recording for our Twitter Space for February 21st, 2022. In addition to Bryan Cantrill and Adam Leventhal, speakers on February 21st included Tom Lyon, Tom Killalea, Ian, Antranig Vartanian, Matt Campbell, Simeon Miteff, Matt Ranney, and Aaron Hartwig. (Did we miss your name and/or get it wrong? Drop a PR!) Some of the topics we hit on, in the order that we hit them: Alex Heath's tweet on FB meeting about updated values: “meta, metamates, me” [@4:44](https://youtu.be/w9MQJbC26h4?t=284) Can an established company “change its values” in any sense? [@8:43](https://youtu.be/w9MQJbC26h4?t=523) Draw the owl > Twilio CEO: Yes, it was a meme, but it's a great representation of our job. > There is no instruction book and no one is going to tell us how to do our work. > It's now woven into our culture and used as a cheeky, but encouraging reply to > those who email colleagues at Twilio asking how to do something. [@12:42](https://youtu.be/w9MQJbC26h4?t=762) How do you establish engineering culture? Copy-paste values? [@20:44](https://youtu.be/w9MQJbC26h4?t=1244) When are values set down in a company's history?  Amazon's brand image, expanding beyond books Assessing values when hiring [@27:51](https://youtu.be/w9MQJbC26h4?t=1671) Principles vs values  Principles are absolutes, cannot be taken too far Values are about relative importance, in balance with other values ACM Code of Ethics Relative importance of values. Can some values be learned, while others cannot? [@45:11](https://youtu.be/w9MQJbC26h4?t=2711) “Turn-around CEOs”, trying to change an established company culture [@47:39](https://youtu.be/w9MQJbC26h4?t=2859) Sun culture, early days [@54:32](https://youtu.be/w9MQJbC26h4?t=3272) Connection between values and business model Urgency in context, requires nuance [@1:03:37](https://youtu.be/w9MQJbC26h4?t=3817) Values on the wall. When are values simply ignored?  Jack Handey wiki, Deep Thoughts recurring SNL short sketches, e.g. Thanksgiving, ~30 secs “Sharpen fast” [@1:13:49](https://youtu.be/w9MQJbC26h4?t=4429) What are the important things to get set early? Bryan and Adam on Joyent and Delphix [@1:22:05](https://youtu.be/w9MQJbC26h4?t=4925) Matt Ranney on his time at Uber  Trying to shape an established culture Leadership's values vs engineers' Business ethics [@1:35:47](https://youtu.be/w9MQJbC26h4?t=5747) GE Thomas Gryta and Ted Mann (2020) Lights Out: Pride, Delusion, and the Fall of General Electric book [@1:37:03](https://youtu.be/w9MQJbC26h4?t=5823) Conclusions  Adam: Get it right first, but it's not a lost cause if you don't. Bryan: Look for value alignment in organizations you might want to join, it's tough to change course after the fact. Matt: generous compensation has an effect on how closely one cares to scrutinize their organization's values ¯\_(ツ)_/¯ If we got something wrong or missed something, please file a PR! Our next Twitter Space will likely be on Monday at 5p Pacific Time; stay tuned to our Twitter feeds for details. We'd love to have you join us, as we always love to hear from new speakers!

Screaming in the Cloud
Drawing from the Depths of Experience with Deirdré Straughan

Screaming in the Cloud

Play Episode Listen Later Jan 25, 2022 41:12


About Deirdré: For over 35 years, Deirdré Straughan has been helping technologies grow and thrive through marketing and community. Her product experience spans consumer apps and devices, cloud services and technologies, and kernel features. Her toolkit includes words, websites, blogs, communities, events, video, social, marketing, and more. She has written and edited technical books and blog posts, filmed and produced videos, and organized meetups, conferences, and conference talks. She just started a new gig heading up open source community at Intel. You can find her @deirdres on Twitter, and she also shares her opinions on beginningwithi.com. Links: “Marketing Your Tech Talent”: https://youtu.be/9pGSIE7grSs Personal Webpage: https://beginningwithi.com Twitter: https://twitter.com/deirdres Transcript Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud. Corey: This episode is sponsored in part by LaunchDarkly. Take a look at what it takes to get your code into production. I'm going to just guess that it's awful because it's always awful. No one loves their deployment process. What if launching new features didn't require you to do a full-on code and possibly infrastructure deploy? What if you could test on a small subset of users and then roll it back immediately if results aren't what you expect? LaunchDarkly does exactly this. To learn more, visit launchdarkly.com and tell them Corey sent you, and watch for the wince. Corey: This episode is sponsored in part by our friends at Rising Cloud, which I hadn't heard of before, but they're doing something vaguely interesting here. They are using AI, which is usually where my eyes glaze over and I lose attention, but they're using it to help developers be more efficient by reducing repetitive tasks. So, the idea being that you can run stateless things without having to worry about scaling, placement, et cetera, and the rest. They claim significant cost savings, and they're able to wind up taking what you're running as it is, in AWS, with no changes, and run it inside of their data centers that span multiple regions. I'm somewhat skeptical, but their customers seem to really like them, so that's one of those areas where I really have a hard time being too snarky about it because when you solve a customer's problem, and they get out there in public and say, “We're solving a problem,” it's very hard to snark about that. Multus Medical, Construx.ai, and Stax have seen significant results by using them, and it's worth exploring. So, if you're looking for a smarter, faster, cheaper alternative to EC2, Lambda, or batch, consider checking them out. Visit risingcloud.com/benefits. That's risingcloud.com/benefits, and be sure to tell them that I sent you because watching people wince when you mention my name is one of the guilty pleasures of listening to this podcast. Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. One of the best parts about running this podcast has been that I can go through old notes of conferences I've went to, and the people whose talks I've seen, the folks who have done interesting things back when I had no idea what I was doing—as if I do now—and these are people I deeply admire.
And now I have an excuse to reach out to them and drag them onto this show to basically tell them that until they blush. And today is no exception for that. Deirdré Straughan has had a career that has spanned three decades, I believe, if I'm remembering correctly.Deirdré: A bit more, even.Corey: Indeed. And you've been in I want to say marketing, but I'm scared to frame it that way, not because that's not what you've been doing, but because so few people do marketing to technical audiences well, that the way you do it is so otherworldly good compared to what is out there that it almost certainly gives the wrong impression. So, first things first. Thank you for joining me.Deirdré: Very happy to. Thank you for having me. It's always a delight to talk with you.Corey: So, what is it you'd say it is you do, exactly? Because I'm doing a very weak job of explaining it in a way that is easy for folks who have never heard of you before—which is a failing—to contextualize?Deirdré: Um, well, there's one—you know, I was until recently working for AWS, and one of the—went to an internal conference once at which they said—it was a marketing conference, and they said, “As the marketing organization, our job is to educate.” Now, you can discuss whether or not we think AWS does that well, but I deeply agree with that statement, that as marketers, our job is to educate people. You know, the classical marketing is to educate people about the benefits of your product. You know, “Here's why ours is better.” The Kathy Sierra approach to that, which I think is very, very wise is, don't market your product by telling people how wonderful the product is. Tell them how they can kick ass with it.Corey: How do you wind up disambiguating between that and, let's just say it's almost a trope at this point where someone will talk about something, be it a product, be it an entire Web3 thing, whatever, and when someone comes back and says, “Well, I don't think that's a great idea.” The response is, “Oh, no, no. You just need to be educated properly about it.” Or, “Do your own research.” That sort of thing. And that is to be clear, not anything I've ever seen you say, do, or imply. But that almost feels like the wrong direction to take that in, of educating folks.Deirdré: Well, yeah, I mean, the way it's used in those terms, it sounds condescending. In my earliest, earlier part of my career, I was dealing with consumer software. So, this was in the early days of CD recording. We were among the pioneering CD recording products, and the idea was to make it—my Italian boss saw this market coming because he was doing recording CDs as a service, like, you were a law firm that needed to store a lot of data, and he would cut a CD for you, and you would store that. And you know, this was on a refrigerator-sized thing with a command-line interface, very difficult to use, very easy to waste these $100 blank CDs.But he was following the market, and he saw that there was going to be these half-height CD-ROM drives. And he said, “Well, what we need to go with that is software that is actually usable by the consumer.” And that's what we did; we created that software. And so in that case, there were things the customer still had to know about CDR, but my approach was that, you know, I do the documentation, I have to explain this stuff, but I should have to explain less and less. More and more of that should be driven into the interface and just be so obvious and intuitive that nobody ever has to read a manual. 
So, education can be any of those things. Your software can be educating the customer while they're using it. Corey: I wish that were one of those things we could point out and say, “Well, yeah, years later, it's blindingly obvious to everyone.” Except for the part where it's not, where every once in a while on Twitter, I will go and try a new service some cloud company launches, or something else I've heard about, and I will, effectively, screenshot and then live tweet my experiences with it. And very often—I'll get accused of people saying, “Ahh, you're pretending to be dumb and not understanding that's how that interface works.” No, I'm not. It turns out that the failure mode of bad interfaces and of not getting this right is not that people look at it and say, “Ah, that product is crap.” It's that, “Oh, I'm dumb, and no one ever told me about it.” That's why I'm so adamant about this. Because if I'm looking at an interface and I get something wrong, it is extremely unlikely that I'm the only person who ever has. And it goes beyond interfaces, it goes out to marketing as well with poor messaging around a product—when I say marketing, I'm talking the traditional sense of telling a story, and here's a press release. “Great. You've told me what it does, you told me about big customers and the rest, but you haven't told me what painful problem do I have that it solves? And why should I care about it?” Almost like that's the foregone conclusion. No, no. We're much more interested in making sure that they get the company name and history right in the ‘About Us' at the bottom of the press release. And it's missing the forest for the trees, in many respects. It's—Deirdré: Yeah. Corey: —some level—it suffers from a similar problem of sales, where you have an entire field that is judged based upon some of the worst examples out there. And on the technical side of the world—and again, all these roles are technical, but the more traditional, ‘I write code for a living' types, there's almost a condescension or a dismissiveness that is brought toward people who work in sales, or in marketing, or honestly, anything that doesn't spend all their time staring into an IDE for a living. You know, the people who get to do something that makes them happy, as opposed to this misery that the coder types that we sometimes find ourselves trapped into. How have you seen that? Deirdré: Yeah. And it's also a condescension towards customers. Corey: Oh absolutely. Deirdré: I have seen so many engineers who will, you know, throw something out there and say, “This is the most beautiful, sexy, amazing thing I've ever done.” And there have been a few occasions when I've looked at it and gone, you know, “Yes, I can see how from a technical point of view, that's beautiful and amazing and sexy, but no customer is ever going to use it.” Either because they don't need it or because they won't understand it. There's no way in that context to have that make sense. And so yeah, you can do beautiful, brilliant engineering, but if you never sell it and no one ever uses it, what's the point? Corey: One of the ways that I've always found to tell a story that resonates—and it sometimes takes people by surprise when they're doing a sponsorship or something I do, or whatnot, and they're sitting there talking about how awesome everything is, and hey, let's do a webinar together.
And it's cool, we can do that, but I'd rather talk to one of your customers because you can say anything you want about your product, and I can sit here and make fun of it because I have deep-seated personality problems, and that's great. But when a customer says, “I have this problem, and this is the thing that I pay money for to fix that problem,” it is much harder for people to dismiss that because you're voting with your dollars. You're not saying this because if your product succeeds, you get to go buy a car or something. Now, someone instead is saying this because, “I had a painful point, and not only am I willing to pay money to make this painful thing go away, but then I want to go out in public and talk about that.” That is an incredibly hard thing to refute, bordering on the impossible, in some circumstances. That's what always moved me. If you have a customer telling stories about how great something is, I will listen. If you have your own internal employees talking about how great something is, I have some snark for you. Deirdré: And that is another thing AWS gets right, is they—Corey: Oh, very much so. Deirdré: —work very hard to get the customer in front of the audience. Although, with a new technology service, et cetera, there was a point before you may have those customers in which the other kind of talk, where you have a highly technical engineer speaking to a highly technical audience and saying, “Here's our shiny new thing and here's what you can do with it,” then you get the customers who will come along later and say, “Yes, we did a thing with the shiny new thing, and it was great.” An engineer talking about what they did is not always to be overlooked. Corey: Your career trajectory has been fascinating to me in a variety of different ways. You were at Sun Microsystems. And I guess personally, I just hope that when you decide to write your memoirs, you title it, The Sun Also Crashes. You know, it's such a great title; I haven't seen anything use it yet, and I hope I live to see someone doing that. And then you were at Oracle for ten months—wonder how that happened? For those who are unaware, there was an acquisition story—and then you went to spend three-and-a-half years running educational programs and community at Joyent, back before community architect—which is what you were at the time—was really a thing. Community was just the people that showed up to talk about the technology that you've done. You were one of the first people that I can think of in this industry when I've been paying attention, who treated it as something more than that. How do you get there? Deirdré: So, my early career, I was living in Italy because I was married to an Italian at the time, and I had already been working in tech before I left the United States, and enjoyed it and wanted to continue it. But there was not much happening in tech in Italy then. And I just got very, very lucky; I fell in with this Italian software entrepreneur—absolute madman—and he was extremely unusual in Italy in those days. He was basically doing a Silicon Valley-style software startup in Milan. And self-funded, partly funded by his wealthy girlfriend. You know, we were small, scrappy, all of that. And so he decided that he could make better software to do CD recording, as these CD-ROM drives were becoming cheaper, and he could foresee that there would be a consumer market for them. Corey: What era was this?
Because I remember—Deirdré: This—Corey: —back when I was in school, basically when I was failing out of college, burning a bunch of CDRs to play there, and every single tool I ever used was crap. You're right. This was a problem.Deirdré: So, we started on that software in, ohh, '91.Corey: Yeah.Deirdré: Yeah. His goal was, “I'm going to make the leading CD recording software for the Windows market.” Hired a bunch of smart engineers, of which there are plenty in Italy, and started building this thing. I had done a project for him, documenting another OCR—Optical Character Recognition—product, and he said, “How would you like to write a book together about CD recording?” And it's like, “Okay, sure.”So, we wrote this book, and, you know, it was like, basically, me reading and him explaining to me the various color book specs from Philips and Sony that explain, you know, right down to the pits and lands, how CD recording works, and then me translating it into layman's terms. And so the book got published in January of 1993 by Random House. It's one of the first books, if not the first book in the world to actually be published with a CD included.Corey: Oh, so you're ultimately the person who's responsible—indirectly—for hey, you could send CDs out, and then the sea of AOL mailers showing up—basically the mini-frisbee plague that lasted a decade or so, for the rest of us?Deirdré: Yeah. And this was all marketing. For him, the whole idea of writing a book was a marketing ploy because on the CD, we included a trial version of the software. And that was all he wanted to put on there, but I thought, “Well, let's take this a step further.” This was—I had been also doing a little bit of work in journalism, just to scrape by in Italy.I was actually an Italian computer journalist, and I was getting sent to conferences, including the launch of Adobe PDF. Like, they sent me to Scotland to learn about PDFs. Like, “Okay.” But then it wasn't quite ready at the time, so I ended up using FrameMaker instead. But I made an entire hypertext version of that book and put it on that CD, which was launched in early '93 when the internet was barely becoming a thing.So, we launched the book, sold the book. Turned out the CD had been manufactured wrong and did not work.Corey: Oh, dear.Deirdré: And I was just dying. And the publisher said, “Well, you know, if you can get ahold of the readers, the people”—you know, because they were getting complaints—they said, “If you can reach the readers somehow and let them know, there's a number they can call and we'll send them a replacement disk.” We had put our CompuServe email address in the book. It's like, “Hey, we'd love to hear from you. Write to us at”—Corey: Weren't those the long string of numbers as a username.Deirdré: Yeah.Corey: Yeah.Deirdré: Mm-hm. You could reach it via external email at the time, I believe. And we didn't really expect that many people would bother. But, you know, because there was this problem, we were getting a lot of contacts. And so I was like, I was determined I was going to solve this situation, and I was interacting with them.And those were my first experiences with interacting with customers, especially online. You know, and we did have a solution; we were able to defuse the situation and get it fixed, but, you know, so that was when I realized it was very powerful because I could communicate very quickly with people anywhere in the world, and—quickly over whatever the modem speed was [laugh] at that time, you know, 1800 baud or something. 
And so I got intr—I had already been using CompuServe when I was in college, and so I was interested in how do you communicate with people in this new medium. And I started applying that to my work. And then I went and applied it everywhere. It's like, “Okay, well, there's this new thing coming, you know, called the internet. Well, how can I use that?” Publishing a paper manual seems kind of stupid in this day and age, so I can update them much more quickly if I have it on a website. So, by that time, the company had been acquired by Adaptec. Adaptec had a website, which was mostly about their cables and things, and so I just, kind of, made a section of the website. It was like, “Here is all about CDR.” And it got to where it was driving 70% of the traffic to Adaptec, even though our products were a small percentage of the revenue. And at the same time, I was interacting with customers on the Usenet and by email. Corey: And then later, mailing lists, and the rest. And now it—we take it for granted, but it used to be that so much of this was unidirectional, where at an absolute high level, the best you could hope for in some cases is, “I really have something to say to this author. I'm going to write a letter and mail it to the publisher and hope that they forward it.” And you never really know if it's going to wind up landing or not? Now it's, “I'm going to jump on Twitter and tell this person what I think.” And whether that's a good or bad change, it has changed the world. And it's no longer unidirectional where your customers are just silent masses anymore, regardless of what you wind up doing or selling. And I sell consulting services. Yeah, I deal with customers a lot; we have high bandwidth conversations, but I also do an annual charity t-shirt drive and I get a lot of feedback and a lot of challenges with deliveries and the rest toward the end of the year. And that is something else. We have to do it. It's not what it used to be, just mail a self-addressed stamped envelope to somewhere, and hope for the best. And we'll blame the post office if it doesn't work. The world changed, and it's strange that happens in your own lifetime. Deirdré: Yeah. And there were people who saw it coming, early on. I became aware of The Cluetrain Manifesto because a customer wrote to me and said, I think you're the best example I see out there of people actually living this. And The Cluetrain Manifesto said, “The internet is going to change how companies interact with customers. You are going to have to be part of a conversation, rather than just, we talk to you and tell you what's what.” And I was already embracing that. And then it has had profound implications. It's, in some ways, a democratization of companies and their products because people can suddenly be very vociferous about what they think about your product and what they want improved, and features they'd like added, and so forth. And I never said the customer is always right, but the customer should always be treated politely. And so I just developed this—it was me, but it was a persona which was true to me, where I am out here, I'm interacting with people, I am extremely forthcoming and honest—Corey: That you are, which is always appreciated, to be clear. I have a keen appreciation for folks who I know beyond the shadow of a doubt will tell me where I stand with them. I've never been a fan of folks who will, “I can't stand that guy. Oh, great, here he comes.
Hi.” No.There is something very refreshing about the way that you approach honesty, and that you have always had that. And it manifests in different forms. You are one of those people where if you say something in public, be it in writing, be it on stage, be it in your work, you believe it. There has never been a shadow of doubt in my mind that someone could pay you to say something or advocate for something in which you do not believe.Deirdré: Thanks. Yeah, it's just partly because I've never been good at lying. It just makes me so deeply uncomfortable that I can't do it. [laugh].Corey: That's what a good liar would say, let's be very clear here. Like, what's the old joke? Like, “If you can only be good at one thing, be good at lying because then you're good at everything.” No.Deirdré: [laugh].Corey: It's a terrible way to go through life.Deirdré: Yeah. And the earn trust thing was part of my… portfolio from very early on. Which was hilarious because in those days, as now, there were people whose knee-jerk reaction was, if you're out here representing a company, you automatically must be lying to me, or about to lie to me, or have lied to me. But because I had been so out there and so honest, I had dozens of supporters who would pile in and say, “No, no, no. That's not who she is.” And so it was, yeah, it was interesting. I had my trolls but I also had lots of defenders.Corey: The real thing that I've seen as well sometimes is when someone is accused of something like that, people will chime in—look, like, I get this myself. People like you. I don't generally have that problem—but people will chime in with, like, “I don't like Corey, but no, he's generally right about these things.” That's, okay, great. It's like, the backhanded compliment. And I'll take what I can get.I want to fast-forward in time a little bit from the era of mailing books with CDs in them, and then having to talk to people via other ways to get them in CompuServe to 2013 when you gave a talk at one of—no, I'm not going to say, ‘one of.' It is the best community conference of which I am aware. Monktoberfest as put on by our friends at RedMonk. It was called “Marketing Your Tech Talent” and it's one of those videos it's worth the watch. If you're listening to this, and you haven't seen it, you absolutely should fix that. Tell me about it. Where did the talk come from?Deirdré: As you can see in the talk, it was stuff I had been doing. It actually started earlier than that. When I joined Sun Microsystems as a contractor in 2007, my remit was to try to get Sun engineers to communicate. Like, Sun had done this big push around blogging, they'd encourage everybody to open up your own blog. Here's our blogging platform, you can say whatever you want.And there were, like, 3000 blogs, about half of which were just moribund; they had put out one or two posts, and then nothing ever again. And for some reason—I don't know who decided—but they decided that engineers had goals around this and engineering teams had to start producing content in this way, which was a strange idea. So, I was brought on. It's, like, you know, “Help these engineers communicate. Help them with blogging, and somehow find a way to get them doing it.”And so I did a whole bunch of things from, like, running competitions to just going and talking to people. But we finally got to where Dan Maslowski, who was the manager who hired me in, he said, “Well, we've got this conference. It was the SNIA, the Storage Networking Industries Association Conference. 
We're a big sponsor, we've got, like, ten talks. And why don't you just go—you know, I'm going to buy you a video camera, go record this thing.” And I'd used a video camera a little bit, but, you know, it's like, never in this context, so it's like, okay, let's figure out, you know, what kind of mic do I need? And so I went off to the conference with my video blogging rig, and videoed all those talks. And then the idea was like, “Okay, we'll put them up on”—you know, Sun had its own video channels and things—“We'll put it out there, and this information will then be available to more people; it'll help the engineers communicate what they're doing.” And the funny part was, what I ran into with Sun was that the professional video people wanted nothing to do with it. Like, “Your stuff is not high enough quality. You don't meet our branding guidelines. You cannot put this on the Sun channels.” Okay, fine. So, I started putting it on YouTube, which in those days meant splitting it into ten-minute segments because that was all they would give you. [laugh]. And so it was like, everything I was doing was guerilla marketing because I was always in the teeth of somebody in the corporation who wanted to—it's like, “Oh, we're not going to put out video unless it can be slickly produced in the studio, and we're only going to do that for VPs, not for engineers.” Corey: Oh, yeah. The little people, as it were. This talk, in many ways—I don't know if I ever told you this story or not—but it did shape how I approached building out my entire approach: The sponsorship side of the business that I have, how I approach communicating with people. And it's where in many ways, the newsletter has taken its ethos. One of the things that you mentioned in that talk was, first, it was actually the first time that I ever saw someone explicitly comparing the technical talent slash DevRel—which is not a term I would call it, but all right—to the Hollywood model, where you have this idea that there's an agent that winds up handling these folks that are freelancers. They are named talent. They're the ones that have the draw; that's what people want, so we have to develop this. Okay, why is it important to develop this? Because you absolutely need to have your technical people writing technical content, not folks who are divorced from that entire side of the world because it doesn't resonate, it doesn't land. This is, I think, what DevRel has sort of been turned into; it's, what is DevRel? Well, it's special marketing because engineers need special handling to handle these things. No, I think it's everyone needs to be marketed to in a way that has authenticity that meets them where they are, and that's a little harder to do with people who spend their lives writing code than it would be someone who is at a more accessible profession. But I don't think that a lot of it's being done right. This was the first encouragement that I'd gotten early on that maybe I am onto something here because here's someone I deeply respect saying a lot of the same things—from a slightly different angle; like I was never doing this as part of a large technology company—but it was still, there's something here. And, for better or worse, I think I've demonstrated by now that there is some validity there. But back then it was transformational. Deirdré: Well, thank you. Corey: It still kind of is in many respects. This is all new to someone. Deirdré: Yeah.
I felt, you know, I'd been putting engineers in front of the public and found it was powerful, and engineers want to hear from other engineers. And especially for companies like Sun and Oracle and Joyent, we're selling technology to other technologists. So, there's a limited market for white papers because VPs and CEOs want to read those, but really, your main market is other technologists and that's who you need to talk to and talk to them in their own way, in their own language. They weren't even comfortable with slickly produced videos. Neither being on the camera nor watching it. Corey: Yeah, at some point, it was like, “I look too good.” It's like, “Oh, yeah. It's—oh, you're going to do a whole video production thing? Great.” “Okay. [unintelligible 00:24:13] the makeup artists coming in.” Like, “What do you mean makeup?” And it's—Deirdré: Oh, it was worse at Sun. We wasted so much money because you would get an engineer and put him in the studio under all these lights with these great big cameras, and they would just freeze. Corey: Mmm. Deirdré: And it's like, you know, “Well, hurry up, hurry up. We've got half an hour of studio time. Get your thing; say it.” And, [frantic noise]. You know, whereas I would take them in some back conference room and just set up a camera and be sitting in a chair opposite. It's like, “Relax. Tell me what you want to tell me. If we have to do ten takes, it's fine.” Yeah, video quality wasn't great, but the content was great. Corey: It seems like there is a new security breach every day. Are you confident that an old SSH key or a shared admin account isn't going to come back and bite you? If not, check out Teleport. Teleport is the easiest, most secure way to access all of your infrastructure. The open source Teleport Access Plane consolidates everything you need for secure access to your Linux and Windows servers—and I assure you there is no third option there. Kubernetes clusters, databases, and internal applications like AWS Management Console, Jenkins, GitLab, Grafana, Jupyter Notebooks, and more. Teleport's unique approach is not only more secure, it also improves developer productivity. To learn more, visit goteleport.com. And no, that is not me telling you to go away, it is: goteleport.com. Corey: Speaking of content, one more topic I want to cover a little bit here is you recently left your job at AWS. And even if you had not told me that, I would have known because your blog has undergone something of a renaissance—beginningwithi.com for those who want to follow along, and of course, we'll put links to this in the [show notes 00:25:08]—you've been suddenly talking about a lot of different things. And I want to be clear, I don't recall any of these posts being one of those, “I just left a company, I'm going to set them on fire now.” It's been about a variety of different topics, though, that have been very top-of-mind for folks. You talk about things like equal work for equal pay. You talk about remote work versus cost of commuting a fair bit. And as of this recording, you most recently wound up talking specifically about problematic employers in tech. But what you're talking about is also something that happened during the days of the Sun acquisition by Oracle. So, people are thinking, like, “Wait a minute, is she subtweeting what happened today”—no. These things rhyme and they repeat. I'm super thrilled whenever I see this in my RSS reader, just because it is so… oh, good.
I get I'm going to read something now that I'm going to enjoy, so let me put this in distraction-free mode and really dig into it. Because your writing is a joy.What is it that has inspired you to bring that back to life? Is it just to having a whole bunch of free time, and well, I'm not writing marketing stocks anymore, so I guess I'm going to write blog posts instead.Deirdré: My blog, if you looked at our calendar, over the years, it sort of comes and goes depending what else is going on in my life. I actually was starting to do a little bit more writing, and I even did a few little TikTok videos before I quit AWS. I'm starting to think about some of the more ancient history parts of my career. It's partly just because of what's been going on in the world. [Brendan 00:26:35] and I moved to Australia a year ago, and it was something that had been planned for a long time.We did not actually expect that we would be able to move our jobs the way we did. And then, you know, with pandemic, everything changed; that actually accelerated our departure timeline because we've been planning initially to let our son stay in school in California, through until he finished elementary, but then he wasn't in school, so there seems no point, whereas in Australia, he could be in a classroom. And so, you know, the whole world is changing, and the working world is changing, but also, we all started working from home. I've been working from home—mostly—since 1993. And I was working very remotely because I was working from Italy for a California company.And because I was one of the first people doing it, the people in California did not know what to make of me. And I would get people who would just completely ignore any emails I sent. It was like as if I did not exist because they had never seen me in person. So, I would just go to California four times a year and spend a few weeks, and then I would get the face time, and after that it was easy to interact any way I needed to.Corey: It feels like it's almost the worst kind of remote because you have most people at office, and then you have a few outliers, and that tends to, in my experience at least, lead to a really weird team dynamics where you have almost a second class of folks who aren't taken nearly as seriously. It's why when we started our company here, it was everyone is going to be remote all the time. We were distributed. There is no central office because as soon as you do, that's where things are disastrous. My business partner and I live a couple states apart.Deirdré: Yeah. And I think that's the fairest way to do it. In companies that have already existed, where they do have headquarters, and you know, there's that—Corey: Yeah, you can't suddenly sell your office space, and all 300,000 employees [laugh] are now working from home. That's a harder thing, too.Deirdré: Yeah. But I think it's interesting that the argument is being framed as like, “Oh, people work better in the office, people learn more in the office.” And we've even had the argument trotted out here that people should be forced back to the office because the businesses in the central business district depend on that. It's like—Corey: Mmm.Deirdré: —well, what about the businesses that have since, you know in the meantime sprung up in the more suburban centers? Now, you've got some thriving little cafes out there now? Are we supposed to just screw them over? It's ultimately people making economic arguments that have nothing to do with the well-being of employees. 
And the pandemic at least has—I think, a lot of people have come to realize that life is just too short to put up with a lot of bullshit, and by and large, commuting is bullshit. [laugh].Corey: It's a waste of time, it's not great for the environment, there's—yeah, and again, I'm not sitting here saying the entire world should do a particular thing. I don't think that there's one-size-fits-everyone solutions possible in this space. Some companies, it makes sense for the people involved to be in the same room. In some cases, it's not even optional. For others, there's no value to it, but getting there is hard.And again, different places need to figure out what's right for them. But it's also the world is changing, and trying to pretend that it hasn't, it just feels regressive, and I don't think that's going to align with where the industry and where people are going. Especially in full remote situations we've had the global pandemic, some wit on Twitter recently opined that it's never been easier for a company to change jobs. You just have to wait for the different the new laptop to show up, and then you just join a different Zoom link, and you're in your new job. It's like, “You know, you're not that far from wrong here.”Deirdré: [laugh]. Yep.Corey: There's no, like, “Well, where's the office? What's the”—no. It is, my day-to-day looks remarkably similar, regardless of where I work.Deirdré: Yeah.Corey: That means something.Deirdré: I was one of the early beneficiaries as well of this work-life balance, that I could take my kid to school in the morning, and then work, and then pick her up from school in the afternoon and spend time with her. And then California would be waking up for meetings, so after dinner, I'd be having meetings. Yeah, sometimes it was pain, but it was workable, and it gave me more flexibility, you know, whereas the times I had to commute to an office… tended to be hellish. I think part of the reason the blog has had a lot more activities I've just been in sort of a more reflective phase. I've gotten to this very privileged position where I suddenly realized, I actually have enough money to retire on, I have a husband who is extremely supportive of whatever I want to do, and I'm in a country that has a public health care system, if it doesn't completely crumble under COVID in the next few weeks.Corey: Hopefully, we'll get this published before that happens.Deirdré: Yes. And so I don't have to work. It's like, up to this point in my career, I have always desperately needed that next job. I don't think I have ever been in the position of having competing offers. You know, there's people who talk about, you know, you can always go find a better offer. It's like, no, when you're a weirdo like me and you're a middle-aged woman, is not that easy.Corey: People saying that invariably—“So, what is your formal job?” Like, “Oh, SDE3.” Like, okay, great. So, that means that they're are mul—not just, they don't probably need to hire you; they need to hire so many of you that they need to start segregating them with Roman numerals. Great.Maybe that doesn't apply to everyone. Maybe that particular skill set right now is having its moment in the sun, but there's a lot of other folks who don't neatly fit into those boxes. There's something to be said for empathy. Because this is my lived experience does not mean it is yours. And trying to walk a mile in someone else's shoes is almost increasingly—especially in the world of social media—a bit of a lost skill.Deirdré: [laugh]. 
I mean, it's partly that recruiters are not always the sharpest tools in the shed, and/or they're very young, very new to it all. It's just people like to go for what's easy. And like, for example, me at the moment, it's easy to put me in that product marketing manager box. It's like, “Oh, I need somebody to fill that slot. You look like that person. Let's talk.” Whereas before, people would just look at my resume and go, “I don't know what she is.”Corey: I really think the fact that you've never had competing offers just shows an extreme lack of vision from a number of companies around what marketing effectively to a technical audience can really be. It's nice to see that what you have been advocating for and doing the work for, for your entire career is really coming into its own now.Deirdré: Yeah. We'll see what happens next. It's been interesting. Yeah, I've never had so much attention from recruiters as when I got AWS on my resume. And then even more once it said, product marketing manager because, you know, “Okay. You've got the FAANG and you've got a title we recognize. Let's talk to you.”Corey: Exactly. That's, “Oh, yay. You fit in that box, finally.” Because it's always been one of those. Yeah, like, “What is it you actually do?” There's a reason that I've built what I do now into the last job I'll ever have. Because I don't even know where to begin describing me to what I do and how I do it. Even at cocktail parties, there's nothing I can say that doesn't sound completely surreal. “I make fun of Amazon for a living.” It's true, but it also sounds psychotic, and here we are. It's—Deirdré: Well, it's absolutely brilliant marketing, and it's working very well for you. So [laugh].Corey: The realization that I had was that if this whole thing collapsed and I had to get a job again, what would I be doing? It probably isn't engineering. It's almost certainly much more closely aligned with marketing. I just hope I never have to find out because, honestly, I'm having way too much fun.Deirdré: Yeah. And that's another thing I think is changing. I think more and more of us are realizing working for other people has its limitations. You know, it can be fun, it can be exciting, depending on the company, and the team, and so on. But you're very much beholden to the culture of the company, or the team, or whatever.I grew up in Asia, as a child, of American expats. So, I'm what is called a third culture kid, which means I'm not totally American, even though my parents were. I'm not—you know, I grew up in Thailand, but I'm not Thai. I grew up in India, but I'm not Indian. You're something in between.And your tribe is actually other people like you, even if they don't share the specific countries. Like, one of my best friends in Milan was a woman who had grown up in Brazil and France. It's like, you know, no countries in common, but we understood that experience. And something I've been meaning to write about for a long time is that third culture kids tend to be really good at adapting to any culture, which can include corporate cultures.So, every time I go into a new company, I'm treating that as a new cultural experience. It's like, Ericsson was fascinating. It's this very old Swedish telecom, with this wild old history, and a footprint in something like 190 countries. That makes it amazingly unique and fascinating. 
The thing I tripped over was I did not know anything about Swedish culture because they give cultural training to the people who are actually going to be moving to Sweden.Corey: But not the people working elsewhere, even though you're at a—Deirdré: Yeah.Corey: Yeah, it's like, well, dealing with New Yorkers is sort of its own skill, or dealing with Israelis, which is great; they have great folks, but it's a fun culture of management by screaming, in my experience, back when I had family living out there. It was great.Deirdré: One of my favorite people at AWS is Israeli. [laugh].Corey: Exactly. And it's, you have to understand some cultural context here. And now to—even if you're not sitting in the same place. Yeah, we're getting better as an industry, bit by bit, brick by brick. I just hope that will wind up getting there within my lifetime, at least.I really want to thank you for taking the time to come on the show. If people want to learn more, where can they find you?Deirdré: Oh. Well, as you said, my website beginningwithi.com, and I am on Twitter as @deirdres. That's D-E-I-R-D-R-E-S. [laugh]. So.Corey: And we will, of course, include links to that in the [show notes 00:36:23].Deirdré: So yeah, I'm pretty out there, pretty easy to find, and happy to chat with people.Corey: Which I highly recommend. Thank you again, for being so generous with your time, not just now, but over the course of your entire career.Deirdré: Well, I'm at a point where sometimes I can help people, and I really like to do that. The reason I ever aspired to high corporate office—which I've now clearly I'm not ever going to make—was because I wanted to be in a position to make a difference. And so, even if all the difference I'm making is a small one, it's still important to me to try to do that.Corey: Thank you again. I really do appreciate your time.Deirdré: Okay. Well, it was great talking to you. As always.Corey: Likewise. Deirdré Straughan, currently gloriously unemployed. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with an angry insulting comment that you mailed to me on a CDR that doesn't read.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.Announcer: This has been a HumblePod production. Stay humble.

Perfectly Boring
Innovating in Hardware, Software, and the Public Cloud with Steve Tuck, CEO/Co-Founder of Oxide Computer

Perfectly Boring

Play Episode Listen Later Sep 27, 2021 53:14


In this episode, we cover: 00:00:00 - Reflections on the Episode/Introduction 00:03:06 - Steve's Bio 00:07:30 - The 5 W's of Servers and their Future 00:14:00 - Hardware and Software 00:21:00 - Oxide Computer 00:30:00 - Investing in Oxide and the Public Cloud 00:36:20 - Oxide's Offerings to Customers 00:43:30 - Continuous Improvement 00:49:00 - Oxide's Future and Outro. Links: Oxide Computer: https://oxide.computer Perfectlyboring.com: https://perfectlyboring.com Transcript Jason: Welcome to the Perfectly Boring podcast, a show where we talk to the people transforming the world's most boring industries. I'm Jason Black, general partner at RRE Ventures. Will: And I'm Will Coffield, general partner at Riot Ventures. Jason: Today's boring topic of the day: servers. Will: Today, we've got Steve Tuck, the co-founder and CEO of Oxide Computer, on the podcast. Oxide is on a mission to fundamentally transform the private cloud and on-premise data center so that companies that are not Google, or Microsoft, or Amazon can have hyper scalable, ultra performant infrastructure at their beck and call. I've been an investor in the company for the last two or three years at this point, but Jason, this is your first time hearing the story from Steve and really going deep on Oxide's mission and place in the market. Curious what your initial thoughts are. Jason: At first glance, Oxide feels like a faster horse approach to an industry buying cars left and right. But the shift in the cloud will add $140 billion in new spend every year over the next five years. But one of the big things that was really interesting in the conversation was that it's actually the overarching pie that's expanding, not just demand for cloud but at the same rate, a demand for on-premise infrastructure that's largely been stagnant over the years. One of the interesting pivot points was when hardware and software were integrated back in the mainframe era, and then virtual machines kind of divorced hardware and software at the server level. Opening up the opportunity for a public cloud that reunified those two things where your software and hardware ran together, but the on-premises never really recaptured that software layer and have historically struggled to innovate on that domain. Will: Yeah, it's an interesting inflection point for the enterprise, and for basically any company that is operating digitally at this point, is that you're stuck between a rock and a hard place. You can scale infinitely on the public cloud but you make certain sacrifices from a performance, security, and certainly from an expense standpoint, or you can go to what is available commercially right now and you can cobble together a Frankenstein-esque solution from a bunch of legacy providers like HP, and Dell, and SolarWinds, and VMware into a MacGyvered together on-premise data center that is difficult to operate for companies where infrastructure isn't, and they don't want it to be, their core competency. Oxide is looking to step into that void and provide an infinitely scalable, ultra-high-performance, plug-and-play rack-scale server for everybody to be able to own and operate without needing to rent it from Google, or AWS, or Microsoft. Jason: Well, it doesn't sound very fun, and it definitely sounds [laugh] very boring. So, before we go too deep, let's jump into the interview with Steve. Will: Steve Tuck, founder and CEO of Oxide Computer. Thank you for joining us today. Steve: Yeah, thanks for having me.
Looking forward to it. Will: And I think maybe a great way to kick things off here for listeners would be to give folks a baseline of your background, sort of your bio, leading up to founding Oxide. Steve: Sure. Born and raised in the Bay Area. Grew up in a family business that was and has been focused on heating and air conditioning over the last 100-plus years, Atlas. And went to school and then straight out of school, went into the computer space. Joined Dell computer company in 1999, which was a pretty fun and exciting time at Dell. I think that Dell had just crossed over to being the number one PC manufacturer in the US. I think number two worldwide behind Compaq. Really just got to take in and appreciate the direct approach that Dell had taken in a market to stand apart, working directly with customers, not pushing everything to the channel, which was customary for a lot of the PC vendors at the time. And while I was there, you had the emergence of—in the enterprise—a hardware virtualization company called VMware that, at the time, had a product that allowed one to drive a lot more density on their servers by way of virtualizing the hardware that people were running. And watching that become much more pervasive, and working with companies as they began to shift from single system, single app to virtualized environments. And then at the tail end, just watching large tech companies emerge and demand a lot different style of computers than those that we had been customarily making at Dell. And kind of fascinated with just what these companies like Facebook, and Google, and Amazon, and others were doing to reimagine what systems needed to look like in their hyperscale environments. One of the companies that was in the tech space, Joyent, a cloud computing company, is where I went next. Was really drawn in just to velocity and the innovation that was taking place with these companies that were providing abstractions on top of hardware to make it much easier for customers to get access to the compute, and the storage, and the networking that they needed to build and deploy software. So, spent—after ten years at Dell, I was at Joyent for ten years. That is where I met my future co-founders, Bryan Cantrill who was at Joyent, and then also Jess Frazelle who we knew working closely while she was at Docker and other stops. But spent ten years as a public cloud infrastructure operator, and we built that service out to support workloads that ran the gamut from small game developers up to very large enterprises, and it was really interesting to learn about and appreciate what this infrastructure utility business looked like in public cloud. And that was also kind of where I got my first realization of just how hard it was to run large fleets of the systems that I had been responsible for providing back at Dell for ten years. We were obviously a large customer of Dell, and Supermicro, and a number of switch manufacturers. It was eye-opening just how much was lacking in the remaining software to bind together hundreds or thousands of these machines. A lot of the operational tooling that I wished had been there and how much we were living in spreadsheets to manage and organize and deploy this infrastructure. While there, also got to kind of see firsthand what happened as customers got really, really big in the public cloud. And one of those was Samsung, who was a very large AWS customer, got so large that they needed to figure out what their path on-premise would look like.
And after going through the landscape of all the legacy enterprise solutions, deemed that they had to go buy a cloud company to complete that journey. And they bought Joyent. Spent three years operating the Samsung cloud, and then that brings us to two years ago, when Jess, Bryan, and I started Oxide Computer. Will: I think maybe for the benefit of our listeners, it would be interesting to have you define—and what we're talking about today is the server industry—and to maybe take a step back and in your own words, define what a server is. And then it would be really interesting to jump into a high-level history of the server up until today, and maybe within that, where the emergence of the public cloud came from. Steve: You know, you'll probably get different definitions of what a server is depending on who you ask, but at the highest level, a server differs from a typical PC that you would have in your home in a couple of ways, and more about what it is being asked to do that drives the requirements of what one would deem a server. But if you think about a basic PC that you're running in your home, a laptop, a desktop, a server has a lot of the same components: they have CPUs, and DRAM memory that is volatile storage, and disks that are storing things in a persistent way—so that when you shut off your computer, they actually store and retain the data—and a network card so that you can connect to either other machines or to the internet. But where servers start to take on a little bit different shape and a little bit different set of responsibilities is the workloads that they're supporting. Servers, the expectations are that they are going to be running 24/7 in a highly reliable and highly available manner. And so there are technologies that have gone into servers, like ECC memory to ensure that you do not have memory faults that lose data, more robust components internally, ways to manage these things remotely, and ways to connect these to other servers, other computers. Servers, when running well, are things you don't really need to think about—they are just running in a resilient, highly available manner. In terms of the arc of the server industry, if you go back—I mean, there's been servers for many, many, many, many decades. Some of the earlier commercially available servers were called mainframes, and these were big monolithic systems that had a lot of hardware resources at the time, and then were combined with a lot of operational and utilization software to be able to run a variety of tasks. These were giant, giant machines; these were extraordinarily expensive; you would typically find them only running in universities or government projects, maybe some very, very large enterprises in the '60s and '70s. As more and more software was being built and developed and run, the market demand and need for smaller, more accessible servers that were going to be running this common software were driving machines that were coming out—still hardware plus software—from the likes of IBM and DEC and others. Then you broke into this period in the '80s where, with the advent of x86 and the rise of these PC manufacturers—the Dells and Compaqs and others—this transition to more commodity server systems.
A focus—really a focus on hardware only—building these commodity x86 servers that were less expensive, that were more accessible from an economics perspective, and then ultimately that would be able to run arbitrary software, so one could run any operating system or any body of software that they wanted on these commodity servers. When I got to Dell in 1999, this was several years into Dell's foray into the server market, and you would buy a server from Dell, or from HP, or from Compaq, or IBM, and then you would go find the software that you were going to run on top of that to stitch these machines together. That was, kind of, that server virtualization era, in the '90s, 2000s. As I mentioned, technology companies were looking at building more scalable systems that aggregated resources together and made it much easier for their customers to access the storage and the networking that they needed. In that period of time in which the commodity servers and the software industry diverged—where you had a bunch of different companies that were responsible for either the hardware or the software that would bring these computers together—these large hyperscalers said, “Well, we're building purpose-built infrastructure services for our constituents at, like, a Facebook. That means we really need to bind this hardware and software together in a single product so that our software teams can go very quickly and they can programmatically access the resources that they need to deploy software.” So, they began to develop systems that looked more monolithic—kind of rack-level systems—that were driving much better efficiency from a power and density perspective, and hydrating them with software to provide infrastructure services to their businesses. And so you saw what started out in the computer industry as these monolithic hardware-plus-software products that were not very accessible because they were so expensive and so large—but real products that were much easier to do real work on—move to this period where you had a disaggregation of hardware and software, where the end user bore the responsibility of tying these things together and binding these into those infrastructure products, to today, where the largest hyperscalers in the market have come to the realization that building hardware and software together, and designing and developing what modern computers should look like, is commonplace, and we all know that well, or can access that, as public cloud computing.
Jason: And what was the driving force behind that decoupling? Was it the actual hardware vendors that didn't want to have to deal with the software? Or is that more from a customer-facing perspective, where the customers themselves felt that they could eke out the best advantage by developing their own software stack on top of a relatively commodity, unopinionated hardware stack that they could buy from a Dell or an HP?
Steve: Yeah, I think probably both, but one thing that was a driver is that these were PC companies. So, coming out of the '80s, companies that were considered, quote-unquote, “the IBM clones”—Dell, and Compaq, and HP, and others—were building personal computers and saw an opportunity to build more robust personal computers that could be sold to customers who were running, again, just arbitrary software. There wasn't the desire nor the DNA to go build that full software stack and provide that out as an opinionated appliance or product.
And I think then, part of it was also like, hey, if we just focus on the hardware, then we've got this high-utility artifact that we can go sell into all sorts of arbitrary software use cases. You know, whether it's going to be a single server or three servers that are going to run in the closet of a cafe, or it's going to be a thousand servers that are running in one of these large enterprise data centers, we get to build the same box, and that box can run underneath any different type of software. By way of that, what you ultimately get in that scenario is that you do have to boil things down to the lowest common denominator to make sure that you've got that compatibility across all the different software types.
Will: Who were the primary software vendors that were helping those companies take commodity servers and specialize into particular areas? And what's their role now, and how has that transformed in light of the public cloud and the offerings that are once again generalized, but also reintegrated from a hardware and software perspective—just maybe not in your own server room, but in AWS, or Azure, or GCP?
Steve: Yeah, so you have a couple layers of software that are required in the operation of hardware, and then all the way up through what we would think about as running in a rack, a full rack system, today. You've first got firmware, and this is the software that runs on the hardware to be able to connect the different hardware components, to boot the system, to make sure that the CPU can talk to its memory, and storage, and the network. It may be a surprise to some, but that firmware that is essential to the hardware itself is not made by the server manufacturers themselves. That was part of this outsourcing exercise in the '80s where not only the upstack software that runs on server systems but actually some of the lower-level, downstack software was outsourced to these third-party firmware shops that would write that software. And at the time, that probably made a lot of sense and made things a lot easier for the entire ecosystem. The fact that it's the same model today—given how proprietary that software is and, you know, how that can actually lead to some vulnerabilities and security issues—is more problematic. You've got firmware, then you've got the operating system that runs on top of the server. You have a hypervisor, which is the emulation layer that translates that lower-level hardware into a number of virtual machines that applications can run in. You have control plane software that connects multiple systems together so that you can have five, or ten, or a hundred, or a thousand servers working in a pool, in a fleet. And then you've got higher-level software that allows a user to carve up the resources that they need, to identify the amount of compute, and memory, and storage that they want to spin up. And that is exposed to the end user by way of APIs and/or a user interface. And so you've got many layers of software that are running on top of hardware, and the two in conjunction are all there to provide infrastructure services to the end user. And so when you're going to the public cloud today, you don't have to worry about any of that, right?
Both of you have probably spun up infrastructure on the public cloud, but they call it 16 digits to freedom because you just swipe a credit card and hit an API, and within seconds—certainly within a minute—you've got readily available virtual servers and services that allow you to deploy software quickly and manage a project with team members. And the kinds of things that used to take days, weeks, or even months inside an enterprise can be done now in a matter of minutes, and that's extraordinarily powerful. But what you don't see is all the integration of these different components running, very well stitched together, under the hood. Now, for someone who's deploying their own infrastructure in their own data center today, that sausage-making is very evident. Today, if you're not a cloud hyperscaler, you are having to go pick a hardware vendor and then figure out your operating system and your control plane and your hypervisor, and you have to bind all those things together to create a rack-level system. And it might have three or four different vendors and three or four different products inside of it, and ultimately, you have to bear the responsibility of knitting all that together.
Will: Because those products were developed in silos from each other?
Steve: Yeah.
Will: They were not co-developed. You've got hardware that was designed in a silo, separate—oftentimes, it sounds like—from the firmware and all of the software for operating those resources.
Steve: Yeah. The hardware has a certain set of market user requirements, and then if you're a Red Hat or you're a VMware, you're talking to your customers about what they need, and you're thinking at the software layer. And then you yourself are trying to make it such that it can run across ten or twenty different types of hardware, which means that you cannot do things that bind or provide hooks into that underlying hardware—which, unfortunately, is where a ton of value comes from. You can see an analog to this in thinking about the Android ecosystem compared to the Apple ecosystem and what that experience is like when all that hardware and software is integrated together, co-designed together, and you have that iPhone experience. There are plenty of other analogs in the automotive industry, with Tesla, and in health equipment, with Peloton and others, but when hardware and software—we believe certainly—when hardware and software are co-designed together, you get a better artifact and you get a much, much better user experience. Unfortunately, that is just not the case today in on-prem computing.
Jason: So, this is probably a great time to transition to Oxide. Maybe to keep the analogy going, the public cloud is that iPhone experience, but it's just running in somebody else's data center, whether that's AWS, Azure, or one of the other public clouds. You're developing iOS for on-prem, for the people who want to run their own servers, which seems like kind of a countertrend.
Maybe you can talk us through the dynamics in that market as it stands today, and how that's growing and evolving, and what role Oxide Computer plays in that, going forward.
Steve: You've got this—what my co-founder Jess affectionately refers to as ‘infrastructure privilege'—in the hyperscalers, where they have been able to apply the money, and the time, and the resources to develop this, kind of, iPhone stack. Instead of thinking about a server as a single 1U unit, or a single machine, they looked at, well, what does a rack—which is the case that servers are slotted into in these large data centers—what does rack-level computing look like, and where can we drive better power efficiency? Where can we drive better density? How can we drive much better security at scale than the commodity server market today, doing things like implementing hardware Roots of Trust and Chains of Trust, so that you can ensure the software that is running on your machines is what is intended to be running there? The blessing is that we all—the market—get access to that modern infrastructure, but you can only rent it. The only way you can access it is to rent, and that means that you need to run in one of the three mega cloud providers' data centers, in those locations, and that you are having to operate in a rental-fee model, which at scale can become very, very prohibitively expensive. Our fundamental belief is that the way that these hyperscale data centers have been designed, and these products have been designed, certainly looks a lot more like what modern computers should look like—but the rest of the market should have access to the same thing. You should be able to buy and own and deploy that same product that runs inside a Facebook data center, or an Apple data center, or an Amazon or a Google data center; you should be able to take that product with you wherever your business needs to run. It was a bit intimidating at the top, because what we signed up for was building hardware, and taking a clean-sheet-of-paper approach to what a modern server could look like. There's a lot of good hardware innovation that the hyperscalers have helped drive; if you go back to 2010, Facebook pioneered being a lot more open about these modern open hardware systems that they were developing, and the Open Compute Project, OCP, has been a great collection point for these hyperscalers investing in these modern rack-level systems and doing it in the open—thinking about what the software is that is required to operate modern machines and, importantly, doing it in a way that does not sink the operations teams of the enterprises that are running them. Again, I think one of the things that was just so stunning to me, when I was at Joyent—we were running these machines, these commodity machines, and stitching together the software at scale—was how much of the organization's time was tied up in the deployment, and the integration, and the operation of this. And not just the organization's time, but actually our most precious resource, our engineering team, was having to spend so much time figuring out where a performance problem was coming from. For example—[clears throat]—man, those are the times in which you really are pounding your fist on the table, because you will try and go downstack to figure out: is this in the control plane? Is this in the firmware? Is this in the hardware? And commodity systems of today make it extremely, extremely difficult to figure that out.
But what we set out to do was build the same rack-level system that you might find in a hyperscaler data center, complete with all the software that you need to operate it, with the automation required for high availability and low operational overhead, and then with a cloud front end—a set of services on the front end of that rack-level system that delight developers, that look like the cloud experience that developers have come to love and depend on in the public cloud. And that means everything is programmable, API-driven services; all the hardware resources that you need—compute, memory, and storage—are actually a pool of resources that you can carve up and get access to and use in a very developer-friendly way. And the developer tools that your software teams have come to depend on just work, and all the tooling that these developers have invested so much time in over the last several years—to be able to automate things, to be able to deploy software faster—is resident in that product. And so it is definitely, kind of, hardware and software co-designed, much like some of the original servers long, long, long ago, but modernized with the hardware innovation and open software approach that the cloud has ushered in.
Jason: And give us a sense of scale; I think we're so used to seeing the headline numbers of the public cloud, you know, $300-and-some billion dollars today, adding $740-some billion over the next five years in public cloud spend. It's obviously a massive transformation, a huge amount of green space up for grabs. What's happening in the on-prem market where Oxide Computer is playing, and how do you think about the growth in that market relative to the public cloud?
Steve: It's funny because, as Will can attest, as we were going through and fundraising, the prevalent sentiment was, like, everything's going to the public cloud. As we were talking to the folks in the VC community, it was: Amazon, Microsoft, and Google are going to own the entirety of compute. We fundamentally disagreed because, A, we've lived it, and B, we went out as we were starting out and talked to dozens and dozens of our peers in the enterprise, who said, “Our cloud ambitions are to be able to get 20, 30, 40% of our workloads out there, and then we still have 60, 70% of our infrastructure that is going to continue to run in our own data centers for reasons including regulatory compliance, latency, security, and, in a lot of cases, cost.” It's not possible for these enterprises that are spending half a billion, a billion dollars a year to run all of their infrastructure in the public cloud. What you've seen on-premises—and it depends on what sort of poll and research you're turning to—is that the on-prem market, for one, is growing, which I think surprises a lot of folks; the public cloud market, of course, is growing like gangbusters, and that does not surprise a lot of folks. But what we see is that in the combined market of on-prem and cloud—if on-premises is on the order of $100 billion and cloud is on the order of $150 billion—you are going to see enormous growth in both places over the next 10, 15 years. These markets are going to look very, very small compared to where they will be, because one of the biggest drivers, whether it's public cloud or on-prem infrastructure, is everything shifting to digital formats: the digitization that is just all too commonplace, described everywhere. But we're still very, very early in that journey.
I think that if you look at the global GDP, less than 10% of the global GDP is on the internet, is online. Over the coming 10, 20 years, as that shifts to 20%, 30%, you're going to see exponential growth. And again, we believe—and we have heard from the market that is representative of that $100 billion—that investment in the public cloud and on-prem is going to continue to grow much, much more as we look forward.
Will: Steve, I really appreciate you letting listeners know how special a VC I am.
Steve: [laugh].
Will: [laugh]. It was a really important point that I wanted to make sure we hit on.
Steve: Yeah, should we come back to that?
Will: Yeah, yeah yeah—
Steve: Yeah, let's spend another five or ten minutes on that.
Will: —we'll revisit that. We'll revisit that later. But when we're talking about the market here, one of the things that got us so excited about investing in Oxide is looking at the existing ecosystem of on-prem commercial providers. I think if you look at the public cloud, there are fierce competitors there, with unbelievably sophisticated operations and product development. When you look at the on-prem ecosystem and who you would go to if you were going to build your own data center today, it's a lot of legacy companies that have started to optimize more for, I would say, profitability over the last couple of years than they have for really continuing to drive forward from an R&D and product standpoint. Would love maybe for you to touch on briefly: what does competition for you look like in the on-prem ecosystem? I think it's very clear who you're competing with from a public cloud perspective, right? It's Microsoft, Google, Amazon. But who are you going up against in the on-prem ecosystem?
Steve: Yeah. And just one note on that: we don't view ourselves as competing with Amazon, Google, and Microsoft. In fact, we are ardent supporters of cloud in that format—namely, this kind of programmable, API-fronted infrastructure—as being the path of the future of compute and storage and networking. That is the way that, in the future, most software should be deployed to, and operated on, and run. We just view the opportunity as—and what customers are really, really excited about is—having those same benefits of the public cloud, but in a format in which they can own it, and being able to have access to that everywhere their business needs to run, so that it's not, you know, “do I get all this velocity, and this innovation, and this simplicity when I rent public cloud, or do I own my infrastructure and have to give up a lot of that?” But to the first part of your question, I think the first issue is that it isn't one vendor that you are talking about; it is the collection of vendors that I go to to get servers, software to make my servers talk to each other, switches to network these servers together, and additional software to operate, and manage, and monitor, and update. And there's a lot of complexity there. And then when you take apart each one of those different sets of vendors in the ecosystem, they're not designing together, so you've got these kinds of data boundaries and product boundaries that start to become really, really real when you're operating at scale, and when you're running critical applications for your business on these machines.
And you find yourself spending an enormous amount of the company's time just knitting this stuff together and operating it, which is all time lost that could be spent adding additional features to your own product and better competing with your competitors. And so I think that you have a couple of things in play that make it hard for customers running infrastructure on-premises: you've got that dynamic that it's a fractured ecosystem, that these things are not designed together, that you have this kit car that you have to assemble yourself—and it doesn't even come with a blueprint of the particular car design that you're building. I think that you do have some profit-taking, in that it is very monopolized, especially on the software side, where you've only got a couple of large players that know that there are few alternatives for companies. And so you are seeing these ELAs balloon, and you are seeing practices that look a lot like Oracle enterprise software sales that are really making this on-prem experience not very economically attractive. And so our approach is: hardware should come with all the software required to operate it, it should be tightly integrated, and the software should be all open source. Something we haven't talked about—I think open source is playing an enormous role in accelerating the cloud landscape and the technology landscape. We are going to be developing our software in an open manner, and we truly believe, whether it's from a security view through to the open ecosystem, that it is imperative that software be open. And then we are integrating the switch into that rack-level product so that you've got networking baked in. By doing that, it opens up a whole new vector of value to the customer where, for example, you can see for the first time: what is the path of traffic from my virtual machine to a switch port? Or, when things are not performing well, being able to look into that path, and the health, and see where things are not performing as well as they should, and being able to mitigate those sorts of issues. It does turn out that if you are able to get rid of a lot of the old, crufty artifacts that have built up inside even these commodity servers, and you are able to start designing at a rack level where you can drive much better power efficiency and density, and you bake in the software to effectively make this modern rack-level server look like a cloud in a box, and ensure these things can snap together in a grid where, in that larger fleet, operational management is easy—because you've got the same automation capabilities that the big cloud hyperscalers have today—it can really simplify life. It ends up being an economic win and, maybe most importantly, presents the infrastructure in a way that the developers love. And so there's not this view of the public cloud being the fast, innovative path for developers and on-prem being this “submit a trouble ticket and try and get access to a VM in six days,” which sadly is the experience that we hear a lot of companies are still struggling with in on-prem computing.
Jason: Practically, when you're going out and talking to customers, you're going to be in a heterogeneous environment where presumably they already have their own on-prem infrastructure and they'll start to plug in—
Steve: Yeah.
Jason: —Oxide Computer alongside of it. And presumably, they're also to some degree in the public cloud. It's a fairly complex environment that you're trying to insert yourself into.
How are your customers thinking about building on top of Oxide Computer in that heterogeneous environment? And how do you see Oxide Computer expanding within these enterprises, given that there's a huge amount of existing capital that's gone into building out their data centers that are already operating today, and the public cloud deployments that they have?
Steve: As customers are starting to adopt Oxide rack-level computing, they are certainly going to be going into environments where they've got multiple generations of multiple different types of infrastructure. First, the discussions that we're having are around what the points of data exfiltration, of data access, are that one needs to operate their broader environment. You can think about handoff points like the network, where you want to make sure you've got a consistent protocol—like BGP or another—to be able to speak from your layer 2 networks to your layer 3 networks; you've got operational software that is doing monitoring and alerting and rolling up to a service for your SRE teams, your operations teams, and we are making sure that Oxide's endpoint—the front end of the Oxide product—will integrate well, will provide the data required for those systems to run well. Another thorny issue for a lot of companies is identity and access management—controlling the authentication and the access for users of their infrastructure systems—and that's another area where we are making sure that the interface from Oxide to the systems they use today, and also resident in the Oxide product, should one want to use it directly, has a clean, cloud-like identity and access management construct for one to use. But at the highest level, it is: make sure that you can get out of the Oxide infrastructure the kind of data and tooling required to incorporate into the management of your overall fleet. Over time, I think customers are going to experience a much simpler and much more automated world inside of the Oxide ecosystem; I think they're going to find that there are exponentially fewer hours required to manage that environment, and that is going to inevitably just lead to wanting to replace a hundred racks of the extant commodity stack with, you know, sixty racks of Oxide that provide much better density, a smaller footprint in the data center, and, again, are software-driven in the way that these folks are looking for.
Jason: And in that answer, you alluded to a lot of the specialization and features that you guys can offer. I've always loved Alan Kay's quote, “People who are really serious about software should make their own hardware.”
Steve: Yeah.
Jason: Obviously, you've got some things in here that only Oxide Computer can do. What are some of those features that traditional vendors can't even touch or deliver, that you'll be able to, given your hardware-software integration?
Steve: Maybe not the most exciting example, but I think one that is extremely important to a lot of the large enterprise companies that we're working with, and that is attestation: being able to attest to the software that is running on your hardware. And why is that important? Well, as we've talked about, you've got a lot of different vendors that are participating in that system that you're deploying in your data center. And today, a lot of that software is proprietary and opaque, and it is very difficult to know what versions of things you are running, or what was qualified inside that package that was delivered in the firmware.
We were talking to a large financial institution, and they said their teams are spending two weeks a month just doing, kind of, a proof of trust in the infrastructure that their customers' data is running on, and how cumbersome and hard it is because of how murky and opaque that lower-level system software world is. What do the hyperscalers do? They have incorporated a hardware Root of Trust, which ensures, from that first boot instruction, from that first instruction on the microprocessor, that you have a trusted and verifiable path from the system booting all the way up through the control plane software to, say, a provisioned VM. And so what this does is allow the rest of the market access to a bunch of security innovation that has gone on—these hyperscalers would never run without this. Again, having the hardware Root of Trust-anchored attestation process—the way to attest to all that software running—is going to be really, really impactful for more than just security-conscious customers, but certainly those that are investing more in that are really, really excited. If you move upstack a little bit, when you co-design the hardware with the control plane—both the server and the switch hardware with the control plane—it opens up a whole bunch of opportunity to improve performance and improve availability, because you now have systems that are designed to work together very, very well. You can now see from the networking of a system through to the resources that are being allocated on a particular machine, and when things are slow, when things are broken, you are able to identify and drive those fixes—in some cases fixes that you could not do before—in much, much, much faster time, which allows you to start driving infrastructure that looks a lot more like the five-nines environment that we expect out of the public cloud.
Jason: A lot of what you just mentioned, actually, once again ties back to that analogy to the iPhone, and having that kind of secure enclave that powers Touch ID and Face ID—
Steve: Yep.
Jason: —kind of a server equivalent, and once again, optimization around particular workflows: the iPhone knows exactly how many photos every [laugh] iOS user takes, and therefore they have a custom chip dedicated specifically to processing images. I think that tight coupling, just relating it back to that iOS and iPhone integration, is really exciting.
Steve: Well, and the feedback loop is so important because, you know, like the iPhone, we're going to be able to understand where there are rough edges and where improvements can continue to be made. And because this is software-driven hardware, you get an opportunity to continuously improve that artifact over time. It now stops looking like the old “your car loses 30% of its value when you drive it off the lot.” Because there's so much intelligent software baked into the hardware, and there's an opportunity to update and add features, and take the learnings from that hardware-software interaction and feed that back into an improving product over time, you can start to see the actual hardware itself have a much longer useful life.
And that's one of the things we're really excited about: we don't think servers should be commodities that the vendors are trying to push you to replace every 36 months. One of the things that is important to keep in mind is that as Moore's law is starting to slow, or starting to hit some of its limitations, you won't have CPU density and some of these things driving the need to replace hardware as quickly. So, with software that helps you drive better utilization and create a better combined product in that rack-level system, we think we're going to see customers that can start getting five, six, seven years of useful life out of the product, not the typical two, or three, or maybe four that customers are seeing today in the commodity systems.
Will: Steve, that's one of the challenges for Oxide: you're taking on excellence in a bunch of interdisciplinary sciences here, between the hardware, the software, the firmware, the security; this is a monster engineering undertaking. One of the things that I've seen as an investor is how dedicated you have got to be to hiring, to build basically the Avengers team here to go after such a big mission. Maybe you could touch on just how you've thought about architecting a team here. And it's certainly very different than what the legacy providers from an on-prem ecosystem perspective have taken on.
Steve: I think one of the things that has been so important is that before we even set out on what we were going to build, the three of us spent time and focused on what kind of company we wanted to build, what kind of company we wanted to work at for the next long chunk of our careers. And it's certainly drawing on experiences that we had in the past. Plenty of positives, but also making sure to keep in mind the negatives and some of the patterns we did not want to repeat where we were working next. And so we spent a lot of time just first getting the principles and the values of the company down, which was pretty easy because the three of us shared these values. And thinking about all the headwinds, just all the foot faults that hurt startups and even big companies all the time—whether it be the subjectivity and obscurity of compensation, or how folks in some of these large tech companies do performance management and things—and just thinking about how we could start from a point of building a company that people really want to work for and work with. And I think then, layering on top of that, setting out on a mission to go build the next great computer company and build computers for the cloud era, for the cloud generation—that is, as you say, a big swing. And it's ambitious, and exhilarating, and terrifying, and I think with that foundation of focusing first on the fundamentals of the business, regardless of what the business is, and then layering on top of it the mission that we are taking on, that has been appealing, that's been exciting for folks. And it has given us the great opportunity of having terrific technologists from all over the world that have come inbound and have wanted to be a part of this. And we kind of joke internally that we've got eight or nine startups instead of a startup, because we're building hardware, and we're taking on developing open-source firmware, and a control plane, and a switch, and a hardware Root of Trust, and all of these elements.
And just finding folks that are excited about the mission, that share our values, and that are great technologists, but also have the versatility to work up and down the stack, has been really, really key. So far, so great. We've been very fortunate to build a terrific, terrific team. Shameless plug: we are definitely still hiring all over the company. So, from hardware engineering, software engineering, operations, support, sales, we're continuing to add to the team, and that is definitely what is going to make this company great.
Will: Maybe just kind of a wrap-up question here. One of the things Jason and I always like to ask folks is: if you succeed over the next five years, how have you changed the market that you're operating in, and what does the company look like in five years? And I want you to know, as an investor, I'm holding you to this. Um, so—
Steve: Yeah, get your pen ready. Yeah.
Will: Yeah, yeah. [laugh].
Steve: Definitely. Expect to hear about that in the next board meeting. When we get this product in the market, and five years from now, as that has expanded and we've done our jobs, then I think one of the most important things is we will see an incredible amount of time given back to these companies—time that is wasted today having to stitch together a fractured ecosystem of products that were not designed to work together, were not designed with each other in mind. And in some cases, this can be 20, 30, 40% of an organization's time. That is something you can't get back. You know, you can get more money, you can—there's a lot that folks can control, but getting back that loss of time, that inefficiency of DIY-ing your own cloud infrastructure on-premises, will be a big boon. Because that means now you've got the ability for these companies to capitalize on digitalizing their businesses, and just the velocity of their ability to go improve their own products—that just will have a flywheel effect. So, that great simplification, where you don't even consider having to go through and do these low-level updates, and having to debug and deal with performance issues that are impossible to sort out—this disaggregation just goes away. The system comes complete, and you wouldn't think of it any other way, just like an iPhone. I think the other thing that I would hope to see is that we have made a huge dent in the efficiency of computing systems on-premises—that the amount of power required to run your applications has fallen by a significant amount because of the ability to instrument the system, from a hardware and software perspective, to understand where power is being used and where it is being wasted. And I think that can have some big implications, both to just economics, to the climate, to a number of things, by building—and people using—smarter systems that are more efficient. I think generally just making it commonplace that you have a programmable infrastructure that is great for developers everywhere, that is no longer restricted to a rental-only model. Is that enough for five years?
Will: Yeah, I think democratizing access to hyperscale infrastructure for everybody else sounds about right.
Steve: All right. I'm glad you wrote that down.
Jason: Well, once again, Steve, thanks for coming on. Really exciting, I think, in this conversation, talking about the server market as being a fairly dynamic market still, one that has a great growth path, and we're really excited to see Oxide Computer succeed, so thanks for coming on and sharing your story with us.
Steve: Yeah, thank you both.
It was a lot of fun.
Will: Thank you for listening to Perfectly Boring. You can keep up with the latest on the podcast at perfectlyboring.com, and follow us on Apple, Spotify, or wherever you listen to podcasts. We'll see you next time.

ACM ByteCast
Bryan Cantrill - Episode 17

ACM ByteCast

Play Episode Listen Later Jun 28, 2021 51:02


In this episode of ACM ByteCast, Rashmi Mohan hosts Bryan Cantrill, Co-founder and Chief Technology Officer at Oxide Computer Company and a past member of the ACM Queue Editorial Board. Previously, he was Vice President of Engineering and CTO at Joyent. He is known for his work on the award-winning DTrace software, a comprehensive dynamic tracing framework for which he was included in MIT Technology Review's TR35 (35 Top Young Innovators) list. Bryan describes discovering computing as a kid growing up in the 80s and falling in love with the challenge of solving difficult problems and getting hard programs to work. He talks about DTrace, which he first conceived as an undergraduate at Brown University and developed at Sun Microsystems (later acquired by Oracle). He also explains why he thinks open source will conquer every domain, his current challenge of designing a rack-scale computer for the enterprise, and much more.

OV | BUILD
Merline Saintil (Board Director): Cybersecurity In The Boardroom

OV | BUILD

Play Episode Listen Later Mar 10, 2021 28:28


Merline Saintil has led teams at Intuit, Yahoo, PayPal, Adobe, Joyent, and Sun Microsystems. Now, she serves on the board of directors at a combination of public and private companies – Alkami Technology, ShotSpotter, Inc., Banner Bank, Lightspeed HQ, and GitLab – and most recently she was appointed to Evolv Technology's Board of Directors. She's constantly thinking about how companies can address and be prepared for cybersecurity threats. Today, Merline shares these insights, her advice for first-time managers, and what it means to send the figurative elevator back down.

The Voicebot Podcast
TalkSocket Co-Founders Andrew Delorenzo and Chandler Murch - Voicebot Podcast Ep 167

The Voicebot Podcast

Play Episode Listen Later Sep 13, 2020 67:10


TalkSocket just launched a Kickstarter for a physical product that makes Alexa (and other voice assistants) instantly accessible hands-free on any iPhone or Android smartphone. We talk about the product launch, the technical approach, the partnering strategy, and even how the company expects to eventually expand capabilities to China for Alibaba's and Baidu's voice assistants. Today's guests are Andrew Delorenzo and Chandler Murch, the co-founders of TalkSocket, a hardware device that makes Alexa hands-free on iPhone and Android. It's a non-obvious innovation, but one that just might have the right ingredients for widespread consumer adoption. Delorenzo is CEO and has nearly a decade of experience in the cloud hosting business with AWS, Tencent, and Joyent. Murch is CTO of TalkSocket and has a background in IT management for companies such as game maker Valve and EdTech pioneer Promethean.

Hackerfunk
HF-142 - Illumos

Hackerfunk

Play Episode Listen Later Oct 30, 2019 180:00


This time we have Toasty as our guest. He tells us about the Solaris successor Illumos and its development, and sheds light on the communities around this operating system. Tracklist Little VMills MetalGear Rising – Red Sun 7ieben – Nur für mich aMusic & Leviathan – Throw Navis off the train Illumos :: Illumos website Project Announcement :: Illumos project announcement OmniOS :: OmniOS Community Edition OpenCloud :: OpenCloud implementation for Illumos PKG5 :: Image Packaging System Manatee :: Manatee HA extension for PostgreSQL Moray :: Key-value store via TCP pkgsrc :: pkgsrc for Illumos from Joyent OS Umfrage :: OS survey in the Fediverse Minio :: MinIO High Performance Object Storage Restic :: Restic S3 Compatible Solution :: A list of Amazon S3-compatible storage solutions gitea :: gitea BSDNow Podcast :: Illumos episodes of the BSD Now podcast File Download (180:00 min / 230 MB)

Pivotal Insights
Episode 125: The intersection of kubernetes, Edge, AI, and tuna pizzas, with Derrick Harris

Pivotal Insights

Play Episode Listen Later May 9, 2019 55:46


Microsoft Build brought a bevy of Windows news this week; plus, there's some more Windows support in Pivotal land and an overview of Pivotal Cloud Foundry's roadmap. Our guest is Derrick Harris, who recently joined Pivotal and runs the CIO crib-notes news site Intersect. Additional topics: Coté might have a tapeworm. In Europe, pizzas are sandwiches. Images in RTs. Coffee and chicken AI/ML. "Will robots come for our jobs, Derrick?" GoGrid, Joyent. "Application first." Boring AI. Edge computing.

Pivotal Conversations
The intersection of kubernetes, Edge, AI, and tuna pizzas, with Derrick Harris

Pivotal Conversations

Play Episode Listen Later May 9, 2019 55:46


Microsoft Build brought a bevy of Windows news this week; plus, there's some more Windows support in Pivotal land and an overview of Pivotal Cloud Foundry's roadmap. Our guest is Derrick Harris, who recently joined Pivotal and runs the CIO crib-notes news site Intersect. Additional topics: Coté might have a tapeworm. In Europe, pizzas are sandwiches. Images in RTs. Coffee and chicken AI/ML. "Will robots come for our jobs, Derrick?" GoGrid, Joyent. "Application first." Boring AI. Edge computing.

Cloud Native in 15 Minutes
Episode 125: The intersection of kubernetes, Edge, AI, and tuna pizzas, with Derrick Harris

Cloud Native in 15 Minutes

Play Episode Listen Later May 9, 2019 55:46


Microsoft Build brought a bevy of Windows news this week; plus, there's some more Windows support in Pivotal land and an overview of Pivotal Cloud Foundry's roadmap. Our guest is Derrick Harris, who recently joined Pivotal and runs the CIO crib-notes news site Intersect. Additional topics: Coté might have a tapeworm. In Europe, pizzas are sandwiches. Images in RTs. Coffee and chicken AI/ML. "Will robots come for our jobs, Derrick?" GoGrid, Joyent. "Application first." Boring AI. Edge computing.

Pivotal Podcasts
The intersection of kubernetes, Edge, AI, and tuna pizzas, with Derrick Harris

Pivotal Podcasts

Play Episode Listen Later May 9, 2019


Microsoft Build brought a bevy of Windows news this week; plus, there's some more Windows support in Pivotal land and an overview of Pivotal Cloud Foundry's roadmap. Our guest is Derrick Harris, who recently joined Pivotal and runs the CIO crib-notes news site Intersect. Additional topics: Coté might have a tapeworm. In Europe, pizzas are sandwiches. Images in RTs. Coffee and chicken AI/ML. "Will robots come for our jobs, Derrick?" GoGrid, Joyent. "Application first." Boring AI. Edge computing.

Cloud & Culture
Episode 125: The intersection of kubernetes, Edge, AI, and tuna pizzas, with Derrick Harris

Cloud & Culture

Play Episode Listen Later May 9, 2019 55:46


Microsoft Build brought a bevy of Windows news this week; plus, there's some more Windows support in Pivotal land and an overview of Pivotal Cloud Foundry's roadmap. Our guest is Derrick Harris, who recently joined Pivotal and runs the CIO crib-notes news site Intersect. Additional topics: Coté might have a tapeworm. In Europe, pizzas are sandwiches. Images in RTs. Coffee and chicken AI/ML. "Will robots come for our jobs, Derrick?" GoGrid, Joyent. "Application first." Boring AI. Edge computing.

The InfoQ Podcast
Bryan Cantrill on Rust and Why He Feels It’s The Biggest Change In Systems Development in His Career

The InfoQ Podcast

Play Episode Listen Later Apr 12, 2019 38:41


Bryan Cantrill is the CTO of Joyent and well known for the development of DTrace at Sun Microsystems. Today on the podcast, Bryan discusses with Wes Reisz a bit about the origins of DTrace and then spends the rest of the time discussing why he feels Rust is the “biggest development in systems development in his career.” The podcast wraps with a bit about why Bryan feels we should be rewriting parts of the operating system in Rust. Why listen to the podcast: • DTrace came down to a desire to use Dynamic Program Text Modification to instrument running systems (much like debuggers do), and it has its origins in Bryan's undergraduate days. • When a programming language delivers something to you, it takes something from you in the runtime. The classic example of this is garbage collection. The programming language gives you the ability to use memory dynamically without thinking of how the memory is stored in the system, but then it's going to exact a runtime cost. • One of the issues with C is that it just doesn't compose well. You can't necessarily just pull a library off the Internet and use it well. Everyone's C is laden with so many idiosyncrasies in how it's used and in the contract on how memory is used. • Ownership is statically tracking who owns the structure. It's ownership and the absence of GC that allow you to address the composability issues found in C. • It's really easy in C to have integer overflow, which leads to memory safety issues that can be exploited by an attacker. Rust makes this pretty much impossible because it's very good at determining how you use signed vs. unsigned types. • You don't want people solving the same problems over and over again. You want composability. You want abstractions. What you don't want is a situation where you've removed so much developer friction that you develop code that is riddled with problems. For example, it slows a developer down to force them to run a linter, but it results in better artifacts. Rust effectively builds a lot of that linter checking into the memory management/type checking system. • While there's some learning curve to Rust, it's not that bad if you realize there are several core concepts you need to understand. Rust is one of those languages that you really need to learn in a structured way. Sit down with a book and learn it. • Rust struggles when you have objects that are multiply owned (such as a doubly linked list), because it doesn't know who owns what. While Rust supports unsafe operations, you should resist the temptation to develop with a lot of unsafe operations if you want the benefits of what Rust offers developers. • Firmware is a great spot for growing Rust development in a process of replacing bits of what we think of as the operating system. More on this: Quick scan our curated show notes on InfoQ https://bit.ly/2uZ5QHZ You can also subscribe to the InfoQ newsletter to receive weekly updates on the hottest topics from professional software development. bit.ly/24x3IVq Subscribe: www.youtube.com/infoq Like InfoQ on Facebook: bit.ly/2jmlyG8 Follow on Twitter: twitter.com/InfoQ Follow on LinkedIn: www.linkedin.com/company/infoq Check the landing page on InfoQ: https://bit.ly/2uZ5QHZ
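As a rough, self-contained illustration of two of the Rust points summarized above—ownership without a garbage collector, and overflow that cannot silently become a memory-safety bug—here is a minimal sketch (illustrative only, not code from the episode):

```rust
// Ownership: `consume` takes the String by value, so the caller gives it up.
fn consume(s: String) -> usize {
    s.len()
} // `s` is dropped here, deterministically -- no garbage collector involved.

fn main() {
    let owned = String::from("hello");
    let n = consume(owned); // ownership moves into `consume`
    // println!("{}", owned); // compile error: use of moved value `owned`

    // Overflow: checked arithmetic makes the failure case explicit.
    let big: u8 = 250;
    match big.checked_add(10) {
        Some(v) => println!("sum = {}", v),
        None => println!("overflow detected; no silent wraparound"),
    }
    println!("length = {}", n);
}
```

The commented-out line is exactly the composability contract the episode describes: the compiler, not a code review, enforces who owns the memory.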

CoRecursive - Software Engineering Interviews
Software as a Reflection of Values With Bryan Cantrill

CoRecursive - Software Engineering Interviews

Play Episode Listen Later Dec 18, 2018 79:04


Which operating system is the best? Which programming language is the best? What text editor? Bryan Cantrill, CTO of Joyent, says that is the wrong question. Languages, operating systems, and communities have to make trade-offs, and they do that based on their values. So the right language is the one whose values align with you and your project's goals. This simple idea carries a lot of weight and, I think, has the potential to lift technical discussions up to a higher level of discourse. You will find it to be a helpful frame next time you need to make a technical decision. Bryan is also pretty excited about how the values of the Rust community align with his values for system software. Also, we cover Oberon, Clean, and Simula 4, none of which I'd ever heard of, and how IBM System/370's Global Trace Facility doesn't hold a candle to DTrace. Webpage for this episode Show Links: Software Values Slides The Design and Implementation of the FreeBSD Operating System Microsoft should buy GitHub All Bryan's Talks Slack Channel for Site

BSD Now
Episode 276: Ho, Ho, Ho - 12.0 | BSD Now 276

BSD Now

Play Episode Listen Later Dec 13, 2018 70:41


FreeBSD 12.0 is finally here, partly-cloudy IPsec VPN, KLEAK with NetBSD, how to create synth repos, a GhostBSD author interview, and more.
##Headlines
FreeBSD 12.0 is available
After a long release cycle, the wait is over: FreeBSD 12.0 is now officially available. We've picked a few interesting things to cover in the show; make sure to read the full Release Notes.
Userland: Group permissions on /dev/acpi have been changed to allow users in the operator GID to invoke acpiconf(8) to suspend the system. The default devfs.rules(5) configuration has been updated to allow mount_fusefs(8) with jail(8). The default PAGER now defaults to less(1) for most commands. The newsyslog(8) utility has been updated to reject configuration entries that specify setuid(2) or executable log files. The WITH_REPRODUCIBLE_BUILD src.conf(5) knob has been enabled by default. A new src.conf(5) knob, WITH_RETPOLINE, has been added to enable the retpoline mitigation for userland builds.
Userland applications: The dtrace(1) utility has been updated to support if and else statements. The legacy gdb(1) utility included in the base system is now installed to /usr/libexec for use with crashinfo(8). The gdbserver and gdbtui utilities are no longer installed. For interactive debugging, lldb(1) or a modern version of gdb(1) from devel/gdb should be used. A new src.conf(5) knob, WITHOUT_GDB_LIBEXEC, has been added to disable building gdb(1). The gdb(1) utility is still installed in /usr/bin on sparc64. The setfacl(1) utility has been updated to include a new flag, -R, used to operate recursively on directories. The geli(8) utility has been updated to provide support for initializing multiple providers at once when they use the same passphrase and/or key. The dd(1) utility has been updated to add the status=progress option, which prints the status of its operation on a single line once per second, similar to GNU dd(1). The date(1) utility has been updated to include a new flag, -I, which prints its output in ISO 8601 formatting. The bectl(8) utility has been added, providing an administrative interface for managing ZFS boot environments, similar to sysutils/beadm. The bhyve(8) utility has been updated to add a new subcommand to the -l and -s flags, help, which, when used, prints a list of supported LPC and PCI devices, respectively. The tftp(1) utility has been updated to change the default transfer mode from ASCII to binary. The chown(8) utility has been updated to prevent overflow of UID or GID arguments where the argument exceeded UID_MAX or GID_MAX, respectively.
Kernel: The ACPI subsystem has been updated to implement Device object types for ACPI 6.0 support, required for some Dell, Inc. PowerEdge™ AMD® Epyc™ systems. The amdsmn(4) and amdtemp(4) drivers have been updated to attach to AMD® Ryzen 2™ host bridges. The amdtemp(4) driver has been updated to fix temperature reporting for AMD® 2990WX CPUs.
Kernel Configuration: The VIMAGE kernel configuration option has been enabled by default. The dumpon(8) utility has been updated to add support for compressed kernel crash dumps when the kernel configuration file includes the GZIO option. See rc.conf(5) and dumpon(8) for additional information. The NUMA option has been enabled by default in the amd64 GENERIC and MINIMAL kernel configurations.
Device Drivers: The random(4) driver has been updated to remove the Yarrow algorithm. The Fortuna algorithm remains the default, and now only, available algorithm.
The vt(4) driver has been updated with performance improvements, drawing text at rates ranging from 2 to 6 times faster.
Deprecated Drivers: The lmc(4) driver has been removed. The ixgb(4) driver has been removed. The nxge(4) driver has been removed. The vxge(4) driver has been removed. The jedec_ts(4) driver has been removed in 12.0-RELEASE, and its functionality replaced by jedec_dimm(4). The DRM driver for modern graphics chipsets has been marked deprecated and marked for removal in FreeBSD 13. The DRM kernel modules are available from graphics/drm-stable-kmod or graphics/drm-legacy-kmod in the Ports Collection as well as via pkg(8). Additionally, the kernel modules have been added to the lua loader.conf(5) module_blacklist, as installation from the Ports Collection or pkg(8) is strongly recommended. The following drivers have been deprecated in FreeBSD 12.0, and will not be present in FreeBSD 13.0: ae(4), de(4), ed(4), ep(4), ex(4), fe(4), pcn(4), sf(4), sn(4), tl(4), tx(4), txp(4), vx(4), wb(4), xe(4).
Storage: The UFS/FFS filesystem has been updated to support check hashes for cylinder-group maps. Support for check hashes is available only for UFS2. The UFS/FFS filesystem has been updated to consolidate TRIM/BIO_DELETE commands, reducing read/write requests due to fewer TRIM messages being sent simultaneously. TRIM consolidation support has been enabled by default in the UFS/FFS filesystem. TRIM consolidation can be disabled by setting the vfs.ffs.dotrimcons sysctl(8) to 0, or adding vfs.ffs.dotrimcons=0 to sysctl.conf(5).
NFS: The NFS version 4.1 server has been updated to include pNFS server support.
ZFS: ZFS has been updated to include new sysctl(8)s, vfs.zfs.arc_min_prefetch_ms and vfs.zfs.arc_min_prescient_prefetch_ms, which improve performance of the zpool(8) scrub subcommand. The new spacemap_v2 zpool feature has been added. This provides more efficient encoding of spacemaps, especially for full vdev spacemaps. The large_dnode zpool feature has been imported, allowing better compatibility with pools created under ZFS-on-Linux 0.7.x. Many bug fixes have been applied to the device removal feature. This feature allows you to remove a non-redundant or mirror vdev from a pool by relocating its data to other vdevs. Includes the fix for PR 229614 that could cause processes to hang in zil_commit().
Boot Loader Changes: The lua loader(8) has been updated to detect a list of installed kernels to boot. The loader(8) has been updated to support geli(8) for all architectures and all disk-like devices. The loader(8) has been updated to add support for loading Intel® microcode updates early during the boot process.
Networking: The pf(4) packet filter is now usable within a jail(8) using vnet(9). The pf(4) packet filter has been updated to use rmlock(9) instead of rwlock(9), resulting in significant performance improvements. The SO_REUSEPORT_LB option has been added to the network stack, allowing multiple programs or threads to bind to the same port, with incoming connections load-balanced using a hash function.
Again, read the release notes for a full list, and check out the errata notices. A big THANKS to the entire release engineering team and all developers involved in the release, much appreciated!
###Abandon Linux. Move to FreeBSD or Illumos
If you use GNU/Linux and you are only on open source, you may be doing it wrong. Here's why. Is your company based on open-source software only? Do you have a bunch of developers hitting some kind of server you have installed for them to “do their thing”?
Again, read the release notes for a full list, and check out the errata notices. A big THANKS to the entire release engineering team and all developers involved in the release, much appreciated! ###Abandon Linux. Move to FreeBSD or Illumos If you use GNU/Linux and you run only opensource software, you may be doing it wrong. Here's why. Is your company based on opensource software only? Do you have a bunch of developers hitting some kind of server you have installed for them to "do their thing"? Be it for economic reasons (remember to donate) or for philosophical ones, you may have skipped good alternatives: the BSDs and Illumos. I bet you are running some sort of Debian, openSUSE or CentOS. Having entered the IT field recently, I find it very discouraging to discover that many of the people I meet do not even recognise the name BSD. Naming Solaris seems like naming evil itself, and the problem is that many do not know why; they cannot point to anything specific other than that it is fading out. That showed recently, when Oracle officials stated that development of new features has ceased and almost 90% of Solaris developers have been laid off. AIX seems alien to almost everybody unless you have a white beard. And all this is silly. Here's why. You are missing two important features that FreeBSD and Illumos derivatives enjoy: a full OS-level virtualization technology, much more mature than the LXC containers of the Linux world (Jails on BSD, Zones on Solaris/Illumos), and the great ZFS file system, which both share. You have probably heard of a newer Linux filesystem named Btrfs, whose development, by the way, Red Hat has since dropped. Trying to emulate ZFS, Oracle started developing the Btrfs file system before it acquired Sun (the original developer of ZFS), and SuSE joined the effort, as did Red Hat. Btrfs is not as well developed as ZFS and has not been tested in production environments as extensively, which leaves some uncertainty about using it; Red Hat setting it aside adds some more, although some organizations have used it with varying degrees of success. But why is any of this interesting for a sysadmin or an organization? Well… FreeBSD (a descendant of Berkeley UNIX) and SmartOS (based on Illumos) combine features that make administration easier, safer, faster and more reliable. The dream of any systems administrator. To start, the ZFS filesystem combines the typical filesystem with a volume manager, and includes protection against corruption, snapshots and copy-on-write clones. Jails are another interesting piece of technology. Linux folks usually think of jails as a sort of chroot. They are not. They were somewhat inspired by it, but, as you may know, you can escape from a chroot environment in the blink of an eye. Jails are not called jails casually; the name has a purpose: contain processes and programs within a defined and totally controlled environment. Jails first appeared in FreeBSD in the year 2000. Solaris Zones, which debuted in 2005 (now called containers), are the proprietary version of those. There are some other technologies on Linux, such as Btrfs or Docker, but they have caveats. Btrfs has not been fully developed yet and has not been proven in production as much as ZFS has, and some problems have arisen recently, although the developers are pushing the envelope; at some point it will surely match ZFS's capabilities. Docker is growing exponentially and is one of the cool technologies of modern times. The caveat is, as before, that the technology has not fully matured. Unlike other virtualization technologies, this is not a kernel running on top of another kernel: this is virtualization at the OS level, meaning differentiated environments can coexist on a single host, "hitting" the same unique kernel, which controls and shares the resources.
The problem comes when you put Docker on top of another virtualization technology such as KVM or Xen: it defeats the purpose and carries a performance penalty. I arrived in the IT field with very little knowledge, that is true, but what I see strikes me. Working in a bank has allowed me to see a big production environment that needs the highest availability and reliability. This is sometimes achieved by brute force, and that is legitimate and adequate; redundancy has a reason and a purpose, for example. But at other times it looks, and feels, like killing flies with cannons. More hardware, more virtual machines, more people, more of this, more of that. They can afford it, so they try to keep costs low, but at the end of the day there is a chunky budget to back operations. But here comes reality: you are not a bank, and you need to squeeze your investment as much as possible. By using FreeBSD jails you can avoid the performance penalty of KVM or Xen virtualization. Do you use VMWare or Hyper-V? You can avoid both and gain performance. Not only that, control and manageability remain equal, and administration is sometimes even easier. There are four ways to operate jails, which can be divided into two categories: Hardcore and Human Being. For the Hardcore way, use the FreeBSD Handbook and investigate as much as you can. For the Human Being way there are three options: Ezjail, Iocage and CBSD, which are frameworks (or programs, as you may call them) for managing jails. I personally use Iocage, but I have also used Ezjail. How can you use jails to your benefit? Ever tried to configure some new software and failed miserably? You can have three different jails running at the same time with different configurations. Want to try a new configuration on a production piece of hardware without applying it to the final users? You can do that with a small jail while the production environment runs in another bigger, chunkier jail. Want to divide the hardware as a replica of the division of the team(s) you are working with? Want to sell virtual machines with bare-metal performance? Do you want to isolate some piece of critical software, or even data, in a more controlled environment? Do you have different clients and want to use the same hardware while preventing them from seeing each other, all while maintaining performance and reliability? Are you a developer who needs reliable and portable snapshots of your work? Do you want to try new options and designs without breaking your previous work, in a timeless fashion? You can work on something, clone the jail and apply the new ideas to the project in a matter of seconds. You can stop there, export the filesystem snapshot containing the whole environment and all your work, place it on a thumbdrive, and later import it on a big production system. Want to change that image's properties, such as the network interface and IP? That is just one command away. But what properties can you assign to a jail, and how can you manage them, you may be wondering: hostname, disk quota, I/O, memory and CPU limits, network isolation, network virtualization, snapshots and the management of those, migration, and root privilege isolation, to name a few. You can also clone jails and import and export them between different systems; some of this is possible because of ZFS. Iocage is a Python program to manage jails, and it takes advantage of ZFS; a sketch of that clone-and-try workflow follows below.
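As a rough illustration of the workflow just described, here is what it might look like with iocage (the jail name, release and interface are made-up examples, and the exact flags vary a little between iocage versions):

```
# create and start a jail from a fetched release
iocage fetch -r 12.0-RELEASE
iocage create -n app -r 12.0-RELEASE ip4_addr="em0|192.168.1.50/24"
iocage start app

# snapshot before experimenting, so there is a point to roll back to
iocage snapshot app

# changing a property, e.g. the address, is one command
iocage set ip4_addr="em0|192.168.1.60/24" app

# export the whole environment to move it to another machine
iocage stop app
iocage export app
```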
But FreeBSD is not Linux, you may say. No, it is not. There are no run levels. The systemd factor is out of this equation, and has been since the beginning. Ever wondered where vi came from? The TCP/IP stack? Your beloved macOS from Apple? All of this comes from the BSD lineage that the FreeBSD project carries on. If you are used to Linux, your adaptation period with any BSD will be short, very short. You will almost feel at home. Used to packaged software using yum or apt-get? No worries. With pkgng, the package management tool used in FreeBSD, there are almost 27,000 compiled packages for you to use. Almost all software found on any of the important GNU/Linux distros can be found here: Java, Python, C, C++, Clang, GCC, Javascript frameworks, Ruby, PHP, MySQL and the major forks, etc. All this opensource software, and much more, is available at your fingertips. "I am a developer and… frankly my time is money, and I appreciate both much more than dealing with systems configuration." You can set up a VM using VMWare or VirtualBox and play with bare-bones FreeBSD, or you can use TrueOS (a derivative), which comes in a server version and a desktop-oriented one; the latter will be easier for you to play with. You may be doing this already with Linux. There is a third and very sensible option: FreeNAS, developed by iXsystems. It is FreeBSD-based and offers all these technologies with a GUI. VMWare, Hyper-V? Nowadays you can get your hands off the CLI and get a decent, usable, nice GUI. You say you live in the cloud? The major players already include FreeBSD in their offerings. You can find it on Amazon AWS or Azure (with official Microsoft support contracts too!). You can also find it on DigitalOcean and other hosting providers. There is no excuse. You can use it at home, at the office, with old or new hardware, and in the cloud as well. You can even pay for a support contract. Joyent, the developers of SmartOS, have their own cloud with different locations around the globe. Have a look at them too. If you want the original ZFS and zones, you may think of Solaris. But it's fading away. Except it really isn't: when Oracle bought Sun, many people left in a stampede. Some of the good folks working at Sun founded new projects, and one of these is Illumos. Joyent is a company formed by people who developed these technologies. They are a cloud operator, were recently bought by Samsung, and have a very competent team providing great tech solutions. They have developed an OS called SmartOS (based on Illumos) with all these features. Its source goes back to the early days of UNIX. Do you remember the days of OpenSolaris, when Sun open-sourced the crown jewels? There you have it: a modern opensource UNIX operating system with its roots in their original place and its head planted in today's needs. In conclusion: if you are on GNU/Linux and you only use opensource software, you may be doing it wrong, and missing goodies you may need and like. Once you put your hands on them, trust me, you won't look back. And if you have some "old fashioned" admins who know Solaris, you can bring them to a new, profitable and exciting life with both systems. Still not convinced? Would you have ever imagined Microsoft supporting Linux? Even loving it? They now love FreeBSD too. And not only that: they provide their own image in the Azure cloud, and you can get paid Microsoft support if you want to use the platform on Azure. Ain't it… surprising? Convincing at all? PS: I haven't mentioned that both systems, FreeBSD and SmartOS, have a Linux translation layer.
This means you can run Linux binaries on them, and the programs won't cough at all. Since the Linux ABI stays stable, the only thing needed to run a Linux binary is a translation between the different system calls and libraries. Remember POSIX? Choose your poison and enjoy it. ###A partly-cloudy IPsec VPN Audience I'm assuming that readers have at least a basic knowledge of TCP/IP networking and some UNIX or UNIX-like systems, but not necessarily OpenBSD or FreeBSD. This post will therefore be light on details that aren't OS specific and are likely to be encountered in normal use (e.g., how to use vi or another text editor.) For more information on these topics, read Absolute FreeBSD (3ed.) by Michael W. Lucas. Overview I'm redoing my DigitalOcean virtual machines (which they call droplets). My requirements are: VPN Road-warrior access, so I can use private network resources from anywhere. A site-to-site VPN, extending my home network to my VPSes. Hosting for public and private network services. A proxy service to provide a public IP address to services hosted at home. The last item is on the list because I don't actually have a public IP address at home; my firewall's external address is in the RFC 1918 space, and the entire apartment building shares a single public IPv4 address.1 (IPv6? Don't I wish.) The end-state network will include one OpenBSD droplet providing firewall, router, and VPN services; and one FreeBSD droplet hosting multiple jailed services. I'll be providing access via these droplets to a NextCloud instance at home. A simple NAT on the DO router droplet isn't going to work, because packets going from home to the internet would exit through the apartment building's connection and not through the VPN. It's possible that I could work around this issue with packet tagging using the pf firewall, but HAProxy is simple to configure and unlikely to result in hard-to-debug problems. relayd is also an option, but doesn't have the TLS parsing abilities of HAProxy, which I'll be using later on. Since this system includes jails running on a VPS, and they've got RFC 1918 addresses, I want them reachable from my home network. Once that's done, I can access the private address space from anywhere through a VPN connection to the cloudy router. The VPN itself will be of the IPsec variety. IPsec is the traditional enterprise VPN standard and is even used for classified applications, but it has a (somewhat-deserved) reputation for complexity; fortunately, recent versions of OpenBSD turn down the difficulty by quite a bit. The end-state network should look like: https://d33wubrfki0l68.cloudfront.net/0ccf46fb057e0d50923209bb2e2af0122637e72d/e714e/201812-cloudy/endstate.svg This VPN both separates internal network traffic from public traffic and uses encryption to prevent interception or tampering. Once traffic has been encrypted, decrypting it without the key would, as Bruce Schneier once put it, require a computer built from something other than matter that occupies something other than space. Dyson spheres and a frakton of causality violation would possibly work, as would mathemagical technology that alters the local calendar such that P=NP.2 Black-bag jobs and/or suborning cloud provider employees doesn't quite have that guarantee of impossibility, however. If you have serious security requirements, you'll need to do better than a random blog entry.
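To make the proxying piece concrete: a minimal HAProxy sketch for publishing the home NextCloud instance through the router droplet might look like the fragment below. The addresses, names and timeouts are invented for illustration, not taken from the article:

```
# haproxy.cfg -- relay the droplet's public address to the box at home
global
    daemon

defaults
    mode tcp
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend public_https
    bind 203.0.113.10:443
    default_backend home_nextcloud

backend home_nextcloud
    # the home host, reached across the site-to-site VPN
    server nextcloud 10.0.0.10:443 check
```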
##News Roundup KLEAK: Practical Kernel Memory Disclosure Detection Modern operating systems such as NetBSD, macOS, and Windows isolate their kernel from userspace programs to increase fault tolerance and to protect against malicious manipulations [10]. User space programs have to call into the kernel to request resources, via system calls or ioctls. This communication between user space and kernel space crosses a security boundary. Kernel memory disclosures - also known as kernel information leaks - denote the inadvertent copying of uninitialized bytes from kernel space to user space. Such disclosed memory may contain cryptographic keys, information about the kernel memory layout, or other forms of secret data. Even though kernel memory disclosures do not allow direct exploitation of a system, they lay the ground for it. We introduce KLEAK, a simple approach to dynamically detect kernel information leaks. Simply put, KLEAK utilizes a rudimentary form of taint tracking: it taints kernel memory with marker values, lets the data travel through the kernel and scans the buffers exchanged between the kernel and the user space for these marker values. By using compiler instrumentation and rotating the markers at regular intervals, KLEAK significantly reduces the number of false positives, and is able to yield relevant results with little effort. Our approach is practically feasible, as we demonstrate with an implementation for the NetBSD kernel. A small performance penalty is introduced, but the system remains usable. In addition to implementing KLEAK in the NetBSD kernel, we applied our approach to FreeBSD 11.2. In total, we detected 21 previously unknown kernel memory disclosures in NetBSD-current and FreeBSD 11.2, which were fixed subsequently. As a follow-up, the projects' developers manually audited related kernel areas and identified dozens of other kernel memory disclosures. The remainder of this paper is structured as follows. Section II discusses the bug class of kernel memory disclosures. Section III presents KLEAK to dynamically detect instances of this bug class. Section IV discusses the results of applying KLEAK to NetBSD-current and FreeBSD 11.2. Section V reviews prior research. Finally, Section VI concludes this paper. ###How To Create Official Synth Repo System Environment Make sure /usr/dports is updated and that it contains no cruft (git pull; git status); remove any cruft. Make sure your synth is up-to-date: pkg upgrade synth. If you already updated your system, you may have to build synth from scratch, from /usr/dports/ports-mgmt/synth. Make sure /etc/make.conf is clean. Update /usr/src to the current master and make sure there is no cruft in it. Do a full buildworld, buildkernel, installkernel and installworld (see the sketch below). Reboot. After the reboot, before proceeding, run uname -a and make sure you are now on the desired release or development kernel.
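Spelled out as commands, that system preparation might look like this on a DragonFly box (a sketch; the job count is an example):

```
cd /usr/dports && git pull && git status   # should report a clean tree
pkg upgrade synth

cd /usr/src && git pull
make -j8 buildworld
make -j8 buildkernel
make installkernel
make installworld
shutdown -r now

# after the reboot, confirm you are on the expected kernel
uname -a
```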
Synth Environment /usr/local/etc/synth/ contains the synth configuration. It should contain a synth.ini file (you may have to rename the template), and you will have to create or edit a LiveSystem-make.conf file. System requirements are hefty. Just linking chromium alone eats at least 30GB, for example. Concurrent C++ compiles can eat up to 2GB per process. We recommend at least 100GB of SSD-based swap space and 300GB of free space on the filesystem. synth.ini should contain the following; modify the builders and jobs to suit your system. With 128G of RAM, 30/30 or 40/25 works well. If you have 32G of RAM, maybe 8/8 or less.

```
; Take care when hand editing!

[Global Configuration]
profile_selected= LiveSystem

[LiveSystem]
Operating_system= DragonFly
Directory_packages= /build/synth/livepackages
Directory_repository= /build/synth/livepackages/All
Directory_portsdir= /build/synth/dports
Directory_options= /build/synth/options
Directory_distfiles= /usr/distfiles
Directory_buildbase= /build/synth/build
Directory_logs= /build/synth/logs
Directory_ccache= disabled
Directory_system= /
Number_of_builders= 30
Max_jobs_per_builder= 30
Tmpfs_workdir= true
Tmpfs_localbase= true
Display_with_ncurses= true
leverage_prebuilt= false
```

LiveSystem-make.conf should contain one line, to restrict licensing to only what is allowed to be built as a binary package:

```
LICENSES_ACCEPTED= NONE
```

Make sure there is no other cruft in /usr/local/etc/synth/. In the example above, the synth working dirs are in /build/synth. Make sure the base directories exist, and clean out any cruft for a fresh build from scratch:

```
rm -rf /build/synth/livepackages/*
rm -rf /build/synth/logs
mkdir /build/synth/logs
```

Run synth everything. I recommend doing this in a screen session in case you lose your ssh session (assuming you are ssh'd into the build machine); see the sketch below. A full synth build takes over 24 hours to run on a 48-core box, and around 12 hours on a 64-core box. On a 4-core/8-thread box it will take at least 3 days. There will be times when swap space is heavily used. If you have not run synth before, monitor your memory and swap loads to make sure you have configured the jobs properly. If you are overloading the system, you may have to ^C the synth run, reduce the jobs, and start it again. It will pick up where it left off. When synth finishes, let it rebuild the database. You then have a working binary repo. It is usually a good idea to run synth several times to pick up any stuff it couldn't build the first time. Each of these incremental runs may take a few hours, depending on what it tries to build.
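A sketch of that run (the session name is made up, and I am assuming a synth version that provides a rebuild-repository subcommand for the final database rebuild):

```
# run the build inside screen so an ssh disconnect doesn't kill it
screen -S synthbuild
synth everything

# detach with Ctrl-a d; reattach later with:
screen -r synthbuild

# once everything has finished, rebuild the repository metadata
synth rebuild-repository
```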
###Interview with founder and maintainer of GhostBSD, Eric Turgeon

- Thank you, Eric, for taking part. To start off, could you tell us a little about yourself, just a bit of background?
- How did you become interested in open source?
- When and how did you get interested in the BSD operating systems?
- On your Twitter profile, you state that you are an automation engineer at iXsystems. Can you share what you do in your day-to-day job?
- You are the founder and project lead of GhostBSD. Could you describe GhostBSD to those who have never used it or never heard of it?
- Developing an operating system is not a small thing. What made you decide to start the GhostBSD project and not join another "desktop FreeBSD" related project, such as PC-BSD and DesktopBSD at the time?
- How did you get to the name GhostBSD? Did you consider any other names?
- You recently released GhostBSD 18.10. What's new in that version, and what are the key features? What has changed since GhostBSD 11.1?
- The current version is 18.10. Will the next version be 19.04 (like Ubuntu's version numbering), or is a new version released after the next stable TrueOS release?
- Can you tell us something about the development team? Is it yourself, or are there other core team members? I think I saw two other developers on your Github project page.
- How about the relationship with the community? Is it possible for a community member to contribute, and how are those contributions handled?
- What was the biggest challenge during development?
- If you had to pick one feature readers should check out in GhostBSD, what is it and why?
- What is the relationship between iXsystems and the GhostBSD project? Or is GhostBSD a hobby project that you run separately from your work at iXsystems?
- What is the relationship between GhostBSD and TrueOS? Is GhostBSD TrueOS with the MATE desktop on top, or are there other modifications, additions, and differences?
- Where does GhostBSD go from here? What are your plans for 2019?
- Is there anything else that wasn't asked or that you want to share?

##Beastie Bits

- dialog(1) script to select audio output on FreeBSD
- Erlang otp on OpenBSD
- Capsicum
- https://blog.grem.de/sysadmin/FreeBSD-On-rpi3-With-crochet-2018-10-27-18-00.html
- Introduction to µUBSan - a clean-room reimplementation of the Undefined Behavior Sanitizer runtime
- pkgsrcCon 2018 in Berlin - Videos
- Getting started with drm-kmod

##Feedback/Questions

- Malcolm - Show segment idea
- Fraser - Question: FreeBSD official binary package options
- Harri - BSD Magazine

Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv

WIRED Business – Spoken Edition
How Right-Wing Social Media Site Gab Got Back Online

WIRED Business – Spoken Edition

Play Episode Listen Later Nov 12, 2018 6:15


After it was revealed that the suspect in the shootings at a Pittsburgh synagogue had threatened on the social media network Gab to kill Jews, multiple technology providers dropped Gab, including domain registrar GoDaddy, web hosting provider Joyent, and payment processors PayPal and Stripe. The moves knocked Gab offline for nearly a week, during which the company painted itself as a martyr for free speech and milked the media for attention. On Sunday, however, Gab returned to the web.

WashingTECH Tech Policy Podcast with Joe Miller
Brook Bello: Tech and Tech Policy Solutions to End Sex Trafficking (Ep. 160)

WashingTECH Tech Policy Podcast with Joe Miller

Play Episode Listen Later Oct 30, 2018 27:37


Brook Bello: Tech and Tech Policy Solutions to End Sex Trafficking (Ep. 160) Brook Bello joined Joe Miller to discuss how tech policies can help end sex trafficking. Bio Dr. Brook Bello (@BrookBello) is Founder and CEO/ED of More Too Life, Inc., an anti-sexual-violence, human trafficking and youth crime prevention organization that was named by United Way Worldwide as one of the best in the nation. A sought-after international speaker and champion against human trafficking, Dr. Bello has been recognized with countless achievement awards, fellowships and appointments. She was recently named a Google Next Gen Policy Leader, giving her the opportunity to learn from leading Google executives and other leaders about how tech and tech policy bear on world issues. She received the Lifetime Achievement Award from the 44th President of the United States and the White House in December 2016. She also received the Advocate of the Year award in the state of Florida from Florida Governor Rick Scott and Florida Attorney General Pam Bondi's Human Trafficking Council. Dr. Bello is also the author of innovative, root-cause-focused curricula such as RJEDE™ (Restorative Justice End Demand Education), a court-appointed and volunteer course for violators of sexual violence, prostitution and human trafficking prevention in Miami-Dade, Sarasota and Manatee counties, and LATN™ and LATN D2 (Living Above the Noise), an educational mentoring curriculum for victims to prevent sexual violence and human trafficking. She holds a master's and a Ph.D. in pastoral clinical counseling, with accreditation in pastoral clinical and temperance-based counseling; her bachelor's is in biblical studies. She also holds two honorary doctorates, in humane letters and in theology and biblical studies, from the Covenant Theological Seminary and Richmond Virginia Seminary. Her dissertation defends the urgency of spirituality in mental health and the profound pain caused by shame. Bello is also a licensed chaplain and ambassador with the Canadian Institute of Chaplains and Ambassadors (CICA), the only university accredited by the United Nations Economic and Social Council (UN-ECOSOC). She is also an alum of the Skinner Leadership Institute's Masters Series of Distinguished Leaders. Dr. Bello was chosen as one of 10 national heroes in a series by Dolphin Digital Media and United Way Worldwide called "The Hero Effect."

Resources

- More Too Life
- Way of the Peaceful Warrior by Dan Millman
- Life is Not Complicated, You Are by Carlos Wallace

News Roundup

Tech stocks tank following earnings reports Tech stocks led a slide on major indexes as Amazon posted a two-day decline Monday, eliminating some $127 billion from its market value, according to the Wall Street Journal. Amazon actually posted a $2.88 billion profit in the third quarter (11 times last year's figure), but its sales increased by only 29%, falling about half a billion dollars shy of the average analyst estimate of $57.1 billion. Alphabet, too, missed analyst estimates, by about $310 million, coming in with $33.74 billion in revenue in the third quarter, up 21% over last year. At Twitter, active monthly users declined, but revenue was up 29% to $650 million for the third quarter. Twitter attributed the user decline to its purging of suspicious accounts. Tesla also reported strong earnings, with $312 million in profit on $6.8 billion in revenue.
As for Snap, it looks like Facebook's Instagram Stories are eroding the platform, although Snap beat estimates, if only slightly. Snap lost about 2 million users since the second quarter, but its net loss was two cents per share less than expected, and it also had more revenue than analysts expected: $297.6 million, about $14 million more than analysts' expectations.

N.Y. Times reports that Trump uses iPhones spied on by Russia/China The New York Times reported that President Trump uses unsecured iPhones to gossip with colleagues, and that Chinese and Russian spies routinely eavesdrop on the calls to gather intelligence. President Trump denies the report, saying that he only uses a government phone, and, in a tweet, said the New York Times report is "sooo wrong".

Facebook identifies Iranian misinformation campaign Facebook identified an Iranian misinformation campaign, which led it to delete 82 pages the company says were engaged in "coordinated inauthentic behavior". Facebook's head of cybersecurity, Nathaniel Gleicher, said the pages had over 1 million followers.

Google paid 'Father of Android' $90 million to leave the company following a sexual misconduct allegation The New York Times reported last week, in an investigative report, that Google paid Android creator Andy Rubin some $90 million in 2014 when he left the company following sexual misconduct allegations. Google released Rubin with praise from Larry Page even though an internal investigation found the allegations credible, according to the New York Times. The newspaper reports that Google similarly protected two other executives. Rubin has denied the allegations, and, in a letter to Google's employees, Google CEO Sundar Pichai wrote that Google has fired some 48 employees for sexual harassment since 2016.

U.S. launches election protection cyber operation against Russia U.S. Cyber Command has launched a first-of-its-kind mission against Russia to prevent election interference. The initiative followed a Justice Department report released Friday outlining Russia's campaign of "information warfare".

Alleged Pittsburgh shooter repeatedly posted violent content on social media prior to mass murder Before he allegedly murdered 11 people in a Pittsburgh synagogue, including a 97-year-old Holocaust survivor, Robert Bowers posted hateful and violent content numerous times on Facebook, Twitter, and the alt-right website Gab, but he still wasn't on the radar of law enforcement. Joyent, the web hosting platform that hosted Gab, has since banned Gab from its platform, knocking it offline. Kevin Roose has more in the New York Times.

U.S. restricts exports to Chinese semiconductor firm Fujian Jinhua The U.S. has decided to restrict exports to Chinese semiconductor firm Fujian Jinhua. The Trump administration says the company stole intellectual property from U.S.-based Micron Technology. The rationale is that, if Fujian Jinhua brings these chips to market, there's a risk that the Chinese-manufactured chips would edge out those manufactured by American competitors.

President Trump signs U.S. spectrum strategy President Trump has signed a memo directing the Commerce Department to develop a spectrum strategy to prepare for 5G wireless. Mr. Trump has also created a Spectrum Task Force to evaluate federal spectrum needs and how spectrum can be shared with private companies.

UK fines Facebook £500,000 for data violations Finally, the UK has fined Facebook just £500,000 for Cambridge Analytica-related data violations.
That's a little over $640,000; The Guardian notes that Facebook brought in some $40.7 billion last year. The UK's Information Commissioner's Office found that Cambridge Analytica harvested the data of some 1 million Facebook users in the UK via loopholes on Facebook's platform that allowed developers to access the data of Facebook's users without their consent.

Frontier
Frontier 245: The Recognition in Your Eyes

Frontier

Play Episode Listen Later Oct 5, 2018 50:31


Subscribe via Podcast RSS | DONATE | Contact Hosts
Beer: Heineken Lager Beer | Heineken Nederland B.V.

- The Big Hack: How China Used a Tiny Chip to Infiltrate U.S. Companies
- Firmware Vulnerabilities in Supermicro Systems
- Severe Firmware Vulnerabilities Found In Popular Supermicro Server Products
- iPhone Xr talk
- Purple Back Cushion
- Insurgency Sandstorm
- DreamHost's DreamObjects disastrous datacenter departure, west coast to east coast. LOL WUT. What kinda data storage provider tells you to move your own stuff to their east coast DC, because they are moving? Use S3, GCP, B2, or Joyent. DreamHost is circling the drain.

Support the Shows!
Date: 2018-10-04
Time: 00:50:31
Download - torrent

L8ist Sh9y Podcast
Jason Hoffman on Edge and Joyent Reflections

L8ist Sh9y Podcast

Play Episode Listen Later Sep 1, 2018 44:39


Joining us this week is Jason Hoffman, CEO MobiledgeX, a startup "creating a global marketplace for organizations to deliver and drive the business of these edge enabled services and products."

Highlights

- Joyent – Cloud Computing's Amiga
- Technology for You vs Technology for Customer
- How we Got Here and will Get to Edge
- Anchoring Edge into Current Needs
- Humans and Machines Integration – Computer Vision
- Software and Hardware Support and Aging Issues in House
- Mixed Reality solves Social issues of Smartphones
- Reality of Physics in Building the Edge ~ Battery Life is Killer App
- Network to Data Centric View Switch ~ Edge Data is the S3 of Cloud
- Lack of Understanding in Early Web Infrastructure vs Email Understanding
- Lack of Evolution in Hardware Provisioning - Evil Firmware
- Horrible Parenting Advice

THE ARCHITECHT SHOW
Ep. 65: Cloud pioneer Jason Hoffman on where edge computing will shine

THE ARCHITECHT SHOW

Play Episode Listen Later Aug 16, 2018 58:42


In this episode of the ARCHITECHT Show, Jason Hoffman discusses all things edge computing -- from the business of operating cellular infrastructure to the applications where it really makes sense. Hoffman is currently the CEO of Deutsche Telekom subsidiary MobiledgeX; he previously led cloud efforts at Ericsson and was co-founder and CTO of early cloud computing provider Joyent. He also gives his take on the role of the large cloud providers as the edge shapes up, and shares a vision of an augmented reality future that doesn't involve being tethered to a headset or looking through your phone.

BSD Now
Episode 239: The Return To ptrace | BSD Now 239

BSD Now

Play Episode Listen Later Mar 29, 2018 92:43


OpenBSD firewalling Windows 10, NetBSD's return to ptrace, TCP Alternative Backoff, the BSD Poetic license, and AsiaBSDcon 2018 videos available. RSS Feeds: MP3 Feed | iTunes Feed | HD Vid Feed | HD Torrent Feed Become a supporter on Patreon. Show Notes: Headlines Preventing Windows 10 and untrusted software from having full access to the internet using OpenBSD Whilst setting up one of my development laptops to port some software to Windows, I noticed Windows 10 doing crazy things like installing or updating apps and games by default after initial setup. The one I noticed in particular was Candy Crush Soda Saga, which, for those who don't know of it, is some cheesy little puzzle game originally for consumer devices. I honestly did not want software like this near a development machine. It has also been reported that Windows 10 now updates core system software without notifying the user. Surely this destroys any vaguely deterministic behaviour, in my opinion making Windows 10 by default almost useless for development testbeds. I decided instead to start from scratch, but this time to set the inbuilt Windows Firewall to be very restrictive and only allow a few select programs to communicate. In this case all I really needed online was Firefox, Subversion and Putty. To my amusement (and astonishment) I found out that programs could very easily modify the Windows firewall to grant themselves access during installation (usually because this task needs to be done with admin privileges). It also seems that Windows Store apps can change the Windows firewall settings at any point. One way to get around this issue could be to install a 3rd-party firewall that most software will not have knowledge about and thus not attempt to break through. However, the only decent firewall I have used was Sygate Pro, which unfortunately is no longer supported by recent operating systems; the last supported versions were 2003, XP and 2000. In short, I avoid 3rd-party firewalls. Instead I decided to trap Windows 10 (and all of its rogue updaters) behind a virtual machine running OpenBSD. This effectively provided me with a full-blown firewall appliance. From here I could then allow specific software I trusted through the firewall (via a proxy) in a safe, controlled and deterministic manner. For other interested developers (and security-conscious users), and for my own reference, I have listed the steps taken here: 1) First and foremost, disable the Windows DHCP service - this is so no IP can be obtained on any interface, which effectively stops any communication with any network on the host system. This can be done by running services.msc with admin privileges and stopping and disabling the service called DHCP Client. 2) Install or enable your favorite virtualization software - I have tested this with both VirtualBox and Hyper-V. Note that on non-server versions of Windows, in order to get Hyper-V working, your processor also needs to support SLAT, which is daft, so to avoid faffing about I recommend using VirtualBox to get round this seemingly arbitrary restriction. 3) Install OpenBSD on the VM - Note: if you decide to use Hyper-V, its hardware support isn't 100% perfect for running OpenBSD, and you will need to disable a couple of things in the kernel. At the initial boot prompt, run the following commands:

```
config -e -o /bsd /bsd
disable acpi
disable mpbios
```

4) Add a host-only virtual adapter to the VM - This is the one which we are going to connect through the VM with.
Look at the IP that VirtualBox assigns this in the network manager on the host machine. Mine was 192.168.56.1. Set up the adapter in the OpenBSD VM to have a static address on the same subnet, for example 192.168.56.2. If you are using Hyper-V and OpenBSD, make sure you add a "Legacy Interface", because no guest additions are available; then set up a virtual switch which is host-only. 5) Add a bridged adapter to the VM - then assign it to whichever interface you want to connect to the external network with. Note that if using wireless, set the bridged adapter's MAC address to the same as your physical device, or the access point will reject it. This is not needed (or possible) on Hyper-V, because the actual device is "shared" rather than bridged, so the same MAC address is used. Again, if you use Hyper-V, add another virtual switch and attach it to your chosen external interface. VMs in Hyper-V "share" an adapter within a virtual switch, and there is the option to also disable the host's ability to use this interface at the same time, which is fine for an additional level of security, in case those pesky rogue apps and updaters one day learn to re-enable the DHCP service themselves, which wouldn't be too surprising. 6) Connect to your network in the host OS - In the case of wireless, select the correct network from the list and type in a password if needed. Windows will probably say "no internet available"; it also does not assign an IP address, which is fine. 7) Install the Squid proxy package on the OpenBSD guest and enable the daemon:

```
pkg_add squid
echo 'squid_flags=""' >> /etc/rc.conf.local
/etc/rc.d/squid start
```

We will use this service for a limited selection of "safe and trusted" programs to connect to the outside world from within the Windows 10 host. You can also use Putty on the host to connect to the VM via SSH and create a SOCKS proxy which software like Firefox can also use to connect externally. 8) Configure the software you want to be able to access the external network with. Firefox - go to the connection settings and specify the VM's IP address for the proxy. Subversion - modify the %HOME%\AppData\Roaming\Subversion\servers file and change the HTTP proxy field to the VM's IP. This is important to communicate with GitHub via https:// (yes, GitHub also supports Subversion). For svn:// addresses you can use Putty to port forward. Chromium/Chrome - unfortunately uses the global Windows proxy settings, which would defeat much of the purpose of this exercise if we allowed all of Windows access to the internet via the proxy; it would become mayhem again. However, we can still use Putty to create a SOCKS proxy and then launch the browser with the following flags:

```
--proxy-server="socks5://<host>:<port>" --host-resolver-rules="MAP * 0.0.0.0 , EXCLUDE <host>"
```

9) Congratulations, you are now done - Admittedly this process can be a bit fiddly to set up, but it completely prevents Windows 10 from making a complete mess. This solution is probably also useful for those who like privacy or don't like the idea of their software "phoning home". Hope you find this useful, and if you have any issues, please feel free to leave questions in the comments.
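One refinement worth considering: since squid will happily proxy for anything that can reach it, it is sensible to restrict it to the host-only network. A minimal sketch, assuming the package's default config location and the 192.168.56.0/24 subnet used above (the ACL name is made up):

```
# /etc/squid/squid.conf fragment: only the host-only network may proxy
acl vmnet src 192.168.56.0/24
http_access allow vmnet
http_access deny all
http_port 3128
```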
LLDB restoration and return to ptrace(2) I've managed to unbreak the LLDB debugger as much as possible with the current kernel, and I hit problems with ptrace(2) that are causing issues with further work on proper NetBSD support. Meanwhile, I've upstreamed all the planned NetBSD patches to sanitizers and helped other BSDs to gain better or initial support. LLDB Since the last time I worked on LLDB, we have introduced many changes to the kernel interfaces (most notably related to signals) that apparently fixed some bugs in Go and introduced regressions in ptrace(2). Part of the regressions were noted by the existing ATF tests; however, the breakage was only marked as a new problem to resolve. For completeness, the ptrace(2) code was also cleaned up by Christos Zoulas, and we fixed some bugs with compat32. I've fixed a crash in *NetBSD::Factory::Launch(), triggered on startup of the lldb-server application. Here is the commit message:

```
We cannot call process_up->SetState() inside the
NativeProcessNetBSD::Factory::Launch function because it triggers a
NULL pointer dereference. The generic code for launching a process in:
GDBRemoteCommunicationServerLLGS::LaunchProcess
sets the m_debugged_process_up pointer after a successful call to
m_process_factory.Launch(). If we attempt to call
process_up->SetState() inside a platform specific Launch function we
end up dereferencing a NULL pointer in
NativeProcessProtocol::GetCurrentThreadID().
Use the proper call process_up->SetState(,false) that sets
notify_delegates to false.
```

Sanitizers I suspended development of new features in sanitizers last month, but I was still in the process of upstreaming local patches. This process was time-consuming, as it required rebasing patches, adding dedicated tests, and addressing all other requests and comments from the upstream developers. I'm not counting hot fixes, as some changes were triggering build or test issues on !NetBSD hosts; thankfully, all these issues were addressed quickly. The final result is a reduction of the local delta from almost 1MB to less than 100KB (1205 lines of diff). The remaining patches are rescheduled for later, mostly because they depend on extra work with cross-OS tests and prior integration of sanitizers with the base-system distribution. I didn't want to put extra work in here in the current state of affairs, and I've registered as a mentor for Google Summer of Code for the NetBSD Foundation and prepared Software Quality improvement tasks in order to outsource part of the labour. Userland changes I've also improved documentation for some of the features of NetBSD described in man pages. These pieces of information were sometimes wrong or incomplete, and this makes covering the NetBSD system with features such as sanitizers harder, as there is a mismatch between the actual code and the documented code. Some pieces of software also require better namespacing support, these days mostly for the POSIX standard. I've fixed a few low-hanging fruits there and requested pullups to NetBSD-8(BETA). I thank the developers for improving the landed code in order to ship the best solutions to users. BSD collaboration in LLVM A one-man show in human activity is usually less fun and productive than collaboration in a team. This is also true in software development. Last month I was helping, as a reviewer, to port LLVM features to FreeBSD and, where possible, to OpenBSD. This included MSan/FreeBSD, libFuzzer/FreeBSD, XRay/FreeBSD and UBSan/OpenBSD. I've landed most of the submitted and reviewed code in the mainstream LLVM tree. Part of the code also verified the correctness of the NetBSD routes in the existing porting efforts and showed new options for improvement. This is the reason why I've landed preliminary XRay/NetBSD code and added missing NetBSD bits to ToolChain::getOSLibName().
The latter produced setup issues with the prebuilt LLVM toolchain, as the directory with the compiler-rt goodies was located in a path like ./lib/clang/7.0.0/lib/netbsd8.99.12, with a varying OS version. This could stop working after upgrades, so I've simplified it to "netbsd", similar to FreeBSD and Solaris. Prebuilt toolchain for testers I've prepared a build of Clang/LLVM with LLDB and compiler-rt features, prebuilt on NetBSD/amd64 v. 8.99.12: llvm-clang-compilerrt-lldb-7.0.0beta_2018-02-28.tar.bz2 Plan for the next milestone With the approaching NetBSD 8.0 release I plan to finish backporting a few changes there from HEAD: Remove one unused feature from ptrace(2), PT_SET_SIGMASK & PT_GET_SIGMASK. I originally introduced these operations with criu/rr-like software in mind, but such software misuses or even abuses ptrace(2) and is not a regular process debugger. I plan to remove this operation from HEAD and backport this to NetBSD-8(BETA) before the release, so no compat will be required for this call. Future ports of criu/rr should involve dedicated kernel support for such requirements. Finish the backport of _UC_MACHINE_FP() to NetBSD-8. This will allow use of the same code in sanitizers in HEAD and NetBSD-8.0. By popular demand, improve the regnsub(3) and regasub(3) API, adding support for more or fewer substitutions than 10. Once done, I will return to ptrace(2) debugging and corrections. Working with the NetBSD kernel Overview When working on complex systems, such as OS kernels, your attention span and cognitive energy are too valuable to be wasted on inefficiencies pertaining to ancillary tasks. After experimenting with different environment setups for kernel debugging, some of which were awkward and distracting from my main objectives, I have arrived at my current workflow, which is described here. This approach is mainly oriented towards security research and the study of kernel internals. Before delving into the details, this is the general outline of my environment: My host system runs Linux. My target system is a QEMU guest. I'm tracing and debugging on my host system by attaching GDB (with NetBSD x86-64 ABI support) to QEMU's built-in GDB server. I work with NetBSD-current. All sources are built on my host system with the cross-compilation toolchain produced by build.sh. I use NFS to share the source tree and the build artifacts between the target and the host. I find IDEs awkward, so for codebase navigation I mainly rely on vim, tmux and ctags. For non-intrusive instrumentation, such as figuring out control flow, I'm using dtrace. Preparing the host system QEMU GDB NFS Exports Building NetBSD-current A word of warning Now is a great time to familiarize yourself with the build.sh tool and its options. Be especially careful with the following options: -r Remove contents of TOOLDIR and DESTDIR before building. -u Set MKUPDATE=yes; do not run "make clean" first. Without this, everything is rebuilt, including the tools. Chances are, you do not want to use these options once you've successfully built the cross-compilation toolchain and your entire userland, because building those takes time and there aren't many good reasons to recompile them from scratch.
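For reference, a cross-build driven entirely from the host might look like the sketch below (the paths, target machine and job count are examples, not taken from the article):

```
cd /usr/src

# build the cross toolchain once, then keep reusing it
./build.sh -U -u -j12 -m amd64 -T ../tools -O ../obj tools

# build a full userland distribution and a GENERIC kernel with it
./build.sh -U -u -j12 -m amd64 -T ../tools -O ../obj distribution
./build.sh -U -u -j12 -m amd64 -T ../tools -O ../obj kernel=GENERIC
```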
Here's what to expect: On my desktop, running a quad-core Intel i5-3470 at 3.20GHz with 24GB of RAM and the underlying directory structure residing on an SSD drive, the entire process took about 55 minutes. I was running make with -j12, so the machine was quite busy. On an old Dell D630 laptop, running an Intel Core 2 Duo T7500 at 2.20GHz with 4GB of RAM and a slow hard drive (5400RPM), the process took approximately 2.5 hours. I was running make with -j4. Based on the temperature alerts and CPU clock throttling messages, it was quite a struggle. Acquiring the sources Compiling the sources Preparing the guest system Provisioning your guest Pkgin and NFS shares Tailoring the kernel for debugging Installing the new kernel Configuring DTrace Debugging the guest's kernel News Roundup Add support for the experimental Internet-Draft "TCP Alternative Backoff" ``` Add support for the experimental Internet-Draft "TCP Alternative Backoff with ECN (ABE)" proposal to the New Reno congestion control algorithm module. ABE reduces the amount of congestion window reduction in response to ECN-signalled congestion relative to the loss-inferred congestion response. More details about ABE can be found in the Internet-Draft: https://tools.ietf.org/html/draft-ietf-tcpm-alternativebackoff-ecn The implementation introduces four new sysctls: net.inet.tcp.cc.abe defaults to 0 (disabled) and can be set to non-zero to enable ABE for ECN-enabled TCP connections. net.inet.tcp.cc.newreno.beta and net.inet.tcp.cc.newreno.beta_ecn set the multiplicative window decrease factor, specified as a percentage, applied to the congestion window in response to a loss-based or ECN-based congestion signal respectively. They default to the values specified in the draft, i.e. beta=50 and beta_ecn=80. net.inet.tcp.cc.abe_frlossreduce defaults to 0 (disabled) and can be set to non-zero to enable the use of standard beta (50% by default) when repairing loss during an ECN-signalled congestion recovery episode. It enables a more conservative congestion response and is provided for the purposes of experimentation as a result of some discussion at IETF 100 in Singapore. The values of beta and beta_ecn can also be set per-connection by way of the TCP_CCALGOOPT TCP-level socket option and the new CC_NEWRENO_BETA or CC_NEWRENO_BETA_ECN CC algo sub-options. Submitted by: Tom Jones tj@enoti.me Tested by: Tom Jones tj@enoti.me, Grenville Armitage garmitage@swin.edu.au Relnotes: Yes Differential Revision: https://reviews.freebsd.org/D11616 ``` Meltdown-mitigation syspatch/errata now available The recent changes in -current mitigating the Meltdown vulnerability have been backported to the 6.1 and 6.2 (amd64) releases, and the syspatch update (for 6.2) is now available. 6.1 ``` Changes by: bluhm@cvs.openbsd.org 2018/02/26 05:36:18 Log message: Implement a workaround against the Meltdown flaw in Intel CPUs. The following changes have been backported from OpenBSD -current. Changes by: guenther@cvs.openbsd.org 2018/01/06 15:03:13 Log message: Handle %gs like %[def]s and reset set it in cpu_switchto() instead of on every return to userspace. Changes by: mlarkin@cvs.openbsd.org 2018/01/06 18:08:20 Log message: Add identcpu.c and specialreg.h definitions for the new Intel/AMD MSRs that should help mitigate spectre. This is just the detection piece, these features are not yet used. Part of a larger ongoing effort to mitigate meltdown/spectre. i386 will come later; it needs some machdep.c cleanup first. Changes by: mlarkin@cvs.openbsd.org 2018/01/07 12:56:19 Log message: remove all PG_G global page mappings from the kernel when running on Intel CPUs. Part of an ongoing set of commits to mitigate the Intel "meltdown" CVE.
This diff does not confer any immunity to that vulnerability - subsequent commits are still needed and are being worked on presently. ok guenther, deraadt Changes by: mlarkin@cvs.openbsd.org 2018/01/12 01:21:30 Log message: IBRS -> IBRS,IBPB in identifycpu lines Changes by: guenther@cvs.openbsd.org 2018/02/21 12:24:15 Log message: Meltdown: implement user/kernel page table separation. On Intel CPUs which speculate past user/supervisor page permission checks, use a separate page table for userspace with only the minimum of kernel code and data required for the transitions to/from the kernel (still marked as supervisor-only, of course): - the IDT (RO) - three pages of kernel text in the .kutext section for interrupt, trap, and syscall trampoline code (RX) - one page of kernel data in the .kudata section for TLB flush IPIs (RW) - the lapic page (RW, uncachable) - per CPU: one page for the TSS+GDT (RO) and one page for trampoline stacks (RW) When a syscall, trap, or interrupt takes a CPU from userspace to kernel the trampoline code switches page tables, switches stacks to the thread's real kernel stack, then copies over the necessary bits from the trampoline stack. On return to userspace the opposite occurs: recreate the iretq frame on the trampoline stack, switch stack, switch page tables, and return to userspace. mlarkin@ implemented the pmap bits and did 90% of the debugging, diagnosing issues on MP in particular, and drove the final push to completion. Many rounds of testing by naddy@, sthen@, and others Thanks to Alex Wilson from Joyent for early discussions about trampolines and their data requirements. Per-CPU page layout mostly inspired by DragonFlyBSD. ok mlarkin@ deraadt@ Changes by: bluhm@cvs.openbsd.org 2018/02/22 13:18:59 Log message: The GNU assembler does not understand 1ULL, so replace the constant with 1. Then it compiles with gcc, sign and size do not matter here. Changes by: bluhm@cvs.openbsd.org 2018/02/22 13:27:14 Log message: The compile time assertion for cpu info did not work with gcc. Rephrase the condition in a way that both gcc and clang accept it. Changes by: guenther@cvs.openbsd.org 2018/02/22 13:36:40 Log message: Set the PG_G (global) bit on the special page table entries that are shared between the u-k and u+k tables, because they're actually in all tables. OpenBSD 6.1 errata 037 ``` 6.2 ``` Changes by: bluhm@cvs.openbsd.org 2018/02/26 05:29:48 Log message: Implement a workaround against the Meltdown flaw in Intel CPUs. The following changes have been backported from OpenBSD -current. Changes by: guenther@cvs.openbsd.org 2018/01/06 15:03:13 Log message: Handle %gs like %[def]s and reset set it in cpu_switchto() instead of on every return to userspace. Changes by: mlarkin@cvs.openbsd.org 2018/01/06 18:08:20 Log message: Add identcpu.c and specialreg.h definitions for the new Intel/AMD MSRs that should help mitigate spectre. This is just the detection piece, these features are not yet used. Part of a larger ongoing effort to mitigate meltdown/spectre. i386 will come later; it needs some machdep.c cleanup first. Changes by: mlarkin@cvs.openbsd.org 2018/01/07 12:56:19 Log message: remove all PG_G global page mappings from the kernel when running on Intel CPUs. Part of an ongoing set of commits to mitigate the Intel "meltdown" CVE. This diff does not confer any immunity to that vulnerability - subsequent commits are still needed and are being worked on presently. 
Changes by: mlarkin@cvs.openbsd.org 2018/01/12 01:21:30 Log message: IBRS -> IBRS,IBPB in identifycpu lines Changes by: guenther@cvs.openbsd.org 2018/02/21 12:24:15 Log message: Meltdown: implement user/kernel page table separation. On Intel CPUs which speculate past user/supervisor page permission checks, use a separate page table for userspace with only the minimum of kernel code and data required for the transitions to/from the kernel (still marked as supervisor-only, of course): - the IDT (RO) - three pages of kernel text in the .kutext section for interrupt, trap, and syscall trampoline code (RX) - one page of kernel data in the .kudata section for TLB flush IPIs (RW) - the lapic page (RW, uncachable) - per CPU: one page for the TSS+GDT (RO) and one page for trampoline stacks (RW) When a syscall, trap, or interrupt takes a CPU from userspace to kernel the trampoline code switches page tables, switches stacks to the thread's real kernel stack, then copies over the necessary bits from the trampoline stack. On return to userspace the opposite occurs: recreate the iretq frame on the trampoline stack, switch stack, switch page tables, and return to userspace. mlarkin@ implemented the pmap bits and did 90% of the debugging, diagnosing issues on MP in particular, and drove the final push to completion. Many rounds of testing by naddy@, sthen@, and others Thanks to Alex Wilson from Joyent for early discussions about trampolines and their data requirements. Per-CPU page layout mostly inspired by DragonFlyBSD. Changes by: bluhm@cvs.openbsd.org 2018/02/22 13:18:59 Log message: The GNU assembler does not understand 1ULL, so replace the constant with 1. Then it compiles with gcc, sign and size do not matter here. Changes by: bluhm@cvs.openbsd.org 2018/02/22 13:27:14 Log message: The compile time assertion for cpu info did not work with gcc. Rephrase the condition in a way that both gcc and clang accept it. Changes by: guenther@cvs.openbsd.org 2018/02/22 13:36:40 Log message: Set the PG_G (global) bit on the special page table entries that are shared between the u-k and u+k tables, because they're actually in all tables. OpenBSD 6.2 errata 009 ``` syspatch iXsystems a2k18 Hackathon Report: Ken Westerback on dhclient and more Ken Westerback (krw@) has sent in the first report from the (recently concluded) a2k18 hackathon: YYZ -> YVR -> MEL -> ZQN -> CHC -> DUD -> WLG -> AKL -> SYD -> BNE -> YVR -> YYZ For those of you who don’t speak Airport code: Toronto -> Vancouver -> Melbourne -> Queenstown -> Christchurch -> Dunedin Then: Dunedin -> Wellington -> Auckland -> Sydney -> Brisbane -> Vancouver -> Toronto ``` Whew. Once in Dunedin the hacking commenced. The background was a regular tick of new meltdown diffs to test in addition to whatever work one was actually engaged in. I was lucky (?) in that none of the problems with the various versions cropped up on my laptop. ``` ``` I worked with rpe@ and tb@ to make the install script create the 'correct' FQDN when dhclient was involved. I worked with tb@ on some code cleanup in various bits of the base. dhclient(8) got some nice cleanup, further pruning/improving log messages in particular. In addition the oddball -q option was flipped into the more normal -v. I.e. be quiet by default and verbose on request. More substantially the use of recorded leases was made less intrusive by avoiding continual reconfiguration of the interface with the same information. 
The 'request', 'require' and 'ignore' dhclient.conf(5) statements were changed so they are cumulative, making it easier to build longer lists of affected options. I tweaked softraid(4) to remove a handrolled version of duid_format(). I sprinkled a couple of M_WAITOK into amd64 and i386 mpbios to document that there is really no need to check for NULL being returned from some malloc() calls. I continued to help test the new filesystem quiescing logic that deraadt@ committed during the hackathon. I only locked myself out of my room once! Fueled by the excellent coffee from local institutions The Good Earth Cafe and The Good Oil Cafe, and the excellent hacking facilities and accommodations at the University of Otago, it was another enjoyable and productive hackathon south of the equator. And I even saw penguins. Thanks to Jim Cheetham and the support from the project and the OpenBSD Foundation that made it all possible ``` Poetic License I found this when going through old documents. It looks like I wrote it and never posted it. Perhaps I didn't consider it finished at the time. But looking at it now, I think it's good enough to share. It's a redrafting of the BSD licence, in poetic form. Maybe I had plans to do other licences one day; I can't remember. I've interleaved it with the original license text so you can see how true, or otherwise, I've been to it. Enjoy :-) ``` Copyright (c) , All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: ``` You may redistribute and use – as source or binary, as you choose, and with some changes or without – this software; let there be no doubt. But you must meet conditions three, if in compliance you wish to be. 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of the nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. The first is obvious, of course – To keep this text within the source. The second is for binaries Place in the docs a copy, please. A moral lesson from this ode – Don't strip the copyright on code. The third applies when you promote: You must not take, from us who wrote, our names and make it seem as true we like or love your version too. (Unless, of course, you contact us And get our written assensus.) THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
One final point to be laid out (You must forgive my need to shout): THERE IS NO WARRANTY FOR THIS WHATEVER THING MAY GO AMISS. EXPRESS, IMPLIED, IT’S ALL THE SAME – RESPONSIBILITY DISCLAIMED. WE ARE NOT LIABLE FOR LOSS NO MATTER HOW INCURRED THE COST THE TYPE OR STYLE OF DAMAGE DONE WHATE’ER THE LEGAL THEORY SPUN. THIS STILL REMAINS AS TRUE IF YOU INFORM US WHAT YOU PLAN TO DO. When all is told, we sum up thus – Do what you like, just don’t sue us. Beastie Bits AsiaBSDCon 2018 Videos The January/February 2018 FreeBSD Journal is Here Announcing the pkgsrc-2017Q4 release (2018-01-04) BSD Hamburg Event ZFS User conference Unreal Engine 4 Being Brought Natively To FreeBSD By Independent Developer Tarsnap ad Feedback/Questions Philippe - I heart FreeBSD and other questions Cyrus - BSD Now is excellent Architect - Combined Feedback Dale - ZFS on Linux moving to ZFS on FreeBSD Tommi - New BUG in Finland Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv

Advance Tech Podcast
James Nugent

Advance Tech Podcast

Play Episode Listen Later Nov 14, 2017 61:52


This week we interview James Nugent, a software developer from Bath, England. He currently works in engineering at Joyent, an open source public cloud company recently acquired by Samsung Electronics. Previously, James was a core contributor at HashiCorp, building operations tooling, and at Event Store LLP, which produces an open source stream database with a built-in projections system. For a comprehensive breakdown of the episode and show links, take a look at the show notes below or click the episode title.

BSD Now
219: We love the ARC

BSD Now

Play Episode Listen Later Nov 8, 2017 130:29


Papers we love: ARC by Bryan Cantrill, SSD caching adventures with ZFS, OpenBSD full disk encryption setup, and a Perl5 Slack Syslog BSD daemon. This episode was brought to you by Headlines Papers We Love: ARC: A Self-Tuning, Low Overhead Replacement Cache (https://www.youtube.com/watch?v=F8sZRBdmqc0&feature=youtu.be) Ever wondered how the ZFS ARC (Adaptive Replacement Cache) works? How about if Bryan Cantrill presented the original paper on its design? Today is that day. Slides (https://www.slideshare.net/bcantrill/papers-we-love-arc-after-dark) It starts by looking back at a fundamental paper from the 40s where the architecture of general-purpose computers is first laid out. The main idea is the description of memory hierarchies, where you have a small amount of very fast memory, then the next level is slower but larger, and on and on. As we look at the various L1, L2, and L3 caches on a CPU, then RAM, then flash, then spinning disks, this still holds true today. The paper then does a survey of the existing caching policies and tries to explain the issues with each. This includes 'MIN', the theoretically optimal policy: it requires future knowledge, but it is useful for setting the upper bound, i.e. the best we could possibly do. The paper ends up showing that the ARC can end up being better than manually trying to pick the best number for the workload, because it adapts as the workload changes. At about 1:25 into the video, Bryan starts talking about the practical implementation of the ARC in ZFS, and some challenges they have run into recently at Joyent. A great discussion about some of the problems when ZFS needs to shrink the ARC. Not all of it applies 1:1 to FreeBSD because the kernel and the kmem implementation are different in a number of ways. There were some interesting questions asked at the end as well. *** How do I use man pages to learn how to use commands? (https://unix.stackexchange.com/a/193837) nwildner on StackExchange has a very thorough answer to the question of how to interpret man pages to understand complicated commands (xargs in this case, but not specifically). Have in mind what you want to do. When doing your research about xargs you did it for a purpose, right? You had a specific need: reading standard output and executing commands based on that output. But what if I don't know which command I want? Use man -k or apropos (they are equivalent). If I don't know how to find a file: man -k file | grep search. Read the descriptions and find the one that best fits your needs. apropos works with regular expressions by default (man apropos, read the description and find out what -r does), and in this example I'm looking for every manpage where the description starts with "report". Always read the DESCRIPTION before starting. Take the time to read the description. By just reading the description of the xargs command we learn that: xargs reads from STDIN and executes the command needed. This also means that you will need some knowledge of how standard input works, and how to manipulate it through pipes to chain commands. The default behavior is to act like /bin/echo. This gives you a little tip: if you need to chain more than one xargs, you don't need to use echo to print. We have also learned that unix filenames can contain blanks and newlines, that this could be a problem, and that the -0 argument is a way to prevent things from exploding by using null character separators. 
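To make those two tips concrete, here is a quick shell sketch (the search terms and paths are only examples):

```
# When you don't know which command you need, search the manpage
# descriptions; apropos and man -k are equivalent:
man -k file | grep -i search

# Filenames containing blanks or newlines are handled safely by pairing
# find's -print0 with xargs' -0, so names are NUL-separated:
find /var/log -name '*.gz' -print0 | xargs -0 ls -l
```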
The description warns you that the command being used as input needs to support this feature too, and that GNU find supports it. Great. We use a lot of find with xargs. xargs will stop if exit status 255 is reached. Some descriptions are very short, and that is generally because the software works in a very simple way. Don't even think of skipping this part of the manpage ;) Other things to pay attention to... You know that you can search for files using find. There are a ton of options, and if you only look at the SYNOPSIS, you will get overwhelmed by those. It's just the tip of the iceberg. Excluding NAME, SYNOPSIS, and DESCRIPTION, you will have the following sections: When this method will not work so well... + Tips that apply to all commands Some options, mnemonics and "syntax style" travel through all commands, buying you some time by not having to open the manpage at all. Those are learned by practice and the most common are: Generally, -v means verbose. -vvv is a "very very verbose" variation in some software. Following the POSIX standard, one-dash arguments can generally be stacked. Example: tar -xzvf, cp -Rv. Generally -R and/or -r means recursive. Almost all commands have a brief help with the --help option. --version shows the version of the software. -p, on copy or move utilities, means "preserve permissions". -y means YES, or "proceed without confirmation" in most cases. Default values of commands. In the pager chunk of this answer, we saw that less -is is the pager of man. The default behavior of commands is not always shown in a separate section of the manpage, or in the section placed nearest the top. You will have to read the options to find out defaults, or if you are lucky, typing /pager will lead you to that info. This also requires you to know the concept of the pager (the software that scrolls the manpage), and that is something you will only acquire after reading lots of manpages. And what about the SYNOPSIS syntax? After getting all the information needed to execute the command, you can combine options, option-arguments and operands inline to get your job done. Overview of concepts: Options are the switches that dictate a command's behavior. "Do this" "don't do this" or "act this way". Often called switches. Check out the full answer and see if it helps you better grasp the meaning of a man page and thus the command. *** My adventure into SSD caching with ZFS (Home NAS) (https://robertputt.co.uk/my-adventure-into-ssd-caching-with-zfs-home-nas.html) Robert Putt has written about his adventure using SSDs for caching with ZFS on his home NAS. Recently I decided to throw away my old defunct 2009 MacBook Pro which was rotting in my cupboard, and I decided to retrieve the only useful part before doing so: the 80GB Intel SSD I had installed a few years earlier. Initially I thought about simply adding it to my desktop as a bit of extra space, but in 2017 80GB really wasn't worth it, and then I had a brainwave… Let's see if we can squeeze some additional performance out of my HP Microserver Gen8 NAS running ZFS by installing it as a cache disk. I installed the SSD in the cdrom tray of the Microserver using a floppy disk power to SATA power converter and a SATA cable; unfortunately it seems the CD ROM SATA port on the motherboard is only a 3gbps port, although this didn't matter so much as it was an older 3gbps SSD anyway. 
Next I booted up the machine and to my surprise the disk was not found in my FreeBSD install; then I realised that the SATA port for the CD drive is actually provided by the RAID controller, so I rebooted into intelligent provisioning and added an additional RAID0 array with just the 1 disk to act as my cache. In fact all of the disks in this machine are individual RAID0 arrays, so it looks like just a bunch of disks (JBOD), as ZFS offers additional functionality over normal RAID (mainly scrubbing, deduplication and compression). Configuration Let's have a look at the zpool before adding the cache drive to make sure there are no errors or ugliness: Now let's prep the drive for use in the zpool using gpart. I want to split the SSD into two separate partitions, one for L2ARC (read caching) and one for ZIL (write caching). I have decided to split the disk into 20GB for ZIL and 50GB for L2ARC. Be warned: using 1 SSD like this is considered unsafe because it is a single point of failure in terms of delayed writes (a redundant configuration with 2 SSDs would be more appropriate), and the heavy write cycles on the SSD from the ZIL are likely to kill it over time. Now it's time to see if adding the cache has made much of a difference. I suspect not, as my Home NAS sucks; it is a HP Microserver Gen8 with the crappy Celeron CPU and only 4GB RAM. Anyway, let's test it and find out. First off let's throw fio at the mount point for this zpool and see what happens, both with the ZIL and L2ARC enabled and disabled. Observations Ok, so the initial result is a little disappointing, but hardly unexpected: my NAS sucks and there are lots of bottlenecks (CPU, memory and the fact only 2 of the SATA ports are 6gbps). There is no real difference performance-wise between the results; the IOPS, bandwidth and latency appear very similar. However, let's bear in mind fio is a pretty hardcore disk benchmark utility; how about some real world use cases? Next I decided to test a few typical file transactions that this NAS is used for: Samba shares to my workstation. For the first test I wanted to test reading a 3GB file over the network with both the cache enabled and disabled. I would run this multiple times to ensure the data is hot in the L2ARC and to ensure the test is somewhat repeatable; the network itself is an uncongested 1gbit link and I am copying onto the secondary SSD in my workstation. The dataset for these tests has compression and deduplication disabled. Samba Read Test Not bad: once the data becomes hot in the L2ARC, cached reads appear to gain a decent advantage compared to reading from the disk directly. How does it perform when writing the same file back across the network using the ZIL vs no ZIL? Samba Write Test Another good result in the real world test; this certainly helps the write transfer speed. However I do wonder what would happen if you filled the ZIL transferring a very large file, though this is unlikely with my use case as I typically only deal with a couple of files of several hundred megabytes at any given time, so a 20GB ZIL should suit me reasonably well. Are ZIL and L2ARC worth it? I would imagine that with a big beefy ZFS server running in a company somewhere, with a large disk pool and lots of users, multiple enterprise-level SSDs for ZIL and L2ARC would be well worth the investment; however at home I am not so sure. Yes, I did see an increase in read speeds with cached data and a general increase in write speeds; however it is use case dependent. 
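For reference, the partitioning and cache-attach steps described above come down to something like the following sketch. The device name da1, the partition labels, and the pool name tank are placeholders for illustration, not the post's actual values:

```
# Create GPT partitions on the SSD: ~20GB for the ZIL (log) and ~50GB
# for L2ARC (cache):
gpart create -s gpt da1
gpart add -t freebsd-zfs -s 20G -l zil0 da1
gpart add -t freebsd-zfs -s 50G -l l2arc0 da1

# Attach them to the existing pool as log and cache vdevs:
zpool add tank log gpt/zil0
zpool add tank cache gpt/l2arc0

# Verify the new vdevs show up under "logs" and "cache":
zpool status tank
```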
In my use case I rarely access the same file frequently; my NAS primarily serves as a backup and for archived data, and although the write speeds are cool I am not sure it's a deal breaker. If I built a new home NAS today I'd probably concentrate the budget on a better CPU, more RAM (for ARC cache) and more disks. However if I had a use case where I frequently accessed the same files and needed to do so in a faster fashion, then yes, I'd probably invest in an SSD for caching. I think if you have a spare SSD lying around and you want something fun to do with it, sure, chuck it in your ZFS based NAS as a cache mechanism. If you were planning on buying an SSD for caching, then I'd really consider your needs and decide if the money could be spent on alternative stuff which would improve your experience with your NAS. I know my NAS would benefit more from an extra stick of RAM and a more powerful CPU, but as a quick evening project with some parts I had hanging around, adding some SSD cache was worth a go. More Viewer Interview Questions for Allan News Roundup Setup OpenBSD 6.2 with Full Disk Encryption (https://blog.cagedmonster.net/setup-openbsd-with-full-disk-encryption/) Here is a quick way to set up (in 7 steps) OpenBSD 6.2 with encryption of the filesystem. First step: Boot and start the installation: (I)nstall: I Keyboard Layout: ENTER (I'm French so in my case I took the FR layout) Leave the installer with: ! Second step: Prepare your disk for encryption. Using an SSD, my disk is named sd0; the name may vary, for example: wd0. Initiating the disk: Configure your volume: Now we'll use bioctl to encrypt the partition we created, in this case sd0a (disk sd0 + partition « a »). Enter your passphrase. Third step: Let's resume the OpenBSD installer. We follow the install procedure. Fourth step: Partitioning of the encrypted volume. We select our new volume, in this case: sd1 The whole disk will be used: W(hole) Let's create our partitions: NB: You are more than welcome to create multiple partitions for your system. Fifth step: System installation It's time to choose how we'll install our system (network install by http in my case) Sixth step: Finalize the installation. Last step: Reboot and start your system. Enter your passphrase. Welcome to OpenBSD 6.2 with a fully encrypted file system. Optional: Disable the swap encryption. The swap is actually part of the encrypted filesystem, so we don't need OpenBSD to encrypt it again; sysctl gives us this possibility. Step-by-Step FreeBSD installation with ZFS and Full Disk Encryption (https://blog.cagedmonster.net/step-by-step-freebsd-installation-with-full-disk-encryption/) 1. What do I need? For this tutorial, the installation has been done on an Intel Core i7 - AMD64 architecture. On a USB key, you would probably use this image: ftp://ftp.freebsd.org/pub/FreeBSD/releases/amd64/amd64/ISO-IMAGES/11.1/FreeBSD-11.1-RELEASE-amd64-mini-memstick.img If you can't do a network installation, you'd better use this image: ftp://ftp.freebsd.org/pub/FreeBSD/releases/amd64/amd64/ISO-IMAGES/11.1/FreeBSD-11.1-RELEASE-amd64-memstick.img You can write the image file to your USB device (replace XXXX with the name of your device) using dd: # dd if=FreeBSD-11.1-RELEASE-amd64-mini-memstick.img of=/dev/XXXX bs=1m 2. Boot and install: Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F1.png) 3. 
Configure your keyboard layout: Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F2.png) & Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F3.png) 4. Hostname and system components configuration: Set the name of your machine: Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F4.png) What components do you want to install? Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F5.png) 5. Network configuration: Select the network interface you want to configure. Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F6.png) First, we configure our IPv4 network. I used a static address so you can see how it works, but you can use DHCP for an automated configuration; it depends on what you want to do with your system (desktop/server). Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F7.png) & Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F7-1.png) & Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F8.png) IPv6 network configuration. Same as for IPv4; you can use SLAAC for an automated configuration. Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F9.png) & Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F10-1.png) & Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F10-2.png) Here, you can configure your DNS servers; I used the Google DNS servers, so you can use them too if needed. Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F11.png) 6. Select the server you want to use for the installation: I always use the IPv6 mirror to ensure that my IPv6 network configuration is good. Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F12.png) 7. Disk configuration: As we want to do an easy full disk encryption, we'll use ZFS. Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F13.png) Make sure to select the disk encryption: Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F14.png) Launch the disk configuration: Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F15.png) Here everything is normal; you have to select the disk you'll use: Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F16.png) I have only one SSD disk, named da0: Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F17.png) Last chance before erasing your disk: Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F18.png) Time to choose the password you'll use to start your system: Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F19.png) & Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F20.png) & Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F21.png) 8. Last steps to finish the installation: The installer will download what you need and what you selected previously (ports, src, etc.) to create your system: Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F22.png) 8.1. Root password: Enter your root password: Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F22-1.png) 8.2. 
Time and date: Set your timezone, in my case: Europe/France Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F22-2.png) & Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F23.png) & Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F23-1.png) Make sure the date and time are good, or you can change them: Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F24.png) & Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F25.png) 8.3. Services: Select the services you'll use at system startup, depending again on what you want to do. In many cases powerd and ntpd will be useful, and sshd if you're planning on using FreeBSD as a server. Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F26.png) 8.4. Security: Security options you want to enable. You'll still be able to change them after the installation with sysctl. Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F26-1.png) 8.5. Additional user: Create an unprivileged system user: Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F26-2.png) Make sure your user is in the wheel group so they can use the su command. Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F26-3.png) & Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F26-4.png) 8.6. The end: End of your configuration; you can still make some modifications if you want: Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F26-5.png) & Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F26-6.png) & Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F26-7.png) 9. First boot: Enter the passphrase you chose previously: Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F27.png) & Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F28.png) & Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F29.png) Welcome to FreeBSD 11.1 with full disk encryption! *** The anatomy of ldd program on OpenBSD (http://nanxiao.me/en/the-anatomy-of-ldd-program-on-openbsd/) In the past week, I read the ldd (https://github.com/openbsd/src/blob/master/libexec/ld.so/ldd/ldd.c) source code on OpenBSD to get a better understanding of how it works. And this post should also be a reference for other *NIX OSs. ELF (https://en.wikipedia.org/wiki/Executable_and_Linkable_Format) files are divided into 4 categories: relocatable, executable, shared, and core. Only executable and shared object files may have dynamic object dependencies, so ldd only checks these 2 kinds of ELF file: (1) Executable. ldd in fact leverages the LD_TRACE_LOADED_OBJECTS environment variable, and the code is as follows: if (setenv("LD_TRACE_LOADED_OBJECTS", "true", 1) < 0) err(1, "setenv(LD_TRACE_LOADED_OBJECTS)"); When LD_TRACE_LOADED_OBJECTS is set to 1 or true, running an executable file will show the shared objects needed instead of running it, so you don't even need ldd to check an executable file. See the following outputs: $ /usr/bin/ldd usage: ldd program ... $ LD_TRACE_LOADED_OBJECTS=1 /usr/bin/ldd Start End Type Open Ref GrpRef Name 00000b6ac6e00000 00000b6ac7003000 exe 1 0 0 /usr/bin/ldd 00000b6dbc96c000 00000b6dbcc38000 rlib 0 1 0 /usr/lib/libc.so.89.3 00000b6d6ad00000 00000b6d6ad00000 rtld 0 1 0 /usr/libexec/ld.so (2) Shared object. 
The code to print the dependencies of a shared object is as follows: if (ehdr.e_type == ET_DYN && !interp) { if (realpath(name, buf) == NULL) { printf("realpath(%s): %s", name, strerror(errno)); fflush(stdout); _exit(1); } dlhandle = dlopen(buf, RTLD_TRACE); if (dlhandle == NULL) { printf("%s\n", dlerror()); fflush(stdout); _exit(1); } _exit(0); } Why is the condition for checking whether an ELF file is a shared object written like this: if (ehdr.e_type == ET_DYN && !interp) { ...... } That's because the file type of a position-independent executable (PIE) is the same as that of a shared object, but a PIE normally contains an interpreter program header, since it needs the dynamic linker to load it, while a shared object lacks one (refer to this article). So the above condition will filter out PIE files. The dlopen(buf, RTLD_TRACE) call is used to print the dynamic object information. And the actual code is like this: if (_dl_traceld) { _dl_show_objects(); _dl_unload_shlib(object); _dl_exit(0); } In fact, you can also implement a simple application which outputs dynamic object information for a shared object yourself: #include <dlfcn.h> int main(int argc, char **argv) { dlopen(argv[1], RTLD_TRACE); return 0; } Compile and use it to analyze /usr/lib/libssl.so.43.2: $ cc lddshared.c $ ./a.out /usr/lib/libssl.so.43.2 Start End Type Open Ref GrpRef Name 000010e2df1c5000 000010e2df41a000 dlib 1 0 0 /usr/lib/libssl.so.43.2 000010e311e3f000 000010e312209000 rlib 0 1 0 /usr/lib/libcrypto.so.41.1 The same as using ldd directly: $ ldd /usr/lib/libssl.so.43.2 /usr/lib/libssl.so.43.2: Start End Type Open Ref GrpRef Name 00001d9ffef08000 00001d9fff15d000 dlib 1 0 0 /usr/lib/libssl.so.43.2 00001d9ff1431000 00001d9ff17fb000 rlib 0 1 0 /usr/lib/libcrypto.so.41.1 Through studying the ldd source code, I also picked up many by-products: knowledge of the ELF format, linking and loading, etc. So diving into code is a really good way to learn *NIX more deeply! Perl5 Slack Syslog BSD daemon (https://clinetworking.wordpress.com/2017/10/13/perl5-slack-syslog-bsd-daemon/) So I have been working on my little Perl daemon for a week now. It is a simple syslog daemon that listens on port 514 for incoming messages. It listens on a port so it can process log messages from my consumer Linux router as well as the messages from my server. Messages that are above alert are sent, as are messages that match the regex of SSH or DHCP (I want to keep track of new connections to my wifi). The rest of the messages are not sent to slack but appended to a log file. This is very handy as I can get access to info like failed ssh logins, disk failures, and new devices connecting to the network all on my Android phone when I am not home. Screenshot (https://clinetworking.files.wordpress.com/2017/10/screenshot_2017-10-13-23-00-26.png) The situation arose today that the internet went down, and I thought to myself: what would happen to all my important syslog messages when they couldn't be sent? Before, the script only ran an eval block on the botsend() function. The error was returned and handled, but nothing was done, and the unsent message was discarded. So I added a function that appends unsent messages to an array that is later sent when the server is not busy sending messages to slack. Slack has a limit of one message per second. The new addition works well and means that if the internet fails, my server will store these messages in memory and resend them at a rate of one message per second when the internet connectivity returns. 
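Since anything handed to syslog gets routed through the daemon, logger(1) makes for an easy smoke test. A small sketch (the priorities and message text are made up for illustration; -h and -P are the FreeBSD logger flags for a remote host and port):

```
# Log locally at "alert" severity; anything at or above alert should be
# forwarded to Slack by the daemon:
logger -p daemon.alert "test: Slack forwarding path"

# Or aim straight at the daemon's UDP listener on port 514:
logger -h 127.0.0.1 -P 514 "sshd test: should match the SSH regex"
```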
It currently sends the newest ones first, but I am not sure if this is a bug or a feature at this point! It currently works with my Linux based WiFi router and my FreeBSD server. It is easy to scale, as all you need to do is send messages to syslog to get them sent to slack. You could send CPU temp, logged in users, etc. There is a github page: https://github.com/wilyarti/slackbot Lscpu for OpenBSD/FreeBSD (http://nanxiao.me/en/lscpu-for-openbsdfreebsd/) Github Link (https://github.com/NanXiao/lscpu) There is a neat command, lscpu, which is very handy for displaying CPU information on a GNU/Linux OS: $ lscpu Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 32 On-line CPU(s) list: 0-31 Thread(s) per core: 2 Core(s) per socket: 8 Socket(s): 2 But unfortunately, the BSD OSs lack this command; maybe one reason is that lscpu relies heavily on the /proc file system, which the BSDs don't provide :-). Take OpenBSD as an example: if I want to know CPU information, dmesg should be one choice: $ dmesg | grep -i cpu cpu0 at mainbus0: apid 0 (boot processor) cpu0: Intel(R) Core(TM)2 Duo CPU P8700 @ 2.53GHz, 2527.35 MHz cpu0: FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,DTES64,MWAIT,DS-CPL,VMX,SMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,SSE4.1,XSAVE,NXE,LONG,LAHF,PERF,SENSOR cpu0: 3MB 64b/line 8-way L2 cache cpu0: apic clock running at 266MHz cpu0: mwait min=64, max=64, C-substates=0.2.2.2.2.1.3, IBE But the output feels messy to me, not very clear. As for dmidecode, it used to be another option, but now it can't work out of the box because it accesses /dev/mem, which for security reasons OpenBSD doesn't allow by default (you can refer to this discussion): $ ./dmidecode $ dmidecode 3.1 Scanning /dev/mem for entry point. /dev/mem: Operation not permitted Based on the above situation, I wanted a dedicated command for showing CPU information for my BSD box. So in the past 2 weeks, I developed a lscpu program for OpenBSD/FreeBSD, or more accurately, OpenBSD/FreeBSD on the x86 architecture, since I only have some Intel processors at hand. The application gets CPU metrics from 2 sources: (1) sysctl functions. The BSD OSs provide the sysctl interface which I can use to get general CPU particulars, such as how many CPUs the system contains, the byte order of the CPU, etc. (2) The CPUID instruction. For the x86 architecture, the CPUID instruction can obtain very detailed CPU information. This coding work is a little tedious and error-prone, not only because I need to reference both Intel and AMD specifications (these 2 vendors have minor distinctions), but also because I need to parse the bits of register values. The code is here (https://github.com/NanXiao/lscpu), and if you run OpenBSD/FreeBSD on x86 processors, please try it. It would be even better if you could give some feedback or report issues; I would appreciate it very much. In the future, if I have access to other CPUs, such as ARM or SPARC64, maybe I will enrich this small program. *** Beastie Bits OpenBSD Porting Workshop - Brian Callahan will be running an OpenBSD porting workshop in NYC for NYC*BUG on December 6, 2017. 
(http://daemonforums.org/showthread.php?t=10429) Learn to tame OpenBSD quickly (http://www.openbsdjumpstart.org/#/) Detect the operating system using UDP stack corner cases (https://gist.github.com/sortie/94b302dd383df19237d1a04969f1a42b) *** Feedback/Questions Awesome Mike - ZFS Questions (http://dpaste.com/1H22BND#wrap) Michael - Expanding a file server with only one hard drive with ZFS (http://dpaste.com/1JRJ6T9) - information based on Allan's IRC response (http://dpaste.com/36M7M3E) Brian - Optimizing ZFS for a single disk (http://dpaste.com/3X0GXJR#wrap) ***

BSD Now
212: The Solaris Eclipse

BSD Now

Play Episode Listen Later Sep 20, 2017 100:57


We recap vBSDcon, give you the story behind a PF EN, reminisce about Solaris memories, and show you how to configure different DEs on FreeBSD. This episode was brought to you by Headlines [vBSDCon] vBSDCon was held September 7 - 9th. We recorded this only a few days after getting home from this great event. Things started on Wednesday night, as attendees of the Thursday developer summit arrived and broke into smallish groups for disorganized dinner and drinks. We then held an unofficial hacker lounge in a medium sized seating area, working and talking until we all decided that the developer summit started awfully early tomorrow. The developer summit started with a light breakfast, and then we dove right in. Ed Maste started us off, and then Glen Barber gave a presentation about lessons learned from the 11.1-RELEASE cycle, comparing it to previous releases. 11.1 was released on time, and was one of the best releases so far. The slides are linked on the DevSummit wiki page (https://wiki.freebsd.org/DevSummit/20170907). The group then jumped into hackmd.io, a collaborative note taking application, and listed various works in progress and upstreaming efforts. Then we listed wants and needs for the 12.0 release. After lunch we broke into pairs of working groups, with additional space for smaller meetings. The first pair were ZFS and Toolchain, followed by a break and then a discussion of IFLIB and network drivers in general. After another break, the last groups of the day met: pkgbase and secure boot. Then it was time for the vBSDCon reception dinner. This standing dinner was a great way to meet new people, and for attendees to mingle and socialize. The official hacking lounge Thursday night was busy, and included some great storytelling, along with a bunch of work getting done. It was very encouraging to watch a struggling new developer getting help from a seasoned veteran. Watching the new developer's eyes light up as the new information filled in gaps and they now understood so much more than just a few minutes before, and then watching them race off to continue working, was inspirational, and reminded me why these conferences are so important. The hacker lounge shut down relatively early by BSD conference standards, but the conference proper started at 8:45 sharp the next morning, so it made sense. Friday saw a string of good presentations; I think my favourite was Jonathan Anderson's talk on Oblivious sandboxing. Jonathan is a very energetic speaker, and was able to keep everyone focused even during relatively complicated explanations. Friday night I went for dinner at 'Big Bowl', a stir-fry bar, with a largish group of developers and users of both FreeBSD and OpenBSD. The discussions were interesting and varied, and the food was excellent. Benedict had dinner with JT and some other folks from iXsystems. Friday night the hacker lounge was so large we took over a bigger room (it had better WiFi too). Saturday featured more great talks. The talk I was most interested in was from Eric McCorkle, who did the EFI version of my GELIBoot work. I had reviewed some of the work, but it was interesting to hear the story of how it happened, and to see the parallels with my own story. My favourite speaker was Paul Vixie, who gave a very interesting talk about the gets() function in libc. gets() was declared unsafe before the FreeBSD project even started. The original import of the CSRG code into FreeBSD includes the compile-time and run-time warnings against using gets(). 
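If you want to see those warnings for yourself, here is a hypothetical session (the file name is made up, and the exact warning text varies by toolchain version):

```
# A tiny program that calls gets(3):
cat > getsdemo.c <<'EOF'
#include <stdio.h>
int main(void) { char buf[16]; gets(buf); return 0; }
EOF

# Linking this on FreeBSD has historically produced a warning along the
# lines of: "warning: this program uses gets(), which is unsafe."
cc -o getsdemo getsdemo.c
```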
OpenBSD removed gets() in version 5.6, in 2014. Following Paul's presentation, various patches were raised to either cause use of gets() to crash the program, or to remove gets() entirely, causing such programs to fail to link. The last talk before the closing was Benedict's BSD Systems Management with Ansible (https://people.freebsd.org/~bcr/talks/vBSDcon2017_Ansible.pdf). Shortly after, Allan won a MacBook Pro by correctly guessing the number of components in a jar that was standing next to the registration desk (Benedict was way off, but had a good laugh about the unlikely future Apple user). Saturday night ended with the Conference Social, an excellent dinner with more great conversations. On Sunday morning, a number of us went to the Smithsonian Air and Space Museum site near the airport, and saw a Concorde, an SR-71, and the space shuttle Discovery, among many other exhibits. Check out the full photo album by JT (https://t.co/KRmSNzUSus), our producer. Thanks to all the sponsors for vBSDcon and all the organizers from Verisign, who made it such a great event. *** The story behind FreeBSD-EN-17.08.pf (https://www.sigsegv.be//blog/freebsd/FreeBSD-EN-17.08.pf) After our previous deep dive on a bug in episode 209, Kristof Provost, the maintainer of pf on FreeBSD (he is going to hate me for saying that), has written the story behind a recent ERRATA notice for FreeBSD. First things first, so I have to point out that I think Allan misremembered things. The heroic debugging story is PR 219251, which I'll try to write about later. FreeBSD-EN-17:08.pf is an issue that affected some FreeBSD 11.x systems, where FreeBSD would panic at startup. There were no reports for CURRENT. There's very little to go on here, but we do know the cause of the panic ("integer divide fault"), and that the current process was "pf purge". The pf purge thread is part of the pf housekeeping infrastructure. It's a housekeeping kernel thread which cleans up things like old states and expired fragments. The lack of mention of pf functions in the backtrace is a hint unto itself. It suggests that the error is probably directly in pf_purge_thread(). It might also be in one of the static functions it calls, because compilers often just inline those so they don't generate stack frames. Remember that the problem is an "integer divide fault". How can integer divisions be a problem? Well, you can try to divide by zero. The most obvious suspect for this is this code: idx = pf_purge_expired_states(idx, pf_hashmask / (V_pf_default_rule.timeout[PFTM_INTERVAL] * 10)); However, this variable is both correctly initialised (in pf_attach_vnet()) and can only be modified through the DIOCSETTIMEOUT ioctl() call, and that one checks for zero. At that point I had no idea how this could happen, but because the problem did not affect CURRENT I looked at the commit history and found this commit from Luiz Otavio O Souza: Do not run the pf purge thread while the VNET variables are not initialized, this can cause a divide by zero (if the VNET initialization takes to long to complete). Obtained from: pfSense Sponsored by: Rubicon Communications, LLC (Netgate) That sounds very familiar, and indeed, applying the patch fixed the problem. Luiz explained it well: it's possible to use V_pf_default_rule.timeout before it's initialised, which caused this panic. To me, this reaffirms the importance of writing good commit messages: because Luiz mentioned both the pf purge thread and the division by zero, I was easily able to find the relevant commit. 
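For context, the timeout that ends up as the divisor is an ordinary pf tunable you can inspect and set from userland; a brief sketch (the value in the pf.conf line is just an example):

```
# Show pf's timeout table; "interval" is the purge interval that feeds
# the division in pf_purge_thread():
pfctl -s timeouts | grep interval

# It can be set from pf.conf (and is validated as non-zero when changed
# via the DIOCSETTIMEOUT ioctl):
#   set timeout interval 10
```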
If I hadn't found it, this fix would have taken a lot longer. Next week we'll look at the more interesting story mentioned above, which I managed to nag Kristof into writing. *** The sudden death and eternal life of Solaris (http://dtrace.org/blogs/bmc/2017/09/04/the-sudden-death-and-eternal-life-of-solaris/) A blog post from Bryan Cantrill about the death of Solaris As had been rumored for a while, Oracle effectively killed Solaris. When I first saw this, I had assumed that this was merely a deep cut, but in talking to Solaris engineers still at Oracle, it is clearly much more than that. It is a cut so deep as to be fatal: the core Solaris engineering organization lost on the order of 90% of its people, including essentially all management. Of note, among the engineers I have spoken with, I heard two things repeatedly: “this is the end” and (from those who managed to survive Friday) “I wish I had been laid off.” Gone is any of the optimism (however tepid) that I have heard over the years — and embarrassed apologies for Oracle's behavior have been replaced with dismay about the clumsiness, ineptitude and callousness with which this final cut was handled. In particular, that employees who had given their careers to the company were told of their termination via a pre-recorded call — “robo-RIF'd” in the words of one employee — is both despicable and cowardly. To their credit, the engineers affected saw themselves as Sun to the end: they stayed to solve hard, interesting problems and out of allegiance to one another — not out of any loyalty to the broader Oracle. Oracle didn't deserve them and now it doesn't have them — they have been liberated, if in a depraved act of corporate violence. Assuming that this is indeed the end of Solaris (and it certainly looks that way), it offers a time for reflection. Certainly, the demise of Solaris is at one level not surprising, but on the other hand, its very suddenness highlights the degree to which proprietary software can suffer by the vicissitudes of corporate capriciousness. Vulnerable to executive whims, shareholder demands, and a fickle public, organizations can simply change direction by fiat. And because — in the words of the late, great Roger Faulkner — “it is easier to destroy than to create,” these changes in direction can have lasting effect when they mean stopping (or even suspending!) work on a project. Indeed, any engineer in any domain with sufficient longevity will have one (or many!) stories of exciting projects being cancelled by foolhardy and myopic management. For software, though, these cancellations can be particularly gutting because (in the proprietary world, anyway) so many of the details of software are carefully hidden from the users of the product — and much of the innovation of a cancelled software project will likely die with the project, living only in the oral tradition of the engineers who knew it. Worse, in the long run — to paraphrase Keynes — proprietary software projects are all dead. However ubiquitous at their height, this lonely fate awaits all proprietary software. There is, of course, another way — and befitting its idiosyncratic life and death, Solaris shows us this path too: software can be open source. In stark contrast to proprietary software, open source does not — cannot, even — die. Yes, it can be disused or rusty or fusty, but as long as anyone is interested in it at all, it lives and breathes. 
Even should the interest wane to nothing, open source software survives still: its life as machine may be suspended, but it becomes as literature, waiting to be discovered by a future generation. That is, while proprietary software can die in an instant, open source software perpetually endures by its nature — and thrives by the strength of its communities. Just as the existence of proprietary software can be surprisingly brittle, open source communities can be crazily robust: they can survive neglect, derision, dissent — even sabotage. In this regard, I speak from experience: from when Solaris was open sourced in 2005, the OpenSolaris community survived all of these things. By the time Oracle bought Sun five years later in 2010, the community had decided that it needed true independence — illumos was born. And, it turns out, illumos was born at exactly the right moment: shortly after illumos was announced, Oracle — in what remains to me a singularly loathsome and cowardly act — silently re-proprietarized Solaris on August 13, 2010. We in illumos were indisputably on our own, and while many outsiders gave us no chance of survival, we ourselves had reason for confidence: after all, open source communities are robust because they are often united not only by circumstance, but by values, and in our case, we as a community never lost our belief in ZFS, Zones, DTrace and myriad other technologies like MDB, FMA and Crossbow. Indeed, since 2010, illumos has thrived; illumos is not only the repository of record for technologies that have become cross-platform like OpenZFS, but we have also advanced our core technologies considerably, while still maintaining highest standards of quality. Learning some of the mistakes of OpenSolaris, we have a model that allows for downstream innovation, experimentation and differentiation. For example, Joyent's SmartOS has always been focused on our need for a cloud hypervisor (causing us to develop big features like hardware virtualization and Linux binary compatibility), and it is now at the heart of a massive buildout for Samsung (who acquired Joyent a little over a year ago). For us at Joyent, the Solaris/illumos/SmartOS saga has been formative in that we have seen both the ill effects of proprietary software and the amazing resilience of open source software — and it very much informed our decision to open source our entire stack in 2014. Judging merely by its tombstone, the life of Solaris can be viewed as tragic: born out of wedlock between Sun and AT&T and dying at the hands of a remorseless corporate sociopath a quarter century later. And even that may be overstating its longevity: Solaris may not have been truly born until it was made open source, and — certainly to me, anyway — it died the moment it was again made proprietary. But in that shorter life, Solaris achieved the singular: immortality for its revolutionary technologies. So while we can mourn the loss of the proprietary embodiment of Solaris (and we can certainly lament the coarse way in which its technologists were treated!), we can rejoice in the eternal life of its technologies — in illumos and beyond! News Roundup OpenBSD on the Lenovo Thinkpad X1 Carbon (5th Gen) (https://jcs.org/2017/09/01/thinkpad_x1c) Joshua Stein writes about his experiences running OpenBSD on the 5th generation Lenovo Thinkpad X1 Carbon: ThinkPads have sort of a cult following among OpenBSD developers and users because the hardware is basic and well supported, and the keyboards are great to type on. 
While no stranger to ThinkPads myself, most of my OpenBSD laptops in recent years have been from various vendors with brand new hardware components that OpenBSD does not yet support. As satisfying as it is to write new kernel drivers or extend existing ones to make that hardware work, it usually leaves me with a laptop that doesn't work very well for a period of months. After exhausting efforts trying to debug the I2C touchpad interrupts on the Huawei MateBook X (and other 100-Series Intel chipset laptops), I decided to take a break and use something with better OpenBSD support out of the box: the fifth generation Lenovo ThinkPad X1 Carbon. Hardware Like most ThinkPads, the X1 Carbon is available in a myriad of different internal configurations. I went with the non-vPro Core i7-7500U (it was the same price as the Core i5 that I normally opt for), 16Gb of RAM, a 256Gb NVMe SSD, and a WQHD display. This generation of X1 Carbon finally brings a thinner screen bezel, allowing the entire footprint of the laptop to be smaller which is welcome on something with a 14" screen. The X1 now measures 12.7" wide, 8.5" deep, and 0.6" thick, and weighs just 2.6 pounds. While not available at initial launch, Lenovo is now offering a WQHD IPS screen option giving a resolution of 2560x1440. Perhaps more importantly, this display also has much better brightness than the FHD version, something ThinkPads have always struggled with. On the left side of the laptop are two USB-C ports, a USB-A port, a full-size HDMI port, and a port for the ethernet dongle which, despite some reviews stating otherwise, is not included with the laptop. On the right side is another USB-A port and a headphone jack, along with a fan exhaust grille. On the back is a tray for the micro-SIM card for the optional WWAN device, which also covers the Realtek microSD card reader. The tray requires a paperclip to eject which makes it inconvenient to remove, so I think this microSD card slot is designed to house a card semi-permanently as a backup disk or something. On the bottom are the two speakers towards the front and an exhaust grille near the center. The four rubber feet are rather plastic feeling, which allows the laptop to slide around on a desk a bit too much for my liking. I wish they were a bit softer to be stickier. Charging can be done via either of the two USB-C ports on the left, though I wish more vendors would do as Google did on the Chromebook Pixel and provide a port on both sides. This makes it much more convenient to charge when not at one's desk, rather than having to route a cable around to one specific side. The X1 Carbon includes a 65W USB-C PD with a fixed USB-C cable and removable country-specific power cable, which is not very convenient due to its large footprint. I am using an Apple 61W USB-C charger and an Anker cable which charge the X1 fine (unlike HP laptops which only work with HP USB-C chargers). Wireless connectivity is provided by a removable Intel 8265 802.11a/b/g/n/ac WiFi and Bluetooth 4.1 card. An Intel I219-V chip provides ethernet connectivity and requires an external dongle for the physical cable connection. The screen hinge is rather tight, making it difficult to open with one hand. The tradeoff is that the screen does not wobble in the least bit when typing. The fan is silent at idle, and there is no coil whine even under heavy load. During a make -j4 build, the fan noise is reasonable and medium-pitched, rather than a high-pitched whine like on some laptops. 
The palm rest and keyboard area remain cool during high CPU utilization. The full-sized keyboard is backlit and offers two levels of adjustment. The keys have a soft surface and a somewhat clicky feel, providing very quiet typing except for certain keys like Enter, Backspace, and Escape. The keyboard has a reported key travel of 1.5mm and there are dedicated Page Up and Page Down keys above the Left and Right arrow keys. Dedicated Home, End, Insert, and Delete keys are along the top row. The Fn key is placed to the left of Control, which some people hate (although Lenovo does provide a BIOS option to swap it), but it's in the same position on Apple keyboards so I'm used to it. However, since there are dedicated Page Up, Page Down, Home, and End keys, I don't really have a use for the Fn key anyway. Firmware The X1 Carbon has a very detailed BIOS/firmware menu which can be entered with the F1 key at boot. F12 can be used to temporarily select a different boot device. A neat feature of the Lenovo BIOS is that it supports showing a custom boot logo instead of the big red Lenovo logo. From Windows, download the latest BIOS Update Utility for the X1 Carbon (my model was 20HR). Run it and it'll extract everything to C:\drivers\flash\(some random string). Drop a logo.gif file in that directory and run winuptp.exe. If a logo file is present, it'll ask whether to use it and then write the new BIOS to its staging area, then reboot to actually flash it. + OpenBSD support Secure Boot has to be disabled in the BIOS menu, and the "CSM Support" option must be enabled, even when "UEFI/Legacy Boot" is left on "UEFI Only". Otherwise the screen will just go black after the OpenBSD kernel loads into memory. Based on this component list, it seems like everything but the fingerprint sensor works fine on OpenBSD. *** Configuring 5 different desktop environments on FreeBSD (https://www.linuxsecrets.com/en/entry/51-freebsd/2017/09/04/2942-configure-5-freebsd-x-environments) This fairly quick tutorial over at LinuxSecrets.com is a great start if you are new to FreeBSD, especially if you are coming from Linux and miss your favourite desktop environment. It just goes to show how easy it is to build the desktop you want on modern FreeBSD. The tutorial covers: GNOME, KDE, Xfce, Mate, and Cinnamon. The instructions for each boil down to some variation of: Install the desktop environment and a login manager if it is not included: > sudo pkg install gnome3 Enable the login manager, and usually dbus and hald: > sudo sysrc dbus_enable="YES" hald_enable="YES" gdm_enable="YES" gnome_enable="YES" If using a generic login manager, add the DE startup command to your .xinitrc: > echo "exec cinnamon" > ~/.xinitrc And that is about it. The tutorial goes into more detail on other configuration you can do to get your desktop just the way you like it. To install Lumina: > sudo pkg install lumina pcbsd-utils-qt5 This will install Lumina and the pcbsd utilities package which includes pcdm, the login manager. In the near future we hear the login manager and some of the other utilities will be split into separate packages, making it easier to use them on vanilla FreeBSD. 
> sudo sysrc pcdm_enable="YES" dbus_enable="YES" hald_enable="YES" Reboot, and you should be greeted with the graphical login screen. *** A return-oriented programming defense from OpenBSD (https://lwn.net/Articles/732201/) We talked a bit about RETGUARD last week, presenting Theo's email announcing the new feature. Linux Weekly News has a nice breakdown on just how it works. Stack-smashing attacks have a long history; they featured, for example, as a core part of the Morris worm back in 1988. Restrictions on executing code on the stack have, to a great extent, put an end to such simple attacks, but that does not mean that stack-smashing attacks are no longer a threat. Return-oriented programming (ROP) has become a common technique for compromising systems via a stack-smashing vulnerability. There are various schemes out there for defeating ROP attacks, but a mechanism called "RETGUARD" that is being implemented in OpenBSD is notable for its relative simplicity. In a classic stack-smashing attack, the attack code would be written directly to the stack and executed there. Most modern systems do not allow execution of on-stack code, though, so this kind of attack will be ineffective. The stack does affect code execution, though, in that the call chain is stored there; when a function executes a "return" instruction, the address to return to is taken from the stack. An attacker who can overwrite the stack can, thus, force a function to "return" to an arbitrary location. That alone can be enough to carry out some types of attacks, but ROP adds another level of sophistication. A search through a body of binary code will turn up a great many short sequences of instructions ending in a return instruction. These sequences are termed "gadgets"; a large program contains enough gadgets to carry out almost any desired task — if they can be strung together into a chain. ROP works by locating these gadgets, then building a series of stack frames so that each gadget "returns" to the next. In short, RETGUARD has the compiler wrap each function in a small prologue and epilogue that mangle the return address with a per-function random cookie, so a return address corrupted by a stack overwrite no longer goes where the attacker intended. There is, of course, a significant limitation here: a ROP chain made up of exclusively polymorphic gadgets will still work, since those gadgets were not (intentionally) created by the compiler and do not contain the return-address-mangling code. De Raadt acknowledged this limitation, but said: "we believe once standard-RET is solved those concerns become easier to address separately in the future. In any case a substantial reduction of gadgets is powerful". Using the compiler to insert the hardening code greatly eases the task of applying RETGUARD to both the OpenBSD kernel and its user-space code. At least, that is true for code written in a high-level language. Any code written in assembly must be changed by hand, though, which is a fair amount of work. De Raadt and company have done that work; he reports that: "We are at the point where userland and base are fully working without regressions, and the remaining impacts are in a few larger ports which directly access the return address (for a variety of reasons)". It can be expected that, once these final issues are dealt with, OpenBSD will ship with this hardening enabled. The article wonders about applying the same to Linux, but notes it would be difficult because the Linux kernel cannot currently be compiled using LLVM. If any benchmarks have been run to determine the cost of using RETGUARD, they have not been publicly posted. The extra code will make the kernel a little bigger, and the extra overhead on every function is likely to add up in the end. 
But if this technique can make the kernel that much harder to exploit, it may well justify the extra execution overhead that it brings with it. All that's needed is somebody to actually do the work and try it out.
Videos from BSDCan have started to appear! (https://www.youtube.com/playlist?list=PLeF8ZihVdpFfVEsCxNWGDmcATJfRZacHv)
Henning Brauer: tcp synfloods - BSDCan 2017 (https://www.youtube.com/watch?v=KuHepyI0_KY)
Benno Rice: The Trouble with FreeBSD - BSDCan 2017 (https://www.youtube.com/watch?v=1DM5SwoXWSU)
Li-Wen Hsu: Continuous Integration of The FreeBSD Project - BSDCan 2017 (https://www.youtube.com/watch?v=SCLfKWaUGa8)
Andrew Turner: GENERIC ARM - BSDCan 2017 (https://www.youtube.com/watch?v=gkYjvrFvPJ0)
Bjoern A. Zeeb: From the outside - BSDCan 2017 (https://www.youtube.com/watch?v=sYmW_H6FrWo)
Rodney W. Grimes: FreeBSD as a Service - BSDCan 2017 (https://www.youtube.com/watch?v=Zf9tDJhoVbA)
Reyk Floeter: The OpenBSD virtual machine daemon - BSDCan 2017 (https://www.youtube.com/watch?v=Os9L_sOiTH0)
Brian Kidney: The Realities of DTrace on FreeBSD - BSDCan 2017 (https://www.youtube.com/watch?v=NMUf6VGK2fI)
The rest will continue to trickle out, likely not until after EuroBSDCon.
***
Beastie Bits
Oracle has killed Sun (https://meshedinsights.com/2017/09/03/oracle-finally-killed-sun/)
Configure Thunderbird to send patch friendly (http://nanxiao.me/en/configure-thunderbird-to-send-patch-friendly/)
FreeBSD 10.4-BETA4 Available (https://www.freebsd.org/news/newsflash.html#event20170909:01)
iXsystems looking to hire kernel and zfs developers (especially Sun/Oracle Refugees) (https://www.facebook.com/ixsystems/posts/10155403417921508)
Speaking of job postings, UnitedBSD.com has a few job postings related to BSD (https://unitedbsd.com/)
Call for papers
USENIX FAST '18 - February 12-15, 2018, Due: September 28 2017 (https://www.freebsdfoundation.org/news-and-events/call-for-papers/usenix-fast-18-call-for-papers/)
Scale 16x - March 8-11, 2018, Due: October 31, 2017 (https://www.freebsdfoundation.org/news-and-events/call-for-papers/scale-16x-call-for-participation/)
FOSDEM '18 - February 3-4, 2018, Due: November 3 2017 (https://www.freebsdfoundation.org/news-and-events/call-for-papers/fosdem-18-call-for-participation/)
Feedback/Questions
Jason asks about cheap router hardware (http://dpaste.com/340KRHG)
Prashant asks about latest kernels with freebsd-update (http://dpaste.com/2J7DQQ6)
Matt wants to know about VM Performance & CPU Steal Time (http://dpaste.com/1H5SZ81)
John has config questions regarding Dell precision 7720, FreeBSD, NVME, and ZFS (http://dpaste.com/0X770SY)
***

BSD Now
191: I Know 64 & A Bunch More

BSD Now

Play Episode Listen Later Apr 26, 2017 126:58


We cover TrueOS/Lumina working to be less dependent on Linux, how the IllumOS network stack works, throttling the password gropers & the 64 bit inode call for testing.
This episode was brought to you by
Headlines
vBSDCon CFP closed April 29th (https://easychair.org/conferences/?conf=vbsdcon2017)
EuroBSDCon CFP closes April 30th (https://2017.eurobsdcon.org/2017/03/13/call-for-proposals/)
Developer Commentary: Philosophy, Evolution of TrueOS/Lumina, and Other Thoughts. (https://www.trueos.org/blog/developer-commentary-philosophy-evolution-trueoslumina-thoughts/)
Philosophy of Development
No project is an island. Every single project needs or uses some other external utility, library, communications format, standards compliance, and more in order to be useful.
A static project is typically a dead project. A project needs regular upkeep and maintenance to ensure it continues to build and run with the current ecosystem of libraries and utilities, even if the project has no considerable changes to the code base or feature set.
"Upstream" decisions can have drastic consequences on your project. Through no fault of yours, your project can be rendered obsolete or broken by changing standards in the global ecosystem that affect your project's dependencies.
Operating system focus is key. What OS is the project originally designed for? This determines how the "upstream" dependencies list appears and which "heartbeat" to monitor.
Evolution of PC-BSD, Lumina, and TrueOS
With these principles in mind – let's look at PC-BSD, Lumina, and TrueOS.
PC-BSD: PC-BSD was largely designed around KDE on FreeBSD. KDE/Plasma5 has been available for Linux OS's for well over a year, but is still not generally available on FreeBSD. It is still tucked away in the experimental "area51" repository where people are trying to get it working first.
Lumina: As a developer with PC-BSD for a long time, and a tester from nearly the beginning of the project, I was keenly aware the "winds of change" were blowing in the open-source ecosystem.
TrueOS: All of these ecosystem changes finally came to a head for us near the beginning of 2016. KDE4 was starting to deteriorate underneath us, and the FreeBSD "Release" branch would never allow us to compete with the rate of graphics driver or standards changes coming out of the Linux camp.
The Rename and Next Steps
With all of these changes and the lack of a clear "upgrade" path from PC-BSD to the new systems, we decided it was necessary to change the project itself (name and all). To us, this was the only way to ensure people were aware of the differences, and that TrueOS really is a different kind of project from PC-BSD. Note this was not a "hostile takeover" of the PC-BSD project by rabid FreeBSD fanatics. This was more a refocusing of the PC-BSD project into something that could ensure longevity and reliability for the foreseeable future.
Does TrueOS have bugs and issues? Of course! That is the nature of "rolling" with upstream changes all the time. Not only do you always get the latest version of something (a good thing), you also find yourself on the "front line" for finding and reporting bugs in those same applications (a bad thing if you like consistency or stability). What you are also seeing is just how much "churn" happens in the open-source ecosystem at any given time.
We are devoted to providing our users (and ourselves – don't forget we use TrueOS every day too!) a stable, reliable, and secure experience.
Please be patient as we continue striving toward this goal in the best way possible, not just doing what works for the moment, but what is best for the project's future too.
Robert Mustacchi: Excerpts from The Soft Ring Cycle #1 (https://www.youtube.com/watch?v=vnD10WQ2930)
The author of the "Turtles on the Wire" post we featured the other week is back with a video.
Joyent has started a new series of lunchtime technical discussions to share information as they grow their engineering team.
This video focuses on the network stack, how it works, and how it relates to virtualization and multi-tenancy. Basically, how the network stack on IllumOS works when you have virtual tenants, be they virtual machines or zones.
The video describes the many layers of the network stack, how they work together, and how they can be made to work quickly.
It also talks about the trade-offs between high throughput and low latency.
How security is enforced, so virtual tenants cannot send packets into VLANs they are not members of, or receive traffic that they are not allowed to by the administrator.
How incoming packets are classified, and eventually delivered to the intended destination.
How the system decides if it has enough available resources to process the packet, or if it needs to be dropped.
How interface polling works on IllumOS (a lot different than on FreeBSD).
Then the last 20 minutes are about how the qemu interface of the KVM hypervisor interfaces with the network stack.
We look forward to seeing more of these videos as they come out.
***
Forcing the password gropers through a smaller hole with OpenBSD's PF queues (http://bsdly.blogspot.com/2017/04/forcing-password-gropers-through.html)
While preparing material for the upcoming BSDCan PF and networking tutorial (http://www.bsdcan.org/2017/schedule/events/805.en.html), I realized that the pop3 gropers were actually not much fun to watch anymore. So I used the traffic shaping features of my OpenBSD firewall to let the miscreants inflict some pain on themselves. Watching logs became fun again.
The actual useful parts of this article follow - take this as a walkthrough of how to mitigate a wide range of threats and annoyances.
First, analyze the behavior that you want to defend against. In our case that's fairly obvious: We have a service that's getting a volume of unwanted traffic, and looking at our logs the attempts come fairly quickly with a number of repeated attempts from each source address. I've written about the rapid-fire ssh bruteforce attacks and their mitigation before (and of course it's in The Book of PF) as well as the slower kind where those techniques actually come up short. The traditional approach to ssh bruteforcers has been to simply block their traffic, and the state-tracking features of PF let you set up overload criteria that add the source addresses to the table that holds the addresses you want to block.
For the system that runs our pop3 service, we also have a PF ruleset in place with queues for traffic shaping. For some odd reason that ruleset is fairly close to the HFSC traffic shaper example in The Book of PF, and it contains a queue that I set up mainly as an experiment to annoy spammers (as in, the ones that are already for one reason or the other blacklisted by our spamd). The queue is defined like this:
queue spamd parent rootq bandwidth 1K min 0K max 1K qlimit 300
Yes, that's right. A queue with a maximum throughput of 1 kilobit per second.
I have been warned that this is small enough that the code may be unable to strictly enforce that limit due to the timer resolution in the HFSC code. But that didn't keep me from trying. Now a few small additions to the ruleset are needed for the good to put the evil to the task. We start with a table to hold the addresses we want to mess with. Actually, I'll add two, for reasons that will become clear later:
table <longterm> persist counters
table <popflooders> persist counters
The rules that use those tables are:
block drop log (all) quick from <longterm>
pass in quick log (all) on egress proto tcp from <popflooders> to port pop3 flags S/SA keep state (max-src-conn 2, max-src-conn-rate 3/3, overload <longterm> flush global, pflow) set queue spamd
pass in log (all) on egress proto tcp to port pop3 flags S/SA keep state (max-src-conn 5, max-src-conn-rate 6/3, overload <popflooders> flush global, pflow)
The last one lets anybody connect to the pop3 service, but any one source address can have only five simultaneous connections open, and at a rate of six over three seconds.
The results were immediately visible. Monitoring the queues using pfctl -vvsq shows the tiny queue works as expected:
queue spamd parent rootq bandwidth 1K, max 1K qlimit 300
[ pkts: 196136 bytes: 12157940 dropped pkts: 398350 bytes: 24692564 ]
[ qlength: 300/300 ]
[ measured: 2.0 packets/s, 999.13 b/s ]
and looking at the pop3 daemon's log entries, a typical encounter looks like this:
Apr 19 22:39:33 skapet spop3d[44875]: connect from 111.181.52.216
Apr 19 22:39:33 skapet spop3d[75112]: connect from 111.181.52.216
Apr 19 22:39:34 skapet spop3d[57116]: connect from 111.181.52.216
Apr 19 22:39:34 skapet spop3d[65982]: connect from 111.181.52.216
Apr 19 22:39:34 skapet spop3d[58964]: connect from 111.181.52.216
Apr 19 22:40:34 skapet spop3d[12410]: autologout time elapsed - 111.181.52.216
Apr 19 22:40:34 skapet spop3d[63573]: autologout time elapsed - 111.181.52.216
Apr 19 22:40:34 skapet spop3d[76113]: autologout time elapsed - 111.181.52.216
Apr 19 22:40:34 skapet spop3d[23524]: autologout time elapsed - 111.181.52.216
Apr 19 22:40:34 skapet spop3d[16916]: autologout time elapsed - 111.181.52.216
here the miscreant comes in way too fast and only manages to get five connections going before they're shunted to the tiny queue to fight it out with known spammers for a share of bandwidth.
One important takeaway from this, and possibly the most important point of this article, is that it does not take a lot of imagination to retool this setup to watch for and protect against undesirable activity directed at essentially any network service. You pick the service and the ports it uses, then figure out what are the parameters that determine what is acceptable behavior. Once you have those parameters defined, you can choose to assign to a minimal queue like in this example, block outright, redirect to something unpleasant or even pass with a low probability.
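As a minimal sketch of that takeaway, here is what the same recipe might look like pointed at ssh instead of pop3. The table name and thresholds are illustrative assumptions, not rules from the article:
table <sshflooders> persist counters
block drop log (all) quick from <sshflooders>
pass in log (all) on egress proto tcp to port ssh flags S/SA keep state (max-src-conn 5, max-src-conn-rate 6/3, overload <sshflooders> flush global)
This variant blocks overloading addresses outright; swapping the block rule for a pass rule with set queue spamd would throttle them into the tiny queue instead.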
64-bit inodes (ino64) Status Update and Call for Testing (https://lists.freebsd.org/pipermail/freebsd-fs/2017-April/024684.html)
Inodes are data structures corresponding to objects in a file system, such as files and directories. FreeBSD has historically used 32-bit values to identify inodes, which limits file systems to somewhat under 2^32 objects. Many modern file systems internally use 64-bit identifiers and FreeBSD needs to follow suit to properly and fully support these file systems. The 64-bit inode project, also known as ino64, started life many years ago as a project by Gleb Kurtsou (gleb@). After that time several people have had a hand in updating it and addressing regressions, after mckusick@ picked up and updated the patch, and acted as a flag-waver.
Overview: The ino64 branch extends the basic system types ino_t and dev_t from 32-bit to 64-bit, and nlink_t from 16-bit to 64-bit.
Motivation: The main risk of the ino64 change is the uncontrolled ABI breakage.
Quirks: We handled kinfo sysctl MIBs, but other MIBs which report structures that depend on the changed types are not handled in general. It was considered that the breakage is either in the management interfaces, where we usually allow ABI slip, or is not important.
Testing procedure: The ino64 project can be tested by cloning the project branch from GitHub or by applying the patch to a working tree.
New kernel, old world.
New kernel, new world, old third-party applications.
32bit compat.
Targeted tests.
NFS server and client test
Other filesystems
Test accounting
Ports
Status with ino64: A ports exp-run for ino64 is open in PR 218320.
5.1. LLVM: LLVM includes a component called Address Sanitizer or ASAN, which tries to intercept syscalls, and contains knowledge of the layout of many system structures. Since stat and lstat syscalls were removed and several types and structures changed, this has to be reflected in the ASAN hacks.
5.2. lang/ghc: The ghc compiler and parts of the runtime are written in Haskell, which means that to compile ghc, you need a working Haskell compiler for bootstrap.
5.3. lang/rust: Rustc has a similar structure to GHC, and same issue. The same solution of patching the bootstrap was done.
Next Steps: The tentative schedule for the ino64 project:
2017-04-20 Post wide call for testing: Investigate and address port failures with maintainer support
2017-05-05 Request second exp-run with initial patches applied: Investigate and address port failures with maintainer support
2017-05-19 Commit to HEAD: Address post-commit failures where feasible
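For those joining the call for testing, one quick smoke test after booting an ino64 kernel and world is to check the width of the type itself. This one-liner is a hypothetical illustration, not part of the project's documented test plan (the exit status of the small program carries sizeof(ino_t)):
> printf '#include <sys/types.h>\nint main(void){ return (int)sizeof(ino_t); }\n' | cc -x c -o /tmp/inosz - && /tmp/inosz; echo "sizeof(ino_t) = $?"
On a patched system this should print 8 rather than 4.
***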
News Roundup
Sing, beastie, sing! (http://meka.rs/blog/2017/01/25/sing-beastie-sing/)
FreeBSD digital audio workstation, or DAW for short, is now possible. At this very moment it's not all that user friendly, but you'll manage. What I want to say is that I worked on porting some of the audio apps to FreeBSD, met some other people interested in porting audio stuff and became heavily involved with DrumGizmo, a drum sampling engine.
Let me start with the basic setup. FreeBSD doesn't have hard real-time support, but it's pretty close. For the needs of audio, FreeBSD's implementation of real-time is sufficient and, in my opinion, superior to the one you can get on Linux with RT path (which is ugly, not supported by distributions and breaks apps like VirtualBox). As the default install of FreeBSD isn't too concerned with real-time, we have to tweak sysctl a bit, so append this to your /etc/sysctl.conf:
kern.timecounter.alloweddeviation=0
hw.usb.uaudio.buffer_ms=2 # only on -STABLE for now
hw.snd.latency=0
kern.coredump=0
So let me go through the list. First item tells FreeBSD how many events it can aggregate (or wait for) before emitting them. The reason this is the default is because aggregating events saves power a bit, and currently more laptops are running FreeBSD than DAWs. Second one is the lowest possible buffer for the USB audio driver. If you're not using USB audio, this won't change a thing. Third one has nothing to do with real-time, but dealing with programs that consume ~3GB of RAM, dumping cores around made a problem on my machine. Besides, core dumps are only useful if you know how to debug the problem, or someone is willing to do that for you. I like to not generate those files by default, but if some app is constantly crashing, I enable dumps, run the app, crash it, and disable dumps again. I lost 30GB in under a minute by examining 10 different drumkits of DrumGizmo and all of them gave me 3GB of core file, each.
More setup instructions follow, including jackd setup and PulseAudio using virtual_oss. With this setup I can play OSS, JACK and PulseAudio sound all at the same time, which I was not able to do on Linux.
FreeBSD 11 Unbound DNS server (https://itso.dk/?p=499)
In FreeBSD, there is a built-in DNS server called Unbound.
So why would you run a local DNS server? I am in a region where internet traffic is still a bit expensive, which also implies slow speeds and high response times. To speed things up a little, you can use your own DNS server. It will speed things up because for every homepage you visit, there will be several hooks to other domains: commercials, site components, and links to other sites. These will now all be cached locally on your new DNS server.
In my case I use an old PC-Engine Alix board for my home DNS server, but you can use almost anything: Raspberry Pi, old laptop/desktop and others. As long as it runs FreeBSD.
Goes into more details about what commands to run and which services to start.
Try it out if you are in a similar situation.
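Getting the bundled resolver running takes only the base system. A minimal sketch, assuming the stock local_unbound rc script and the drill utility from base:
> sudo sysrc local_unbound_enable="YES"
> sudo service local_unbound start
> drill @127.0.0.1 freebsd.org
On first start the local-unbound-setup helper generates a configuration and points /etc/resolv.conf at 127.0.0.1, so subsequent lookups are answered, and cached, locally.
***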
Why it is important that documentation and tutorials be correct and carefully reviewed (https://arxiv.org/pdf/1704.02786.pdf)
A group of researchers found that a lot of online web programming tutorials contain serious security flaws. They decided to do a research project to see how this impacts software that is written possibly based on those tutorials. They used a number of simple google search terms to make a list of tutorials, and manually audited them for common vulnerabilities. They then crawled GitHub to find projects with very similar code snippets that might have been taken from those tutorials.
The Web is replete with tutorial-style content on how to accomplish programming tasks. Unfortunately, even top-ranked tutorials suffer from severe security vulnerabilities, such as cross-site scripting (XSS), and SQL injection (SQLi). Assuming that these tutorials influence real-world software development, we hypothesize that code snippets from popular tutorials can be used to bootstrap vulnerability discovery at scale. To validate our hypothesis, we propose a semi-automated approach to find recurring vulnerabilities starting from a handful of top-ranked tutorials that contain vulnerable code snippets. We evaluate our approach by performing an analysis of tens of thousands of open-source web applications to check if vulnerabilities originating in the selected tutorials recur. Our analysis framework has been running on a standard PC, analyzed 64,415 PHP codebases hosted on GitHub thus far, and found a total of 117 vulnerabilities that have a strong syntactic similarity to vulnerable code snippets present in popular tutorials. In addition to shedding light on the anecdotal belief that programmers reuse web tutorial code in an ad hoc manner, our study finds disconcerting evidence of insufficiently reviewed tutorials compromising the security of open-source projects. Moreover, our findings testify to the feasibility of large-scale vulnerability discovery using poorly written tutorials as a starting point.
The researchers found 117 vulnerabilities; of these, at least 8 appear to be nearly exact copy/pastes of the tutorials that were found to be vulnerable.
***
1.3.0 Development Preview: New icon themes (https://lumina-desktop.org/1-3-0-development-preview-new-icon-themes/)
As version 1.3.0 of the Lumina desktop starts getting closer to release, I want to take a couple weeks and give you all some sneak peeks at some of the changes/updates that we have been working on (and are in the process of finishing up).
New icon theme (https://lumina-desktop.org/1-3-0-development-preview-new-icon-themes/) Material Design Light/Dark
There are a lot more icons available in the reference icon packs which we still have not gotten around to renaming yet, but this initial version satisfies all the XDG standards for an icon theme + all the extra icons needed for Lumina and its utilities + a large number of additional icons for application use.
This highlights one of the big things that I love about Lumina: it gives you an interface that is custom-tailored to YOUR needs/wants – rather than expecting YOU to change your routines to accommodate how some random developer/designer across the world thinks everybody should use a computer.
Lumina Media Player (https://lumina-desktop.org/1-3-0-development-preview-lumina-mediaplayer/)
This is a small utility designed to provide the ability for the user to play audio and video files on the local system, as well as stream audio from online sources. For now, only the Pandora internet radio service is supported via the "pianobar" CLI utility, which is an optional runtime dependency. However, we hope to gradually add new streaming sources over time.
For a long time I had been using another Pandora streaming client on my TrueOS desktop, but it was very fragile with respect to underlying changes: LibreSSL versions for example. The player would regularly stop functioning for a few update cycles until a version of LibreSSL which was "compatible" with the player was used. After enduring this for some time, I was finally frustrated enough to start looking for alternatives. A co-worker pointed me to a command-line utility called "pianobar", which was also a small client for Pandora radio. After using pianobar for a couple weeks, I was impressed with how stable it was and how little "overhead" it required with regards to extra runtime dependencies. Of course, I started thinking "I could write a Qt5 GUI for that!". Once I had a few free hours, I started writing what became lumina-mediaplayer. I started with the interface to pianobar itself to see how complicated it would be to interact with, but after a couple days of tinkering in my spare time, I realized I had a full client to Pandora radio basically finished.
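For anyone who wants to try the underlying client before the GUI lands, pianobar can be run on its own. Assuming the audio/pianobar port still carries that name:
> sudo pkg install pianobar
> pianobar
It runs interactively in the terminal, prompting for Pandora credentials and single-key playback commands; that terminal interface is what lumina-mediaplayer drives behind the scenes.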
Beastie Bits
vBSDCon CFP closes April 29th (https://easychair.org/conferences/?conf=vbsdcon2017)
EuroBSDCon CFP closes April 30th (https://2017.eurobsdcon.org/2017/03/13/call-for-proposals/)
clang(1) added to base on amd64 and i386 (http://undeadly.org/cgi?action=article&sid=20170421001933)
Theo: "Most things come to an end, sorry." (https://marc.info/?l=openbsd-misc&m=149232307018311&w=2)
ASLR, PIE, NX, and other capital letters (https://www.dragonflydigest.com/2017/04/24/19609.html)
How SSH got port number 22 (https://www.ssh.com/ssh/port)
Netflix Serving 90Gb/s+ From Single Machines Using Tuned FreeBSD (https://news.ycombinator.com/item?id=14128637)
Compressed zfs send / receive lands in FreeBSD HEAD (https://svnweb.freebsd.org/base?view=revision&revision=317414)
***
Feedback/Questions
Steve - FreeBSD Jobs (http://dpaste.com/3QSMYEH#wrap)
Mike - CuBox i4Pro (http://dpaste.com/0NNYH22#wrap)
Steve - Year of the BSD Desktop? (http://dpaste.com/1QRZBPD#wrap)
Brad - Configuration Management (http://dpaste.com/2TFV8AJ#wrap)
***

BSD Now
175: How the Dtrace saved Christmas

BSD Now

Play Episode Listen Later Jan 4, 2017 97:29


This week on BSDNow, we've got all sorts of post-holiday goodies to share: new OpenSSL APIs, DTrace, and OpenBSD.
This episode was brought to you by
Headlines
OpenSSL 1.1 API migration path, or the lack thereof (https://www.mail-archive.com/tech@openbsd.org/msg36437.html)
As many of you will already be aware, the OpenSSL 1.1.0 release intentionally introduced significant API changes from the previous release. In summary, a large number of data structures that were previously publically visible have been made opaque, with accessor functions being added in order to get and set some of the fields within these now opaque structs. It is worth noting that the use of opaque data structures is generally beneficial for libraries, since changes can be made to these data structures without breaking the ABI. As such, the overall direction of these changes is largely reasonable. However, while API change is generally necessary for progression, in this case it would appear that there is NO transition plan and a complete disregard for the impact that these changes would have on the overall open source ecosystem. So far it seems that the only approach is to place the migration burden onto each and every software project that uses OpenSSL, pushing significant code changes to each project that migrates to OpenSSL 1.1, while maintaining compatibility with the previous API. This is forcing each project to provide their own backwards compatibility shims, which is practically guaranteeing that there will be a proliferation of variable quality implementations; it is almost a certainty that some of these will contain bugs, potentially introducing security issues or memory leaks.
I think this will be a bigger issue for other operating systems that do not have the flexibility of the ports tree to deliver a newer version of OpenSSL. If a project switches from the old API to the new API, and the OS only provides the older branch of OpenSSL, how can the application work? Of course, this leaves the issue, if application A wants OpenSSL 1.0, and application B only works with OpenSSL 1.1, how does that work?
Due to a number of factors, software projects that make use of OpenSSL cannot simply migrate to the 1.1 API and drop support for the 1.0 API - in most cases they will need to continue to support both. Firstly, I am not aware of any platform that has shipped a production release with OpenSSL 1.1 - any software that supported OpenSSL 1.1 only, would effectively be unusable on every platform for the time being. Secondly, the OpenSSL 1.0.2 release is supported until the 31st of December 2019, while OpenSSL 1.1.0 is only supported until the 31st of August 2018 - any LTS style release is clearly going to consider shipping with 1.0.2 as a result.
Platforms that are attempting to ship with OpenSSL 1.1 are already encountering significant challenges - for example, Debian currently has 257 packages (out of 518) that do not build against OpenSSL 1.1. There are also hidden gotchas for situations where different libraries are linked against different OpenSSL versions and then share OpenSSL data structures between them - many of these problems will be difficult to detect since they only fail at runtime.
It will be interesting to see what happens with OpenSSL, and LibreSSL.
Hopefully, most projects will decide to switch to the cleaner APIs provided by s2n or libtls, although they do not provide the entire functionality of the OpenSSL API.
Hacker News comments (https://news.ycombinator.com/item?id=13284648)
***
exfiltration via receive timing (http://www.tedunangst.com/flak/post/exfiltration-via-receive-timing)
Another similar way to create a backchannel but without transmitting anything is to introduce delays in the receiver and measure throughput as observed by the sender. All we need is a protocol with transmission control. Hmmm. Actually, it's easier (and more reliable) to code this up using a plain pipe, but the same principle applies to networked transmissions.
For every digit we want to "send" back, we sleep a few seconds, then drain the pipe. We don't care about the data, although if this were a video file or an OS update, we could probably do something useful with it. Continuously fill the pipe with junk data. If (when) we block, calculate the difference between before and after. This is our secret backchannel data. (The reader and writer use different buffer sizes because on OpenBSD at least, a writer will stay blocked even after a read depending on the space that opens up. Even simple demos have real world considerations.) In this simple example, the secret data (argv) is shared by the processes, but we can see that the writer isn't printing them from its own address space. Nevertheless, it works.
Time to add random delays and buffering to firewalls? Probably not.
An interesting thought experiment that shows just how many ways there are to covertly convey a message.
***
OpenBSD Desktop in about 30 Minutes (https://news.ycombinator.com/item?id=13223351)
Over at hackernews we have a very non-verbose, but handy guide to getting to an OpenBSD desktop in about 30 minutes!
First, the guide will assume you've already installed OpenBSD 6.0, so you'll need to at least be at the shell prompt of your freshly installed system to begin. With that, now it's time to do some tuning.
Editing some resource limits in login.conf will be our initial task, upping some datasize tunables to 2GB.
Next up, we will edit some of the default "doas" settings to something a bit more workable for desktop computing.
Another handy trick: editing your .profile to have your PKG_PATH variables set automatically will make installing packages easier.
One thing some folks may overlook is that disabling atime can speed disk performance (you probably don't care about atime on your desktop anyway), so this guide will show you what knobs to tweak in /etc/fstab to do so.
After some final WPA / Wifi configuration, we then drop to "mere mortal" mode and begin our package installations. In this particular guide, he will be setting up Lumina Desktop (Which yes, it is on OpenBSD). A few small tweaks later for xscreensaver and your xinitrc file, then you are ready to run "startx" and begin your desktop session!
All in all, great guide which if you are fast can probably be done in even less than 30 minutes and will result in a rock-solid OpenBSD desktop rocking Lumina none-the-less.
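A rough sketch of the first two tweaks mentioned above. The 2GB figure mirrors the guide, while the class excerpt and the doas rule are illustrative assumptions rather than its exact contents:
# /etc/login.conf: raise the limits inside your login class, then rebuild the db
:datasize-max=2048M:\
:datasize-cur=2048M:\
> cap_mkdb /etc/login.conf
# /etc/doas.conf: let wheel members run commands as root, keeping their environment
permit keepenv :wheel
***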
How DTrace saved Christmas (https://hackernoon.com/dtrace-at-home-145ba773371e)
Adam Leventhal, one of the co-creators of DTrace, wrote up this post about how he uses DTrace at home, to save Christmas.
I had been procrastinating making the family holiday card. It was a combination of having a lot on my plate and dreading the formulation of our annual note recapping the year; there were some great moments, but I'm glad I don't have to do 2016 again. It was just before midnight and either I'd make the card that night or leave an empty space on our friends' refrigerators. Adobe Illustrator had other ideas: "Unable to set maximum number of files to be opened"
I'm not the first person to hit this. The problem seems to have existed since CS6 was released in 2016. None of the solutions were working for me, and — inspired by Sara Mauskopf's excellent post (https://medium.com/startup-grind/how-to-start-a-company-with-no-free-time-b70fbe7b918a#.uujdblxc6) — I was rapidly running out of the time bounds for the project. Enough; I'd just DTrace it.
A colleague scoffed the other day, "I mean, how often do you actually use DTrace?" In his mind DTrace was for big systems, critical systems, when dollars and lives were at stake. My reply: I use DTrace every day. I can't imagine developing software without DTrace, and I use it when my laptop (not infrequently) does something inexplicable (I'm forever grateful to the Apple team that ported it to Mac OS X).
Illustrator is failing on setrlimit(2) and blowing up as a result. Let's confirm that it is in fact returning -1:
$ sudo dtrace -n 'syscall::setrlimit:return/execname == "Adobe Illustrato"/{ printf("%d %d", arg1, errno); }'
dtrace: description 'syscall::setrlimit:return' matched 1 probe
CPU ID FUNCTION:NAME
0 532 setrlimit:return -1 1
There it is. And setrlimit(2) is failing with errno 1 which is EPERM (value too high for non-root user). I already tuned up the files limit pretty high. Let's confirm that it is in fact setting the files limit and check the value to which it's being set. To write this script I looked at the documentation for setrlimit(2) (hooray for man pages!) to determine the position of the resource parameter (arg0) and the type of the value parameter (struct rlimit). I needed the DTrace copyin() subroutine to grab the structure from the process's address space:
$ sudo dtrace -n 'syscall::setrlimit:entry/execname == "Adobe Illustrato"/{ this->r = *(struct rlimit *)copyin(arg1, sizeof (struct rlimit)); printf("%x %x %x", arg0, this->r.rlim_cur, this->r.rlim_max); }'
dtrace: description 'syscall::setrlimit:entry' matched 1 probe
CPU ID FUNCTION:NAME
0 531 setrlimit:entry 1008 2800 7fffffffffffffff
Looking through /usr/include/sys/resource.h we can see that 1008 corresponds to the number of files (RLIMIT_NOFILE | _RLIMIT_POSIX_FLAG).
The quickest solution was to use DTrace again to whack a smaller number into that struct rlimit. Easy:
$ sudo dtrace -w -n 'syscall::setrlimit:entry/execname == "Adobe Illustrato"/{ this->i = (rlim_t *)alloca(sizeof (rlim_t)); *this->i = 10000; copyout(this->i, arg1 + sizeof (rlim_t), sizeof (rlim_t)); }'
dtrace: description 'syscall::setrlimit:entry' matched 1 probe
dtrace: could not enable tracing: Permission denied
Oh right. Thank you SIP (System Integrity Protection). This is a new laptop (at least a new motherboard due to some bizarre issue) which probably contributed to Illustrator not working when it once did. Because it's new I haven't yet disabled the part of SIP that prevents you from using DTrace on the kernel or in destructive mode (e.g. copyout()). It's easy enough to disable, but I'm reboot-phobic — I hate having to restart my terminals — so I went to plan B: lldb.
After using DTrace to get the address of the setrlimit function, Adam used lldb to change the result before it got back to the application:
(lldb) break set -n _init
Breakpoint 1: 47 locations.
(lldb) run … (lldb) di -s 0x1006e5b72 -c 1 0x1006e5b72: callq 0x1011628e0 ; symbol stub for: setrlimit (lldb) memory write 0x1006e5b72 0x31 0xc0 0x90 0x90 0x90 (lldb) di -s 0x1006e5b72 -c 4 0x1006e5b72: xorl %eax, %eax 0x1006e5b74: nop 0x1006e5b75: nop 0x1006e5b76: nop Next I just did a process detach and got on with making that holiday card… DTrace was designed for solving hard problems on critical systems, but the need to understand how systems behave exists in development and on consumer systems. Just because you didn't write a program doesn't mean you can't fix it. News Roundup Say my Blog's name! (https://functionallyparanoid.com/2016/12/22/say-my-blogs-name/) Brian Everly over at functionally paranoid has a treat for us today. Let us give you a moment to get the tin-foil hats on… Ok, done? Let's begin! He starts off with a look at physical security. He begins by listing your options: BIOS passwords – Not something I'm typically impressed with. Most can be avoided by opening up the machine, closing a jumper and powering it up to reset the NVRAM to factory defaults. I don't even bother with them. Full disk encryption – This one really rings my bell in a positive way. If you can kill power to the box (either because the bad actor has to physically steal it and they aren't carrying around a pile of car batteries and an inverter or because you can interrupt power to it some other way), then the disk will be encrypted. The other beauty of this is that if a drive fails (and they all do eventually) you don't have to have any privacy concerns about chucking it into an electronics recycler (or if you are a bad, bad person, into a landfill) because that data is effectively gibberish without the key (or without a long time to brute force it). Two factor auth for logins – I like this one as well. I'm not a fan of biometrics because if your fingerprint is compromised (yes, it can happen – read (https://www.washingtonpost.com/news/federal-eye/wp/2015/07/09/hack-of-security-clearance-system-affected-21-5-million-people-federal-authorities-say/) about the department of defense background checks that were extracted by a bad agent – they included fingerprint images) you can't exactly send off for a new finger. Things like the YubiKey (https://www.yubico.com/) are pretty slick. They require that you have the physical hardware key as well as the password so unless the bad actor lifted your physical key, they would have a much harder time with physical access to your hardware. Out of those options, Brian mentions that he uses disk encryption and yubi-key for all his secure network systems. Next up is network segmentation, in this case the first thing to do is change your admin password for any ISP supplied modem / router. He goes on to scare us of javascript attacks being used not against your local machine, but instead non WAN exposed router admin interface. Scary Stuff! For added security, naturally he firewalls the router by plugging in the LAN port to a OpenBSD box which does the 2nd layer of firewall / router protection. What about privacy and browsing? Here's some more of his tips: I use Unbound as my DNS resolver on my local network (with all UDP port 53 traffic redirected to it by pf so I don't have to configure anything on the clients) and then forward the traffic to DNSCrypt Proxy, caching the results in Unbound. I notice ZERO performance penalty for this and it greatly enhances privacy. This combination of Unbound and DNSCrypt Proxy works very well together. 
You can even have redundancy by having multiple upstream resolvers running on different ports (basically run the DNSCrypt Proxy daemon multiple times pointing to different public resolvers). I also use Firefox exclusively for my web browsing. By leveraging the tips on this page (https://www.privacytools.io/), you can lock it down to do a great job of privacy protection. The fact that your laptop's battery drain rate can be used to fingerprint your browser completely trips me out but hey – that's the world we live in.' What about the cloud you may ask? Well Brian has a nice solution for that as well: I recently decided I would try to live a cloud-free life and I'll give you a bit of a synopsis on it. I discovered a wonderful Open Source project called FreeNAS (http://www.freenas.org/). What this little gem does is allow you to install a FreeBSD/zfs file server appliance on amd64 hardware and have a slick administrative web interface for managing it. I picked up a nice SuperMicro motherboard and chassis that has 4 hot swap drive bays (and two internal bays that I used to mirror the boot volume on) and am rocking the zfs lifestyle! (Thanks Alan Jude!) One of the nicest features of the FreeNAS is that it provides the ability to leverage the FreeBSD jail functionality in an easy to use way. It also has plugins but the security on those is a bit sketchy (old versions of libraries, etc.) so I decided to roll my own. I created two jails – one to run OwnCloud (yeah, I know about NextCloud and might switch at some point) and the other to run a full SMTP/IMAP email server stack. I used Lets Encrypt (https://letsencrypt.org/) to generate the SSL certificates and made sure I hit an A on SSLLabs (https://www.ssllabs.com/) before I did anything else. His post then goes in to talk about Backups and IoT devices, something else you need to consider in this truely paranoid world we are forced to live in. We even get a nice shout-out near the end! Enter TarSnap (http://www.tarsnap.com/) – a company that advertises itself as “Online Backups for the Truly Paranoid”. It brings a tear to my eye – a kindred spirit! :-) Thanks again to Alan Jude and Kris Moore from the BSD Now podcast (http://www.bsdnow.tv/) for turning me onto this company. It has a very easy command syntax (yes, it isn't a GUI tool – suck it up buttercup, you wanted to learn the shell didn't you?) and even allows you to compile the thing from source if you want to.” We've only covered some of the highlights here, but you really should take a few moments of your time today and read this top to bottom. Lots of good tips here, already thinking how I can secure my home network better. 
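The pf redirect Brian describes is a one-liner. A minimal sketch, assuming the LAN-facing interface is em0 and Unbound listens on 127.0.0.1 (modern rdr-to syntax):
# /etc/pf.conf: steer every client's UDP DNS query to the local resolver
pass in on em0 inet proto udp from any to any port domain rdr-to 127.0.0.1
With this in place no client configuration is needed: whatever resolver a device thinks it is using, its queries land in Unbound, which forwards them to DNSCrypt Proxy upstream.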
The open source book: "Producing Open Source Software" (http://producingoss.com/en/producingoss.pdf)
"How to Run a Successful Free Software Project" by Karl Fogel
9 chapters and over 200 pages of content, plus many appendices
Some interesting topics include:
Choosing a good name
version control
bug tracking
creating developer guidelines
setting up communications channels
choosing a license (although this guide leans heavily towards the GPL)
setting the tone of the project
joining or creating a Non-Profit Organization
the economics of open source
release engineering, packaging, nightly builds, etc
how to deal with forks
A lot of good information packaged into this ebook.
This work is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License.
***
DTrace Flamegraphs for node.js on FreeBSD (http://www.venshare.com/dtrace-flamegraphs-for-freebsd-and-node-js-2/)
One of the coolest tools built on top of DTrace is flamegraphs. They are a very accurate, and visual, way to see where a program is spending its time, which can tell you why it is slow, or where it could be improved. Further enhancements include off-cpu flame graphs, which tell you when the program is doing nothing, which can also be very useful.
> Recently BSD UNIXes are being acknowledged by the application development community as an interesting operating system to deploy to. This is not surprising given that FreeBSD had jails, the original container system, about 17 years ago and a lot of network focused businesses such as netflix see it as the best way to deliver content. This developer interest has led to hosting providers supporting FreeBSD. e.g. Amazon, Azure, Joyent and you can get a 2 months free instance at Digital Ocean.
DTrace is another vital feature for anyone who has had to deal with production issues and has been in FreeBSD since version 9. As of FreeBSD 11 the operating system now contains some great work by Fedor Indutny so you can profile node applications and create flamegraphs of node.js processes without any additional runtime flags or restarting of processes.
This is one of the most important things about DTrace. Many applications include some debugging functionality, but they require that you stop the application, and start it again in debugging mode. Some even require that you recompile the application in debugging mode. Being able to attach DTrace to an application, while it is under load, while the problem is actively happening, can be critical to figuring out what is going on.
In order to configure your FreeBSD instance to utilize this feature, make the following changes to the configuration of the server (a sketch of these steps follows after the list):
Load the DTrace module at boot
Increase some DTrace limits
Install node with the optional DTrace feature compiled in
Follow the generic node.js flamegraph tutorial (https://nodejs.org/en/blog/uncategorized/profiling-node-js/)
> I hope you find this article useful. The ability to look at a runtime in this manner has saved me twice this year and I hope it will save you in the future too. My next post on freeBSD and node.js will be looking at some scenarios on utilising the ZFS features.
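A hedged sketch of those four steps; the loader knob is standard, but the sysctl name, port option, and probe parameters come from common DTrace-on-FreeBSD flamegraph recipes rather than from this article, so treat them as assumptions:
# /boot/loader.conf: load all DTrace modules at boot
dtraceall_load="YES"
# /etc/sysctl.conf: raise the helper limit the node ustack helper needs
kern.dtrace.helper_actions_max=10240
# build node from ports with its DTrace option enabled (option name may differ)
> cd /usr/ports/www/node && make install clean
# sample 60 seconds of on-CPU JavaScript stacks from a running node process
> dtrace -x ustackframes=100 -n 'profile-97 /execname == "node"/ { @[jstack(150, 8000)] = count(); } tick-60s { exit(0); }' -o stacks.out
The aggregated stacks can then be folded and rendered with Brendan Gregg's stackcollapse.pl and flamegraph.pl scripts.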
Also check out Brendan Gregg's ACM Queue Article (http://queue.acm.org/detail.cfm?id=2927301) "The Flame Graph: This visualization of software execution is a new necessity for performance profiling and debugging"
SSHGuard 2.0 Call for Testing (https://sourceforge.net/p/sshguard/mailman/message/35580961/)
SSHGuard is a tool for monitoring brute force attempts and blocking them.
It has been a favourite of mine for a while because it runs as a pipe from syslogd, rather than reading the log files from the disk.
A lot of work to get SSHGuard working with new log sources (journalctl, macOS log) and backends (firewalld, ipset) has happened in 2.0. The new version also uses a configuration file.
Most importantly, SSHGuard has been split into several processes piped into one another (sshg-logmon | sshg-parser | sshg-blocker | sshg-fw). sshg-parser can run with capsicum(4) and pledge(2). sshg-blocker can be sandboxed in its default configuration (without pid file, whitelist, blacklisting) and has not been tested sandboxed in other configurations.
Breaking the processes up so that the sensitive bits can be sandboxed is very nice to see.
***
Beastie Bits
pjd's 2007 paper from AsiaBSDCon: "Porting the ZFS file system to the FreeBSD operating system" (https://2007.asiabsdcon.org/papers/P16-paper.pdf)
A Message From the FreeBSD Foundation (https://vimeo.com/user60888329)
Remembering Roger Faulkner, Unix Champion (http://thenewstack.io/remembering-roger-faulkner/) and A few HN comments (including Bryan Cantrill) (https://news.ycombinator.com/item?id=13293596)
Feedback/Questions
Peter - TrueOS Network (http://pastebin.com/QtyJeHMk)
Chris - Remote Desktop (http://pastebin.com/ru726VTV)
Goetz - Geli on Serial (http://pastebin.com/LQZPgF5g)
Joe - BGP (http://pastebin.com/jFeL8zKX)
Alejandro - BSD Router (http://pastebin.com/Xq9cbmfn)
***

Envoy Office Hacks
#36 - Joyent's Mario Kart Hack

Envoy Office Hacks

Play Episode Listen Later Dec 5, 2016 6:14


When you think of how you spend your spare time, pursuing pet projects, building, playing, reading… we’re almost positive designing an analytics program wouldn’t be on your leisure-time list. But it did make the list of one guy, and the project brought unexpected benefits to his company. This is Joyent’s Mario Kart Hack. For photos & more info, go to: http://bit.ly/Kartlytics

The New Stack Analysts
#120: CloudNativeCon Pancake Breakfast - Kubernetes Takes Us Beyond 'Peak Confusion'

The New Stack Analysts

Play Episode Listen Later Nov 9, 2016 45:29


Last December, Joyent chief technology officer Bryan Cantrill asked if we had reached "peak confusion" in the container space, referring to how the possible combinations of container technologies and platforms would leave any CTO, however prescient about future technologies, completely bewildered. At the TNS Analysts Pancake Breakfast Panel, held Nov. 8 at CloudNativeCon 2016, CNCF Executive Director Dan Kohn was joined by Comcast Systems Architect Erik St. Martin, CoreOS CEO Alex Polvi, and Wercker CTO Andy Smith to discuss the progression and future of Kubernetes. They fielded audience questions regarding its adoption in the enterprise and highlighted how developers can best circumvent some of the challenges still facing those working with Kubernetes today. Learn more at: https://thenewstack.io/kubecon-pancake-breakfast-kubernetes-beyond-valley-peak-confusion/

The New Stack Analysts
#115: Exploring the Economics of Containers

The New Stack Analysts

Play Episode Listen Later Sep 15, 2016 58:25


In this week's episode of The New Stack Analysts, we dive into a discussion surrounding migrating from Amazon EC2 to containers, the shift toward container-based infrastructure, and how Joyent has helped reverse logistics solution platform Optoro to achieve its own API-driven infrastructure while remaining on premise. The New Stack Founder Alex Williams and co-host EBook editor Benjamin Ball spoke with Optoro Director of DevOps Zach Dunn for an in-depth discussion surrounding these topics and more. Watch on YouTube: https://www.youtube.com/watch?v=DzlNyWBmIdQ Learn more at: https://thenewstack.io/tns-analysts-show-107-exploring-economics-containers/

BSD Now
155: Cabling up FreeBSD

BSD Now

Play Episode Listen Later Aug 17, 2016 117:37


This week on BSDNow, Allen is away in the UK (for BSDCam), but we still have a full episode for you! Don't miss our interview with Myke Geiger.
This episode was brought to you by
Headlines
My two year journey to becoming an OS Developer (http://zinascii.com/2016/going-to-joyent.html)
A blog post by Ryan Zezeski about how he ended up doing OS development instead of working on applications.
We have featured his posts before, including The illumos SYSCALL Handler (http://zinascii.com/2016/the-illumos-syscall-handler.html)
It started in the summer of 2014: I had just left Basho after 3.5 years of working on Riak, when I decided I wanted to become an OS developer. I purchased Solaris Internals, cloned illumos-gate, fired up cscope, and got to work. I hardly knew any C, x86 might as well have been Brainfuck, and, frankly, I knew shit about operating systems. But I was determined. I've always learned best by beating my head against something until it makes sense. I'm not a fast learner; I'm persistent. What others have in ability I make up for in effort. And when it comes to OS internals it's all about work ethic. The more you look, the more you realize it's just another program. The main difference being: it's the program all the other programs run on.
My strategy: to pick something, anything, that looked interesting, and write a post describing how it works. I wrote several of these posts in 2014 and 2015. More important, it put me in touch with Roger Faulkner: the creator of truss(1), the Solaris process model, and the real /proc filesystem. At the time I didn't like my interaction with Roger. He explained, in what I would later find out to be his typical gruff manner, that I was wrong; so I concluded he is a prick. But over the years I realized that I was being a brat—he was trying to teach me something and I let my ego get in the way. I've come to view that interaction as a blessing. I interacted with one of the greats, a mentor of my mentor's mentor (a Great Great Mentor).
A couple of weeks later something even more surreal happened, at illumos Day 2014. Bryan Cantrill was the last speaker of the day. One of my mentors and someone I admire greatly. He was there to regale us with the story of Joyent's resurrection of lx-branded zones: Linux system call emulation on top of the illumos kernel. But before he would do that he decided to speak about me! I couldn't believe it. I was so overwhelmed that I don't remember most of what he said. I was too busy flipping shit—Bryan Cantrill is on stage, in front of other kernel developers I look up to, saying my name. I was in a dream. It turns out, unknown to me at the time, that he wrote the POSIX queue code for both Solaris and QNX, which I wrote about. He compared me to the great expository technical writers Elliott Organick and Richard Stevens. And it was at this moment that I knew I could do this: I could become an OS developer.
Never underestimate the effect kind words can have on someone that looks up to you.
There is a lot more to the story, and it is definitely worth the read.
The story then goes on to talk about his recent run-in with Bryan Cantrill:
> A week from now my two year journey to become an OS developer comes to an end; and a new chapter begins. I don't know what specific things I'm going to work on, but I'm sure it will push me to the limit. I look forward to the challenge.
***
Version 1.0 of the Lumina Desktop released (https://lumina-desktop.org/version-1-0-0-released/)
After 4 years of development, Lumina Desktop has now hit version 1.0!
This release brings with it a slew of new features and support:
Completely customizable interface! Rather than having to learn how to use a new layout, change the desktop to suit you instead!
Simple shortcuts for any application! The "favorites" system makes it easy to find and launch applications at any time.
Extremely lightweight! Allows applications to utilize more of your system hardware and revitalizes older systems!
Multiple-monitor support! Each monitor is treated as an independent entity – making it great for presentation systems which use a temporary monitor or for workstations which utilize an array of monitors for various tasks.
While originally developed on PC-BSD, it already has been ported to a variety of different platforms, including OpenBSD, DragonFly, NetBSD, Debian and Gentoo.
Lumina has become the de facto desktop environment for TrueOS (formerly PC-BSD), and looks like it will provide a solid framework to continue growing desktop features.
***
n2k16 hackathon report: Ken Westerback on dhclient, bridges, routing and more (http://undeadly.org/cgi?action=article&sid=20160804200232)
Next up, we have a report from Ken Westerback talking about the recent OpenBSD hackathon in Prague.
He starts by telling us about the work in bpf:
First order of business, stsp@'s weird setup involving bridges and multiple dhclient clients. A bit of bpf(4) programming to restrict dhclient to handling ethernet packets unicast to its interface worked. Cool. Unfortunately it turned out some lazy dhcp servers always use ethernet broadcasts just because some lesser, non-OpenBSD clients ignore unicast packets until they have configured IP. Classic chicken and egg. So this was backed out just before 6.0. Sigh.
Next up, he talks about an idea he had on the flight over, specifically with regard to how DHCP leases are stored, and how keeping the SSID information with them could speed up re-connection times, by only trying leases for the currently connected SSID. After a day or so of hacking, it was working! However for $REASONS it was shelved for post 6.0, bummer!
He then discusses an on-going project with Peter Hessler on passing along relevant PIDs in response to routing messages generated by kernel from ioctl events. This is something they've been hacking at, in order to allow dhclient to recognize its own routing messages. Sounds like they are both still works-in-progress.
However, Ken did get something in for 6.0:
Diving back into dhclient code I discovered that in situations where multiple offers were received the unused offers were not being declined and discarded. Despite a clear comment saying that's what was being done! Thus dhclient might gradually use up more and more memory. And possibly be retrying offers that should have been discarded. The fix for this did make 6.0! Yay!
In Memoriam Roger Faulkner (https://www.usenix.org/memoriam-roger-faulkner)
USENIX has re-released Roger Faulkner's original paper on /proc as a free download.
The UNIX community recently lost one of its original pioneers, Roger Faulkner, whom one commenter described as "The godfather of post-AT&T UNIX".
In his memory, the USENIX group has re-released his original paper on the /proc file-system from 1991.
Roger worked in many areas of UNIX; however, the process file system /proc was his special baby.
"/proc began as a debugger interface superseding ptrace(2) but has evolved into a general interface to the process model."
The original /proc only had a file for each process, not a directory.
"Data may be transferred from or to any valid locations in the process's address space by applying lseek(2) to position the file at the virtual address of interest followed by read(2) or write(2)." Processes could be controlled using IOCTLs on the file As the USENIX article states: Roger believed that terrible things were sometimes required to create beautiful abstractions, and his trailblazing work on /proc embodies this burden: the innards may be delicate and nasty ("vile," as Roger might say in his distinguished Carolinian accent)—but the resulting abstractions are breathtaking in their power, scope and robustness. RIP Roger, and thanks for the wonderful UNIX legacy you've left us all. Interview - Myke Geiger - myke@servernorth.net (mailto:myke@servernorth.net) / @mWare (https://twitter.com/mWare) Using FreeBSD at a DSL/Cable ISP *** News Roundup New options in bsdinstall - some sysctls and date/time settings (https://www.reddit.com/r/freebsd/comments/4vxnw3/new_options_in_bsdinstall_some_sysctls_and/) bsdinstall in FreeBSD 11.0 will feature a number of new menus. The first, well allow you to set the date and time. Often on computers that have been in storage, or some embedded type devices that have no RTC, the date will be wildly wrong, and ntpd will refuse to run until the date is correctly set. This feature makes it easy to enter the date and time using dialog(1) The second menu, inspired by the existing ‘services' menu, offers a number of ‘hardening' options This menu allows users to easily enable a number of security features, including: Hide processes running as other users/groups Disable reading the kernel message buffer and debugging processes for unprivileged users Randomize the PID of newly created processes Enable the stack guard Erase /tmp at boot Disable remote syslog Disable sendmail All of these options are off by default, so that an install done with the installer will be the same as an install from source, or an upgrade. A number of these options are candidates to become on-by-default in the future, so the hope is that this menu will get more users to test these features and find any negative interactions with applications or general use, so they can be fixed. *** Rawrite32: the NetBSD image writing tool (https://www.netbsd.org/~martin/rawrite32/) Martin of the NetBSD project has released a new version of his USB imaging tool, rawrite32 For those who've not used this tool before, it is a Windows Application that allows writing NetBSD images directly to USB media (other other disk media) This update brings with it support for writing .xz file, and binary signing This may come in handy for writing other OS images to memory sticks as well, especially for those locked into a windows environment who need to switch. *** ZFS-Snap-Diff -- A pretty interface for viewing what changed after a ZFS snapshot (https://github.com/j-keck/zfs-snap-diff) There are lots of nice little utilities to help create and maintain your ZFS snapshots. However today we have something unique to look at, ‘zfs-snap-diff'. What makes it unique, is that it ships with a built-in golang / angularjs GUI for snapshot management It looks very powerful, including a built-in diff utility, so you can even see the changes in text-files, in addition to downloading files, restoring old versions and more. Its nice to see so many ZFS utilities starting to take off, and evolve file-management further. 
***
DTrace Conf 2016 event videos (https://www.joyent.com/about/events/2016/dtrace-conf)

The videos from dtrace.conf 2016 have been posted. Some highlights:

- Useful DTrace Intro
- CTF Everywhere
- Distributed DTrace
- DTrace for Apps
- DTrace json() subroutine
- Implementing (or not) fds[] in FreeBSD
- OpenDTrace
- DTrace performance improvements with always-on instrumentation
- D Syntactic Sugar
- DTrace and Go, DTrace and Postgres

dtrace.conf(16) wrap-up by Bryan Cantrill (https://www.joyent.com/blog/dtrace-conf-16-wrap-up): “Once again, it was an eclectic mix of technologists — and once again, the day got kicked off with me providing an introduction to dtrace.conf and its history. (Just to save you the time filling out your Cantrill Presentation Bingo Card: you can find me punching myself at 16:19, me offering unsolicited personal medical history at 20:11, and me getting trolled by unikernels at 38:25.)” The next dtrace.conf isn't until 2020.
***
Beastie Bits

- The BSD Daemon features in Mexican candy packaging (https://www.reddit.com/r/BSD/comments/4vngmw/the_bsd_daemon_feature_in_mexican_candy_packaging/)
- Remove PG_ZERO and zeroidle (page-zeroing) entirely (http://lists.dragonflybsd.org/pipermail/commits/2016-August/624202.html)
- OpenBSD: Release Songs: 6.0: "Black Hat" (https://www.openbsd.org/lyrics.html#60b)
- OpenBSD Gaming Resource (http://satterly.neocities.org/openbsd_games.html)
- LibreSSL 2.4.2 and 2.3.7 Released (http://bsdsec.net/articles/libressl-2-4-2-and-2-3-7-released)

Feedback/Questions

- Pedja - Bhyve GUI (http://pastebin.com/LJcJmNsR)
- Tim - Jail Management (http://pastebin.com/259x94Rh)
- Don - X260 (http://pastebin.com/A86yHnzz)
- David - Updates (http://pastebin.com/wjtcuVSA)
- Ghislain - Jail Management (http://pastebin.com/DgH9G7p5)
***
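Returning to the bsdinstall hardening menu covered above: those options map to ordinary FreeBSD sysctl knobs. Here is a small C sketch that queries a few of them with sysctlbyname(3); the specific names are my assumption based on FreeBSD's security.bsd tree, so verify them against `sysctl -a` on your release.

```c
#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdio.h>

/*
 * Read a few of the FreeBSD hardening sysctls that the new bsdinstall
 * menu toggles. Names are assumptions; check `sysctl -a` on your system.
 */
static void
show(const char *name)
{
	int value;
	size_t len = sizeof(value);

	if (sysctlbyname(name, &value, &len, NULL, 0) == -1)
		printf("%-42s (not present)\n", name);
	else
		printf("%-42s %d\n", name, value);
}

int
main(void)
{
	show("security.bsd.see_other_uids");           /* hide other users' processes */
	show("security.bsd.unprivileged_read_msgbuf"); /* kernel message buffer access */
	show("security.bsd.unprivileged_proc_debug");  /* debugging by unprivileged users */
	show("kern.randompid");                        /* PID randomization */
	return 0;
}
```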

The New Stack Analysts
#102: How Joyent Manta Aims to Simplify Object Storage at Scale

The New Stack Analysts

Play Episode Listen Later Aug 9, 2016 38:35


In this episode of The New Stack Analysts, we dive into the pitfalls that massive amounts of data present to enterprises today, why Joyent built its own facility called Thoth to automate its crash dump management, and how computing on the data at its source is crucial for getting the most out of it when working at scale. The New Stack Founder Alex Williams spoke with Joyent CTO Bryan Cantrill to hear his thoughts on these topics and more. Learn more at: https://thenewstack.io/tns-analysts-show-101-joyent-manta-aims-simplify-object-storage-scale/ Watch on YouTube: https://www.youtube.com/watch?v=DkP2sdTsYa0

The New Stack Analysts
#97: Security Must be a Top Priority with Container Deployments

The New Stack Analysts

Play Episode Listen Later Jul 12, 2016 50:03


In this episode of The New Stack Analysts, we delve into a discussion surrounding the security of containers at scale, how hardware virtualization impacts container security, and the ways in which Joyent has contributed to the ongoing use of containers in production. The New Stack Founder Alex Williams teamed up with co-host and ebook editor Benjamin Ball to speak with Joyent CTO Bryan Cantrill to hear his thoughts on these issues and more. Learn more at: https://thenewstack.io/tns-analysts-show-97-departing-container-island-joyent/

AppCraft Podcast
7. Adás - WWDC, avagy kíváncsian várjuk

AppCraft Podcast

Play Episode Listen Later Jun 25, 2016 96:36


In today's episode we discuss how András, after an instructive visit to Asia, went over to the dark side and swapped his Windows phone for Android. Then come a few shorter news items: Microsoft bought LinkedIn, Samsung bought Joyent, and foldable phones are on the way. Gábor sums up the PWA Summit, and we talk about Android apps running on Chromebooks. Finally, we discuss in depth whether Apple's developer conference was really as uneventful as it seemed at first glance. Enjoy!

REACTIVE
43: npm install space-suit

REACTIVE

Play Episode Listen Later Jun 23, 2016 57:12


Kahlil is back and has HUGE news! NASA is using node. We speculate about why Samsung bought Joyent. We talk about politics a bit and what to do about gun violence. Kahlil gives a shoutout to Browserify and Raquel gives us an update on her React learnings.

Vancouver Tech Podcast
Episode 31: Microsoft

Vancouver Tech Podcast

Play Episode Listen Later Jun 20, 2016 75:48


Drew and James open the show with a plug for a cool job, a short discussion on a recent company acquisition, and some reflection on a special event that they went to on Friday. This week's guest is Kenneth Auchenberg, a Program Manager at Microsoft.

Daily
#956 Miscelánea

Daily

Play Episode Listen Later Jun 17, 2016 12:29


A miscellany Friday where we circle back to some of the announcements from WWDC. We also revisit Microsoft's acquisition of LinkedIn, which leads us to comment on Samsung's purchase of Joyent (not to be confused with Joylent). And we finish with the latest news about Twyp. We await your comments at http://emilcar.fm

BSD Now
128: The State of BSD

BSD Now

Play Episode Listen Later Feb 10, 2016 90:14


This week on BSD Now, we interview Nick Wolff about how FreeBSD is used across the State of Ohio and some of the specific technology used. That, plus the latest news, is coming your way right now on BSD Now, the place to B.. SD. This episode was brought to you by
***
Headlines

Doc like an Egyptian: Managing project documentation with Sphinx (https://opensource.com/business/16/1/scale-14x-interview-dru-lavigne)

In case you didn't make it out to SCALE a few weeks back, we have a great interview with Dru Lavigne over at OpenSource.com which goes over her talk on “Doc like an Egyptian”. In particular, she discusses the challenges of running a documentation wiki for PC-BSD and FreeNAS, which prompted the shift to using Sphinx instead.

“While the main purpose of a wiki is to invite user contributions and to provide a low barrier to entry, very few people come to write documentation (however, every spambot on the planet will quickly find your wiki, which creates its own set of maintenance issues). Wikis are designed for separate, one-ish page infobytes, such as how-tos. They really aren't designed to provide navigation in a Table of Contents or to provide a flow of Chapters, though you can hack your pages to provide navigational elements to match the document's flow. This gets more difficult as the document increases in size—our guides tend to be 300+ pages. It becomes a nightmare as you try to provide versioned copies of each of those pages so that the user is finding and reading the right page for their version of software. While wiki translation extensions are available, how to configure them is not well documented, their use is slow and clunky, and translated pages only increase the number of available pages, getting you back to the problems in the previous bullet. This is a big deal for projects that have a global audience. While output-generation wiki extensions are available (for example, to convert your wiki pages to HTML or PDF), how to configure them is not well documented, and they provide very little control for the layout of the generated format. This is a big deal for projects that need to make their documentation available in multiple formats.”

She then discusses some of the hurdles of migrating from the wiki to Sphinx, and follows up with some of the differences in using Sphinx you should be aware of for any documentation project. “While Sphinx is easy to learn, it does have its quirks. For example, it does not support stacked tags. This means, for example, you can not bold italic a phrase using tags—to achieve that requires a CSS workaround. And, while Sphinx does have extensive documentation, a lot of it assumes you already know what you are doing. When you don't, it can be difficult to find an example that does what you are trying to achieve. Sphinx is well suited for projects with an existing repository—say, on github—a build infrastructure, and contributors who are comfortable with using text editors and committing to the repo (or creating, say, git pull requests).”
***
Initial FreeBSD RISC-V architecture port committed (http://freebsdfoundation.blogspot.com/2016/02/initial-freebsd-risc-v-architecture.html)

Touching on a story we mentioned a few weeks back, we have a blog post from Annie over at the FreeBSD Foundation talking about the details behind the initial support for RISC-V. To start us off, you may be wondering what RISC-V is and what makes it special. RISC-V is an exciting new open-source Instruction Set Architecture (ISA) developed at the University of California at Berkeley, which is seeing increasing interest in the embedded systems and hardware-software research communities. Currently the improvements allow booting FreeBSD in the Spike simulator from Berkeley, with enough reliability to do various things, such as SSH, shell, mail, etc. The next steps include getting multi-core support working, and getting simulations of Cambridge's open-source lowRISC system-on-chip working and ready for early hardware. Both ports and packages are expected to land in the coming days, so if you love hacking on brand-new architectures, this may be your time to jump in.
***
FreeBSD bhyve hypervisor supporting Windows UEFI guests (https://svnweb.freebsd.org/base?view=revision&revision=295124)

If you have not been following bhyve lately, you're in for a treat when FreeBSD 10.3 ships in the coming weeks. bhyve now supports UEFI and CSM booting, in addition to its existing FreeBSD userboot loader and grub-bhyve port. The EFI support allows Windows guests to be run on FreeBSD. Due to the lack of graphics, this requires making a custom .iso to do an “Unattended Install” of Windows, but this is easily done by just editing and including a .xml file.

Other changes: the bootrom can now allocate memory; some SATA command emulations (no-op) were added; the number of virtio-blk indirect descriptors was increased; a firmware guest query interface was added; and there is a new -l option to specify the userboot path.

FreeBSD bhyve hypervisor running Windows Server 2012 R2 Standard (https://jameslodge.com/freebsd-bhyve-hypervisor-running-windows-server-2012-r2-standard/)

In related news, TidalScale officially released their product today (http://www.prnewswire.com/news-releases/tidalscale-releases-its-system-scaling-hyperkernel-300216105.html). TidalScale is a commercial product based on bhyve that allows multiple physical machines to be combined into a single massive virtual machine, with the combined processor power, memory, disk I/O, and network capacity of all of the machines.
***
FreeBSD, TACACS+, GNS3 and a Cisco 3700 router (http://www.unixmen.com/freebsd-tacacs-gns3-and-cisco-3700-router/)

“TACACS+ (Terminal Access Controller Access Control System plus) is a session protocol developed by Cisco.” This tutorial covers configuring FreeBSD and the tac_plus4 port to act as an authentication, authorization, and accounting server for Cisco routers. The configuration of FreeBSD, the software, and the router are all covered. It also includes how to set the FreeBSD server up as a VM on Windows and bridge it to the network. I am sure there are some network administrators out there that would appreciate this.
***
Interview - Nick Wolff - darkfiberiru@gmail.com (mailto:darkfiberiru@gmail.com) / @darkfiberiru (https://twitter.com/darkfiberiru)
***
News Roundup

Papers We Love presents: Bryan Cantrill on Jails & Solaris Zones (http://lists.nycbug.org/pipermail/talk/2016-February/016495.html)

The folks over at NYCBug point us to “Papers We Love”, a New York based meetup group where past papers are presented. They have a talk scheduled for tomorrow (Feb 11th) with Bryan Cantrill discussing Jails and Solaris Zones. The talk starts at 7PM at the Tumblr building, located between 5th and Park Ave South on 21st Street.

“We're crazy excited to have Bryan Cantrill, CTO of Joyent, formerly of Sun Microsystems, presenting on Jails: Confining the omnipotent root (https://us-east.manta.joyent.com/bcantrill/public/ppwl-cantrill-jails.pdf) by Poul-Henning Kamp and Robert Watson, and Solaris Zones: Operating System Support for Consolidating Commercial Workloads (https://us-east.manta.joyent.com/bcantrill/public/ppwl-cantrill-zones.pdf) by Dan Price and Andy Tucker!”

The abstract posted gives us a sneak peek of what to expect: first covering jails as a method to “partition” the operating system environment while maintaining the UNIX “root” model, then comparing and contrasting with the Solaris Zones functionality, which creates virtualized application execution environments within the single OS instance. Sounds like a fantastic talk; hopefully somebody remembers to record it and post it for us to enjoy later! There will not be a live stream, but a video of the event should appear online after it has been edited. (A tiny jail(2) sketch follows these notes.)
***
FreeBSD Storage Summit (https://wiki.freebsd.org/201602StorageSummit)

The FreeBSD Foundation will be hosting a Storage Summit, co-located at the USENIX FAST (Filesystems And Storage Technology) conference. Developers and vendors are invited to work on storage related issues. This summit will be a hackathon-focused event, rather than a discussion-focused devsummit. After setup and introductions, the summit will start with a “Networking Synergies” panel to discuss networking as it relates to storage. After a short break, the attendees will break up into a number of working groups focused on solving actual problems. The current working groups include:

- CAM Scheduling & Locking, led by Justin Gibbs: “Updating CAM queuing/scheduling and locking models to minimize cross-cpu contention and support multi-queue controllers”
- ZFS, led by Matt Ahrens: topics will include enabling the new cryptographic hashes supported by OpenZFS on FreeBSD, interaction with the kernel memory subsystem, and other upcoming features
- User Space Storage Stack, led by George Neville-Neil

This event offers a unique opportunity for developers and vendors from the storage industry to meet at an event they will likely already be attending.
***
Tor Browser 5.5 for OpenBSD/amd64 -current is completed (http://lists.nycbug.org/pipermail/talk/2016-February/016514.html)

“The Tor BSD Diversity Project (TDP) is proud to announce the release of Tor Browser (TB) version 5.5 for OpenBSD. Please note that this version of TB remains in development mode, and is not meant to ensure strong privacy, anonymity or security.”

“TDP (https://torbsd.github.io) is an effort to extend the use of the BSD Unixes into the Tor ecosystem, from the desktop to the network. TDP is focused on diversifying the Tor network, with TB being the flagship project. Additional efforts are made to increase the number of *BSD relays on the Tor network among other sub-projects.” Help test the new browser bundle, or help diversify the Tor network.
***
“FreeBSD Mastery: Advanced ZFS” table of contents (http://blather.michaelwlucas.com/archives/2548)

We brought you the news about sponsoring the Advanced ZFS book that MWL is working on; now Michael has given us the tentative chapter layout of the (sure to be a classic) tome coming from him and Allan:

0: Introduction
1: Boot Environments
2: Delegation and Jails
3: Sharing
4: Replication
5: zvols
6: Advanced Hardware
7: Caches
8: Performance
9: Tuning
10: ZFS Potpourri

In addition to the tease about the upcoming book, Michael has asked the community for assistance in coming up with the cover art for it as well. In particular, it should probably be in line with his previous works, with a parody of some other classic artwork. If you have something, tweet it to him at @mwlauthor.
***
Beastie Bits

- Online registration for AsiaBSDCon 2016 now open SOON (https://2016.asiabsdcon.org/index.html.en)
- BhyveCon 2016 (http://bhyvecon.org/)
- NYC*BUG shell-fu talk slides (http://www.nycbug.org/index.cgi?action=view&id=10640)
- Possible regression in DragonFly i915 graphics on older Core2Duos (http://lists.dragonflybsd.org/pipermail/users/2016-February/228597.html)
- Videos from FOSDEM 2016. BSD dev room was k4601 (http://video.fosdem.org/2016/)

Feedback/Questions

- Andrew - SMART Tests (http://slexy.org/view/s2F39XEu9w)
- JT - Secure File Delete (http://slexy.org/view/s20kk6lzc9)
- Jordan - Migrate (http://slexy.org/view/s21zjZ0ci8)
- Lars - Pros and Cons of VM (http://slexy.org/view/s2Hqbt0Uq8)
- Alex - IPSEC (http://slexy.org/view/s2HnO1hxSO)
***
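For a taste of the first paper Bryan is presenting, here is a minimal, hedged C sketch of FreeBSD's classic jail(2) interface, which confines the calling process to a directory subtree, hostname and address. It is illustrative only: real systems use jail(8) and the newer jail_set(2) interface, the jail root must already be populated, and error handling is abbreviated.

```c
#include <sys/param.h>
#include <sys/jail.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <unistd.h>

/*
 * Minimal sketch of jail(2): create a jail rooted at /jail with one
 * IPv4 address and imprison the current process in it. Must run as
 * root, with /jail containing at least /bin/sh.
 */
int
main(void)
{
	struct in_addr ip4;
	struct jail j = {
		.version = JAIL_API_VERSION,
		.path = "/jail",        /* new root, as with chroot(2) */
		.hostname = "demo.jail",
		.jailname = "demo",
		.ip4s = 1,
		.ip4 = &ip4,
	};

	inet_pton(AF_INET, "192.0.2.10", &ip4); /* example address */

	int jid = jail(&j);     /* create the jail and enter it */
	if (jid == -1) {
		perror("jail");
		return 1;
	}
	printf("now confined in jail %d\n", jid);
	return execl("/bin/sh", "sh", NULL);
}
```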

The New Stack Analysts
#66: Champagne Wishes and DockerCon Dreams

The New Stack Analysts

Play Episode Listen Later Dec 1, 2015 46:48


On the eve of DockerConEU 2015, a collection of intrepid souls seeking community and adventure convened at a Mediterranean gastrobar nestled in Les Corts along Barcelona's Avinguda Diagonal. On that night at Metric Market, in the multifunction basement space called “The Bunker,” which is dominated by a beautiful sofa, The New Stack founder Alex Williams and our friends at Container Solutions hosted this eclectic group for a “champagne podcast,” perhaps the first of its kind ever in the vast, silicon sweep of digital history. As you listen, perhaps after refreshing your own secondarily fermented and chilled beverage, imagine this crowd, if you will, to be code slam poets riffing metaphoric on the zoology of distributed infrastructure, or grizzled prospectors huddled bent-kneed around a promising lode of microservices interconnectivity, and enjoy. Captured in all of its Catalonian splendor, the podcast features Container Solutions' CTO Pini Reznik and Chief Scientist Adrien Mouat, as well as Casey Bisson, Product Manager at Joyent, and Mark Coleman, the CEO of Implicit Explicit, the company who, along with Container Solutions, runs the Software Circus conference. Also contributing were numerous attendees who each revealed what brought them to DockerCon in Spain, and who shared their stories with each other, and with us, for this installment of The New Stack Analysts. Watch on YouTube: https://www.youtube.com/watch?v=Z-p_DT4wkt4 Learn more at: https://thenewstack.io/tns-analysts-show-66-champagne-podcast-wishes-dockercon-dreams/

Greatest Hits – Software Engineering Daily
Containers with Bryan Cantrill from Joyent

Greatest Hits – Software Engineering Daily

Play Episode Listen Later Aug 25, 2015 55:06


Container infrastructure has the benefits of security, scalability and efficiency. Containers are a central component of the DevOps movement, and Joyent provides simple, secure deployment of containers with bare metal speed on container-native infrastructure. Bryan Cantrill is the CTO of Joyent, the father of DTrace and an OS kernel developer for 20 years. Questions: Why are containers…

Cloud Engineering – Software Engineering Daily
Containers with Bryan Cantrill from Joyent

Cloud Engineering – Software Engineering Daily

Play Episode Listen Later Aug 25, 2015 55:06


Container infrastructure has the benefits of security, scalability and efficiency. Containers are a central component of the DevOps movement, and Joyent provides simple, secure deployment of containers with bare metal speed on container-native infrastructure. Bryan Cantrill is the CTO of Joyent, the father of DTrace and an OS kernel developer for 20 years. Questions: Why are containers…

BSD Now
103: Ubuntu Slaughters Kittens

BSD Now

Play Episode Listen Later Aug 19, 2015 120:27


Allan's away at BSDCam this week, but we've still got an exciting episode for you. We sat down with Bryan Cantrill, CTO of Joyent, to talk about a wide variety of topics: dtrace, ZFS, pkgsrc, containers and much more. This is easily our longest interview to date! This episode was brought to you by
***
Interview - Bryan Cantrill - bryan@joyent.com (mailto:bryan@joyent.com) / @bcantrill (https://twitter.com/bcantrill)
BSD and Solaris history, illumos, dtrace, Joyent, pkgsrc, various topics (and rants)
***
Feedback/Questions

- Randy writes in (http://slexy.org/view/s2b6dA7fAr)
- Jared writes in (http://slexy.org/view/s2vABMHiok)
- Steve writes in (http://slexy.org/view/s2194ADVUL)
***

BSD Now
100: Straight from the Src

BSD Now

Play Episode Listen Later Jul 29, 2015 73:39


We've finally reached a hundred episodes, and this week we'll be talking to Sebastian Wiedenroth about pkgsrc. Though originally a NetBSD project, it now runs pretty much everywhere, and he even runs a conference about it! This episode was brought to you by
***
Headlines

Remote DoS in the TCP stack (https://blog.team-cymru.org/2015/07/another-day-another-patch/)

A pretty devious bug in the BSD network stack has been making its rounds for a while now, allowing remote attackers to exhaust the resources of a system with nothing more than TCP connections. While in the LAST_ACK state, which is one of the final stages of a connection's lifetime, the connection can get stuck and hang there indefinitely. This problem has a slightly confusing history that involves different fixes at different points in time from different people:

- Juniper originally discovered the bug and announced a fix (https://kb.juniper.net/InfoCenter/index?page=content&id=JSA10686) for their proprietary networking gear on June 8th.
- On June 29th, FreeBSD caught wind of it and fixed the bug in their -current branch (https://svnweb.freebsd.org/base/head/sys/netinet/tcp_output.c?view=patch&r1=284941&r2=284940&pathrev=284941), but did not issue a security notice or MFC the fix back to the -stable branches.
- On July 13th, two weeks later, OpenBSD fixed the issue (https://www.marc.info/?l=openbsd-cvs&m=143682919807388&w=2) in their -current branch with a slightly different patch, citing the FreeBSD revision from which the problem was found. Immediately afterwards, they merged it back to -stable and issued an errata notice (http://ftp.openbsd.org/pub/OpenBSD/patches/5.7/common/010_tcp_persist.patch.sig) for 5.7 and 5.6.
- On July 21st, three weeks after their original fix, FreeBSD committed yet another slightly different fix (https://svnweb.freebsd.org/base/head/sys/netinet/tcp_output.c?view=patch&r1=285777&r2=285776&pathrev=285777) and issued a security notice (https://lists.freebsd.org/pipermail/freebsd-announce/2015-July/001655.html) for the problem (which didn't include the first fix).
- After the second fix from FreeBSD, OpenBSD gave them both another look and found their single fix to be sufficient, covering the timer issue in a more general way.
- NetBSD confirmed they were vulnerable too, and applied another completely different fix (http://cvsweb.netbsd.org/bsdweb.cgi/src/sys/netinet/tcp_output.c.diff?r1=1.183&r2=1.184&only_with_tag=MAIN) to -current on July 24th, but haven't released a security notice yet.
- DragonFly is also investigating the issue now to see if they're affected as well.
***
c2k15 hackathon reports (http://undeadly.org/cgi?action=article&sid=20150721180312&mode=flat)

Reports from OpenBSD's latest hackathon (http://www.openbsd.org/hackathons.html), held in Calgary this time, are starting to roll in (there were over 40 devs there, so we might see a lot more of these).

The first one, from Ingo Schwarze, talks about some of the mandoc work he did at the event. He writes: “Did you ever look at a huge page in man, wanted to jump to the definition of a specific term - say, in ksh, to the definition of the "command" built-in command - and had to step through dozens of false positives with the less '/' and 'n' search keys before you finally found the actual definition?” With mandoc's new internal jump targets, this is a problem of the past.

Jasper also sent in a report (http://undeadly.org/cgi?action=article&sid=20150723124332&mode=flat), doing his usual work with Puppet (and specifically “Facter”, a tool used by Puppet to gather various bits of system information). Aside from that and various ports-related work, Jasper worked on adding tame support to some userland tools, fixing some Octeon stuff and introducing something that OpenBSD has oddly lacked until now: an “-i” flag for sed (hooray!).

Antoine Jacoutot gave a report (http://undeadly.org/cgi?action=article&sid=20150722205349&mode=flat) on what he did at the hackathon as well, including improvements to the rcctl tool (for configuring startup services). It now has an “ls” subcommand with status parsing, allowing you to list running services, stopped services or even ones that failed to start or are supposed to be running (he calls this “the poor man's service monitoring tool”). He also reworked some of the rc.d system to allow smoother operation of multiple instances of the same daemon (using tor with different config files as an example). His list also included updating ports, updating ports documentation, updating the hotplug daemon and laying out some plans for automatic sysmerge for future upgrades.

Foundation director Ken Westerback was also there (http://undeadly.org/cgi?action=article&sid=20150722105658&mode=flat), getting some disk-related and laptop work done. He cleaned up and committed the 4k sector softraid code that he'd been working on, as well as fixing some trackpad issues.

Stefan Sperling, OpenBSD's token “wireless guy”, had a lot to say (http://undeadly.org/cgi?action=article&sid=20150722182236&mode=flat) about the hackathon and what he did there (and even sent in his write-up before he got home). He taught tcpdump about some new things, including 802.11n metadata beacons (there's a lot more specific detail about this one in the report). Bringing a bag full of USB wireless devices with him, he set out to get the unsupported ones working, as well as fix some driver bugs in the ones that already did work. One quote from Stefan's report that a lot of people seem to be talking about: “Partway through the hackathon tedu proposed an old diff of his to make our base ls utility display multi-byte characters. This led to a long discussion about how to expand UTF-8 support in base. The conclusion so far indicates that single-byte locales (such as ISO-8859-1 and KOI-8) will be removed from the base OS after the 5.8 release is cut. This simplifies things because the whole system only has to care about a single character encoding. We'll then have a full release cycle to bring UTF-8 support to more base system utilities such as vi, ksh, and mg. To help with this plan, I started organizing a UTF-8-focused hackathon for some time later this year.”

Jeremy Evans wrote in (http://undeadly.org/cgi?action=article&sid=20150725180527&mode=flat) to talk about updating lots of ports, moving the ruby ports up to the latest version and also creating perl and ruby wrappers for the new tame subsystem. While he's mainly a ports guy, he got to commit fixes to ports, the base system and even the kernel during the hackathon.

Rafael Zalamena, who got commit access at the event, gives his very first report (http://undeadly.org/cgi?action=article&sid=20150725183439&mode=flat) on his networking-related hackathon activities. With Rafael's diffs and help from a couple of other developers, OpenBSD now has support for VPLS (https://en.wikipedia.org/wiki/Virtual_Private_LAN_Service).

Jonathan Gray got a lot done (http://undeadly.org/cgi?action=article&sid=20150728184743&mode=flat) in the area of graphics, working on OpenGL and Mesa, updating libdrm and even working with upstream projects to remove some GNU-specific code. As he's become somewhat known for, Jonathan was also busy running three things in the background: clang's fuzzer, cppcheck and AFL (looking for any potential crashes to fix).

Martin Pieuchot gave a write-up (http://undeadly.org/cgi?action=article&sid=20150724183210&mode=flat) on his experience: “I always thought that hackathons were the best place to write code, but what's even more important is that they are the best (well actually only) moment where one can discuss and coordinate projects with other developers IRL. And that's what I did.” He laid out some plans for the wireless stack, discussed future plans for PF, made some routing table improvements and did various other bits to the network stack. Unfortunately, most of Martin's secret plans seem to have been left intentionally vague, and will start to take form in the next release cycle.

We're still eagerly awaiting a report from one of OpenBSD's newest developers (https://twitter.com/phessler/status/623291827878137856), Alexandr Nedvedicky (the Oracle guy who's working on SMP PF and some other PF fixes). OpenBSD 5.8's “beta” status was recently reverted, with the message “take that as a hint (https://www.marc.info/?l=openbsd-cvs&m=143766883514831&w=2)”, so that may mean more big changes are still to come...
***
FreeBSD quarterly status report (https://www.freebsd.org/news/status/report-2015-04-2015-06.html)

FreeBSD has published their quarterly status report for the months of April to June, noting it is the largest one so far. It's broken down into a number of sections: team reports, projects, kernel, architectures, userland programs, ports, documentation, Google Summer of Code and miscellaneous others.

Starting off with the cluster admin: some machines were moved to the datacenter at New York Internet, email services are now more resilient to failure, the svn mirrors (now just “svn.freebsd.org”) are using GeoDNS with official SSL certs, and general redundancy was increased.

In the release engineering space, ARM and ARM64 work continues to improve on the Cavium ThunderX, more focus is being put into cloud platforms, and the 10.2-RELEASE cycle is reaching its final stages. The core team has been working on Phabricator, the fancy review system, and is considering integrating OAuth support soon. Work also continues on bhyve, and more operating systems are slowly gaining support (including the much-rumored Windows Server 2012).

The report also covers recent developments in the Linux emulation layer, and encourages people using 11-CURRENT to help test out the 64-bit support. Multipath TCP was also a hot topic, and there's a brief summary of the current status of that patch (it will be available publicly soon). ZFSguru, a project we haven't talked about a lot, also gets some attention in the report - version 0.3 is set to be completed in early August. PCIe hotplug support is also mentioned, though it's still in the development stages (basic hot-swap functions are working though). The official binary packages are now built more frequently than before with the help of additional hardware, so AMD64 and i386 users will have fresher ports without the need for compiling. Various other small updates on specific areas of ports (KDE, XFCE, X11...) are also included in the report.

Documentation is a strong focus as always: a number of new documentation committers were added and some of the translations have been improved a lot. Many other topics were covered, including foundation updates, conference plans, pkgsrc support in pkgng, ZFS support for UEFI boot and much more.
***
The OpenSSH bug that wasn't (http://bsdly.blogspot.com/2015/07/the-openssh-bug-that-wasnt.html)

There's been a lot of discussion (https://www.marc.info/?t=143766048000005&r=1&w=2) about a supposed flaw (https://kingcope.wordpress.com/2015/07/16/openssh-keyboard-interactive-authentication-brute-force-vulnerability-maxauthtries-bypass/) in OpenSSH, allowing attackers to substantially amplify the number of password attempts they can try per session (without leaving any abnormal log traces, even). There's no actual exploit to speak of; this bug would only help someone get more bruteforce tries in with fewer connections (https://lists.mindrot.org/pipermail/openssh-unix-dev/2015-July/034209.html).

FreeBSD in its default configuration, with PAM (https://en.wikipedia.org/wiki/Pluggable_authentication_module) and ChallengeResponseAuthentication enabled, was the only one vulnerable to the problem - not upstream OpenSSH (https://www.marc.info/?l=openbsd-misc&m=143767296016252&w=2), nor any of the other BSDs, and not even the majority of Linux distros. If you disable all forms of authentication except public keys, like you're supposed to (https://stribika.github.io/2015/01/04/secure-secure-shell.html), then this is also not a big deal for FreeBSD systems. Realistically speaking, it's more of a PAM bug (https://www.marc.info/?l=openbsd-misc&m=143782167322500&w=2) than anything else. OpenSSH added an additional check (https://anongit.mindrot.org/openssh.git/patch/?id=5b64f85bb811246c59ebab) for this type of setup that will be in 7.0, but simply changing your sshd_config is enough to mitigate the issue for now on FreeBSD (or you can run freebsd-update (https://lists.freebsd.org/pipermail/freebsd-security-notifications/2015-July/000248.html)).
***
Interview - Sebastian Wiedenroth - wiedi@netbsd.org (mailto:wiedi@netbsd.org) / @wied0r (https://twitter.com/wied0r)
pkgsrc (https://en.wikipedia.org/wiki/Pkgsrc) and pkgsrcCon (http://pkgsrc.org/pkgsrcCon/)
***
News Roundup

Now served by OpenBSD (https://tribaal.io/this-now-served-by-openbsd.html)

We've mentioned before that you can also install OpenBSD on DO droplets, and this blog post is about someone who actually did it. The use case for the author was a webserver, so he decided to try out the httpd in base. Configuration is ridiculously simple, and the config file in his example provides an HTTPS-only webserver, with plaintext requests automatically redirecting. TLS 1.2 by default, strong ciphers with LibreSSL, and HSTS (https://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security) combined give you a pretty secure web server.
***
FreeBSD laptop playbooks (https://github.com/sean-/freebsd-laptops)

A new project has started up on Github for configuring FreeBSD on various laptops, unsurprisingly named “freebsd-laptops”. It's based on ansible, and uses the playbook format for automatic setup and configuration. Right now, it's only working on a single Lenovo laptop, but the plan is to add instructions for many more models. Check the Github page for instructions on how to get started, and maybe get involved if you're running FreeBSD on a laptop.
***
NetBSD on the NVIDIA Jetson TK1 (https://blog.netbsd.org/tnf/entry/netbsd_on_the_nvidia_jetson)

If you've never heard of the Jetson TK1 (https://developer.nvidia.com/jetson-tk1), we can go ahead and spoil the secret here: NetBSD runs on it. As for the specs, it has a quad-core ARMv7 CPU at 2.3GHz, 2 gigs of RAM, gigabit ethernet, SATA, HDMI and mini-PCIE. This blog post shows which parts of the board are working with NetBSD -current (which seems to be almost everything). You can even run X11 on it, pretty sweet.
***
DragonFly power management options (http://lists.dragonflybsd.org/pipermail/users/2015-July/207911.html)

DragonFly developer Sepherosa, who we've had on the show, has been doing some ACPI work over there. In this email, he presents some of DragonFly's different power management options: ACPI P-states, C-states, mwait C-states and some Intel-specific bits as well. He also did some testing with each of them and gave his findings about power saving. If you've been thinking about running DragonFly on a laptop, this would be a good one to read.
***
OpenBSD router under FreeBSD bhyve (https://www.quernus.co.uk/2015/07/27/openbsd-as-freebsd-router/)

If one BSD just isn't enough for you, and you've only got one machine, why not run two at once? This article talks about taking a FreeBSD server running bhyve and making a virtualized OpenBSD router with it. If you've been considering switching over your router at home or the office, doing it in a virtual machine is a good way to test the waters before committing to real hardware. The author also includes a little bit of history on how he got into both operating systems. There are lots of mixed opinions about virtualizing core network components, so we'll leave it up to you to do your research. Of course, the next logical step is to put that bhyve host under Xen on NetBSD...
***
Feedback/Questions

- Kevin writes in (http://slexy.org/view/s2yPVV5Wyp)
- Logan writes in (http://slexy.org/view/s21zcz9rut)
- Peter writes in (http://slexy.org/view/s21CRmiPwK)
- Randy writes in (http://slexy.org/view/s211zfIXff)
***

BSD Now
97: Big Network, SmallWall

BSD Now

Play Episode Listen Later Jul 8, 2015 78:20


Coming up this time on the show, we'll be chatting with Lee Sharp. He's recently revived the m0n0wall codebase, now known as SmallWall, and we'll find out what the future holds for this new addition to the BSD family. Answers to your emails and all this week's news, on BSD Now - the place to B.. SD. This episode was brought to you by
***
Headlines

BSDCan and pkgsrcCon videos (https://www.youtube.com/channel/UCAEx6zhR2sD2pAGKezasAjA/videos)

Even more BSDCan 2015 videos are slowly but surely making their way to the internet:

- Nigel Williams, Multipath TCP for FreeBSD (https://www.youtube.com/watch?v=P3vB_FWtyIs)
- Stephen Bourne, Early days of Unix and design of sh (https://www.youtube.com/watch?v=2kEJoWfobpA)
- John Criswell, Protecting FreeBSD with Secure Virtual Architecture (https://www.youtube.com/watch?v=hRIC_aF_u24)
- Shany Michaely, Expanding RDMA capability over Ethernet in FreeBSD (https://www.youtube.com/watch?v=stsaeKvF3no)
- John-Mark Gurney, Adding AES-ICM and AES-GCM to OpenCrypto (https://www.youtube.com/watch?v=JaufZ7yCrLU)
- Sevan Janiyan, Adventures in building (https://www.youtube.com/watch?v=-HMXyzybgdM) open source software (https://www.youtube.com/watch?v=Xof-uKnQ6cY)
- And finally, the BSDCan 2015 closing (https://www.youtube.com/watch?v=Ynm0bGnYdfY)

Some videos (https://vimeo.com/channels/pkgsrccon/videos) from this year's pkgsrcCon (http://pkgsrc.org/pkgsrcCon/2015/) are also starting to appear online:

- Sevan Janiyan, A year of pkgsrc 2014 - 2015 (https://vimeo.com/channels/pkgsrccon/132767946)
- Pierre Pronchery, pkgsrc meets pkg-ng (https://vimeo.com/channels/pkgsrccon/132766052)
- Jonathan Perkin, pkgsrc at Joyent (https://vimeo.com/channels/pkgsrccon/132760863)
- Jörg Sonnenberger, pkg_install script framework (https://vimeo.com/channels/pkgsrccon/132757658)
- Benny Siegert, New Features in BulkTracker (https://vimeo.com/channels/pkgsrccon/132751897)

This is the first time we've ever seen recordings from the conference - hopefully they continue this trend.
***
OPNsense 15.7 released (https://forum.opnsense.org/index.php?topic=839.0)

The OPNsense team has released version 15.7, almost exactly six months after their initial debut (http://www.bsdnow.tv/episodes/2015_01_14-common_sense_approach). In addition to pulling in the latest security fixes from upstream FreeBSD, 15.7 also includes new integration of an intrusion detection system (and a new GUI for it), as well as new blacklisting options for the proxy server. Taking a note from upstream PF's playbook, ALTQ traffic shaping support has finally been retired as of this release (it was deprecated from OpenBSD a few years ago, and the code was completely removed (http://undeadly.org/cgi?action=article&sid=20140419151959) just over a year ago). The LibreSSL flavor has been promoted to production-ready, and users can easily migrate over from OpenSSL via the GUI - switching between the two is simple; no commitment needed. Various third party ports have also been bumped up to their latest versions to keep things fresh, and there's the usual round of bug fixes included. Shortly afterwards, 15.7.1 (https://forum.opnsense.org/index.php?topic=915.0) was released with a few more small fixes.
***
NetBSD at Open Source Conference 2015 Okinawa (https://mail-index.netbsd.org/netbsd-advocacy/2015/07/04/msg000688.html)

If you liked last week's episode (http://www.bsdnow.tv/episodes/2015_07_01-lost_technology) then you'll probably know what to expect with this one. The NetBSD users group of Japan hit another open source conference, this time in Okinawa. This time, they had a few interesting NetBSD machines on display that we didn't get to see in the interview last week. We'd love to see something like this in North America or Europe too - anyone up for installing BSD on some interesting devices and showing them off at a Linux con?
***
OpenBSD BGP and VRFs (http://firstyear.id.au/entry/21)

“VRFs (https://en.wikipedia.org/wiki/Virtual_routing_and_forwarding), or in OpenBSD rdomains, are a simple, yet powerful (and sometimes confusing) topic.” This article aims to explain both BGP and rdomains, using network diagrams, for some network isolation goodness. With multiple rdomains, it's also possible to have two upstream internet connections, but lock different groups of your internal network to just one of them. The idea of a “guest network” can greatly benefit from this separation as well, even allowing the same IP ranges to be used without issues. Combining rdomains with the BGP protocol allows for some very selective and precise blocking/passing of traffic between networks, which is also covered in detail here. The BSDCan talk on rdomains (https://www.youtube.com/watch?v=BizrC8Zr-YY) expands on the subject a bit more if you haven't seen it, as well as a few related (https://www.packetmischief.ca/2011/09/20/virtualizing-the-openbsd-routing-table/) posts (http://cybermashup.com/2013/05/21/complex-routing-with-openbsd/).
***
Interview - Lee Sharp - lee@smallwall.org (mailto:lee@smallwall.org)
SmallWall (http://smallwall.org), a continuation of m0n0wall
***
News Roundup

Solaris adopts more BSD goodies (https://blogs.oracle.com/solarisfw/entry/pf_for_solaris)

We mentioned a while back that Oracle developers have begun porting a current version of OpenBSD's PF firewall to their next version, even contributing back patches for SMP and other bug fixes. They recently published an article about PF, talking about what's different about it on their platform compared to others - not especially useful for BSD users, but interesting to read if you like firewalls. Darren Moffat, who was part of originally getting an SSH implementation into Solaris, has a second blog post (https://blogs.oracle.com/darren/entry/openssh_in_solaris_11_3) up about their “SunSSH” fork. Going forward, their next version is going to offer a completely vanilla OpenSSH option as well, with the plan being to phase out SunSSH after that. The article talks a bit about the history of getting SSH into the OS, forking the code, and also lists some of the differences between the two. In a third blog post (https://blogs.oracle.com/darren/entry/solaris_new_system_calls_getentropy), they talk about a new system call they're borrowing from OpenBSD, getentropy(2) (http://www.openbsd.org/cgi-bin/man.cgi/OpenBSD-current/man2/getentropy.2), as well as the addition of arc4random (http://www.openbsd.org/cgi-bin/man.cgi/OpenBSD-current/man3/arc4random.3) to their libc (a small sketch of these two APIs follows these notes). With an up-to-date and SMP-capable PF, ZFS with native encryption, jail-like Zones, unaltered OpenSSH and secure entropy calls… is Solaris becoming better than us? Look forward to the upcoming “Solaris Now” podcast (not really).
***
EuroBSDCon 2015 talks and tutorials (https://2015.eurobsdcon.org/talks/)

This year's EuroBSDCon is set to be held in Sweden at the beginning of October, and the preliminary list of accepted presentations has been published. The list looks pretty well-balanced between the different BSDs, something Paul would be happy to see if he was still with us. It even includes an interesting DragonFly talk and a couple of talks from NetBSD developers, in addition to plenty of FreeBSD and OpenBSD of course. There are also a few tutorials (https://2015.eurobsdcon.org/tutorials/) planned for the event, some you've probably seen already and some you haven't. Registration for the event will be opening very soon (likely this week or next).
***
Using ZFS replication to improve offsite backups (https://www.iceflatline.com/2015/07/using-zfs-replication-features-in-freebsd-to-improve-my-offsite-backups/)

If you take backups seriously, you're probably using ZFS and probably keeping an offsite copy of the data. This article covers doing just that, but with a focus on making use of the replication capability. It'll walk you through taking a snapshot of your pool and then replicating it to another remote system, using “zfs send” and SSH - this has the benefit of only transferring the files that have changed since the last time you did it. Steps are also taken to allow a regular user to take and manage snapshots, so you don't need to be root for the SSH transfer. Data integrity is a long process - filesystem-level checksums, resistance to hardware failure, ECC memory, multiple copies in different locations... they all play a role in keeping your files secure; don't skip out on any of them. One thing the author didn't mention in his post: having an offline copy of the data, ideally sealed in a safe place, is also important.
***
Block encryption in OpenBSD (http://anadoxin.org/blog/blog/20150705/block-encryption-in-openbsd/)

We've covered (http://www.bsdnow.tv/tutorials/fde) ways to do fully-encrypted installations of OpenBSD (and FreeBSD) before, but that requires dedicating a whole drive or partition to the sensitive data. This blog post takes you through the process of creating encrypted containers in OpenBSD, à la TrueCrypt - that is, a file-backed virtual device with an encrypted filesystem. It goes through creating a file that looks like random data, pointing vnconfig at it, setting up the crypto and finally using it as a fake storage device. The encrypted container method offers the advantage of being a bit more portable across installations than other ways.
***
Docker hits FreeBSD ports (https://svnweb.freebsd.org/ports?view=revision&revision=391421)

The inevitable has happened, and an early FreeBSD port of docker is finally here. Some details and directions (https://github.com/kvasdopil/docker/blob/freebsd-compat/FREEBSD-PORTING.md) are available to read if you'd like to give it a try, as well as a list of which features work and which don't. There was also some Hacker News discussion (https://news.ycombinator.com/item?id=9840025) on the topic.
***
Microsoft donates to OpenSSH (http://undeadly.org/cgi?action=article&sid=20150708134520&mode=flat)

We've talked about big businesses using BSD and contributing back before, even mentioning a few other large public donations - now it's Microsoft's turn. With their recent decision to integrate OpenSSH into an upcoming Windows release, Microsoft has donated a large sum of money to the OpenBSD Foundation, making them a gold-level sponsor. They've also posted some contract work offers on the OpenSSH mailing list, and say that their changes will be upstreamed if appropriate - we're always glad to see this.
***
Feedback/Questions

- Joe writes in (http://slexy.org/view/s2NqbhwOoH)
- Mike writes in (http://slexy.org/view/s2T3NEia98)
- Randy writes in (http://slexy.org/view/s20RlTK6Ha)
- Tony writes in (http://slexy.org/view/s2rjCd0bGX)
- Kevin writes in (http://slexy.org/view/s21PfSIyG5)
***
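The getentropy(2) and arc4random(3) interfaces mentioned in the Solaris item above are tiny C APIs. Here is a minimal sketch of how a program consumes them on a libc that provides both (OpenBSD, FreeBSD, and now Solaris); the 256-byte limit noted in the comment is getentropy's documented maximum request size.

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int
main(void)
{
	unsigned char seed[32];

	/* getentropy() fills a small buffer (at most 256 bytes per call)
	 * with entropy straight from the kernel. */
	if (getentropy(seed, sizeof(seed)) == -1) {
		perror("getentropy");
		return 1;
	}

	/* arc4random() returns a uniformly distributed 32-bit value;
	 * the caller never seeds it and there is no error case. */
	printf("random value: %u\n", arc4random());
	return 0;
}
```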

The Changelog
The Future of Node.js

The Changelog

Play Episode Listen Later May 15, 2015 82:31


Scott Hammond, the CEO of Joyent, joined the show to talk about the history of Node, Joyent’s interest in Node, how they’ve handled the stewardship of Node over the years, their support of io.js joining the Node Foundation, and the convergence of the code bases for a stronger, more inclusive Node community. At the tail end of the show, just when you think it’s over, keep listening, because we got Scott back on the call to discuss the news that came this week of the io.js TC voting to join the Node Foundation.

Changelog Master Feed
The Future of Node.js (The Changelog #155)

Changelog Master Feed

Play Episode Listen Later May 15, 2015 82:31


Scott Hammond, the CEO of Joyent, joined the show to talk about the history of Node, Joyent’s interest in Node, how they’ve handled the stewardship of Node over the years, their support of io.js joining the Node Foundation, and the convergence of the code bases for a stronger, more inclusive Node community. At the tail end of the show, just when you think it’s over, keep listening, because we got Scott back on the call to discuss the news that came this week of the io.js TC voting to join the Node Foundation.

RadioJS
Выпуск 21: Когда io.js поглотил joyent

RadioJS

Play Episode Listen Later Apr 10, 2015


It's always nice when Russian companies release open source projects. In this episode we discuss a set of developer tools from 2GIS. In our "FRP is not just a fad" segment we discuss Observables in React. Of course, we couldn't pass up the releases of Babel 5.0 and TypeScript 1.5. And, while we're on the subject, what's going on with Dart?

BSD Now
75: From the Foundation (Part 1)

BSD Now

Play Episode Listen Later Feb 4, 2015 85:29


This week on the show, we'll be starting a two-part series detailing the activities of various BSD foundations. Ed Maste from the FreeBSD Foundation will be joining us this time, and we'll talk about what all they've been up to lately. All this week's news and answers to viewer-submitted questions, coming up on BSD Now - the place to B.. SD. This episode was brought to you by
***
Headlines

Key rotation in OpenSSH 6.8 (http://blog.djm.net.au/2015/02/key-rotation-in-openssh-68.html)

Damien Miller (http://www.bsdnow.tv/episodes/2013_12_18-cryptocrystalline) posted a new blog entry about one of the features in the upcoming OpenSSH 6.8. Times change, key types change, problems are found with old algorithms and we switch to new ones. In OpenSSH (and the SSH protocol), however, there hasn't been an easy way to rotate host keys... until now. With this change, when you connect to a server, it will log all the server's public keys in your known_hosts file, instead of just the first one used during the key exchange. Keys that are in your known_hosts file but not on the server will get automatically removed. This fixes the problem of old servers still authenticating with ancient DSA or small RSA keys, as well as providing a way for the server to rotate keys every so often. There are some instructions in the blog post for how you'll be able to rotate host keys and eventually phase out the older ones - it's really simple. There are a lot of big changes coming in OpenSSH 6.8, so we'll be sure to cover them all when it's released.
***
NetBSD Banana Pi images (https://mail-index.netbsd.org/port-arm/2015/01/30/msg002809.html)

We've talked about the Banana Pi (http://www.bananapi.org/p/product.html) a bit before - it's a small ARM board that's comparable to the popular Raspberry Pi. Some NetBSD -current images were posted on the mailing list, so now you can get some BSD action on one of these little devices. There is even a set of prebuilt pkgsrc packages, so you won't have to compile everything initially. The email includes some steps to get everything working and an overview of what comes with the image. Also check the wiki page (https://wiki.netbsd.org/ports/evbarm/allwinner/) for some related boards and further instructions on getting set up. On a related note, NetBSD also recently got GPU acceleration working (https://blog.netbsd.org/tnf/entry/raspberry_pi_gpu_acceleration_in) for the Raspberry Pi (a first for their ARM port).
***
LibreSSL shirts and other BSD goodies (https://www.marc.info/?l=openbsd-misc&m=142255048510669&w=2)

If you've been keeping up with the LibreSSL saga and want a shirt to show your support, they're finally available to buy online. There are two versions, either “keep calm and use LibreSSL (https://shop.openbsdeurope.com/images/shop_openbsdeurope_com/products/large/TSHIRTLSSL.jpg)” or the slightly more snarky “keep calm and abandon OpenSSL (https://shop.openbsdeurope.com/images/shop_openbsdeurope_com/products/large/TSHIRTOSSL.jpg)”. While on the topic, we thought it would be good to make people aware of shirts for other BSD projects too. You can get some FreeBSD, PC-BSD (https://www.freebsdmall.com/cgi-bin/fm/scan/fi=prod_bsd/se=pc-bsd) and FreeNAS stuff (https://www.freebsdmall.com/cgi-bin/fm/scan/fi=prod_bsd/se=shirts) from the FreeBSD Mall site (https://www.freebsdmall.com/cgi-bin/fm/scan/fi=prod_bsd/se=tshirt). OpenBSD recently launched their new store (https://www.openbsdstore.com), but the selection is still a bit limited right now. NetBSD has a couple of places (https://www.netbsd.org/gallery/devotionalia.html#cafepress) where you can buy shirts and other apparel with the flag logo on it. We couldn't find any DragonFlyBSD shirts unfortunately, which is a shame since their logo (http://www.dragonflybsd.org/images/small_logo.png) is pretty cool. Profits from the sale of the gear go back to the projects, so pick up some swag and support your BSD of choice (and of course wear them at any Linux events you happen to go to).
***
OPNsense 15.1.4 released (https://forum.opnsense.org/index.php?topic=35.0)

The OPNsense guys have been hard at work since we spoke to them (http://www.bsdnow.tv/episodes/2015_01_14-common_sense_approach), fixing lots of bugs and keeping everything up to date. A number of versions have come out since then, with 15.1.4 being the latest (assuming they haven't updated it again by the time this airs). This version includes the latest round of FreeBSD kernel security patches, as well as minor SSL and GUI fixes. They're doing a great job of getting upstream fixes pushed out to users quickly, a very welcome change. A developer has also posted an interesting write-up titled “Development Workflow in OPNsense (http://lastsummer.de/development-workflow-in-opnsense/)”. If any of our listeners are trying OPNsense as their gateway firewall, let us know how you like it.
***
Interview - Ed Maste - board@freebsdfoundation.org (mailto:board@freebsdfoundation.org)
The FreeBSD Foundation (https://www.freebsdfoundation.org/donate)'s activities
***
News Roundup

Rolling with OpenBSD snapshots (http://homing-on-code.blogspot.com/2015/02/rolling-with-snapshots.html)

One of the cool things about the -current branch of OpenBSD is that it doesn't require any compiling. There are signed binary snapshots being continuously re-rolled and posted on the FTP sites for every architecture. This provides an easy method to get onboard with the latest features, and you can also easily upgrade between them without reformatting or rebuilding. This blog post will walk you through the process of using snapshots to stay on the bleeding edge of OpenBSD goodness. After using -current for seven weeks, the author comes to the conclusion that it's not as unstable as people might think. He's now helping test out patches and new ports, since he's running the same code as the developers.
***
Signing pkgsrc packages (https://mail-index.netbsd.org/tech-pkg/2015/02/02/msg014224.html)

As of the time this show airs, the official pkgsrc (http://www.bsdnow.tv/tutorials/pkgsrc) packages aren't cryptographically signed. Someone from Joyent has been working on that, since they'd like to sign their pkgsrc packages for SmartOS. Using GnuPG pulled in a lot of dependencies, and they're trying to keep the bootstrapping process minimal. Instead, they're using netpgpverify, a fork of NetBSD's netpgp (https://en.wikipedia.org/wiki/Netpgp) utility. Maybe someday this will become the official way to sign packages in NetBSD?
***
FreeBSD support model changes (https://lists.freebsd.org/pipermail/freebsd-announce/2015-February/001624.html)

Starting with 11.0-RELEASE, which won't be for a few months probably, FreeBSD releases are going to have a different support model. The plan is to move “from a point release-based support model to a set of releases from a branch with a guaranteed support lifetime”. There will now be a five-year lifespan for each major release, regardless of how many minor point releases it gets. This new model should reduce the turnaround time for errata and security patches, since there will be a lot less work involved to build and verify them. Lots more detail can be found in the mailing list post, including some important changes to the -STABLE branch, so give it a read.
***
OpenSMTPD, Dovecot and SpamAssassin (http://guillaumevincent.com/2015/01/31/OpenSMTPD-Dovecot-SpamAssassin.html)

We've been talking about setting up your own BSD-based mail server on the last couple of episodes. Here we have another post from a user setting up OpenSMTPD, including Dovecot for IMAP and SpamAssassin for spam filtering. A lot of people regularly ask the developers (http://permalink.gmane.org/gmane.mail.opensmtpd.general/2265) how to combine OpenSMTPD with spam filtering, and this post should finally reveal the dark secrets. In addition, it also covers SSL certificates, PKI and setting up MX records - some things that previous posts have lacked. Just be sure to replace those “apt-get” commands and “eth0” interface names with something a bit more sane… In related news, OpenSMTPD has some interesting new features coming soon (http://article.gmane.org/gmane.mail.opensmtpd.general/2272). They're also planning to switch to LibreSSL by default (https://github.com/OpenSMTPD/OpenSMTPD/issues/534) for the portable version.
***
FreeBSD 10 on the Thinkpad T400 (http://lastsummer.de/freebsd-desktop-on-the-t400/)

BSD laptop articles are becoming popular, it seems - this one is about FreeBSD on a T400. Like most of the ones we've mentioned before, it shows you how to get a BSD desktop set up with all the little tweaks you might not think to do. This one differs in that it takes a more minimal approach to graphics: instead of a full-featured environment like XFCE or KDE, it uses the i3 tiling window manager. If you're a command-line junkie that basically just uses X11 to run more than one terminal at once, this might be an ideal setup for you. The post also includes some bits about the DRM and KMS in the 10.x branch, as well as vt.
***
PC-BSD 10.1.1 released (http://blog.pcbsd.org/2015/02/1810/)

- Automatic background updater now in
- Shiny new Qt5 utils
- OVA files for VMs
- Full disk encryption with GELI v7
***
Feedback/Questions

- Camio writes in (http://slexy.org/view/s2MsjllAyU)
- Sha'ul writes in (http://slexy.org/view/s20eYELsAg)
- John writes in (http://slexy.org/view/s20Y2GN1az)
- Sean writes in (http://slexy.org/view/s20ARVQ1T6) (TJ's lengthy reply (http://slexy.org/view/s212XezEYt))
- Christopher writes in (http://slexy.org/view/s2DRgEv4j8)
***
Mailing List Gold

- Special Instructions (https://lists.freebsd.org/pipermail/freebsd-questions/2015-February/264010.html)
- Pretending to be a VT220 (https://mail-index.netbsd.org/netbsd-users/2015/01/19/msg015669.html)
***

CodeWinds - Leading edge web developer news and training | javascript / React.js / Node.js / HTML5 / web development - Jeff B
001 Daniel Shaw interview discussing The Node Firm's public Node.js training courses

CodeWinds - Leading edge web developer news and training | javascript / React.js / Node.js / HTML5 / web development - Jeff B

Play Episode Listen Later Sep 17, 2013 23:59


An interview with Daniel Shaw, CEO of The Node Firm, regarding the upcoming public Node.js training courses at Joyent

Digital Nibbles Podcast
Integrating Compute/Storage and Cloud App Resiliency – Digital Nibbles Podcast episode 39

Digital Nibbles Podcast

Play Episode Listen Later Jul 10, 2013 38:13


First up in this week’s Digital Nibbles is Jason Hoffman (@jasonh), the CTO and founder of Joyent, who talks about Joyent’s new product offering, Joyent Manta, which combines compute and storage (i.e., natively running compute instances directly on the object store where the objects lie). More info is available at www.joyent.com. Then Mohit Lad (@mohitlad), the CEO and co-founder of ThousandEyes, stops by to talk about the company, which just came out of stealth. ThousandEyes provides troubleshooting across customers’ networks, the Internet and service providers, to help identify where issues are happening and resolve problems faster. More info is available at www.thousandeyes.com.

0:00 – Introductions and News of the Week
9:40 – Interview with Jason Hoffman
22:25 – Interview with Mohit Lad
36:57 – Wrap up

Intel CitC
The Smart Data Center with Joyent - Intel® CitC episode 15

Intel CitC

Play Episode Listen Later May 20, 2012 6:46


Chief Architect James Duncan from Joyent discusses the evolution of the smart data center and best practices.

Web Directions Podcast
Tom Hughes-Croucher - Up and Running with Node.js

Web Directions Podcast

Play Episode Listen Later Nov 6, 2011 52:21


Learn how to build high performance Internet and web applications with Node.js. In this session, Tom Hughes-Croucher will demonstrate how to quickly build a high performance chat server using Node.js. This live coding exercise will provide a real insight into what it looks like to build a project in server-side JavaScript. We will also cover how to deploy Node applications in production and look at just how far Node can really scale… A million connections and beyond? Tom Hughes-Croucher is the Chief Evangelist at Joyent, sponsors of the Node.js project. Tom mostly spends his days helping companies build really exciting projects with Node and seeing just how far it will scale. Tom is also the author of the O’Reilly book "Up and running with Node.js". Tom has worked for many well known organizations including Yahoo, NASA and Tesco. Follow Tom on Twitter: @sh1mmer Licensed as Creative Commons Attribution NonCommercial ShareAlike 3.0 Unported license (http://creativecommons.org/licenses/by-nc-sa/3.0/).

HELDENfunk
HF059: Illumos, OpenIndiana, Niche11

HELDENfunk

Play Episode Listen Later Jun 9, 2011 38:12


Today we have a real OpenSolaris episode for you: an interview with illumos father Garrett D'Amore and one with the makers of OpenIndiana. Plus a look back at Niche11. In the studio: Rolf Kersten and host Constantin Gonzalez.