Bryan Cantrill is the co-founder and CTO of Oxide Computer Company. We discuss why the biggest cloud providers don't use off-the-shelf hardware, how operating data centers at Samsung's scale exposed problems with hard drive firmware, how the values of NodeJS are in conflict with robust systems, choosing Rust, and the benefits of Oxide Computer's rack-scale approach. This is an extended version of an interview posted on Software Engineering Radio. Related links Oxide Computer Oxide and Friends Illumos Platform as a Reflection of Values RFD 26 bhyve CockroachDB Heterogeneous Computing with Raja Koduri Transcript You can help correct transcripts on GitHub. Intro [00:00:00] Jeremy: Today I am talking to Bryan Cantrill. He's the co-founder and CTO of Oxide Computer Company, and he was previously the CTO of Joyent, and he also co-authored the DTrace tracing framework while he was at Sun Microsystems. [00:00:14] Jeremy: Bryan, welcome to Software Engineering Radio. [00:00:17] Bryan: Uh, awesome. Thanks for having me. It's great to be here. [00:00:20] Jeremy: You're the CTO of a company that makes computers. But I think before we get into that: a lot of people who build software, now that the actual computer is abstracted away, they're using AWS or they're using some kind of cloud service. So I thought we could start by talking about data centers. [00:00:41] Jeremy: 'cause you were previously working at Joyent, and I believe you got bought by Samsung, and you've previously talked about how you had to figure out, how do I run things at Samsung's scale. So how was your experience with that? What were the challenges there? Samsung scale and migrating off the cloud [00:01:01] Bryan: Yeah, I mean, so at Joyent, and so Joyent was a cloud computing pioneer. Uh, we competed with the likes of AWS and then later GCP and Azure. Uh, and we, I mean, we were operating at a scale, right?
We had a bunch of machines, a bunch of DCs, but ultimately, you know, we were a VC-backed company, and, you know, a small company by the standards of, certainly by Samsung's standards. [00:01:25] Bryan: And so when Samsung bought the company, I mean, the reason, by the way, that Samsung bought Joyent is Samsung's cloud bill was, uh, let's just say it was extremely large. They were spending an enormous amount of money every year on the public cloud. And they realized that in order to secure their fate economically, they had to be running on their own infrastructure. [00:01:51] Bryan: It did not make sense. And there was not really a product that Samsung could go buy that would give them that on-prem cloud. Uh, I mean, in that regard, like, the state of the market was really no different. And so they went looking for a company, uh, and bought Joyent. And it was when we were on the inside of Samsung [00:02:11] Bryan: that we learned about Samsung scale. And Samsung loves to talk about Samsung scale. And I gotta tell you, it is more than just chest thumping. Like, Samsung scale really is, I mean, just the sheer number of devices, the number of customers, just this absolute size. They really wanted to take us out to levels of scale, certainly, that we had not seen. [00:02:31] Bryan: The reason for buying Joyent was to be able to stand up on their own infrastructure, so we were gonna go buy, we did go buy, a bunch of hardware. Problems with server hardware at scale [00:02:40] Bryan: And I remember just thinking, God, I hope Dell is somehow magically better. I hope the problems that we have seen in the small, we just... You know, I just remember hoping, and hope is, of course, a terrible strategy, and it was a terrible strategy here too.
Uh, and the problems that we saw at the large were... when you scale out, the problems that you see kind of once or twice, you now see all the time, and they become absolutely debilitating. [00:03:12] Bryan: And we saw a whole series of really debilitating problems. I mean, in many ways, like, comically debilitating, uh, in terms of showing just how bad the state of the art is. And we had, I mean, it should be said, we had great software and great software expertise, um, and we were controlling our own system software. [00:03:35] Bryan: But even controlling your own system software, your own host OS, your own control plane, which is what we had at Joyent, ultimately you're pretty limited. I mean, you got the problems that you can obviously solve, the ones that are in your own software, but the problems that are beneath you, the problems that are in the hardware platform, the problems that are in the componentry beneath you, become the problems that are in the firmware. IO latency due to hard drive firmware [00:04:00] Bryan: Those problems become unresolvable, and they are deeply, deeply frustrating. Um, and we just saw a bunch of 'em. Again, they were comical in retrospect, and I'll give you a couple of concrete examples just to give you an idea of what you're looking at. One of our data centers had really pathological IO latency. [00:04:23] Bryan: We had a very, uh, database-heavy workload. And this was kind of right at the period where you were still deploying on rotating media, on hard drives. So an all-flash buy did not make economic sense when we did this in 2016. It'd be interesting to know, like, when was kind of the last time that actual hard drives made sense? [00:04:50] Bryan: 'cause I feel this was close to it.
So we had a bunch of pathological IO problems, but we had one data center in which the outliers were actually quite a bit worse, and there was so much going on in that system, it took us a long time to figure out why. Because when you're seeing worse IO, I mean, you naturally wanna understand, like, what's the workload doing? [00:05:14] Bryan: You're trying to take a first-principles approach: what's the workload doing? So this is a very intensive database workload to support the object storage system that we had built, called Manta. And the metadata tier was stored in, uh, we were using Postgres for that. And that was just getting absolutely slaughtered. [00:05:34] Bryan: Um, and ultimately very IO-bound, with these kind of pathological IO latencies. Uh, and we were, you know, trying to peel away the layers to figure out what was going on. And I finally had this thing. So it's like, okay, we are seeing, at the device layer, at the disk layer, we are seeing pathological outliers in this data center that we're not seeing anywhere else. [00:06:00] Bryan: And that does not make any sense. And the thought occurred to me, I'm like, well, do we have, like, a different rev of firmware on our HGST drives? HGST, now part of WD, Western Digital, were the drives that we had everywhere. And, um, so maybe we had a different... maybe I had a firmware bug. [00:06:20] Bryan: This would not be the first time in my life, at all, that I would have a drive firmware issue. Uh, and I went to go pull the firmware rev, and I'm like, Toshiba makes hard drives? I mean, I had no idea that Toshiba even made hard drives, let alone that they were in our data center. [00:06:38] Bryan: I'm like, what is this?
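Hunting an outlier like this usually starts with per-device latency distributions. As a rough sketch (hypothetical data, drive names, and thresholds, not Joyent's actual tooling), here is one way to flag drives whose tail latency is wildly out of family with the rest of the fleet:

```python
# Sketch: spot drives whose p99 latency is far above the fleet's median p99.
# All numbers and drive names below are made up for illustration.

def percentile(sorted_vals, p):
    """Nearest-rank percentile of an already-sorted list of latencies (ms)."""
    idx = min(len(sorted_vals) - 1, int(p / 100 * len(sorted_vals)))
    return sorted_vals[idx]

def flag_outlier_drives(latencies_by_drive, factor=10.0):
    """Return drives whose p99 latency exceeds `factor` x the fleet median p99."""
    p99s = {d: percentile(sorted(v), 99) for d, v in latencies_by_drive.items()}
    fleet = sorted(p99s.values())
    median_p99 = fleet[len(fleet) // 2]
    return {d: p for d, p in p99s.items() if p > factor * median_p99}

# Example: two healthy drives (~10 ms tails) and one that stalls for seconds.
fleet = {
    "drive-0": [5.0] * 99 + [10.0],
    "drive-1": [6.0] * 99 + [12.0],
    "drive-2": [5.0] * 99 + [2700.0],  # multi-second firmware stall
}
print(flag_outlier_drives(fleet))  # → {'drive-2': 2700.0}
```

Comparing per-drive tail percentiles, rather than fleet-wide averages, is what surfaces a single misbehaving model or firmware rev that averages would wash out.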
And as it turns out, and this is, you know, part of the challenge when you don't have an integrated system, which, not to pick on them, but Dell doesn't: Dell would routinely just make substitutes. And they make substitutes that, you know, it's kind of like you're going to, like, I don't know, Instacart or whatever, and they're out of the thing that you want. [00:07:03] Bryan: So, you know, someone makes a substitute, and, like, sometimes that's okay, but it's really not okay in a data center. And you really want to develop and validate an end-to-end integrated system. And in this case, like, Toshiba does make hard drives, but they basically were, uh, not competitive, and they were not competitive in part for the reasons that we were discovering. [00:07:29] Bryan: They had really serious firmware issues. So these were drives that would just simply stop acknowledging any reads for on the order of 2,700 milliseconds. Long time. 2.7 seconds. Um, and that was a drive firmware issue, but it highlighted, like, a much deeper issue, which was the simple lack of control that we had over our own destiny. [00:07:53] Bryan: Um, and it's an example among many where Dell is making a decision that lowers the cost of what they are providing you marginally, but it is then giving you a system that they shouldn't have any confidence in, because it's not one that they've actually designed, and they leave it to the customer, the end user, to make these discoveries. [00:08:18] Bryan: And these things happen up and down the stack. And not just to pick on Dell, because it's true for HPE, it's true for Supermicro, uh, it's true for your switch vendors.
It's true for storage vendors, where the one that is left actually integrating these things and trying to make the whole thing work is the end user sitting in their data center. AWS / Google are not buying off the shelf hardware but you can't use it [00:08:42] Bryan: There's not a product that they can buy that gives them elastic infrastructure, a cloud, in their own DC. The product that you buy is the public cloud. Like, when you go on the public cloud, you don't worry about this stuff, because it's AWS's issue or it's GCP's issue. And they are the ones that get this to ground. [00:09:02] Bryan: And this was kind of, you know, the eye-opening moment. Not a surprise. Uh, they are not Dell customers. They're not HPE customers. They're not Supermicro customers. They have designed their own machines, to varying degrees, depending on which one you're looking at. But they've taken the clean sheet of paper, and the frustration that we had, kind of, at Joyent, beginning to wonder, and then Samsung, and kind of wondering what was next, uh, is that what they built was not available for purchase in the data center. [00:09:35] Bryan: You could only rent it in the public cloud. And our big belief is that public cloud computing is a really important revolution in infrastructure. Doesn't feel like a deep thought, but cloud computing is a really important revolution. It shouldn't only be available to rent. You should be able to actually buy it. [00:09:53] Bryan: And there are a bunch of reasons for doing that. Uh, the one we saw at Samsung is economics, which I think is still the dominant reason, where it just does not make sense to rent all of your compute in perpetuity. But there are other reasons too. There's security, there's risk management, there's latency. [00:10:07] Bryan: There are a bunch of reasons why one might wanna own one's own infrastructure.
But, uh, that was very much the genesis for Oxide: coming out of this very painful experience. And a painful experience that... I mean, a long answer to your question about, like, what was it like to be at Samsung scale? [00:10:27] Bryan: Those are the kinds of things that... I mean, in our other data centers, we didn't have Toshiba drives. We only had the HGST drives. But it's only when you get to this larger scale that you begin to see some of these pathologies. And these pathologies then are really debilitating for those who are trying to develop a service on top of them. [00:10:45] Bryan: So it was very educational in that regard. And we're very grateful for the experience at Samsung in terms of opening our eyes to the challenge of running at that kind of scale. [00:10:57] Jeremy: Yeah, because I think as software engineers, a lot of times we treat the hardware as a given, where... [00:11:08] Bryan: Yeah. [00:11:08] Bryan: Yeah. There's software in hard drives [00:11:09] Jeremy: It sounds like in this case, I mean, maybe the issue is not so much that Dell or HP as a company doesn't own every single piece that they're providing you, but rather the fact that they're swapping pieces in and out without advertising them, and then when it becomes a problem, they're not necessarily willing to deal with the consequences of that. [00:11:34] Bryan: They just don't know. I mean, I think they just genuinely don't know. I mean, it's not like they're making a deliberate decision to kind of ship garbage. It's just that they are making... I mean, I think it's exactly what you said about, like, not thinking about the hardware. It's like, what's a hard drive? [00:11:47] Bryan: I mean, it's a hard drive. It's got the same specs as this other hard drive, and, you know, it's a little bit cheaper, so why not?
It's like, well, there are some reasons why not. And one of the reasons why not is, like, uh, even a hard drive, whether it's rotating media or flash, that's not just hardware. [00:12:05] Bryan: There's software in there. And that software's, like, not the same. I mean, there are components where... whether, you know, if you're looking at, like, a resistor or a capacitor or something like this, yeah, if you've got two parts that are within the same tolerance, yeah. [00:12:19] Bryan: Like, sure, maybe. Although even the EEs, I think, would be, uh, objecting to that a little bit. But the more complicated you get, and certainly once you get to the kind of hardware that we think of, like a microprocessor, a network interface card, a hard drive, an NVMe drive... [00:12:38] Bryan: Those things are super complicated, and there's a whole bunch of software inside of those things, the firmware. And that's the stuff that... I mean, you say that software engineers don't think about that. It's like, no one can really think about that, because it's proprietary. It's kinda welded shut, and you've got this abstraction into it. [00:12:55] Bryan: But the way that thing operates is very core to how the thing in aggregate will behave. And I think that the fundamental difference between Oxide's approach and the approach that you get at a Dell, HP, Supermicro, wherever, is really thinking holistically in terms of hardware and software together, in a system that ultimately delivers cloud computing to a user. [00:13:22] Bryan: And there's a lot of software at many, many different layers. And it's very important to think about that software and that hardware holistically, as a single system.
[00:13:34] Jeremy: And during that time at Joyent, when you experienced some of these issues, was it more of a case of you didn't have enough servers experiencing this? So if it would happen, you might say, like, well, this one's not working, so maybe we'll just replace the hardware. What was the thought process when you were working at that smaller scale, and how did these issues affect you? UEFI / Baseboard Management Controller [00:13:58] Bryan: Yeah, at the smaller scale, you, uh, you see fewer of them, right? What you might see is, like, that's weird. We kinda saw this in one machine, versus seeing it in a hundred or a thousand or 10,000. Um, so you just see them less frequently; as a result, they are less debilitating. [00:14:16] Bryan: I think that when you go to that larger scale, those things that were unusual now become routine, and they become debilitating. Um, so it really is, in many regards, a function of scale. Uh, and then I think it was also, you know, a little bit dispiriting that kind of the substrate we were building on really had not improved. [00:14:39] Bryan: Um, and if you look at, you know... if you buy a computer server, buy an x86 server, there is a very low layer of firmware: the BIOS, the basic input/output system, the UEFI BIOS. And this is, like, an abstraction layer that has existed since the eighties and hasn't really meaningfully improved. Um, the kind of transition to UEFI happened with, ironically, with Itanium, um, you know, two decades ago. [00:15:08] Bryan: But beyond that, like, this lowest layer of platform enablement software is really only impeding the operability of the system.
Um, you look at the baseboard management controller, which is kind of the computer within the computer. There is an element in the machine that needs to handle environmentals, that needs to, uh, operate the fans and so on. [00:15:31] Bryan: Uh, and that traditionally is the baseboard management controller, and that architecturally just hasn't improved in the last two decades. And, you know, it's a proprietary piece of silicon, generally from a company that no one's ever heard of called ASPEED, uh, which is written in all caps, so I guess it needs to be screamed. [00:15:50] Bryan: Um, ASPEED has a proprietary part where there is, infamously, a root password encoded effectively in silicon. Uh, which is just... and for, um, anyone who kind of goes deep into these things, it's like, oh my God, are you kidding me? Um, when we first started Oxide, the wifi password was a fraction of the ASPEED root password for the BMC. [00:16:16] Bryan: It's kinda like a little BMC humor. Um, but those things... it was just dispiriting that the state of the art was still basically personal computers running in the data center. Um, and that's part of what was the motivation for doing something new. [00:16:32] Jeremy: And for the people using these systems, whether it's the baseboard management controller or it's the BIOS or UEFI component, what are the actual problems that people are seeing? Security vulnerabilities and poor practices in the BMC [00:16:51] Bryan: Oh man. You are going to have, like, some fraction of your listeners, maybe a big fraction, where it's like, yeah, like, what are the problems? That's a good question. And then you're gonna have the people that actually deal with these things, whose, like, heads already hit the desk, being like, what are the problems? [00:17:06] Bryan: Like, what are the non-problems? Like, what works?
Actually, that's, like, a shorter answer. Um, I mean, there are so many problems, and a lot of it is just, like... I mean, there are problems just architecturally. The problems spread to the horizon, so you can kind of start wherever you want. [00:17:24] Bryan: But, I mean, as a really concrete example. Okay, so the BMC, that computer within the computer, needs to be on its own network. So you now have, like, not one network, you got two networks. And that network, by the way, that's the network that you're gonna log into to, like, reset the machine when it's otherwise unresponsive. [00:17:44] Bryan: So going into the BMC, you are able to control the entire machine. Well, it's like, all right, so now I've got a second network that I need to manage. What is running on the BMC? Well, it's running some ancient, ancient version of Linux. It's like, well, how do I patch that? [00:18:02] Bryan: How do I, like, manage the vulnerabilities with that? Because if someone is able to root your BMC, they control the system. And now you've gotta go deal with all of the operational hair around that. How do you upgrade that system, updating the BMC? I mean, it's like you've got this, like, second shadow bad infrastructure that you have to go manage. [00:18:23] Bryan: Generally not open source. There's something called OpenBMC, um, which, um, people use to varying degrees, but you're generally stuck with the proprietary BMC. So you're generally stuck with iLO from HPE or iDRAC from Dell or, uh, Supermicro's BMC, and it is just excruciating pain. [00:18:49] Bryan: Um, and this is assuming, by the way, that everything is behaving correctly. The problem is that these things often don't behave correctly, and then the consequence of them not behaving correctly...
It's really dire, because it's at that lowest layer of the system. So, I mean, I'll give you a concrete example. [00:19:07] Bryan: A customer of theirs reported this to me, so I won't disclose the vendor, but let's just say that a well-known vendor had an issue where their temperature sensors were broken. Um, and the thing would always read basically the wrong value. So it was the BMC that had to, like, invent its own, a different kind of thermal control loop. [00:19:28] Bryan: And it would index on the actual inrush current. They would look at the current that's going into the CPU to adjust the fan speed. That's a great example of something that's, like, an interesting idea that doesn't work. 'cause that's actually not the temperature. [00:19:45] Bryan: So, like, that software would crank the fans whenever you had an inrush of current, and this customer had a workload that would spike the current. And when it would spike the current, the fans would kick up, and then they would slowly degrade over time. Well, this workload was spiking the current faster than the fans would degrade, but not fast enough to actually heat up the part. [00:20:08] Bryan: And ultimately, over a very long time, in a very painful investigation, this customer determined that, like, my fans are cranked in my data center for no reason. We're blowing cold air. And this is on the order of, like, a hundred watts a server of energy that you shouldn't be spending. And, like, that ultimately comes down to this kind of broken software/hardware interface at the lowest layer that has real, meaningful consequence, uh, in terms of hundreds of kilowatts, um, across a data center. So this stuff has very, very real consequence, and it's such a shadowy world.
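The failure mode Bryan describes can be seen in a toy simulation (all numbers and the decay model here are hypothetical, chosen only to illustrate the shape of the bug): the controller jumps the fans to full speed on a current spike and winds them down slowly, so a workload that spikes current faster than the fans decay keeps them pinned even though nothing is actually hot:

```python
# Toy model of a fan controller that uses current draw as a proxy for
# temperature. Fan duty jumps to 100% on a current spike and decays
# slowly afterwards. Numbers are illustrative, not from any real BMC.

def avg_fan_duty(spike_period, steps=600, decay=0.01):
    """Average fan duty cycle when current spikes every `spike_period` ticks."""
    fan = 0.0
    total = 0.0
    for t in range(steps):
        if t % spike_period == 0:
            fan = 1.0                    # inrush current: crank fans to 100%
        else:
            fan = max(0.0, fan - decay)  # fans wind down slowly
        total += fan
    return total / steps

# With decay=0.01 the fans take ~100 ticks to wind down. A workload that
# spikes every 10 ticks keeps the fans essentially pinned; one that spikes
# every 200 ticks lets them actually spin down between spikes.
print(avg_fan_duty(10), avg_fan_duty(200))
```

The first case is the customer's pathology: high average fan duty (hence wasted watts) with no corresponding heat, purely because the proxy signal spikes faster than the controller decays.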
Part of the reason that your listeners that have dealt with this, whose heads will hit the desk, is because it is really aggravating to deal with problems at this layer. [00:21:01] Bryan: You feel powerless. You don't control or really see the software that's on them. It's generally proprietary. You are relying on your vendor. Your vendor is telling you that, like, boy, I don't know, you're the only customer seeing this. I mean, the number of times I have heard that... and I have pledged that we're not gonna say that at Oxide, because it's such an unacceptable thing to say, like, you're the only customer seeing this. [00:21:25] Bryan: It feels like, are you blaming me for my problem? It feels like you're blaming me for my problem. Um, and what you begin to realize is that, to a degree, these folks are speaking their own truth, because the folks that are running at real scale, at hyperscale, those folks aren't Dell, HP, Supermicro customers. [00:21:46] Bryan: They've actually done their own thing. So it's like, yeah, Dell's not seeing that problem, um, because they're not running at the same scale. Um, but you only have to run at modest scale before these things just become overwhelming in terms of the headwind that they present to people that wanna deploy infrastructure. The problem is felt with just a few racks [00:22:05] Jeremy: Yeah, so maybe to help people get some perspective: at what point do you think that people start noticing or start feeling these problems? Because I imagine that if you just have a few racks, or... [00:22:22] Bryan: A couple racks? No, no, no. I think anyone who deploys any number of servers, especially now, especially if your experience is only in the cloud, you're gonna be like, what the hell is this? I mean, just again, just to get this thing working at all.
[00:22:39] Bryan: It is so hairy and so congealed, right? It's not designed. Um, it's accreted, and it's so obviously accreted that... I mean, nobody who is setting up a rack of servers is gonna think to themselves, like, yes, this is the right way to go do it, this all makes sense. Because it's just not. It feels like a kit... I mean, kit car is almost too generous, because it implies that there's, like, a set of plans to work to in the end. [00:23:08] Bryan: Uh, I mean, it's a bag of bolts. It's a bunch of parts that you're putting together. And so even at the smallest scales, that stuff is painful. Just architecturally, it's painful at the small scale, but at least you can get it working. I think the stuff that then becomes debilitating at larger scale are the things that are worse than just, like, this thing is a mess to get working. [00:23:31] Bryan: It's like the fan issue, um, where you are now seeing this over, you know, hundreds of machines or thousands of machines. Um, so it is painful at more or less all levels of scale. There is no level at which the PC, which is really what this is, this is the personal computer architecture from the 1980s, there is really no level of scale where that's the right unit. Running elastic infrastructure is the hardware but also, hypervisor, distributed database, api, etc [00:23:57] Bryan: I mean, where that's the right thing to go deploy, especially if what you are trying to run is elastic infrastructure, a cloud. Because the other thing is, like, we've kinda been talking a lot about that hardware layer. Like, hardware is just the start. Like, you actually gotta go put software on that and actually run that as elastic infrastructure. [00:24:16] Bryan: So you need a hypervisor, yes. But you need a lot more than that.
You need a distributed database, you need web endpoints, you need a CLI, you need all the stuff that you need to actually go run an actual service of compute or networking or storage. And for compute, even for compute, there's a ton of work to be done. [00:24:39] Bryan: And compute is, by far, I would say, the simplest of the three. When you look at network services, storage services, there's a whole bunch of stuff that you need to go build in terms of distributed systems to actually offer that as a cloud. So, I mean, it is painful at more or less every level if you are trying to deploy cloud computing on it. What's a control plane? [00:25:00] Jeremy: And for someone who doesn't have experience building or working with this type of infrastructure, when you talk about a control plane, what does that do in the context of this system? [00:25:16] Bryan: So the control plane is everything between your API request and that infrastructure actually being acted upon. So you go say, hey, I want to provision a VM. Okay, great. We've got a whole bunch of things we're gonna provision with that. We're gonna provision a VM, we're gonna get some storage that's gonna go along with that, which is gonna come out of a network storage service. We've got a virtual network that we're gonna either create or attach to. [00:25:39] Bryan: We've got a whole bunch of things we need to go do for that. For all of these things, there are metadata components that we need to keep track of, beyond the actual infrastructure that we create. And then we need to go actually act on the actual compute elements, the host OS, what have you, the switches, what have you, and actually go [00:25:56] Bryan: create these underlying things and then connect them. And there's, of course, the challenge of just getting that working, which is a big challenge.
Um, but getting that working robustly... you know, when you go to provision a VM, um, there are all the steps that need to happen, and what happens if one of those steps fails along the way? [00:26:17] Bryan: What happens if... you know, one thing we're very mindful of is you get these long tails, of, like, generally our VM provisioning happened within this time, but we get these long tails where it takes much longer. What's going on? Where in this process are we actually spending time? [00:26:33] Bryan: Uh, and there's a whole lot of complexity that you need to deal with there. There's a lot of complexity in dealing with this, effectively, workflow that's gonna go create these things and manage them. Um, we use a pattern called sagas; it's actually a database pattern from the eighties. [00:26:51] Bryan: Uh, Caitie McCaffrey is a database researcher who, I think, uh, reintroduced the idea of sagas, um, in the last kind of decade. Um, and this is something that we picked up, um, and have done a lot of really interesting things with, um, to allow for these workflows to be managed, and done so robustly, in a way that you can restart them and so on. [00:27:16] Bryan: Uh, and then you get this whole distributed system that can do all this. That whole distributed system itself needs to be reliable and available. So, you know, what happens if you pull a sled or if a sled fails? How does the system deal with that? [00:27:33] Bryan: How does the system deal with getting another sled added to the system? Like, how do you actually grow this distributed system? And then how do you update it? How do you actually go from one version to the next? And all of that has to happen across an air gap, where this is gonna run at the customer.
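The saga idea can be sketched in a few lines (an illustrative toy, not Oxide's actual saga engine; the step names below are hypothetical): each step in a provisioning workflow is paired with a compensating undo action, and a failure partway through unwinds the completed steps in reverse order, so the system is never left half-provisioned:

```python
# Minimal saga sketch: run steps forward; on failure, run the
# compensating actions of already-completed steps in reverse order.

def run_saga(steps, log):
    """steps: list of (name, action, compensate). Returns True on success."""
    done = []
    for name, action, compensate in steps:
        try:
            action()
            log.append(f"did {name}")
            done.append((name, compensate))
        except Exception:
            log.append(f"failed {name}")
            for prev_name, undo in reversed(done):
                undo()                      # compensate in reverse order
                log.append(f"undid {prev_name}")
            return False
    return True

def fail():
    raise RuntimeError("no capacity on sled")

# Hypothetical VM-provisioning workflow where the last step fails.
log = []
ok = run_saga(
    [
        ("alloc-ip", lambda: None, lambda: None),
        ("create-disk", lambda: None, lambda: None),
        ("start-vm", fail, lambda: None),   # provisioning fails here
    ],
    log,
)
print(ok, log)
```

A real saga engine also persists the log so a restarted node can resume or unwind a workflow that was in flight, which is the "restart them" property Bryan mentions.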
[00:27:49] Bryan: So it is fractally complicated. There is a lot of complexity here in the software system, and all of that we call the control plane. Um, and this is what exists at AWS, at GCP, at Azure. When you are hitting an endpoint that's provisioning an EC2 instance for you, [00:28:10] Bryan: there is an AWS control plane that is doing all of this, and it has, uh, some of these similar aspects and certainly some of these similar challenges. Are vSphere / Proxmox / Hyper-V in the same category? [00:28:20] Jeremy: And for people who have run their own servers with something like, say, VMware or Hyper-V or Proxmox, are those in the same category? [00:28:32] Bryan: Yeah, I mean, a little bit. It's kind of like: vSphere, yes; VMware ESX, no. So, uh, VMware ESX is kind of a key building block upon which you can build something that is a more meaningful distributed system. When it's just, like, a machine that you're provisioning VMs on, it's like, okay, well, then you as the human might be the control plane. [00:28:52] Bryan: Like, that's a much easier problem. Um, but when you've got, you know, tens, hundreds, thousands of machines, you need to do it robustly. You need something to coordinate that activity. And, you know, you need to pick which sled you land on. You need to be able to move these things. You need to be able to update that whole system. [00:29:06] Bryan: That's when you're getting into a control plane. So, you know, some of these things have kind of edged into a control plane. Certainly VMware, um, now Broadcom, um, has delivered something that's kind of cloudish. Um, I think that for folks that are truly born on the cloud, it still feels somewhat, uh, like you're going backwards in time when you look at these kind of on-prem offerings. [00:29:29] Bryan: Um, but it's got these aspects to it, for sure.
Um, and some of these other things, when you're just looking at KVM or just looking at Proxmox, you kind of need to connect it to other, broader things to turn it into something that really looks like manageable infrastructure. [00:29:47] Bryan: And then many of those projects are either proprietary products, like vSphere, or you are really dealing with open source projects that are not necessarily aimed at the same level of scale. You know, you look at, again, Proxmox, or an OpenStack. [00:30:05] Bryan: And, you know, OpenStack is just a lot of things, right? OpenStack was kind of a free-for-all for every infrastructure vendor. And there was a time people were like, aren't you worried about all these companies coming together for OpenStack? [00:30:24] Bryan: I'm like, haven't you ever worked for, like, a company? Companies don't get along. Having multiple companies work together on a thing, that's bad news, not good news. And I think one of the things that OpenStack has definitely struggled with is that there are so many different kind of vendor elements in there that it's very much not a product, it's a project that you're trying to run. [00:30:47] Bryan: But that very much is, I mean, similar certainly in spirit. [00:30:53] Jeremy: And so I think this is kind of what you were alluding to earlier: the piece that allows you to allocate compute and storage, manage networking, gives you that experience of, I can go to a web console or I can use an API and I can spin up machines, get them all connected. At the end of the day, the control plane is allowing you to do that in, hopefully, a user-friendly way. [00:31:21] Bryan: That's right. Yep.
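One way to picture the "single ground truth" idea that comes up here, where the web console, CLI, and API all sit over one definition of each operation, is to define the request and response types exactly once and make every interface a thin shim over one function. This is a generic sketch, not Oxide's actual tooling; in practice you would derive an OpenAPI document from such types and generate the clients from it. All names here (`InstanceCreate`, `instance_create`) are hypothetical.

```rust
// A generic sketch of "one ground truth" for a control-plane operation:
// the request and response types exist exactly once, and the HTTP
// handler, the CLI subcommand, and the web console's client would all
// be thin shims over this single typed signature, so none of them can
// drift from the others.

#[derive(Debug, Clone)]
pub struct InstanceCreate {
    pub name: String,
    pub ncpus: u16,
    pub memory_mib: u64,
}

#[derive(Debug, Clone, PartialEq)]
pub struct Instance {
    pub id: u64,
    pub name: String,
}

// The one place the operation's contract lives.
pub fn instance_create(req: &InstanceCreate) -> Result<Instance, String> {
    if req.ncpus == 0 {
        return Err("ncpus must be nonzero".to_string());
    }
    // A real control plane would kick off a saga here; we fake an id.
    Ok(Instance { id: 1, name: req.name.clone() })
}

fn main() {
    let req = InstanceCreate { name: "web-01".into(), ncpus: 4, memory_mib: 8192 };
    println!("{:?}", instance_create(&req));
}
```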
And, I mean, in order to do that in a modern way, it's not just a user-friendly way. You really need to have a CLI and a web UI and an API, and those all need to be drawn from the same kind of single ground truth. Like, you don't wanna have any of those be an afterthought for the others. [00:31:39] Bryan: You wanna have the same way of generating all of those different endpoints and entries into the system. Building a control plane now has better tools (Rust, CockroachDB) [00:31:46] Jeremy: And if you take your time at Joyent as an example, what kind of tools existed for that, versus how much did you have to build in-house, as far as the hypervisor and managing the compute and all that? [00:32:02] Bryan: Yeah, so we built more or less everything in house. And I think over time we've gotten slightly better tools. Maybe it's a little bit easier to talk about the tools we started with at Oxide, because we kind of started with a clean sheet of paper at Oxide. [00:32:16] Bryan: We knew we wanted to go build a control plane, but we were able to kind of go revisit some of the components. So maybe I'll talk about some of those changes. For example, at Joyent, when we were building a cloud, there wasn't really a good distributed database. [00:32:34] Bryan: So we were using Postgres as our database for metadata, and there were a lot of challenges. Postgres is not a distributed database; it's running with a primary/secondary architecture, and there's a bunch of issues there, many of which we discovered the hard way. When we were coming to Oxide, you had much better options to pick from in terms of distributed databases. [00:32:57] Bryan: You know, there was a period, one that now seems potentially brief in hindsight, of really high quality open source distributed databases.
So there were really some good ones to pick from. We built on CockroachDB, on CRDB. So that was a really important component that we had at Oxide that we didn't have at Joyent. [00:33:19] Bryan: I wouldn't say we were rolling our own distributed database at Joyent; we were just using Postgres, and dealing with an enormous amount of pain there in terms of the surround. On top of that, you know, a control plane is much more than a database, obviously. There's a whole bunch of software that you need to go write [00:33:40] Bryan: to be able to transform these kind of API requests into something that is reliable infrastructure, right? And there's a lot to that, especially when networking gets in the mix, when storage gets in the mix. There are a whole bunch of complicated steps that need to be done. At Joyent, [00:33:59] Bryan: in part because of the history of the company, and like, look, this just is not gonna sound good, but it just is what it is and I'm just gonna own it: we did it all in Node. Which I know right now just sounds like, well, you built it with Tinker Toys? Okay. [00:34:18] Bryan: Did you think you could build the skyscraper with Tinker Toys? Uh, it's like, well, okay. We actually had greater aspirations for the Tinker Toys once upon a time, and it was better than, you know, Twisted Python and EventMachine from Ruby, and we weren't gonna do it in Java. All right. [00:34:32] Bryan: But let's just say that that experiment did ultimately end in a predictable fashion. And we decided that maybe Node was not gonna be the best decision long term. Joyent was the company behind Node.js back in the day; Ryan Dahl worked for Joyent. And then we
[00:34:53] Bryan: Uh, landed that in a foundation in about, what, 2015, something like that, and began to consider our world beyond Node. Rust at Oxide [00:35:04] Bryan: A big tool that we had in the arsenal when we started Oxide is Rust. And indeed the name of the company is a tip of the hat to the language that we were pretty sure we were gonna be building a lot of stuff in. [00:35:16] Bryan: Namely, Rust. And Rust has been huge for us, a very important revolution in programming languages. You know, there have been different people kind of coming in at different times, and I came to Rust in what I think is this big second expansion of Rust in 2018, when a lot of technologists were, uh, sick of Node and also sick of Go. [00:35:43] Bryan: And also sick of C++. And wondering, is there gonna be something that gives me the performance that I get out of C, the robustness that I can get out of a C program but that is often difficult to achieve, but can I get that with some of the velocity of development, although I hate that term, some of the speed of development that you get out of a more interpreted language? [00:36:08] Bryan: And then, by the way, can I actually have types? I think types would be a good idea. And Rust obviously hits the sweet spot of all of that. It has been absolutely huge for us. We knew when we started the company, again, Oxide, we were gonna be using Rust in quite a few places, but we weren't doing it by fiat. [00:36:27] Bryan: We wanted to actually make sure we were making the right decision at every layer. I think what has been surprising is the sheer number of layers at which we use Rust: we've done our own embedded firmware in Rust. We've done, in the host operating system, which is still largely in C, very big components in Rust.
[00:36:47] Bryan: The hypervisor, Propolis, is all in Rust. And then of course the control plane, that distributed system on top, is all in Rust. So that was a very important thing that we very much did not need to build ourselves; we were able to really leverage a terrific community. And, we've done this at Joyent as well, but at Oxide we've used Illumos as a host OS component; our variant is called Helios. [00:37:11] Bryan: We've used bhyve as that kind of internal hypervisor component. We've made use of a bunch of different open source components to build this thing, which has been really, really important for us. And open source components that didn't exist even, like, five years prior. [00:37:28] Bryan: That's part of why we felt that 2019 was the right time to start the company. And so we started Oxide. The problems building a control plane in Node [00:37:34] Jeremy: You had mentioned that at Joyent, you had tried to build this in Node. What were the issues or the challenges that you had doing that? [00:37:46] Bryan: Oh boy. Yeah. Again, I kind of had higher hopes in 2010, I would say, when we set out on this. The problem that we had, just writ large: JavaScript is really designed to allow as many people on earth to write a program as possible, which is good. I mean, that's a laudable goal. [00:38:09] Bryan: That is ultimately the goal, such as it is, of JavaScript. It's actually hard to know what the goal of JavaScript is, unfortunately, because Brendan Eich never actually wrote a book. So there is not a canonical source; you've got kind of Doug Crockford and other people who've written things on JavaScript, but it's hard to know what the original intent of JavaScript is. [00:38:27] Bryan: The name doesn't even express original intent, right?
It was called LiveScript, and it was renamed to JavaScript during the Java frenzy of the late nineties. A name that makes no sense: there is no Java in JavaScript. That is, I think, revealing of the kind of unprincipled mess that is JavaScript. [00:38:47] Bryan: It's very pragmatic at some level, and it makes it very easy for anyone to write software. The problem is it's much more difficult to write really rigorous software. And here I should differentiate JavaScript from TypeScript, because this is really what TypeScript is trying to solve. [00:39:07] Bryan: I think TypeScript is a great step forward, because TypeScript is like, how can we bring some rigor to this? Like, yes, it's great that it's easy to write JavaScript, but that's not the only problem we want to solve. [00:39:23] Bryan: We actually wanna be able to write rigorous software, and it's actually okay if it's a little harder to write, if that leads to more rigorous artifacts. But in JavaScript, just a concrete example: there's nothing to prevent you from referencing a property that doesn't actually exist. [00:39:43] Bryan: So if you fat-finger a property name, you are relying on something to tell you, by the way, I think you've misspelled this. Because there is no type definition for this thing, and I don't know that you've got one that's spelled correctly and one that's spelled incorrectly; the misspelled one is often undefined. And so you've got this typo that is lurking in what you want to be rigorous software. [00:40:07] Bryan: And if you don't execute that code, you won't know it's there. And then you do execute that code, and now you've got an undefined object.
And now that's either gonna be an exception, or, again, it depends on how that's handled; it can be really difficult to determine the origin of that error. [00:40:26] Bryan: And that is a programmer error. One of the big challenges that we had with Node is that programmer errors and operational errors, like, you know, I'm out of disk space is an operational error, get conflated, and it becomes really hard. And in fact, I think the language wanted to make it easier to just kind of drive on in the event of all errors. [00:40:53] Bryan: And that's actually not what you wanna do if you're trying to build a reliable, robust system. So we had no end of issues. [00:41:01] Bryan: We've got a lot of experience developing rigorous systems, again coming out of operating systems development and so on, and we brought some of that rigor, if strangely, to JavaScript. So one of the things that we did is we brought a lot of postmortem diagnosability and observability to Node. [00:41:18] Bryan: And so if one of our Node processes died in production, we would actually get a core dump from that process, a core dump that we could actually meaningfully process. So we did a bunch of kind of wild stuff, actually wild stuff, where we could make sense of the JavaScript objects in a binary core dump. JavaScript values ease of getting started over robustness [00:41:41] Bryan: And things that we thought were really important, the rest of the world just looks at and is like, what the hell is this? It's so out of step. The problem is that we were trying to bridge two disconnected cultures: one developing really rigorous software and really designing it for production diagnosability, and the other really designing software to run in the browser, for anyone to be able to, you know, kind of liven up a webpage, right?
[00:42:10] Bryan: That's kind of the origin of LiveScript and then JavaScript. And we were kind of the only ones sitting at the intersection of that. And when you are the only ones sitting at that kind of intersection, you're kind of fighting a community all the time. We just realized that there were so many things that the community wanted to do where we felt like, no, no, this is gonna make software less diagnosable. It's gonna make it less robust. The NodeJS split and why people left [00:42:36] Bryan: And then you realize, we're the only voice in the room, because we have got desires for this language that it doesn't have for itself. And this is when you realize you're in a bad relationship with software: it's time to actually move on. And in fact, several years on, we'd already kind of broken up with Node. [00:42:55] Bryan: And it was a bit of an acrimonious breakup. There was a famous slash infamous fork of Node called io.js. And this happened because people, the community, thought that Joyent was not being an appropriate steward of Node.js and was not allowing more things to come into Node. [00:43:19] Bryan: And of course, we felt that we were being a careful steward, and we were actively resisting those things that would cut against its fitness for a production system. But that's the way the community saw it, and they forked. And I think we knew before the fork that this is not working and we need to get this thing out of our hands. Platform is a reflection of values node summit talk [00:43:43] Bryan: We are the wrong hands for this; this needs to be in a foundation. And so we'd kind of gone through that breakup, and maybe it was two years after that.
A friend of mine who was running Node Summit, who has unfortunately now passed away, Charles er, um, Charles was a venture capitalist, great guy, and Charles was running Node Summit and came to me in 2017. [00:44:07] Bryan: He's like, I really want you to keynote Node Summit. And I'm like, Charles, I'm not gonna do that. I've got nothing nice to say. I'm the last person you want to keynote. He's like, oh, if you have nothing nice to say, you should definitely keynote. And you're like, oh God, okay, here we go. [00:44:22] Bryan: He's like, no, I really want you to talk about the Joyent breakup with NodeJS. I'm like, oh man. [00:44:29] Bryan: And that led to a talk that I'm really happy that I gave, 'cause it was a very important talk for me personally, called Platform as a Reflection of Values, really looking at the values that we had for Node and the values that Node had for itself. And they didn't line up. [00:44:49] Bryan: And the problem is that the values that Node had for itself and the values that we had for Node are all kind of positives, right? Like, there's nobody in the Node community who's like, I don't want rigor, I hate rigor. It's just that if they had to choose between rigor and making the language approachable, [00:45:09] Bryan: they would choose approachability every single time. They would never choose rigor. And, you know, that was a big eye-opener. I would say, do watch this talk, [00:45:20] Bryan: because I knew that the audience was gonna be filled with people who had been a part of the fork in 2014, I think it was, the io.js fork. And I knew that there were some people that had been there for the fork, and [00:45:41] Bryan: I set a little bit of a trap for the audience.
And the trap I set: I kind of talked about the values that we had and the aspirations we had for Node, the aspirations that Node had for itself, and how they were different. [00:45:53] Bryan: And I'm like, look, in hindsight, a fracture was inevitable. And in 2014 there was finally a fracture. And do people know what happened in 2014? If you listen to that talk, everyone almost says in unison, like, io.js. I'm like, oh right, io.js. Right. That's actually not what I was thinking of. [00:46:19] Bryan: And I go to the next slide, and it's a tweet from a guy named TJ Holowaychuk, who was the most prolific contributor to Node. And it was his tweet, also in 2014, before the io.js fork, explaining that he was leaving Node and that he was going to Go. And if you turn the volume all the way up, you can hear the audience gasp. [00:46:41] Bryan: And it's just delicious, because the community had never really confronted why TJ left. And I went through a couple folks, Felix, a bunch of other early Node folks, that were there in 2010 and were leaving in 2014, and they were going to Go primarily, and they were going because they were sick of the same things that we were sick of. [00:47:09] Bryan: They had hit the same things that we had hit, and they were frustrated. I really do believe this, that platforms do reflect their own values. And when you are making a software decision, you are selecting values. [00:47:26] Bryan: You should select values that align with the values that you have for that software. That's way more important than other things that people look at. I think people look at, for example, quote unquote community size way too frequently. Community size is like, eh, maybe it can be fine. [00:47:44] Bryan: I've been in very large communities, like Node.
I've been in super small open source communities, like AUMs and RAs, a bunch of others. There are strengths and weaknesses to both approaches, just as there's a strength to being in a big city versus a small town. Me personally, I'll take the small community more or less every time, because the small community is almost always self-selecting based on values, and for the same reason that I like working at small companies or on small teams. [00:48:11] Bryan: There's a lot of value to be had in a small community. It's not to say that large communities are valueless. But again, long answer to your question of where did things go south with Joyent and Node: they went south because the values that we had and the values the community had didn't line up, and that was a very educational experience, as you might imagine. [00:48:33] Jeremy: Yeah. And given that you mentioned how, because of those values, some people moved from Node to Go, and in the end, for much of what Oxide is building, you ended up using Rust. What would you say are the values of Go and Rust, and how did you end up choosing Rust, given that? Go's decisions regarding generics, versioning, compilation speed priority [00:48:56] Bryan: Yeah, well, I understand why people moved from Node to Go. Go, to me, was kind of a lateral move. There were a bunch of things: Go was still garbage collected, which I didn't like. Go also is very strange in that there are these kind of [00:49:17] Bryan: autocratic decisions that are very bizarre. Generics is kind of a famous one, right? Where Go, kind of as a point of principle, didn't have generics, even though the innards of Go itself did have generics. It's just that you, a Go user, weren't allowed to have them.
[00:49:35] Bryan: And, you know, there was an old cartoon years and years ago about how, when a technologist is telling you that something is technically impossible, that actually means, I don't feel like it. And there was a certain degree of, generics are technically impossible in Go, and it's like, hey, actually there they are. [00:49:51] Bryan: I just think that the arguments against generics were kind of disingenuous, and indeed, they ended up adopting generics. And then there's some super weird stuff around, like, they're very anti-assertion, which is like, what? How is someone against assertions? It doesn't even make any sense, but it's like, oh, nope. [00:50:10] Bryan: Okay. There's a whole screed on it: nope, we're against assertions. And, you know, against versioning, there was another thing. Rob Pike has kind of famously been like, you should always just run at the latest commit. And you're like, does that make sense? I mean, we actually built things. [00:50:26] Bryan: And so there are a bunch of things like that where you're just like, okay, this is just exhausting. I mean, there are some things about Go that are great, and plenty of other things that I'm just not a fan of. In the end, Go cares a lot about compile time. It's super important for Go, right? [00:50:44] Bryan: Very quick compile time. I'm like, okay, but compile time is not, it's not unimportant, it doesn't have zero importance, but I've got other things that are lots more important than that. What I really care about is I want a high-performing artifact. I wanted garbage collection outta my life.
Don't think garbage collection has good trade offs [00:51:00] Bryan: I gotta tell you, garbage collection to me is an embodiment of this larger problem of where do you put cognitive load in the software development process. And garbage collection is right for plenty of other people and the software that they wanna develop. [00:51:21] Bryan: But for me and the software that I wanna develop, infrastructure software, I don't want garbage collection, because I can solve the memory allocation problem. I know when I'm done with something or not. Whether that's in C or elsewhere, it's actually really not that hard to not leak memory in a C-based system. [00:51:44] Bryan: And you can give yourself a lot of tooling that allows you to diagnose where memory leaks are coming from. So that is a solvable problem. There are other challenges with that, but when you are developing a really sophisticated system that is using garbage collection, [00:51:59] Bryan: you spend as much time trying to dork with the garbage collector to convince it to collect the thing that you know is garbage. You're like, I've got this thing, I know it's garbage, now I need to use these tips and tricks to get the garbage collector to collect it. It feels like every Java performance issue goes to, like, some -XX flag: use a different garbage collector than whatever one you're using, a different approach. [00:52:23] Bryan: So to me, you're in the worst of all worlds, where the reason that garbage collection is helpful is that the programmer doesn't have to think at all about this problem, but now you're actually dealing with these long pauses in production. [00:52:38] Bryan: You're dealing with all these other issues where actually you need to think a lot about it.
And it's witchcraft. It's this black box that you can't see into. So it's like, what problem have we solved, exactly? So the fact that Go had garbage collection, it's like, eh, no, I do not want that. And then you get all the other weird fatwas and, you know, everything else. [00:52:57] Bryan: I'm like, no, thank you. Go is a no-thank-you for me. I get why people like it or use it, but that was not gonna be it. Choosing Rust [00:53:04] Bryan: I'm like, I want C. But there are things I didn't like about C too. I was looking for something that was gonna give me the deterministic kind of artifact that I got outta C, but I wanted library support, and C is tough because it's all convention. You know, there's just a bunch of other things that are just thorny. And I remember thinking vividly in 2018, I'm like, well, it's Rust or bust. Ownership model, algebraic types, error handling [00:53:28] Bryan: I'm gonna go into Rust, and I hope I like it, because if it's not this, I'm gonna go back to C. I'm, like, literally trying to figure out what the language is for the back half of my career. And I did what a lot of people were doing at that time and have been doing since: really getting into Rust and really learning it, appreciating the difference in the model for sure, the ownership model people talk about. [00:53:54] Bryan: That's also obviously very important. But it was the error handling that blew me away, and the idea of algebraic types. I never really had algebraic types. Error handling is one of these things you really appreciate: how do you deal with a function that can either succeed and return something, or fail? And the way C deals with that is bad, with these kind of sentinels for errors.
[00:54:27] Bryan: And, you know, does negative one mean success? Does negative one mean failure? Does zero mean failure? Some C functions, zero means failure. Traditionally in Unix, zero means success. And like, what if you wanna return a file descriptor, you know, it's like, oh. And then it's like, okay, then it'll be like zero through positive N will be a valid result. [00:54:44] Bryan: Negative numbers will be, and like, was it negative one and I said airo, or is it a negative number that did not, I mean, it's like, and that's all convention, right? People do all, all those different things and it's all convention and it's easy to get wrong, easy to have bugs, can't be statically checked and so on. Um, and then what Go says is like, well, you're gonna have like two return values and then you're gonna have to like, just like constantly check all of these all the time. Um, which is also kind of gross. Um, JavaScript is like, Hey, let's toss an exception. If, if we don't like something, if we see an error, we'll, we'll throw an exception. [00:55:15] Bryan: There are a bunch of reasons I don't like that. Um, and you look, you'll get what Rust does, where it's like, no, no, no. We're gonna have these algebra types, which is to say this thing can be a this thing or that thing, but it, but it has to be one of these. And by the way, you don't get to process this thing until you conditionally match on one of these things. [00:55:35] Bryan: You're gonna have to have a, a pattern match on this thing to determine if it's a this or a that, and if it in, in the result type that you, the result is a generic where it's like, it's gonna be either the thing that you wanna return. It's gonna be an okay that contains the thing you wanna return, or it's gonna be an error that contains your error and it forces your code to deal with that. 
[00:55:57] Bryan: And what that does is it shifts the cognitive load from the person that is operating this thing in production to the actual developer, in development. And I love that shift; that shift to me is really important. That's what I was missing, and that's what Rust gives you. [00:56:23] Bryan: Rust forces you to think about your code as you write it, but as a result, you have an artifact that is much more supportable, much more sustainable, and much faster. Prefer to frontload cognitive load during development instead of at runtime [00:56:34] Jeremy: Yeah, it sounds like you would rather take the time during development to think about these issues, because whether it's garbage collection or error handling, dealing with it at runtime, when you're trying to solve a problem, is much more difficult than having dealt with it to start with. [00:56:57] Bryan: Yeah, absolutely. And I just think that if it's infrastructure software, the question that you should have when you're writing software is: how long is this software gonna live? How many people are gonna use this software? And if you are writing an operating system, the answer is that this thing you're gonna write is gonna live for a long time. [00:57:18] Bryan: Like, if we just look at plenty of aspects of the system that have been around for decades: it's gonna live for a long time, and many, many, many people are gonna use it. Why would we not expect people writing that software to take on more cognitive load when they're writing it, to give us something that's gonna be a better artifact? [00:57:38] Bryan: Now, conversely, you're like, hey, I kind of don't care about this. I just wanna see if this whole thing works, I'm just stringing this together.
The software will be lucky if it survives until tonight, but then, like, who cares? Yeah. [00:57:52] Bryan: Garbage collect, you know, if you're prototyping something, whatever. And this is why you really do get different technology choices depending on the way that you wanna solve the problem at hand. And for the software that I wanna write, I do like that cognitive load that is upfront. With LLMs maybe you can get the benefit of the robust artifact with less cognitive load [00:58:10] Bryan: Although I think the thing that is really wild, the twist that I don't think anyone really saw coming, is that in an LLM age, the cognitive load upfront almost needs an asterisk on it, because so much of that can be assisted by an LLM. And I would like to believe, and maybe this is me being optimistic, that in the LLM age, Rust is a great fit, because the LLM itself can get a lot of feedback about whether the software that's written is correct or not. [00:58:44] Bryan: Much more so than you can for other environments. [00:58:48] Jeremy: Yeah, that is an interesting point, in that I think when people first started trying out the LLMs to code, they were really good at these maybe looser languages like Python or JavaScript, and initially weren't so good at something like Rust. But it sounds like as that improves, if an LLM can write Rust, then because of the rigor or the memory management or the error handling that the language is forcing you to do, it might actually end up being a better choice for people using LLMs. [00:59:27] Bryan: Absolutely. It gives you more certainty in the artifact that you've delivered. I mean, you know a lot about a Rust program that compiles correctly.
I mean, th there are certain classes of errors that you don't have, um, that you actually don't know on a C program or a Go program or a, a JavaScript program. [00:59:46] Bryan: I think that's gonna be really important. I think we are on the cusp, maybe we've already seen it, of this kind of great bifurcation in the software that we write.
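The guarantee Bryan is describing can be sketched in a few lines of Rust (a minimal illustration, not code from the conversation; the `parse_port` function is hypothetical): a fallible operation returns a `Result`, and the compiler refuses to build a caller that ignores the error arm, so that class of error is dealt with in development rather than discovered in production.

```rust
use std::num::ParseIntError;

// A fallible operation returns a Result; the error case is part of the
// function's type, not an afterthought discovered at runtime.
fn parse_port(s: &str) -> Result<u16, ParseIntError> {
    s.trim().parse::<u16>()
}

fn main() {
    // The compiler requires both arms to be handled before this compiles;
    // there is no path where a bad port silently becomes a garbage value.
    match parse_port("8080") {
        Ok(p) => println!("listening on {p}"),
        Err(e) => eprintln!("bad port: {e}"),
    }
}
```

In C or JavaScript the equivalent failure typically surfaces only once the bad value is used; here it cannot escape the `match`, which is part of what you "know about a Rust program that compiles correctly."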
Your image deserves honesty, not “enhancements,” when viewed on a professional display. We sit down with Bram Desmet, CEO of Flanders Scientific, to explore how true reference monitors are built and why that difference matters every time a client asks, “Is this correct?” From ditching mass-market chipsets in favor of custom FPGAs to calibrating each unit individually, Bram lifts the hood on a process designed for one goal: confidence in every pixel.We break down the journey from panel sourcing to firmware, and why QD-OLED is shifting the landscape for colorists and DITs alike. You'll hear how FSI's Gaia Color AutoCal allows you to plug a probe directly into the display, run its own test patches, and map thousands of states from a single master calibration, without a laptop or third-party software. We also dig into the practical wins of QD-OLED: additive RGB for white, exceptional off-axis stability, strong HDR performance, and multiple sizes that make single-monitor rooms a reality.We also talk brightness and the future of reference displays. Is 4,000 nits the sweet spot for HDR grading, or should we be chasing 10,000? Bram shares his thoughts on the creative value of headroom, and why broad QD-OLED adoption across TVs, gaming, phones, and automotive gives this technology real staying power. We chat about the “panel lottery,” FSI's quality-control safeguards, and why every unit ships with verified calibration reports. We also touch on how one accurate display in the finishing suite helps teams focus on creative intent instead of negotiating screen differences.If you care about color accuracy, translation, and saving hours of guesswork, this conversation is for you. 
Subscribe, share with a fellow color nerd, and leave a review telling us what your current monitoring setup looks like and what you're upgrading next.Guest Links:IG – https://www.instagram.com/bramrdesmet/Website – https://www.flandersscientific.com/Send us a textPixelToolsModern Color Grading Tools and Presets for DaVinci Resolve Flanders Scientific Inc. (FSI)High-Quality Reference Displays for Editors, Colorists and DITSDeMystify ColorColor Training and Color Grading ToolsDisclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.Support the showLike the show? Leave a review!This episode is brought to you by FSI, DeMystify Color, and PixelToolsFollow Us on Social: Instagram @colorandcoffeepodcast YouTube @ColorandCoffee Produced by Bowdacious Media LLC
In this episode of the Crazy Wisdom podcast, host Stewart Alsop sits down with Peter Schmidt Nielsen, who is building FPGA-accelerated servers at Saturn Data. The conversation explores why servers need FPGAs, how these field-programmable gate arrays work as "IO expanders" for massive memory bandwidth, and why they're particularly well-suited for vector database and search applications. Peter breaks down the technical realities of FPGAs - including why they "really suck" in many ways compared to GPUs and CPUs - while explaining how his company is leveraging them to provide terabyte-per-second bandwidth to 1.3 petabytes of flash storage. The discussion ranges from distributed systems challenges and the CAP theorem to the hardware-software relationship in modern computing, offering insights into both the philosophical aspects of search technology and the nuts-and-bolts engineering of memory controllers and routing fabrics.For more information about Peter's work, you can reach him on Twitter at @PTRSCHMDTNLSN or find his website at saturndata.com.Timestamps00:00 Introduction to FPGAs and Their Role in Servers02:47 Understanding FPGA Limitations and Use Cases05:55 Exploring Different Types of Servers08:47 The Importance of Memory and Bandwidth11:52 Philosophical Insights on Search and Access Patterns14:50 The Relationship Between Hardware and Search Queries17:45 Challenges of Distributed Systems20:47 The CAP Theorem and Its Implications23:52 The Evolution of Technology and Knowledge Management26:59 FPGAs as IO Expanders29:35 The Trade-offs of FPGAs vs. ASICs and GPUs32:55 The Future of AI Applications with FPGAs35:51 Exciting Developments in Hardware and BusinessKey Insights1. FPGAs are fundamentally "crappy ASICs" with serious limitations - Despite being programmable hardware, FPGAs perform far worse than general-purpose alternatives in most cases. A $100,000 high-end FPGA might only match the memory bandwidth of a $600 gaming GPU. 
They're only valuable for specific niches like ultra-low latency applications or scenarios requiring massive parallel I/O operations, making them unsuitable for most computational workloads where CPUs and GPUs excel.2. The real value of FPGAs lies in I/O expansion, not computation - Rather than using FPGAs for their processing power, Saturn Data leverages them primarily as cost-effective ways to access massive amounts of DRAM controllers and NVMe interfaces. Their server design puts 200 FPGAs in a 2U enclosure with 1.3 petabytes of flash storage and terabyte-per-second read bandwidth, essentially using FPGAs as sophisticated I/O expanders.3. Access patterns determine hardware performance more than raw specs - The way applications access data fundamentally determines whether specialized hardware will provide benefits. Applications that do sparse reads across massive datasets (like vector databases) benefit from Saturn Data's architecture, while those requiring dense computation or frequent inter-node communication are better served by traditional hardware. Understanding these patterns is crucial for matching workloads to appropriate hardware.4. Distributed systems complexity stems from failure tolerance requirements - The difficulty of distributed systems isn't inherent but depends on what failures you need to tolerate. Simple approaches that restart on any failure are easy but unreliable, while Byzantine fault tolerance (like Bitcoin) is extremely complex. Most practical systems, including banks, find middle ground by accepting occasional unavailability rather than trying to achieve perfect consistency, availability, and partition tolerance simultaneously.5. Hardware specialization follows predictable cycles of generalization and re-specialization - Computing hardware consistently follows "Makimoto's Wave" - specialized hardware becomes more general over time, then gets leapfrogged by new specialized solutions. 
CPUs became general-purpose, GPUs evolved from fixed graphics pipelines to programmable compute, and now companies like Etched are creating transformer-specific ASICs. This cycle repeats as each generation adds programmability until someone strips it away for performance gains.6. Memory bottlenecks are reshaping the hardware landscape - The AI boom has created severe memory shortages, doubling costs for DRAM components overnight. This affects not just GPU availability but creates opportunities for alternative architectures. When everyone faces higher memory costs, the relative premium for specialized solutions like FPGA-based systems becomes more attractive, potentially shifting the competitive landscape for memory-intensive applications.7. Search applications represent ideal FPGA use cases due to their sparse access patterns - Vector databases and search workloads are particularly well-suited to FPGA acceleration because they involve searching through massive datasets with sparse access patterns rather than dense computation. These applications can effectively utilize the high bandwidth to flash storage and parallel I/O capabilities that FPGAs provide, making them natural early adopters for this type of specialized hardware architecture.
In this episode of the Crazy Wisdom podcast, host Stewart Alsop interviews Marcin Dymczyk, CPO and co-founder of SevenSense Robotics, exploring the fascinating world of advanced robotics and AI. Their conversation covers the evolution from traditional "standard" robotics with predetermined pathways to advanced robotics that incorporates perception, reasoning, and adaptability - essentially the AGI of physical robotics. Dymczyk explains how his company builds "the eyes and brains of mobile robots" using camera-based autonomy algorithms, drawing parallels between robot sensing systems and human vision, inner ear balance, and proprioception. The discussion ranges from the technical challenges of sensor fusion and world models to broader topics including robotics regulation across different countries, the role of federalism in innovation, and how recent geopolitical changes are driving localized high-tech development, particularly in defense applications. They also touch on the democratization of robotics for small businesses and the philosophical implications of increasingly sophisticated AI systems operating in physical environments. 
To learn more about SevenSense, visit www.sevensense.ai.Check out this GPT we trained on the conversationTimestamps00:00 Introduction to Robotics and Personal Journey05:27 The Evolution of Robotics: From Standard to Advanced09:56 The Future of Robotics: AI and Automation12:09 The Role of Edge Computing in Robotics17:40 FPGA and AI: The Future of Robotics Processing21:54 Sensing the World: How Robots Perceive Their Environment29:01 Learning from the Physical World: Insights from Robotics33:21 The Intersection of Robotics and Manufacturing35:01 Journey into Robotics: Education and Passion36:41 Practical Robotics Projects for Beginners39:06 Understanding Particle Filters in Robotics40:37 World Models: The Future of AI and Robotics41:51 The Black Box Dilemma in AI and Robotics44:27 Safety and Interpretability in Autonomous Systems49:16 Regulatory Challenges in Robotics and AI51:19 Global Perspectives on Robotics Regulation54:43 The Future of Robotics in Emerging Markets57:38 The Role of Engineers in Modern WarfareKey Insights1. Advanced robotics transcends traditional programming through perception and intelligence. Dymczyk distinguishes between standard robotics that follows rigid, predefined pathways and advanced robotics that incorporates perception and reasoning. This evolution enables robots to make autonomous decisions about navigation and task execution, similar to how humans adapt to unexpected situations rather than following predetermined scripts.2. Camera-based sensing systems mirror human biological navigation. SevenSense Robotics builds "eyes and brains" for mobile robots using multiple cameras (up to eight), IMUs (accelerometers/gyroscopes), and wheel encoders that parallel human vision, inner ear balance, and proprioception. This redundant sensing approach allows robots to navigate even when one system fails, such as operating in dark environments where visual sensors are compromised.3. 
Edge computing dominates industrial robotics due to connectivity and security constraints. Many industrial applications operate in environments with poor connectivity (like underground grocery stores) or require on-premise solutions for confidentiality. This necessitates powerful local processing capabilities rather than cloud-dependent AI, particularly in automotive factories where data security about new models is paramount.4. Safety regulations create mandatory "kill switches" that bypass AI decision-making. European and US regulatory bodies require deterministic safety systems that can instantly stop robots regardless of AI reasoning. These systems operate like human reflexes, providing immediate responses to obstacles while the main AI brain handles complex navigation and planning tasks.5. Modern robotics development benefits from increasingly affordable optical sensors. The democratization of 3D cameras, laser range finders, and miniature range measurement chips (costing just a few dollars from distributors like DigiKey) enables rapid prototyping and innovation that was previously limited to well-funded research institutions.6. Geopolitical shifts are driving localized high-tech development, particularly in defense applications. The changing role of US global leadership and lessons from Ukraine's drone warfare are motivating countries like Poland to develop indigenous robotics capabilities. Small engineering teams can now create battlefield-effective technology using consumer drones equipped with advanced sensors.7. The future of robotics lies in natural language programming for non-experts. Dymczyk envisions a transformation where small business owners can instruct robots using conversational language rather than complex programming, similar to how AI coding assistants now enable non-programmers to build applications through natural language prompts.
Michaela Eichinger, Product Solutions Physicist at Quantum Machines and quantum content creator, joins Mira and Chris on ML4Q&A to discuss her journey from academic research to working in a deep-tech startup. She is representative of a generation of PhD students and postdocs from labs working on qubit technologies that join the emerging quantum industry. Her PhD work focused on gatemons and stencil-based nanofabrication of superconducting qubits at the Niels Bohr Institute. Now she works at Quantum Machines, a company developing control electronics for quantum computers that aims to provide hardware capable of meeting the demands of fault-tolerant quantum architectures. In this episode, Michaela reflects on the 2025 Nobel Prize, talks about cleanroom and measurement challenges during her PhD, and compares writing papers with writing patents. She explains her role at Quantum Machines and the company's mission to harness FPGAs—the key electronic component—to control qubits on nanosecond timescales. In addition to synthesizing control pulses and processing readout signals, intricate classical computations must be performed in real time, for example to track qubit errors to correct them on the fly. Despite growing sophistication these advanced electronic products must remain easy to deploy. She also discusses the importance of science communication, which led her to launch a newsletter to help educate a broad audience about the latest breakthroughs and explain key ideas in quantum computing. Whether you are an aspiring quantum researcher or simply curious about the state of the quantum industry, this episode conveys the excitement of living on this technological frontier.
In this episode, Kevin Bowers returns to explain Jump's expansion from trading to core infrastructure, centered on what he calls the "great inversion": the real bottleneck in tech isn't compute, but data and I/O. He introduces Shelby, a new storage network, as a direct challenge to the "Hotel California for Data" model used by cloud providers. This same focus on efficient data flow—not just processing power—was the key to scaling Solana with Fire Dancer. Finally, Kevin explains how FPGAs from high-frequency trading are the critical hardware solution, allowing blockchains to bypass software's inefficient "Tower of Babel" and "get close to the wire" for true high performance. 00:00 - Expanding Beyond Trading and the Vision for Shelby 02:31 - Challenges in Storage and Data Management 04:37 - Building High-Performance Systems 08:04 - The Evolution of Jump's Technology 11:55 - The Economics of Cloud Storage 29:07 - Fire Dancer and Frankendancer 42:03 - The Cost of Optimization 42:48 - Machine Learning and Custom Networks 43:47 - Project Prioritization and Entropy 46:48 - Challenges in High-Performance Computing 56:38 - The Role of FPGAs in Trading and Blockchain 01:13:45 - Future of Hardware Acceleration in Blockchain 01:18:54 -Conclusion and Final Thoughts Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
FPGAs alias rekonfigurierbare Logik gehören zu den exotischeren Chips. Wie sie funktionieren, besprechen wir in Folge 2025/21 des Podcasts Bit-Rauschen.
Join Alex Golding as he sits down with Austin Federa, Co-founder of DoubleZero, to explore how they're building permissionless high-performance fiber infrastructure that could revolutionize blockchain performance. Austin shares the technical vision behind creating a parallel internet for distributed systems, starting with Solana validators as their initial market.DoubleZero: https://doublezero.xyz
Host: Sebastian HassingerGuest: Andrew Dzurak (CEO, Diraq)In this enlightening episode, Sebastian Hassinger interviews Professor Andrew Dzurak. Andrew is the CEO and co-founder of Diraq and concurrently a Scientia Professor in Quantum Engineering at UNSW Sydney, an ARC Laureate Fellow and a Member of the Executive Board of the Sydney Quantum Academy. Diraq is a quantum computing startup pioneering silicon spin qubits, based in Australia. The discussion delves into the technical foundations, manufacturing breakthroughs, scalability, and future roadmap of silicon-based quantum computers—all with an industrial and commercial focus.Key Topics and Insights1. What Sets Diraq ApartDiraq's quantum computers use silicon spin qubits, differing from the industry's more familiar modalities like superconducting, trapped ion, or neutral atom qubits.Their technology leverages quantum dots—tiny regions where electrons are trapped within modified silicon transistors. The quantum information is encoded in the spin direction of these trapped electrons—a method with roots stretching over two decades1.2. Manufacturing & ScalabilityDiraq modifies standard CMOS transistors, making qubits that are tens of nanometers in size, compared to the much larger superconducting devices. This means millions of qubits can fit on a single chip.The company recently demonstrated high-fidelity qubit manufacturing on standard 300mm wafers at commercial foundries (GlobalFoundries, IMEC), matching or surpassing previous experimental results—all fidelity metrics above 99%.3. Architectural InnovationsDiraq's chips integrate both quantum and conventional classical electronics side by side, using standard silicon design toolchains like Cadence. This enables leveraging existing chip design and manufacturing expertise, speeding progress towards scalable quantum chips.Movement of electrons (and thus qubits) across the chip uses CMOS bucket-brigade techniques, similar to charge-coupled devices. This means fast (
It was time for a little trip again. I went to Munich to the three-day FPGA conference of PLC2 and the Vogel Verlag. * FPGA Conference the biggest event in Europe * Some facts * 3 days * 430 Participants * 128 Lectures * 110 Speakers * 39 Exhibitors * This year triple anniversary * 40 years FPGA * 30 years PLC2 * 10 years FPGA Conference * All my visited talks * Welcome to the Post-European Cyber Resilience Act (CRA) Era * FPGAs and the Cyber Resilience Act * Cyber Resilience Act: Planning your Security Future * Making Simple FPGA Testbenches – Utilising Important Quality Measures * A Cuckoo Hash-Based CAM Architecture for FPGA and ASIC Implementations * Elevate your Design: Security and Power Efficiency with AMD Spartan UltraScale+ FPGAs * Faster Change of Probe Signals using the Vivado Logic Analyzer * Warning! Your FPGAs & SoC FPGAs are Under Attack * Functional Safety for Hardware and Software * Security, Regulations and FPGA-Based Systems – How to Make Your System Secure * Verify the Bits that Fly : A Demonstration of Bitstream to HDL Equivalence Checking * Why VUnit? * Managing and Versioning Gateware Source Code on Git with Hog * A Baseboard Management Controller for FPGA/SoC Board Supervision and Faster Bringup * GateMate FPGA: Qualification for Radiation-Tolerant Applications * GateMate FPGA: High-Speed Transceiver (SerDes) Hands-On * Project-Based and Non-Project-Based Scripting in Vivado * Multi-Run Management Using Vivado * How to Drive Parallel High-Speed Circuits from an AMD FPGA Next FPGA conference is from 30 June - 2 July 2026 And for now come into our Newsletter and also follow us on LinkedIn. The post WFP030 – FPGA Conference 2025 appeared first on World of FPGA by David Kirchner.
In our latest Electronic Specifier Insights podcast, Managing Editor Paige West speaks with Hussein Osman, Marketing Director, Lattice Semiconductor, all about optimising Edge AI with low-power FPGAs.
How can FPGAs calculate so fast? The secret inside an FPGA is a digital signal processing block. Content of this Episode: * What does DSP stand for? * Common parts inside an DSP block * Function representation * Important facts And for now come into our Newsletter and also follow us on LinkedIn. The post WFP029 – FPGA DSP appeared first on World of FPGA by David Kirchner.
Episode 28 of the World of FPGA Podcast. Talking about our favorite FPGA companies. Welcome to the FPGA Talk with the new co-host Glenn Kirilow
Send us a textIn this episode of Embedded Insiders, Ken O'Neil, Space Systems Architect at AMD, joins us to explore the company's advancements in AI for space. He delves into how AMD is enabling on-board processing for satellites and spacecraft, including the adoption of FPGAs in spaceflight applications.But first, Embedded Computing Design's Editor-in-Chief, Ken Briodagh, shares highlights from Computex 2025, providing insights into the top technological innovations unveiled at the event, including advancements in AI and edge computing.For more information, visit embeddedcomputing.com
Discover how Rackspace Spot is democratizing cloud infrastructure with an open-market, transparent option for cloud servers. Kevin Carter, Product Director at Rackspace Technology, discusses Rackspace Spot's hypothesis and the impact of an open marketplace for cloud resources. Discover how this novel approach is transforming the industry. TIMESTAMPS[00:00:00] – Introduction & Kevin Carter's Background[00:02:00] – Journey to Rackspace and Open Source[00:04:00] – Engineering Culture and Pushing Boundaries[00:06:00] – Rackspace Spot and Market-Based Compute[00:08:00] – Cognitive vs. Technical Barriers in Cloud Adoption[00:10:00] – Tying Spot to OpenStack and Resource Scheduling[00:12:00] – Product Roadmap and Expansion of Spot[00:16:00] – Hardware Constraints and Power Consumption[00:18:00] – Scrappy Startups and Emerging Hardware Solutions[00:20:00] – Programming Languages for Accelerators (e.g., Mojo)[00:22:00] – Evolving Role of Software Engineers[00:24:00] – Importance of Collaboration and Communication[00:28:00] – Building Personal Networks Through Open Source[00:30:00] – The Power of Asking and Offering Help[00:34:00] – A Question No One Asks: Mentors[00:38:00] – The Power of Educators and Mentorship[00:40:00] – Rackspace's OpenStack and Spot Ecosystem Strategy[00:42:00] – Open Source Communities to Join[00:44:00] – Simplifying Complex Systems[00:46:00] – Getting Started with Rackspace Spot and GitHub[00:48:00] – Human Skills in the Age of GenAI - Post Interview Conversation[00:54:00] – Processing Feedback with Emotional Intelligence[00:56:00] – Encouraging Inclusive and Clear Collaboration QUOTESCHARNA PARKEY“If you can't engage with this infrastructure in a way that's going to help you, then I guarantee you it's not up to par for the direction that we're going. [...] This democratization — if you don't know how to use it — it's not doing its job.”KEVIN CARTER“Those scrappy startups are going to be the ones that solve it. 
They're going to figure out new and interesting ways to leverage instructions. [...] You're going to see a push from them into the hardware manufacturers to enhance workloads on FPGAs, leveraging AVX 512 instruction sets that are historically on CPU silicon, not on a GPU.”
Colin O'Flynn returns to The Amp Hour for a 3rd time to talk about recent developments in security, FPGAs, small scale electronics manufacturing, and the world of academia.
(2:40) - Cancer-on-a-chip technology advances our understanding of how cancer operatesThis episode was brought to you by Mouser, our favorite place to get electronics parts for any project, whether it be a hobby at home or a prototype for work. Click HERE to learn more about the role of field programmable gate arrays (FPGAs) in the medical world! Become a founding reader of our newsletter: http://read.thenextbyte.com/ As always, you can find these and other interesting & impactful engineering articles on Wevolver.com.
In episode 8 of Open Source Ready, Brian and John sit down with Daniel Mangum, CTO of Golioth, to discuss his journey from distributed systems and Kubernetes to open source hardware. They explore the rise of RISC-V, the potential of decentralized social media, and how AI is shaping the future of computing. Plus, Daniel shares insights into FPGAs, the AT Protocol, and why open source innovation matters more than ever.
Send us a textIn this episode of Embedded Insiders, Winston Leung, Senior Product Marketing Manager at QNX, and Jay Thomas, Director of Field Development at LDRA join the podcast to discuss the fusion of robotics and automation in modern manufacturing, and how functional safety is shaping this transformative shift.Then, Rich and Vin are back with another Dev Talk featuring Steve Mensor, Vice President at Achronix Semiconductor. The three dive into the game-changing potential of embedded FPGAs and the benefits of blending the strengths of hardware and software.But first, Rich and Ken kick things off with a look at what's in store for embedded world 2025, and a recent announcement involving Apple's first custom modem chip for iPhones.For more information, visit embeddedcomputing.com
In this episode, we sit down with Skot, electrical engineer and creator of Bitaxe, the first open-source ASIC bitcoin miner. Skot shares his journey from IoT hardware development to designing hardware for bitcoin mining, including his early experiments with FPGAs in a university lab. We discuss the significance of open-source mining for bitcoin's future, the challenges of hardware and firmware development, and how Bitaxe compares to industry giants like Bitmain. Skot explains the potential for decentralized mining, the risks of centralization in mining hardware and pools, and why making mining accessible and transparent is critical. We also touch on the concept of incorporating miners into household devices and the role of small-scale miners in bitcoin's long-term resilience.SUPPORT THE PODCAST:→ Subscribe→ Leave a review→ Share the show with your friends and family→ Send us an email podcast@unchained.com→ Learn more about Unchained: https://unchained.com/?utm_source=youtube&utm_medium=video&utm_campaign=TBF-podcast-description→ Book a free call with a bitcoin expert: https://unchained.com/consultation?utm_source=youtube&utm_medium=video&utm_campaign=TBF-podcast-description→ Buy bitcoin in an IRA—sign up today and get your first year free: unchained.com/frontierTIMESTAMPS:00:00 - Intro01:16 - Meet Skot, the mind behind Bitaxe02:34 - From IoT projects to bitcoin mining innovation05:20 - Skot's bitcoin origin story and the PayPal saga09:35 - Mining bitcoin with university lab FPGAs12:11 - Why Skot started the first open-source ASIC miner14:55 - The challenges of open-source bitcoin mining hardware19:59 - Bitaxe vs. Bitmain: efficiency showdown21:25 - Who's building Bitaxes and why?25:01 - The importance of open-source mining for bitcoin's future28:26 - Can Bitaxe become a significant part of bitcoin's hash rate?33:39 - The dream of decentralized mining in household devices41:02 - The future of bitcoin mining: big grids vs. 
home miners47:06 - Skot's take on mining centralization risks53:37 - How open-source mining could save bitcoinWHERE TO FOLLOW US:→ Unchained Twitter: https://twitter.com/unchainedcom→ Unchained LinkedIn: https://www.linkedin.com/company/unchainedcom → Unchained Newsletter: https://unchained.com/newsletter → Joe Burnett (Host) on Twitter: https://twitter.com/IIICapital→ Jose Burgos (Director of Media Production) on Twitter: https://x.com/DeFBeD→ Skot on Twitter: https://x.com/skot9000
Welcome to episode 285 of the Explain it to me Like I'm 5 Podcast, formerly known as The Cloud Pod – where the forecast is always cloudy! We've got a lot of news this week, including the last of our coverage from re:Invent, ChatGTP Pro, FPGA, and even some major staffing turnovers. Titles we almost went with this week: Throw $200 dollars in a fire with ChatGPT Pro Jeff Barr is wrapped up by Agentic AI The Tribble with Trilliums The Wind in the Quantum Willows Rise of the dead instances FPGA and PowerPC Jeff Barr is replaced by Nova The Cloud Pod: Return of the dead instances types After 6 year Jeff Barr hands over the reigns to the CloudPod For our 6th birthday Jeff barr Retires For our 6th birthday jeff barr delegates announcements to the cloud pod 6 years of meaningless PR drivel 6 years of cloud news and we still don't know what Quantum computing is A big thanks to this week's sponsor: We're sponsorless! Want to get your brand, company, or service in front of a very enthusiastic group of cloud news seekers? You've come to the right place! Send us an email or hit us up on our slack channel for more info. General News HAPPY 6th BIRTHDAY! 2:00 HashiCorp at re:Invent 2024: Security Lifecycle Management with AWS Hashi is a big sponsor of re:Invent, so of course they had some news of their own to release. HCP Vault Secrets auto-rotation is now generally available. Dynamic secrets are generally available via HCP Vault Secrets. Secrets sync will help keep your secrets synced with AWS Secrets Manager. It still appears to be one direction, but you can now also view secrets in AWS Secrets Manager that are managed by vault. HCP Vault Radar, now in beta, automates the detection and identification of unmanaged secrets in your code, including AWS infrastructure configurations 03:10 Matthew – “This qualifies under the category of things that I feel like we talked about so long ago, I just already assumed was GA. 
I’m surprised that it wasn’t.” 03:34 HashiCorp at re:Invent 2024: Infrastructure Lifecycle Management with AWS Terraform AWS provider is now at 3 billion downloads. The
AWS Morning Brief for the week of December 16th, 2024, with Corey Quinn. Links:Amazon Bedrock Guardrails reduces pricing by up to 85%Amazon CloudWatch now provides centralized visibility into telemetry configurationsAmazon EC2 F2 instances, featuring up to 8 FPGAs, are generally availableAmazon SES now offers Global Endpoints for multi-region sending resilienceAWS Toolkit for Visual Studio Code now includes Amazon CloudWatch Logs Live TailAccelerate your AWS Graviton adoption with the AWS Graviton Savings DashboardCapture data changes while restoring an Amazon DynamoDB tableUnderstand the benefits of physical replication in Amazon RDS for PostgreSQL Blue/Green DeploymentsHow AWS sales uses Amazon Q Business for customer engagementAWS Network Firewall Geographic IP Filtering launchIssue with DynamoDB local - CVE-2022-1471
Lattice Developers Conference Key Insights! What's the latest in Edge AI innovation from Lattice Semiconductor? Hosts Daniel Newman and Patrick Moorhead are joined by Lattice Semiconductor's CEO, Ford Tamer, Chief Strategy and Marketing Officer, Esam Elashmawi, and CVP of Product Marketing and Planning, Dan Mansur, for a conversation on Lattice Semiconductor's strategic growth, key announcements, and unique market position in the Edge AI sector on this episode of Six Five On The Road at Lattice DevCon 2024. Highlights include ⤵️ Ford Tamer's insights and experiences since joining Lattice. Lattice's growth strategy and the strengths driving this growth. Key product announcements and solutions introduced at the DevCon. The unique position of Lattice FPGA's in the burgeoning Edge AI market. An introduction to Nexus 2 and continued investment in small FPGAs.
Here are some of the things we talked about:
https://www.media.mit.edu/projects/seeing-around-objects/overview/
https://en.wikipedia.org/wiki/Light_field_camera
Jiska's inactivity reboot research
404 Media article about inactivity reboot
Joe Grand's YouTube
Ken Shirriff - https://www.righto.com/
John McMaster - https://siliconpr0n.org/
Piotr Esden-Tempski - https://1bitsquared.com/
Ferrite for iOS
Azeria - Arm book - https://azeria-labs.com/
https://x.com/fox0x01
https://leg-assembly.com/
Marcan - Asahi Linux
Joe Fitz
Foone
Bunnie Huang - Turning Everyday Gadgets into Bombs is a Bad Idea
Kill Decision, a book by Daniel Suarez
Maddie Stone
Malware Unicorn
POSEE
Show stats as of December 1, 2024:
Title | Release Date | Unique Downloads | Duration (s) | Duration
001 - Success! | Aug 14, 2017 | 4,421 | 2567.41 | 0:42:47
002 - Cheap And Easy | Aug 29, 2017 | 3,763 | 3300.19 | 0:55:00
003 - Barbies and Keyboards | Sep 17, 2017 | 2,741 | 2025.26 | 0:33:45
004 - 0x0FF the Rails | Oct 12, 2017 | 3,461 | 4142.3 | 1:09:02
005 - Circuits That Go Nowhere | Nov 05, 2017 | 3,866 | 3439.2 | 0:57:19
006 - Marketing Via Stickers | Dec 05, 2017 | 2,832 | 1029.78 | 0:17:09
007 - Candy Coated | Jan 03, 2018 | 3,439 | 2767.85 | 0:46:07
008 - T0015! Part 0x1 | Mar 09, 2018 | 3,600 | 1663.35 | 0:27:43
009 - T0015! Part 0x2: All Ur sigs R belong to uS. | Apr 14, 2018 | 3,651 | 1710.66 | 0:28:30
010 - T0015! Part 0x3 - Debug Interfaces | May 14, 2018 | 3,421 | 2127.58 | 0:35:27
011 - Making Too Many Assumptions | Jun 03, 2018 | 3,199 | 2715.12 | 0:45:15
012 - Cheese vs. SDR | Jun 29, 2018 | 3,218 | 3987.09 | 1:06:27
013 - It's Not Magic | Jul 04, 2018 | 3,552 | 4965.74 | 1:22:45
014 - Ferrycast | Jul 09, 2018 | 3,209 | 2419.78 | 0:40:19
015 - Updates! | Aug 30, 2018 | 2,990 | 1172.03 | 0:19:32
016 - Supercon 2018 Part 1 | Nov 10, 2018 | 2,731 | 2678.26 | 0:44:38
017 - Supercon 2018 Part 2 | Nov 11, 2018 | 2,776 | 2866.56 | 0:47:46
018 - Ghidra | Mar 15, 2019 | 3,433 | 1277.65 | 0:21:17
019 - It's Still Not Magic | Apr 06, 2019 | 3,244 | 2701.62 | 0:45:01
020 - Hardwear.io CTF Interviews | Jun 16, 2019 | 2,607 | 1747.3 | 0:29:07
021 - Silent Disco Wizards | Jun 23, 2019 | 2,549 | 1616.32 | 0:26:56
022 - Bits Through the Microscope | Jun 30, 2019 | 2,583 | 1455.57 | 0:24:15
023 - Magic Moonbeams | Jul 08, 2019 | 2,599 | 1968.33 | 0:32:48
024 - Cars, Servers, and FPGAs! | Jul 14, 2019 | 4,046 | 5272.64 | 1:27:52
025 - Opaque Magisterium | Aug 14, 2019 | 3,418 | 5891.14 | 1:38:11
026 - You Can Lose in so Many Colors! | Aug 30, 2019 | 3,768 | 7437.72 | 2:03:57
027 - The Box | Sep 08, 2019 | 2,728 | 1978.28 | 0:32:58
028 - Everyone Has a Bag of Tricks | Sep 15, 2019 | 3,326 | 4440.62 | 1:14:00
026a Easter Egg Extra | Sep 21, 2019 | 6,666 | 78.3 | 0:01:18
029 - Old Timey Name Droppin' | Oct 16, 2019 | 3,464 | 6108.44 | 1:41:48
030 - Supercon 2019 | Dec 01, 2019 | 2,797 | 2193.4 | 0:36:33
031 - The Title Isn't DibbleDabble | Dec 27, 2019 | 3,150 | 2691.97 | 0:44:51
032 - High Molarity Rants | Feb 25, 2020 | 3,239 | 4232.96 | 1:10:32
033 - All Over the Place | Apr 27, 2020 | 2,924 | 2725.33 | 0:45:25
034 - Mechanical RE | Jun 22, 2020 | 2,715 | 5550.24 | 1:32:30
035 - Giving it all away (Listener Survery) | Jul 04, 2020 | 1,936 | 379.36 | 0:06:19
036 - ADDVulcan - Hack-a-sat Part 1 | Jul 20, 2020 | 2,406 | 3438.36 | 0:57:18
037 - 2020 Survey Results | Aug 04, 2020 | 2,100 | 1856.98 | 0:30:56
038 - My Favorite Random Number is 5 | Aug 22, 2020 | 2,892 | 4540.08 | 1:15:40
039 - Changing the Nature of Reality | Sep 13, 2020 | 2,667 | 3678.39 | 1:01:18
040 - Uh-tastic | Oct 03, 2020 | 2,405 | 1612.5 | 0:26:52
041 - What did you fail at this week? | Nov 07, 2020 | 2,956 | 5504.59 | 1:31:44
042 - Diwali in the Morning | Nov 24, 2020 | 2,612 | 3324.93 | 0:55:24
043 - Filling In Zeros | Dec 21, 2020 | 2,599 | 4542.95 | 1:15:42
044 - Scots Army Knife | Jan 03, 2021 | 3,355 | 6217.34 | 1:43:37
045 - Rizin and Cutter | Feb 15, 2021 | 3,271 | 4879.4 | 1:21:19
046 - Never Reveal the Prestige | Mar 18, 2021 | 2,903 | 5659.94 | 1:34:19
047 - The Sun, The Moon, The Stars | May 16, 2021 | 2,650 | 3042.08 | 0:50:42
048 - A Bad Case of Kubernitis | Jun 06, 2021 | 3,504 | 4561.54 | 1:16:01
049 - Reversing Your Childhood One Game At a Time | Jul 10, 2021 | 3,117 | 3657.36 | 1:00:57
050 - Four Years In | Aug 22, 2021 | 2,737 | 3182.26 | 0:53:02
051 - Collecting Students With Similar Names | Oct 05, 2021 | 3,113 | 5296.86 | 1:28:16
052 - Twitter Is My Lab Notebook | Oct 26, 2021 | 3,607 | 7612.66 | 2:06:52
053 - It's Hammer Time! | Dec 16, 2021 | 3,678 | 6024.75 | 1:40:24
054 - It's A Calibration, Not An Update! | Feb 11, 2022 | 3,816 | 4582.27 | 1:16:22
055 - Stacks Of Bricked Chips | Mar 13, 2022 | 3,626 | 3716.49 | 1:01:56
056 - Listening to Jupiter | Mar 16, 2022 | 3,981 | 4377.36 | 1:12:57
057 - I Did Not Expect Sharks! | May 09, 2022 | 3,949 | 5855.03 | 1:37:35
058 - Technically Met the Specs | Jun 15, 2022 | 3,424 | 5321.05 | 1:28:41
059 - Instant Nerd Snipe | Jul 04, 2022 | 3,578 | 3736.53 | 1:02:16
060 - The Brie List | Aug 12, 2022 | 3,504 | 3173.46 | 0:52:53
061 - A Case of the Sniffles | Nov 09, 2022 | 3,185 | 3599.93 | 0:59:59
062 - Keymap Rain Dance | Dec 30, 2022 | 3,581 | 4588.83 | 1:16:28
063 - I Read Online That It's Impossible | Mar 26, 2023 | 4,054 | 4501.32 | 1:15:01
064 - MS-DOS Malware Chose Me | May 21, 2023 | 3,317 | 4093.39 | 1:08:13
065 - Multitalented Grinch | Jul 30, 2023 | 2,932 | 3831.25 | 1:03:51
066 - Use Your Scope! | Dec 09, 2023 | 3,135 | 6012.58 | 1:40:12
067 - I Don't Know What I'm Doing | Mar 02, 2024 | 2,021 | 927.97 | 0:15:27
068 - The Monkey Button | Apr 07, 2024 | 2,458 | 4131 | 1:08:51
069 - Canned Cheese and Onion Rings | Apr 17, 2024 | 2,576 | 4977.68 | 1:22:57
070 - I Have a DediProblem | Jun 09, 2024 | 3,135 | 6972.11 | 1:56:12
071 - Snerd Niped | Sep 07, 2024 | 4,077 | 5462.2 | 1:31:02
Have comments or suggestions for us? Find us on Twitter @unnamed_show, or email us at show@unnamedre.com. Music by TeknoAxe (http://www.youtube.com/user/teknoaxe)
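The two duration columns in the stats above encode the same value in different units (e.g. 2567.41 seconds is 0:42:47). As a small illustration of that arithmetic (plain Python, with an invented helper name, not anything from the show):

```python
def format_duration(seconds):
    """Convert a duration in seconds to H:MM:SS, as in the stats table."""
    total = int(seconds)            # drop the fractional part of a second
    h, rem = divmod(total, 3600)    # whole hours, remainder in seconds
    m, s = divmod(rem, 60)          # whole minutes, leftover seconds
    return f"{h}:{m:02d}:{s:02d}"

# format_duration(2567.41) -> "0:42:47" (episode 001)
```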
Is it possible for a startup to compete with NVIDIA, the $1 trillion behemoth dominating AI hardware? Thomas Sohmers and his team at Positron are betting they can, and they're moving at lightning speed to prove it. In this episode, Chris Saad sits down with Thomas Sohmers, founder and CEO of Positron, to explore how this ambitious startup is aiming to disrupt the AI chip market currently dominated by NVIDIA. In this episode, you will:
Discover how Positron achieved working hardware in just 18 months, defying typical timelines for chip startups
Learn about Positron's strategy for compatibility with existing AI frameworks like HuggingFace
Understand the performance and efficiency advantages Positron claims over NVIDIA's offerings
Explore how the team's deep industry experience from companies like Groq and F5 Networks gives them an edge
Gain insights into Positron's innovative use of FPGAs for rapid iteration before moving to custom chips
Hear about the company's funding journey and current Series A raise
Uncover valuable lessons for founders on competing against industry giants
Learn more about Positron:
Website: https://www.positron.ai/
Follow Thomas Sohmers on LinkedIn: https://www.linkedin.com/in/trsohmers/
The Pact
Honour The Startup Podcast Pact! If you have listened to TSP and gotten value from it, please:
Follow, rate, and review us in your listening app
Subscribe to the TSP Mailing List at https://thestartuppodcast.beehiiv.com/subscribe
Secure your official TSP merchandise at https://shop.tsp.show/
Follow us on YouTube at https://www.youtube.com/@startup-podcast
Give us a public shout-out on LinkedIn or anywhere you have a social media following.
Key links
The Startup Podcast is sponsored by Vanta. Vanta helps businesses get and stay compliant by automating up to 90% of the work for the most in-demand compliance frameworks. With over 200 integrations, you can easily monitor and secure the tools your business relies on. For a limited-time offer of US$1,000 off, go to www.vanta.com/tsp.
Get your question in for our next Q&A episode: https://forms.gle/NZzgNWVLiFmwvFA2A
The Startup Podcast website: https://tsp.show
Learn more about Chris and Yaniv
Work 1:1 with Chris: http://chrissaad.com/advisory/
Follow Chris on LinkedIn: https://www.linkedin.com/in/chrissaad/
Follow Yaniv on LinkedIn: https://www.linkedin.com/in/ybernstein/
Credits
Editor: Justin McArthur
Content Strategist: Carolina Franco https://www.linkedin.com/in/francocarolina/
Intro Voice: Jeremiah Owyang
Antoine van Gelder spoke to us about making digital musical instruments, USB, and FPGAs. Antoine works for Great Scott Gadgets, specifically on the Cynthion USB protocol analysis tool that can be used in conjunction with Python and GSG's FaceDancer to act as a new USB device. While bonding over MurderBot Diaries was a given, Antoine also mentioned NAND2Tetris which Elecia countered with The Elements of Computing Systems: Building a Modern Computer from First Principles, the book that covers the NAND2Tetris material. Memfault is a leading embedded device observability platform that empowers teams to build better IoT products, faster. Its off-the-shelf solution is specifically designed for bandwidth-constrained devices, offering device performance and product analytics, debugging, and over-the-air capabilities. Trusted by leading brands such as Bose, Lyft, Logitech, Panasonic, and Augury, Memfault improves the reliability of devices across consumer electronics and mission-critical industries such as access control, point of sale, energy, and healthcare. To learn more, visit memfault.com.
I'm always interested in what factors shape the design of a programming language. This week we're taking a look at a language that's wholly shaped by its need to support a very specific kind of program - audio processing. Anything from creating a simple echo sound effect, to building an entire digital instrument based on a 17th-century harpsichord.The language in question is Faust, and this week we're joined by Romain Michon, who works on and teaches Faust, as we look at how it's designed, what kind of programmers it's for, and how it does the job of turning audio-pipeline definitions into executable code.And one of the surprising parts of that compilation strategy is the decision to have it compile to multiple targets, from the expected ones like C and Rust, to the exotic destination of FPGAs (Field Programmable Gate Arrays). FPGAs are like reprogrammable circuit boards, and Romain dives into Faust's attempts to go from a high-level description of an audio program, all the way down to instructions that tell a chip exactly how it should wire itself.So rather aptly for a technology podcast, we start this week with what your ear can hear and go all the way down to logic gates and circuit boards…–Try Faust in the Browser: https://faustide.grame.fr/Faust Online Course: https://www.kadenze.com/courses/real-time-audio-signal-processing-in-faust/infoFPGAs: https://en.wikipedia.org/wiki/Field-programmable_gate_arrayVHDL: https://en.wikipedia.org/wiki/VHDLVerilog: https://en.wikipedia.org/wiki/VerilogGrame: https://www.grame.fr/The (Strawberry Jam) Gramophone: https://www.grame.fr/articles/gramophoneGramophone Workshops: https://www.grame.fr/evenements/atelier-gramophones-65ca16b19fec4Support Developer Voices on Patreon: https://patreon.com/DeveloperVoicesSupport Developer Voices on YouTube: https://www.youtube.com/@developervoices/joinKris on Mastodon: http://mastodon.social/@krisajenkinsKris on LinkedIn: https://www.linkedin.com/in/krisjenkins/Kris on Twitter: 
https://twitter.com/krisajenkins
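The echo effect mentioned above is a good example of the kind of audio pipeline Faust describes: conceptually it is just a delay line with feedback, y[n] = x[n] + g·y[n−d]. A minimal sketch of that idea in plain Python (this is an illustration, not Faust, and the `echo` function name is invented here):

```python
def echo(signal, delay_samples, feedback=0.5):
    """Apply a simple feedback echo: y[n] = x[n] + feedback * y[n - delay]."""
    out = []
    for n, x in enumerate(signal):
        y = x
        if n >= delay_samples:
            # mix in an attenuated copy of the output from `delay_samples` ago
            y += feedback * out[n - delay_samples]
        out.append(y)
    return out

# A unit impulse produces a decaying train of echoes:
# echo([1, 0, 0, 0, 0], delay_samples=2) -> [1, 0, 0.5, 0, 0.25]
```

A Faust compiler's job is to turn exactly this kind of dataflow description into C, Rust, or (on FPGAs) hardware description code that runs sample by sample in real time.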
Dan is joined by Clay Johnson, CEO of CacheQ. Clay has decades of executive experience in computing, FPGAs, and development flows, including serving as Vice President of the Spartan Business Unit at Xilinx, which was later acquired by AMD. Clay discusses the changes occurring in system design to leverage AI/ML and technologies such as large…
Bitluni joins Chris on The Amp Hour to discuss FPGAs, ESP32 projects, custom silicon, building around memes, and continually challenging yourself to learn something new.
Space & Satellite Business Tourism, Communications, & Rockets
AZ TRT S05 EP25 (240) 6-23-2024
What We Learned This Week:
· The business model of space is expanding, from satellite delivery, to rockets, to space tourism, to future colonies on the Moon & Mars.
· Satellite communication and technology industries are expanding by the day.
· Long-term, both cell phones and Internet may be delivered worldwide via satellite.
· Space aviation companies are improving rocket technology to put more satellites in orbit at a lower cost.
Notes:
Seg. 1 Rocket Lab Bio
https://www.marketwatch.com/investing/stock/rklb
https://www.rocketlabusa.com/updates/rocket-lab-usa-poised-to-change-the-space-industry/
US aerospace company Rocket Lab is developing a world-first launch vehicle to deliver satellites into orbit cheaper and faster than ever before. Rocket Lab announced today its plan to revolutionize the global space industry with the creation of Electron, a lightweight, cost-effective rocket, making it easier for companies to launch small satellites into orbit. Rocket Lab is building the world's first carbon-composite launch vehicle at its Auckland, New Zealand facility. The development of Electron will reduce the price of delivering a satellite into orbit. At a cost of less than $5 million, this represents a drastic cost reduction compared to existing dedicated launch services[1]. The lead time for businesses to launch a satellite will also be reduced from years[2] down to weeks through vertical integration with Rocket Lab's private launch facility. Rocket Lab has already garnered strong commercial demand with commitments for its first 30 launches. Rocket Lab's principal funder is top-tier Silicon Valley venture firm Khosla Ventures, which has a long track record of backing breakthrough technologies that revolutionize industries. Vinod Khosla, founder of Khosla Ventures, says it is exciting to see the technology and innovation coming out of Rocket Lab.
“We are thrilled to be investing in the next chapter of Rocket Lab's development as they drive down the cost of launch vehicles to provide greater access to space,” said Mr. Khosla. “The company's technical innovations will truly transform the space industry.”
About Rocket Lab
Rocket Lab is an aerospace company founded in 2006 by New Zealander Peter Beck. The company is focused on delivering innovative, high-quality technologies to the space industry. Rocket Lab was created to cater to the growing requirement within the international market for fast, low-cost methods of delivering payloads to space. Since inception, the company has successfully developed a number of leading rocket-based systems, from sounding rockets through to new advanced propulsion technologies. Rocket Lab is an American company with a subsidiary and head office in Auckland, New Zealand. Rocket Lab was the first private company to reach space in the southern hemisphere in 2009 with its Atea 1 suborbital sounding rocket. Following this success the company won contracts with aerospace giants Lockheed Martin, DARPA, and Aerojet Rocketdyne.
Who are Rocket Lab's competitors? The main competitors of Rocket Lab USA include AST SpaceMobile (ASTS), Hub Group (HUBG), Walker & Dunlop (WD), Matterport (MTTR), Joby Aviation (JOBY), Air Transport Services Group (ATSG), ChargePoint (CHPT), Forward Air (FWRD), Park-Ohio (PKOH), and United Parcel Service (UPS).
Market Cap: Rocket Lab $2.2B vs. Hub Group $2.7B
SpaceX
https://en.wikipedia.org/wiki/SpaceX
Space Exploration Technologies Corporation, commonly referred to as SpaceX, is an American spacecraft manufacturer, launch service provider and satellite communications company headquartered in Hawthorne, California. The company was founded in 2002 by Elon Musk with the goal of reducing space transportation costs and ultimately developing a sustainable colony on Mars.
The company currently produces and operates the Falcon 9 and Falcon Heavy rockets along with the Dragon and Starship spacecraft. The company offers internet service via its Starlink subsidiary, which became the largest-ever satellite constellation in January 2020 and, as of April 2024, comprised more than 6,000 small satellites in orbit.[8] https://medium.com/how-do-they-make-money/how-does-spacex-make-money How does SpaceX make money? SpaceX is an American aerospace manufacturer and space transport services company founded in 2002 by Elon Musk. The company's mission is to revolutionize space transportation and eventually enable the colonization of Mars. One of the primary ways that SpaceX makes money is through contracts with government agencies and commercial customers for launches of its Falcon 9 and Falcon Heavy rockets. SpaceX has a backlog of over 100 launches, with contracts from both government and commercial customers. The company's contracts with government agencies, such as NASA, have been particularly lucrative, with SpaceX receiving billions of dollars in funding to develop and launch rockets for various missions. In addition to launch services, SpaceX also makes money through the production and sale of satellite hardware. The company manufactures a range of satellite products, including the Starlink satellite constellation, which is designed to provide high-speed internet to remote and underserved areas around the world. The Starlink constellation currently consists of over 1,000 satellites, with plans to eventually have over 12,000 in orbit. SpaceX generates revenue from the sale of hardware and services to customers that use the Starlink system. Another way that SpaceX makes money is through research and development contracts. The company has received funding from the government and private organizations to develop new technologies, such as its Raptor rocket engine and its Starship spacecraft. 
These contracts provide SpaceX with a steady stream of revenue and help the company advance its goals of developing reusable rockets and enabling human spaceflight. SpaceX also generates revenue from its launch facilities and other assets. The company operates launch sites at the Kennedy Space Center in Florida and at Vandenberg Air Force Base in California, as well as a facility in Texas where it tests its rocket engines. SpaceX also owns a number of other assets, including a fleet of cargo ships and recovery vessels that it uses to support its launches and recover rocket boosters. https://en.wikipedia.org/wiki/Blue_Origin Blue Origin Enterprises, L.P.,[2] commonly referred to as Blue Origin[3] is an American aerospace manufacturer, government contractor, launch service provider,[4][5] and space technologies[6] company headquartered in Kent, Washington, United States. The company makes rocket engines for United Launch Alliance (ULA)'s Vulcan rocket and manufactures their own rockets, spacecraft, satellites,[7] and heavy-lift launch vehicles. The company is the second provider of lunar lander services for NASA's Artemis program and was awarded a $3.4 billion contract.[8] The four rocket engines the company has in production are the BE-3U, BE-3PM, BE-4 and the BE-7.[9] The organization was awarded the Robert J. Collier Trophy in 2016 for demonstrating rocket booster reusability with their New Shepard Rocket Program.[10] The award is administered by the U.S. National Aeronautic Association (NAA) and is presented to those who have made "the greatest achievement in aeronautics or astronautics in America, with respect to improving the performance, efficiency, and safety of air or space vehicles, the value of which has been thoroughly demonstrated by actual use during the preceding year."[11] https://www.strategyzer.com/library/space-as-a-business-model-arena Industry forces Here we can analyze our supply chain — the ISS. 
Not only will other governments be able to take a ride, but anyone with the budget and a business plan could launch a business from the ISS.
Other considerations:
Competitors: Governmental organizations such as NASA, ESA, and more than 9 countries have orbital launch capabilities.
New Entrants: Private companies such as SpaceX, Blue Origin, Virgin Galactic, Bigelow Aerospace, Stratolaunch, Rocket Lab, and Planetary Resources, to name a few.
Supply Chain: NASA recently announced that the International Space Station will be open for commercial business for an approximate cost of $52M. Starting in 2020, astro-preneurs with deep pockets can use the ISS for off-earth manufacturing, research, or tourism.
https://www.relativityspace.com/
A rocket company at the core, Relativity Space is on a mission to become the next great commercial launch company. With an ever-growing need for space infrastructure, demand for launch services is continuously outpacing supply. Our reusable rockets can meet this demand, offering customers the right size payload capacity at the right cost. Using an iterative development approach, we are strategically focused on reducing vehicle complexity, cost, and time to market. Our patented technologies enable innovative designs once thought impossible and unlock new value propositions in the booming space economy.
Seg. 2 Space Tourism
https://apnews.com/article/virgin-galactic-tourist-spaceflight-branson-4c0904e4f222bd1aa4194c1a43777dd2
August 10, 2023
TRUTH OR CONSEQUENCES, N.M. (AP) — Virgin Galactic rocketed to the edge of space with its first tourists Thursday, a former British Olympian who bought his ticket 18 years ago and a mother-daughter duo from the Caribbean. The space plane glided back to a runway landing at Spaceport America in the New Mexico desert, after a brief flight that gave passengers a few minutes of weightlessness.
This first private customer flight had been delayed for years; its success means Richard Branson's Virgin Galactic can now start offering monthly rides, joining Jeff Bezos' Blue Origin and Elon Musk's SpaceX in the space tourism business. “That was by far the most awesome thing I've ever done in my life,” said Jon Goodwin, who competed in canoeing in the 1972 Olympics. Goodwin, 80, was among the first to buy a Virgin Galactic ticket in 2005 and feared, after later being diagnosed with Parkinson's disease, that he'd be out of luck. Since then he's climbed Mount Kilimanjaro and cycled back down, and said he hopes his spaceflight shows others with Parkinson's and other illnesses that ”it doesn't stop you doing things.” Ticket prices were $200,000 when Goodwin signed up. The cost is now $450,000.
https://finance.yahoo.com/video/5-space-stocks-investors-watch-183956447.html
The 5 space stocks investors need to watch
Yahoo Finance - Mon, Jun 24, 2024
The space industry is counting down to lift off, with major investments pouring into the sector from multiple superpowers. Many space-related companies have profited off this new space race, giving new avenues for investors to add this sector to their portfolios. So which space-related stocks should investors at least be keeping their eye on right now for potential investment? The first on the list is Intuitive Machines (LUNR). This is an infrastructure play. The company made history back in February when its commercial lander, Odysseus, successfully landed on the moon. The stock had skyrocketed leading up to the landing, but subsequently crashed when the lander permanently faded with no chance of waking up on the moon. The landing paved the way for some future missions, including one slated for late this year. Number two on the list is Iridium, a company commonly viewed as a satellite phone company with a network built for mobile applications.
Iridium Communications Inc (NASDAQ: IRDM). Whether that be on devices that people are using or the Internet of Things, Iridium boasts that it's the only network that has 100% Earth coverage where it's delivered. The company is profitable, as it's been around for more than 25 years. Number three on the list is Planet Labs (Planet Labs PBC), the company founded by three NASA scientists. It designs, builds, and operates the largest Earth-observation fleet of imaging satellites. It has over 1,000 customers, including entities involved with agriculture, forestry, education, and government agencies. Heightened security needs, increased sustainability, and global climate risk are some of the trends that have been driving demand for their Earth imaging. Number four is Spire Global (SPIR). This is a data and analytics company that uses satellites to collect information from space. Think weather, ocean winds, shipping information, and anything else that can be observed from space. The company has over 800 customers from over 50 countries. About half are governments; the other half come from commercial entities. Number five on the list is Rocket Lab (Rocket Lab USA, Inc., RKLB). The rocket launch service company launched its 50th Electron rocket in June. Electron has become the leading commercial small launch vehicle in Western countries, and the company remains on track for another year of record Electron launches, management mentioned on Rocket Lab's May earnings call. The company was awarded a second mission from the US Space Force for a space test program that's carrying out research and experiments for the Department of Defense. The space ETF UFO started in 2019 and focuses on companies that are significantly engaged in the space industry.
So it includes companies from around the world, not just the US, and the fund invests at least 80% of its net assets in companies that derive at least half of their revenue or profit from space-related businesses. ARK Invest's ARKX was started in March 2021, at the height of the market. The fund aims at providing exposure to companies involved in space-related businesses like reusable rockets, satellites, drones, and other sub-orbital craft. Large-cap stocks are the most common holdings of that ETF, representing about 42% of the portfolio; mid-cap represents about 31%, and the rest are small-cap. And then you've got the SPDR S&P Aerospace & Defense ETF (XAR), an ETF focused on aerospace and defense, just like the name sounds. It launched in 2011, and the fund's largest holdings include AeroVironment, for example, a defense company that manufactures drones and unmanned vehicles.
https://investorplace.com/2024/04/lunr-stock-alert-intuitive-machines-nabs-nasa-contract/
LUNR Stock Alert: Intuitive Machines Nabs NASA Contract
By Larry Ramer, InvestorPlace Contributor, Apr 4, 2024
Intuitive Machines (LUNR) stock is trending after NASA awarded the company a contract. Under the deal, Intuitive will help develop a Lunar Terrain Vehicle for an upcoming trip to the moon. The company successfully landed on the moon back in February, deploying “payloads and commercial cargo” on behalf of NASA. Intuitive Machines (NASDAQ:LUNR) is trending on social media and business news websites as LUNR stock moves up today. Shares of the company are up almost 4% as of this writing. This comes after Intuitive Machines won a NASA contract to support the agency's efforts for a mission to the moon. Intuitive will be a “prime contractor” for NASA's Artemis campaign, which is slated to include human exploration of the moon. Intuitive Machines will receive an initial payment of $30 million as part of the contract.
LUNR Stock: Intuitive Machines' Contract From NASA Under the agreement, Intuitive Machines will help complete a “Lunar Terrain Vehicle Services Feasibility Assessment.” The LTV feasibility roadmap will also utilize Intuitive's Nova-D cargo-class lunar lander. The company will work on the LTV plans with a number of partners. These include Boeing (NYSE:BA), auto supplier Michelin (OTCMKTS:MGDDY) and huge defense contractor Northrop Grumman (NYSE:NOC). NASA plans to spend a max total of $4.6 billion on the LTV. More About Intuitive Machines Intuitive Machines reports itself to be the “only United States commercial company to deliver science and technology data from the surface of the Moon.” On Feb. 23, the company successfully landed on the moon and deployed “five NASA payloads and commercial cargo.” Intuitive was first launched in 2012 by co-founder, President and CEO Stephen Altemus, who was previously the Deputy Director of NASA's Johnson Space Center. Meanwhile, co-founder and Chairman Dr. Kamal Ghaffarian previously “held numerous technical and management positions” at Lockheed Martin (NYSE:LMT), Ford Aerospace and Loral. https://seekingalpha.com/article/4700964-rocket-lab-stock-weakness-is-opportunity Rocket Lab Stock: Weakness Is Opportunity Jun. 25, 2024 Rocket Lab USA, Inc. (RKLB) Stock When it comes to investing in small companies successfully, investors need to be ready to go through periods where improvements to company fundamentals will yield little to no returns. Rocket Lab's stock has declined despite promising developments, including a $515 million government contract and a new deal with Synspective for 10 Electron launches. Rocket Lab's pipeline is strengthening with new contracts, and the company's Space Systems business is expected to drive growth. 
Rocket Lab's fundamentals are improving, with revenue expected to accelerate to over $430 million this year and high double-digit growth projected for the next five years, potentially leading to profitability by 2027.
Clips used from past shows in Seg. 1:
Stock Investing Info from Earnings Hub w/ Hamid Shojaee
AZ TRT S05 EP23 (238) 6-9-2024
What We Learned This Week:
Earnings Hub is a platform where you can find all the information on a company, when their earnings are coming out, & quarterly calls
Earnings info for public companies is often hard to find, and the income for stocks is crucial to the price
Hamid is a long-term investor like Buffett, more of a buy-and-hold of good stocks; he only owns 8 stocks
Concentration builds wealth – diversification preserves it.
Looking for companies that can grow 10x over the next few years, and this is hard with massive companies worth trillions like Apple or Microsoft
Another company Hamid likes is called Rocket Lab. The stock is $4 and they have a market cap of $2 billion vs. a competitor like SpaceX valued at $180 billion. Just like SpaceX, Rocket Lab will be putting satellites into orbit. He's a big fan of Rocket Lab, which is in competition with SpaceX and its subsidiary Starlink providing satellite internet. This is all about putting satellites into space. Curious to see if Amazon founder Jeff Bezos' space company, Blue Origin, will be in the mix later.
Full Show: HERE
BRT S03 EP25 (124) 6-12-2022 – BRT in Space with Satellite Components by Spirit Electronics w/ Marti McCurdy
Things We Learned This Week
• Spirit Electronics is a veteran- and woman-owned tech company providing satellite components to the aerospace and defense industries
• Satellites in Low Earth Orbit need components built to resist extreme temperatures and still function as expected when built - radiation testing – stress test, thermal, pressure
• Working with top defense contractors, Raytheon, Boeing, Lockheed Martin, helping create products used in government contracts
• Space is on a comeback – from SpaceX, to Blue Origin, Space Florida & Kennedy Space Center; now let's talk space junk, satellite crashes, launch ops – launch at the right time, right orbit, right space
• AZ is becoming a tech hub: semiconductors, aerospace, defense, EV, autonomous, AZ Tech Council to tech incubators
Guest: Marti McCurdy - CEO of Spirit Electronics
https://www.linkedin.com/in/marti-mccurdy-1083a936/
https://www.spiritelectronics.com/about-us/
Marti McCurdy, owner and CEO of Spirit Electronics, is a veteran not only of the semiconductor business but also of the United States Air Force. Marti's focus as CEO is to serve the aerospace and defense industry with high-reliability components. She exercises her engineering knowledge of space-qualified flows and sophisticated testing to deliver flight-class devices. Throughout her career as a business owner and most recent position as VP, Marti's goal is to bring her high standard of customer service and cultivated relationships to serve the aerospace sector she is so familiar with. Marti holds a current patent and is a published author in ultrasonic applications. Spirit Electronics is a certified veteran-owned, woman-owned value-added distributor of electronic components.
Our product lines and value-added services offer power, memory, FPGAs, ASICs – everything you need to build out a high-reliability board that can perform in even the harshest environments. Spirit builds components for satellites used in the aerospace and defense industries.
Notes:
Spirit Electronics manufactures satellite components like circuit boards
Supply chains with defense and aerospace for components
Invest idea – materials used in satellites
*Low Earth orbit of satellite, not technically space sometimes
Examples of companies they do business with:
F-35 Lightning II program plane by Lockheed Martin
Kyocera, EPC Space, Latham Industries
*Space EP (space enhanced plastics) – need to stress test to withstand high & low temps
Real-world applications of satellites – data collection by satellites of Earth locations, i.e., Disney Park
Via satellite, get internet on a phone while flying on a plane
5-year life span of satellites up in orbit
Full Show: HERE
Tech Topic: https://brt-show.libsyn.com/category/Tech-Startup-VC-Cybersecurity-Energy-Science
Best of Tech: https://brt-show.libsyn.com/size/5/?search=best+of+tech
Investing Topic: https://brt-show.libsyn.com/category/Investing-Stocks-Bonds-Retirement
‘Best Of' Topic: https://brt-show.libsyn.com/category/Best+of+BRT
Thanks for listening. Please subscribe to the BRT Podcast.
AZ Tech Roundtable 2.0 with Matt Battaglia
The show where entrepreneurs, top executives, founders, and investors come to share insights about the future of business. AZ TRT 2.0 looks at the new trends in business and how classic industries are evolving.
Common Topics Discussed: Startups, Founders, Funds & Venture Capital, Business, Entrepreneurship, Biotech, Blockchain / Crypto, Executive Comp, Investing, Stocks, Real Estate + Alternative Investments, and more… AZ TRT Podcast Home Page: http://aztrtshow.com/ ‘Best Of' AZ TRT Podcast: Click Here Podcast on Google: Click Here Podcast on Spotify: Click Here More Info: https://www.economicknight.com/azpodcast/ KFNX Info: https://1100kfnx.com/weekend-featured-shows/ Disclaimer: The views and opinions expressed in this program are those of the Hosts, Guests and Speakers, and do not necessarily reflect the views or positions of any entities they represent (or affiliates, members, managers, employees or partners), or any Station, Podcast Platform, Website or Social Media that this show may air on. All information provided is for educational and entertainment purposes. Nothing said on this program should be considered advice or recommendations in: business, legal, real estate, crypto, tax accounting, investment, etc. Always seek the advice of a professional in all business ventures, including but not limited to: investments, tax, loans, legal, accounting, real estate, crypto, contracts, sales, marketing, other business arrangements, etc.
Cale, Evan and Sujit have an insightful discussion with Niko Pamboukas and Siri Velauthapillai about the new Azure Boost feature that is being deployed into the Azure substrate for VMs. They explain the specialized FPGAs and ARM SoCs that provide extreme IOPS for storage and networking and almost no downtime for system upgrades. Media file: https://azpodcast.blob.core.windows.net/episodes/Episode496.mp3 YouTube: https://youtu.be/4FPQCAmYrzw Resources: Learn more about Azure Boost at https://aka.ms/AzureBoost Learn about accelerated networking with Boost at https://aka.ms/MANA Learn about accelerated storage with Boost at https://aka.ms/NVMe Other updates: Public preview: Azure Load Balancer now supports Admin State | Azure updates | Microsoft Azure https://azure.microsoft.com/en-us/updates/public-preview-azure-bastion-premium/ https://azure.microsoft.com/en-us/updates/kubernetesmetadataandlogsfilteringpublicpreview/ https://azure.microsoft.com/en-us/updates/public-preview-azure-load-balancer-health-event-logs/ https://azure.microsoft.com/en-us/updates/generally-available-azure-chaos-studio-supports-a-new-pause-process-fault-for-windows-virtual-machines/
In this episode of The Circuit, Ben Bajarin interviews Esam Elashmawi, Chief Strategy and Marketing Officer of Lattice, about the world of Field-Programmable Gate Arrays (FPGAs). They discuss the basics of FPGAs, their unique capabilities, and their pervasiveness across various applications. They also explore the advantages of FPGAs over Application-Specific Integrated Circuits (ASICs) and the flexibility they offer in terms of customization and reprogramming. Esam highlights the role of FPGAs in different markets, such as communications, computing, industrial, and automotive, and how Lattice differentiates itself in the FPGA market. They also touch on the challenges of building an FPGA company and the potential of FPGAs in AI applications, both in data centers and at the edge.
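Since the episode centers on why FPGAs can be reprogrammed while ASICs are fixed, a toy Python model of the underlying mechanism may help: FPGA logic is built from lookup tables (LUTs) whose truth tables are rewritable configuration data, so the same silicon can take on a new function. The class and names below are an illustrative sketch, not any vendor's API.

```python
# Toy model of an FPGA lookup table (LUT): logic is a rewritable
# truth table rather than fixed gates, so the same "fabric" can be
# reconfigured for a new function without new silicon.
class LUT4:
    def __init__(self):
        self.table = [0] * 16  # 4-input LUT: 16 configurable output bits

    def program(self, func):
        """'Synthesize' a boolean function into the truth table."""
        self.table = [func((i >> 3) & 1, (i >> 2) & 1, (i >> 1) & 1, i & 1)
                      for i in range(16)]

    def evaluate(self, a, b, c, d):
        return self.table[(a << 3) | (b << 2) | (c << 1) | d]

lut = LUT4()
lut.program(lambda a, b, c, d: a & b & c & d)   # configure as 4-input AND
assert lut.evaluate(1, 1, 1, 1) == 1
lut.program(lambda a, b, c, d: a ^ b ^ c ^ d)   # reprogram as parity (XOR)
assert lut.evaluate(1, 0, 1, 0) == 0
```

An ASIC, by contrast, would bake one of those truth tables into metal; the flexibility Esam describes comes from keeping it as data.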
Dive into the cutting-edge realm of neuromorphic computing with André van Schaik, a professor of electrical engineering at Western Sydney University and director of the International Centre for Neuromorphic Systems in Penrith, New South Wales, Australia. In this episode of Eye on AI, André unveils the capabilities of DeepSouth, an innovative brain-scale neuromorphic computing system designed to simulate up to 100 billion neurons in real time. Discover how DeepSouth leverages spiking neurons and synapses to process information more efficiently than traditional AI models, and how this technology could transform our understanding of brain computation and unlock new AI architectures. The conversation explores the unique hardware setup of DeepSouth, utilizing FPGAs (Field-Programmable Gate Arrays) for a flexible, reconfigurable approach that mimics the asynchronous, spiking communication of biological neurons. André discusses the initial testing phase focusing on balanced excitation-inhibition networks, reflecting common neural activities in the human cortex, and outlines the system's potential to facilitate large-scale simulations previously unachievable due to computational constraints. André's insights are invaluable for anyone interested in the intersection of neuroscience, artificial intelligence, and computational technology. Don't forget to like, subscribe, and hit the notification bell to stay updated on the latest breakthroughs and discussions in the world of artificial intelligence. This episode is sponsored by Oracle. AI is revolutionizing industries, but it needs power without breaking the bank. Enter Oracle Cloud Infrastructure (OCI): the one-stop platform for all your AI needs, with 4-8x the bandwidth of other clouds. Train AI models faster and at half the cost. Stay ahead like Uber and Cohere.
If you want to do more and spend less like Uber, 8x8, and Databricks Mosaic - take a free test drive of OCI at https://oracle.com/eyeonai Stay Updated: Craig Smith Twitter: https://twitter.com/craigss Eye on A.I. Twitter: https://twitter.com/EyeOn_AI (00:00) Preview and Introduction (02:19) André van Schaik's background (03:39) What is neuromorphic computing? (05:45) Differences between Von Neumann and neuromorphic architectures (09:27) How DeepSouth simulates neurons (12:20) What are FPGAs? (16:42) Current status of DeepSouth (19:04) Running neural network architectures on DeepSouth (22:33) DeepSouth as an open source, commercial hardware (24:40) Potential for cheaper model training (28:35) Number of neurons and connections in DeepSouth (30:01) Power consumption comparison (34:21) Mimicking brain structures in DeepSouth (35:42) André van Schaik's background in neuroscience (39:12) Goals of understanding brain activity vs solving problems (41:44) Interest from AI research community (43:03) Summary of DeepSouth's goals
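The spiking-neuron idea discussed in the episode can be caricatured with a minimal leaky integrate-and-fire (LIF) model: a neuron integrates input, leaks charge over time, and communicates only through discrete spikes when a threshold is crossed. The Python sketch below uses made-up parameters for illustration; it is not DeepSouth's actual model or values.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: membrane potential
# decays ("leaks") each step, integrates input current, and emits a
# discrete spike when it crosses threshold, then resets.
def lif_run(currents, leak=0.9, threshold=1.0):
    v, spikes = 0.0, []
    for i in currents:
        v = v * leak + i          # leak, then integrate input
        if v >= threshold:        # threshold crossing -> spike
            spikes.append(1)
            v = 0.0               # reset after spiking
        else:
            spikes.append(0)
    return spikes

# Constant weak input: the neuron fires sparsely, not every step.
out = lif_run([0.3] * 10)
print(out)  # -> [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```

The sparsity is the point: most steps produce no spike and hence no downstream work, which is why event-driven hardware can be far more power-efficient than dense matrix multiplication.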
Interview with Diane Tosetti of Microchip's FPGA Business Unit. The FPGA market is continuously growing and so is the demand for low power, reliability and security in FPGAs. In this podcast, Diane Tosetti, Senior Manager Business Development at Microchip, discusses the role of FPGAs in Intelligent Edge applications and the range of interesting applications the PolarFire FPGA and PolarFire SoC families are enabling. Microchip FPGAs help overcome power, system size and security challenges across various applications like embedded vision, automotive, industrial automation, communications, defense and IoT systems. Find out more about Microchip's FPGA products here www.microchip.com/en-us/products/fpgas-and-plds Time Stamps [01:29] – The role of FPGAs in Intelligent Edge [05:25] – How big is the FPGA market? [07:01] – The applications Microchip's FPGAs support [15:41] – Interesting examples of Microchip FPGA applications [18:48] – Collaboration across Microchip Business Units [20:32] – How to get started with FPGAs [22:54] – Takeaway – Microchip's exciting FPGA roadmap Follow Diane Diane Tosetti on LinkedIn: https://www.linkedin.com/in/diane-wilson-tosetti-a47a962/ If you enjoyed this episode, be sure to subscribe to our podcast for more discussions about Microchip's smart, connected and secure embedded control solutions and connect with us on social media to stay updated on upcoming episodes. We'd also appreciate it if you could leave us a review on your favorite podcast platform. Want more? Look out for more upcoming podcasts from Microchip: Beyond the Microchip So much of our daily lives are controlled or influenced by electronics. We rely on GPS to direct us, we hit "brew" on our coffee machines for our mornin' cup of Joe, we wave our hands over a sensor to get running water from a faucet, and press a button to open our garage doors. But do we really know what's going on inside?
Are we aware of the universe of technology and calculations going on right under our nose? Beyond the Microchip takes you inside the world of Embedded Control technologies to understand how the chips and sensors we can't see impact our lives in dramatic ways. They remind us why we have and embrace technology, to enhance the human experience. Join us each episode as we look at an aspect of our daily lives that shapes what it means to be human and how we can empower the innovation that enhances that experience through Microchip Technology. Subscribe to Beyond the Microchip wherever you get your podcasts.
Interview with Bob Vampola, Vice President Aerospace and Defense at Microchip Technology Inc. As the No. 1 semiconductor supplier for Aerospace and Defense, Microchip understands the critical importance of building secure, robust and reliable electronic systems for aerospace and defense applications. Microchip offers a range of power management, frequency and timing and RF and microwave products such as ASICs, FPGAs, mixed-signal ICs and MCUs. In our latest podcast, Bob Vampola, Vice President Aerospace and Defense at Microchip, outlines the sub-markets in the Aerospace and Defense sector, the importance of ‘Mission assurance' and the challenge of tackling radiation through testing. Find out more about Aerospace and Defense at Microchip here https://www.microchip.com/en-us/solutions/aerospace-and-defense Time Stamps [01:15] – The 3 sub-markets in the Aerospace and Defense sector [02:12] – The Aerospace and Defense Business Unit approach [03:40] – No. 1 semiconductor supplier for the Aerospace and Defense market [06:55] – The importance of "mission assurance" at Microchip [09:14] – The future of Aerospace and Defense [11:35] – The challenge of radiation [17:08] – New products and technologies in market – Ethernet in Space applications Follow Bob Bob Vampola on LinkedIn: https://www.linkedin.com/in/bob-vampola-aba77615/ If you enjoyed this episode, be sure to subscribe to our podcast for more discussions about Microchip's smart, connected and secure embedded control solutions and connect with us on social media to stay updated on upcoming episodes. We'd also appreciate it if you could leave us a review on your favorite podcast platform. Want more? Look out for more upcoming podcasts from Microchip: Beyond the Microchip. Subscribe to Beyond the Microchip wherever you get your podcasts.
Fun fact: There are more vulnerabilities and exploits below the OS layer than above it! CPUs, BIOS, Firmware, embedded Linux, FPGAs, UEFI, PXE... The list goes on and on. What are we supposed to do about that? Allan asked Yuriy to come down to the 'Ranch to discuss this issue with him. Yuriy is CEO at Eclypsium, a member of the Forbes Technology Council, founder of the open source CHIPSEC project, former head of Threat Research at McAfee, and former Senior Principal Engineer at Intel… He is uniquely qualified to discuss these issues. Full DISCLAIMER: Allan is CISO at Eclypsium. Note that he asked Yuriy to come on the show, not the other way around. Nobody knows this space like Yuriy and his team. Allan asks Yuriy about: The history of CPU exploits Unauthorized code in chips in network gear The various hacks available at this layer The role of SBOM in all this The open source CHIPSEC project It's an eye-opening show to say the least. Y'all be good now!
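One of the topics Allan raises is the role of SBOMs below the OS. The core idea, matching an inventory of the firmware components you actually run against advisories for known-vulnerable versions, can be sketched in a few lines of Python. Every component name and version below is invented for illustration; this is not Eclypsium's or CHIPSEC's API.

```python
# Toy SBOM-style check: compare firmware component versions present on
# a device against an advisory list of known-vulnerable versions.
# All component names and versions here are made up for illustration.
from typing import Dict, List

def vulnerable_components(inventory: Dict[str, str],
                          advisories: Dict[str, List[str]]) -> List[str]:
    """Return components whose installed version appears in an advisory."""
    return [name for name, version in inventory.items()
            if version in advisories.get(name, [])]

device = {"uefi_firmware": "2.1.0", "bmc": "4.7", "nic_option_rom": "1.3"}
advisories = {"uefi_firmware": ["1.9.0", "2.1.0"], "bmc": ["4.5"]}
print(vulnerable_components(device, advisories))  # -> ['uefi_firmware']
```

The hard part in practice, as the episode discusses, is building that inventory at all: below-the-OS components rarely self-report in a way standard asset tools can see.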
An airhacks.fm conversation with Juan Fumero (@snatverk) about: Juan previously appeared in the episode "#250 FPGAs, GPUs or Data Science with Java", using Tornado to run Java programs on GPUs/accelerators, integrating AI models with Java applications, potential of using Tornado and Project Babylon together, discussion around tensor types in Java, Paul Sandoz appeared in the episode "#277 Project Babylon", Heterogeneous Accelerator Toolkit by Gary Frost, TornadoVM and LLama port, Hybrid API for Deep Learning acceleration and the new Panama-based types: TornadoVM talk at JVMLS'23, TornadoVM 1.0 Release notes, Alfonso Peterssen ported llama to Java, Initial Java port from the GraalVM team, Java / AI startup: paravox.ai Juan Fumero on twitter: @snatverk
Tonight CG returns to discuss the HPE acquisition of Juniper along with news in the world of FPGAs. Interview begins at 01:00:40. Settle in as we begin the sixth season of Packets and Bolts with some headlines and a cocktail in this, the first episode of 2024 and the Year of the Dragon! Email the show at packetsandbolts@gmail.com Join us on Discord: https://discord.gg/SXnaRGs2aT Follow us on Mastodon: @PacketsAndBolts@ioc.exchange ... Packets and Bolts - Bringing AM radio to Podcasting since 2019...
oneAPI is an open standard for a unified API to be used across different computing accelerator architectures, including GPUs, AI accelerators, and FPGAs. The goal of oneAPI is to eliminate the need for developers to maintain separate code bases, multiple programming languages, tools, and workflows for each architecture. James Reinders is an engineer at Intel. The post Building a Unified Hardware API at Intel with James Reinders appeared first on Software Engineering Daily.
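oneAPI itself is built on C++ and SYCL, but the unifying idea, a single kernel definition plus a runtime that selects among whatever devices are available, can be caricatured in Python. The backends below are plain-Python stand-ins invented for illustration, not the real oneAPI or SYCL API.

```python
# Caricature of a unified accelerator API: one kernel, many backends.
# The Device class here is a plain-Python stand-in, not real oneAPI/SYCL.
class Device:
    def __init__(self, name):
        self.name = name

    def run(self, kernel, data):
        # A real runtime would compile/offload the kernel to hardware;
        # we just execute it element-wise on the host.
        return [kernel(x) for x in data]

def select_device(preferred, available=("cpu",)):
    """Pick the preferred device if present, else fall back to CPU."""
    return Device(preferred if preferred in available else "cpu")

square = lambda x: x * x                         # single kernel definition
dev = select_device("fpga", available=("cpu", "gpu"))
print(dev.name, dev.run(square, [1, 2, 3]))      # falls back: cpu [1, 4, 9]
```

The point of the sketch is the shape of the programming model: the kernel is written once, and only the device selection changes, which is exactly the duplication oneAPI aims to eliminate.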
My podcast guest this week is Deepali Trehan, VP, Intel Programmable Solutions Group Product Marketing. Deepali and I are diving head first into the world of chiplets and FPGAs! We discuss why there is a big interest in chiplets lately, the biggest benefits that chiplets provide when it comes to the design and manufacture of programmable logic devices and where Deepali sees chiplet technology headed in the future!
Ryan Condron has been a Bitcoin miner for more than a decade. He started out with a GPU rack, upgraded his rig to FPGAs, and then was one of the first people to run ASICs. Today, he aims to decentralize hashrate with Lumerin – a hashrate marketplace.
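The GPU-to-FPGA-to-ASIC progression Ryan describes is successive hardware racing at the same task: proof-of-work, a brute-force search for a nonce whose double SHA-256 hash meets a difficulty target. A toy Python sketch with an artificially easy "leading zeros" target (real Bitcoin compares the hash against a full 256-bit target, and hashes a structured 80-byte header, neither of which is modeled here):

```python
# Toy proof-of-work loop: search for a nonce whose double SHA-256 of
# (header + nonce) starts with a required number of zero hex digits.
# "Leading zeros" is a simplification of Bitcoin's difficulty target.
import hashlib

def mine(header: bytes, zeros: int) -> int:
    nonce = 0
    while True:
        digest = hashlib.sha256(
            hashlib.sha256(header + nonce.to_bytes(8, "little")).digest()
        ).hexdigest()
        if digest.startswith("0" * zeros):
            return nonce
        nonce += 1

nonce = mine(b"example-block-header", zeros=4)  # easy target: ~65k tries
print(nonce)
```

Each extra zero digit multiplies the expected number of tries by 16, which is why the economics reward whatever hardware computes SHA-256 cheapest per joule: first GPUs, then FPGAs, then ASICs.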
We're back for Episode 120! In this episode Cody and Eric catch up on the news + Battle Of The Systems: Money Puzzle Exchanger (Neo Geo Arcade) vs Super Puzzle Fighter 2 Turbo (Arcade) We are doing news for the first monthly episode and then "catching up" later in the month. Episode Guide ---------------- 6:10 - Quick Questions 31:06 - Patreon Song 35:47 - Tea Time With Tim - Crash 1984 1:09:56 - Cody's Corner - Asteroids Recharged 1:25:38 - News 2:20:10 - Battle Of The Systems: Money Puzzle Exchanger (Neo Geo Arcade) vs Super Puzzle Fighter 2 Turbo (Arcade) News - (cody) Received amico update! Amico Home for Android https://www.timeextension.com/news/2023/11/intellivision-admits-it-doesnt-have-the-funds-to-make-the-amico (Tim) - ZX Spectrum Next units from the second Kickstarter are shipping, but to Europe first. https://www.kickstarter.com/projects/spectrumnext/zx-spectrum-next-issue-2/comments (Eric) Yeti Mountain Trailer (Commodore 64) - Also, RGN from Discord RGN — 12/05/2023 11:27 PM Here is a sneak peek at the video trailer of Yeti Mountain for the C64 - https://youtube.com/watch?v=-qMFeE-NgMc&si=Y-wKxztKM20P0zpe (Tim) - Golf Monday! Another fantastic Pico8 release from Johan Peitz, the creator of Cosmic Collapse. Played this quickly as it just came out. Seems excellent so far! https://johanpeitz.itch.io/golf-monday (Cody) Free SMS Game Alert! https://helpcomputer.itch.io/blast-arena And a Free Gameboy Game! https://www.timeextension.com/news/2023/11/the-melting-apartment-is-a-free-junji-ito-inspired-horror-game-for-game-boy (Eric) - DOS_deck lets you play classic DOS games in your browser on PC or Steam Deck - https://www.tomshardware.com/news/dosdeck-intro-and-deck-guide (Cody) Evercade C64 Collection 3 https://bleedingcool.com/games/evercade-announces-new-cartridge-with-the-c64-collection-3/ (Cody) New 3DO Game for order?!
https://www.timeextension.com/news/2023/11/biofury-is-a-new-doom-inspired-fps-for-3do-now-available-for-pre-order (Eric) - Crimbo AGA - Unofficial ZX Spectrum to Amiga conversion - https://www.indieretronews.com/2023/11/crimbo-aga-unofficial-zx-spectrum-to.html (Tim) - Briley Witch Chronicles 2 is now available. https://sarahjaneavory.itch.io/briley-witch-chronicles-2-c64 (Eric) Indie Retro News: SERAPHIMA - An impressive looking game for the ZX Spectrum - https://www.indieretronews.com/2023/11/seraphima-impressive-looking-game-for.html#more (Cody) C64 90-degree adapter from Commodore 4 Ever https://www.commodore-4ever.com/?page=3 (Eric) Resell digital games - https://game8.co/articles/latest/steam-gog-and-others-must-allow-reselling-of-downloaded-games-in-eu (Tim) - Sonic Drift Is Getting A New 16-bit Reimagining - a small team of developers are busy working on a free Mega Drive / Genesis-style tribute to the Sonic Drift series. https://www.timeextension.com/news/2023/12/sonic-drift-is-getting-a-new-16-bit-reimagining-thanks-to-fans (Cody) Amazing Looking New MSX Game available for PreOrder! https://www.indieretronews.com/2023/11/tiny-magic-high-quality-and-charming.html And more MSX2 Love! https://www.timeextension.com/news/2023/12/concrete-heart-is-a-promising-post-apocalyptic-rpg-coming-to-the-msx2 (Tim) - Rival Gangs EXT is a top-down, open world sandbox game for the ZX Spectrum 128K, taking influence from the original GTA games. https://zxpresh.itch.io/rival-gangs-ext (Eric) - Atari 50th additions https://www.gamespot.com/articles/atari-50-the-anniversary-celebration-gets-a-few-more-forgotten-classics-today/1100-6519699/ (Cody) More NEO GEO love? Cyborg Force https://www.timeextension.com/news/2023/11/cyborg-force-is-a-new-run-n-gunner-for-neo-geo-psp-dreamcast-and-more (Cody) New MANA Game?
https://www.timeextension.com/news/2023/12/the-mana-series-returns-with-visions-of-mana-coming-in-2024 (Eric) Software Emulators vs FPGAs https://youtube.com/watch?v=sMMiBEhnizE&si=bhbMVU7yykBZRVly (Cody) I totally forgot about these! https://www.timeextension.com/features/ive-just-resurrected-this-zelda-scratch-card-game-from-1989 NEWS OF THE WEIRD! https://www.timeextension.com/news/2023/12/these-mega-drive-genesis-watches-cost-usd800-each https://www.timeextension.com/news/2023/11/the-contra-rebirth-soundtrack-is-now-available-to-preorder-on-vinyl (Cody) Atari News! https://www.timeextension.com/news/2023/12/atari-50-the-anniversary-celebration-adds-12-new-games-with-free-update https://www.timeextension.com/news/2023/12/atari-taking-pre-orders-for-a-usd299-99-limited-edition-cartridge-set Please give us a review on Apple Podcasts! Thanks for listening! You can always reach us at podcast@pixelgaiden.com. Send us an email if we missed anything in the show notes you need. You can now support us on Patreon. Thank you to Henrik Ladefoged, Roy Fielding, Matthew Ackerman, Josh Malone, Daniel James, 10MARC, Eric Sandgren, Brian Arsenault, Retro Gamer Nation, Maciej Sosnowski, Paradroyd, RAM OK ROM OK, Mitsoyama, David Vincent, Ant Stiller, Mr. Toast, Jason Holland, Mark Scott, Vicky Lamburn, Mark Richardson, Scott Partelow, Paul Jacobson, Steve Rasmussen, and Adam from Commodore Chronicles for making this show possible through their generous donation to the show. Support our sponsor Retro Rewind for all of your Commodore needs! Use our page at https://retrorewind.ca/pixelgaiden and our discount code PG10 for 10%
FPGAs take center stage in this week's Fish Fry podcast! But not just any field programmable gate arrays - I'm talking about the Speedster7t FPGAs! My guests Scott Schweitzer and Ron Renwick from Achronix and I chat about why Achronix's FPGAs are particularly well suited for networking and SmartNIC tasks, the advantages of Achronix's accelerated network infrastructure code and the details of their new FPGA-Powered accelerated automatic speech recognition.
Our 142nd episode with a summary and discussion of last week's big AI news. Apologies for this one coming out after a pause; episodes will resume being released regularly as of this week. Read our text newsletter and comment on the podcast at https://lastweekin.ai/ Email us your questions and feedback at contact@lastweekin.ai Timestamps + Links: (00:00) Intro / Banter Tools & Apps (03:00) Introducing PlayHT 2.0 Turbo ⚡️ - The Fastest Generative AI Text-to-Speech API (07:15) YouTube Music now lets you make your own playlist art with AI (09:23) Sick of meetings? Microsoft's new AI assistant will go in your place (11:54) Anthropic brings Claude AI to more countries, but still no Canada (for now) Applications & Business (14:55) Humanoid robots face a major test with Amazon's Digit pilots (18:40) Figure 01 humanoid takes first public steps (22:31) AI-generating music app Riffusion turns viral success into $4M in funding (23:35) ChatGPT Creator Partners With Abu Dhabi's G42 in Middle East AI Push (25:00) AMD Scores Two Big Wins: Oracle Opts for MI300X, IBM Asks for FPGAs (26:38) Alibaba, Tencent among investors in China's rival to OpenAI with $341 million funding (30:35) AI companies drive demand for office space in tech hubs, new study finds (32:13) OpenAI is in talks to sell shares at an $86 billion valuation Projects & Open Source (35:00) Introducing Video-To-Text and Pegasus-1 (80B) (39:35) Adept Releases Fuyu-8B for Multimodal AI Agents (42:03) MiniGPT-v2: large language model as a unified interface for vision-language multi-task learning (44:53) Meta's Habitat 3.0 simulates real-world environments for intelligent AI robot training (48:22) DeepMind UniSim simulates reality to train robots, game characters (49:13) Jina AI Launches World's First Open-Source 8K Text Embedding, Rivaling OpenAI (51:13) Llemma: An Open Language Model For Mathematics Research & Advancements (53:22) Eliciting Human Preferences with Language Models (57:23) New Nvidia AI agent, powered by
GPT-4, can train robots (01:01:38) Unveiling the General Intelligence Factor in Language Models: A Psychometric Approach (01:04:48) AgentTuning: Enabling Generalized Agent Abilities for LLMs (01:09:51) Contrastive Preference Learning: Learning from Human Feedback without RL (01:11:25) ‘Mind-blowing' IBM chip speeds up AI Policy & Safety (01:14:57) GM Cruise unit suspends all driverless operations after California ban (01:18:52) AI researchers uncover ethical, legal risks to using popular data sets (01:22:22) AI Safety Summit: day 1 and 2 programme (01:25:23) Anthropic's AI chatbot Claude is posting lyrics to popular songs, lawsuit claims (01:26:38) Mike Huckabee says Microsoft and Meta stole his books to train AI (01:27:10) Clearview AI Successfully Appeals $9 Million Fine in the U.K. (01:28:11) North Korea experiments with AI in cyber warfare: US official (01:30:17) OpenAI forms new team to assess ‘catastrophic risks' of AI UK poised to establish global advisory group on AI Synthetic Media & Art (01:32:22) This new data poisoning tool lets artists fight back against generative AI (01:34:32) Amazon now lets advertisers use generative AI to pretty up their product shots (01:36:36) The Beatles: ‘final' song Now and Then to be released thanks to AI technology
When it comes to multi-discipline integration, simply designing FPGAs and handing them over to PCB engineers isn't effective! It's key to make sure that you're doing what you can to mitigate any redundant back-and-forth handoffs. I'm your host, Steph Chavez, a Senior Product Marketing Manager with Siemens. And here to join me is Gary Lameris, Technical Marketing Engineer with Siemens. He will help us understand the importance of FPGA/PCB co-design in the electronics industry. In this episode, you'll learn about the crucial role of FPGA/PCB co-design in modern electronics engineering and the importance of collaborative efforts between FPGA engineers and PCB designers. You will also hear more about tools like IO Optimizer and Xpedition Schematic Analysis, which are recommended to automate and streamline the co-design process, ultimately saving time and enhancing efficiency. What You'll Learn in this Episode: The need for collaboration between FPGA engineers and PCB teams (01:11) Best practices for FPGA/PCB co-design (02:00) Tools like IO Optimizer and Xpedition Schematic Analysis to automate and streamline the co-design process (07:16) Roadblocks to implementing these best practices (09:16) The implementation of automation and AI (12:12) Connect with Gary Lameris LinkedIn Connect with Steph Chavez LinkedIn
On this episode of Data Driven, the focus is on hardware, from AI-optimized chips to edge computing. Frank and Andy interview Steven Orrin, the CTO of Intel Federal. Intel has developed new CPU instructions to accelerate AI workloads, and FPGAs allow for faster development in custom applications with specific needs. Steven emphasizes the importance of data curation and wrangling before jumping into machine learning and AI.
Links
Webinar: AI application benchmarking on Intel hardware through Red Hat OpenShift Data Science Platform. Register here: https://qrcodes.at/RHODSIntelBenchmarkingWebinar
Get a free audiobook on us! http://thedatadrivenbook.com/
Moments
00:01:59 Hardware and software infrastructure for AI.
00:07:18 AI benchmarks show importance of GPUs & CPUs.
00:14:08 Habana is a two-chip strategy offering AI accelerator chips designed for training flows and inferencing workloads. It is available in the Amazon cloud and data centers. The Habana chips are geared for large-scale training and inference tasks, and they scale with the architecture. One chip, Goya, is for inferencing, while the other chip, Gaudi, is for training. Intel also offers CPUs with added instructions for AI workloads, as well as GPUs for specialized tasks. Custom approaches like using FPGAs and ASICs are gaining popularity, especially for edge computing where low power and performance are essential.
00:19:47 Intel's diverse team stays ahead of AI trends by collaborating with specialists and responding to industry needs. They have a large number of software engineers focused on optimizing software for Intel architecture, contributing to open source, and providing resources to help companies run their software efficiently. Intel's goal is to ensure that everyone's software runs smoothly and continues to raise the bar for the industry.
00:25:24 Moore's Law drives compute by reducing size. Cloud enables cost-effective edge use cases. Edge brings cloud capabilities to devices.
00:31:40 FPGA is programmable hardware allowing customization. It has applications in AI and neuromorphic processing. It is used in cellular and RF communications. Can be rapidly prototyped and deployed in the cloud.
00:41:09 Started in biology, became a hacker, joined Intel.
00:48:01 Coding as a viable and well-paying career.
00:55:50 Looking forward to image-to-code and augmented reality integration in daily life.
01:00:46 Tech show, similar to Halt and Catch Fire.
Key Topics Covered:
- The role of infrastructure in AI
- Hardware optimization for training and inferencing
- Intel's range of hardware solutions
- Importance of software infrastructure and collaboration with the open source community
- Introduction to Habana AI accelerator chips
- The concept of collapsing data into a single integer level
- Challenges and considerations in data collection and storage
- Explanation and future of FPGAs
- Moore's Law and its impact on compute
- The rise of edge computing and its benefits
- Bringing cloud capabilities to devices
- Importance of inference and decision-making on the device
- Challenges in achieving high performance and energy efficiency in edge computing
- The role of diverse teams in staying ahead in the AI world
- Overview of Intel Labs and their research domains
- Intel's software engineering capabilities and dedication to open source
- Intel as collaborators in the industry
- Importance of benchmarking across different AI types and stages
- The role of CPUs and GPUs in AI workloads
- Optimizing workload through software to hardware
- Importance of memory...
“It's really exciting for me to see the transfer from… what the best we could do was plugging in 800 FPGAs to our father-in-law's warehouse, to today where we're signing some of the largest energy contracts in America, we're building the backbone of our electric systems day by day, we're integrating across core human infrastructure. So, getting to build a business at the intersection of energy and money, it's the dream of a lifetime.”
— Harry Sudock
Troy Cross is a Professor of Philosophy and Fellow at BPI, and Harry Sudock is Chief Strategy Officer at Griid. This interview was a live recording made at the Bitcoin 2023 Conference in Miami, where we discussed Bitcoin mining: the industry's rapid evolution, how it's optimising other markets, and why its relentless search for cheap energy will facilitate human flourishing. - - - - Bitcoin mining has been the subject of much controversy and debate in mainstream media. The infamous New York Times (NYT) article in April still casts a shadow over the industry: the piece characterised Bitcoin mining as an exploitative parasite feeding off cheap energy at the expense of local users and the environment. And yet, Bitcoin mining is the exact opposite. As Troy Cross states in this live interview, when people get to hear the truth about Bitcoin mining's impact on energy systems it "blows their minds!" Harry Sudock adds more colour by explaining how Bitcoin mining is a black hole that sucks in economic utility and spits it out in its most efficient form, making it a revolutionary tool for human flourishing. Both speakers criticize the media for pushing a biased agenda and cherry-picking data to fit a preconceived narrative. They argue that the truth about Bitcoin mining's impact on energy systems is more complex than the media portrays, but this doesn't provide the clickbait media outlets are after.
The irony is that the NYT's mission is "to seek the truth and help people understand the world." However, we are optimistic that the tide will soon turn. With this show, Harry and Troy have now been on What Bitcoin Did a combined 13 times (lucky for us!), and they continue to blow our minds with their tales of the possible worlds opened up by the race for cheap and abundant energy. The other side just doesn't have the calibre of persuasive, authentic and enthusiastic voices we have.
- - - -
This episode's sponsors:
Iris Energy - Bitcoin Mining. Done Sustainably
Ledn - Financial services for Bitcoin hodlers
Bitcasino - The Future of Gaming is here
Ledger - State of the art Bitcoin hardware wallet
Wasabi Wallet - Privacy by default
Unchained - Secure your bitcoin with confidence
-----WBD668 - Show Notes-----
If you enjoy The What Bitcoin Did Podcast you can help support the show by doing the following:
Become a Patron and get access to shows early or help contribute
Make a tip:
Bitcoin: 3FiC6w7eb3dkcaNHMAnj39ANTAkv8Ufi2S
QR Codes: Bitcoin
If you do send a tip then please email me so that I can say thank you
Subscribe on iTunes | Spotify | Stitcher | SoundCloud | YouTube | Deezer | TuneIn | RSS Feed
Leave a review on iTunes
Share the show and episodes with your friends and family
Subscribe to the newsletter on my website
Follow me on Twitter Personal | Twitter Podcast | Instagram | Medium | YouTube
If you are interested in sponsoring the show, you can read more about that here or please feel free to drop me an email to discuss options.
Dimitris Giannakis AKA the Modern Vintage Gamer is here to fill us in on FPGAs and their rising role in the retrogaming/classic video gaming scene. Plus: do video games have a negative impact on the mental health of players? And we fill you in on the latest Metaverse happenings. Starring Sarah Lane, Rich Stroffolino, Scott Johnson, Dimitris Giannakis, Roger Chang, Joe. Link to the Show Notes. See acast.com/privacy for privacy and opt-out information. Become a member at https://plus.acast.com/s/dtns.