Podcasts about open compute

  • 28 podcasts
  • 35 episodes
  • 37m avg duration
  • Infrequent episodes
  • Latest episode: May 1, 2025

POPULARITY (chart: 2017–2024)


Best podcasts about open compute

Latest podcast episodes about open compute

The Data Center Frontier Show
Nomads at the Frontier: Nabeel Mahmood on the Future of Data Centers and Disruptive Sustainability

May 1, 2025 · 28:15


WASHINGTON, D.C.— At this year's Data Center World 2025, held earlier this month at the Walter E. Washington Convention Center, the halls were buzzing with what could only be described as industry sensory overload. As hyperscalers, hardware vendors, and infrastructure specialists converged on D.C., the sheer density of innovation underscored a central truth: the data center sector is in the midst of rapid, almost disorienting, expansion.

That made it the perfect setting for the latest episode in our ongoing podcast miniseries with Nomad Futurist, aptly titled Nomads at the Frontier. This time, I sat down in person with Nabeel Mahmood, co-founder and board director of the Nomad Futurist Foundation—a rare face-to-face meeting after years of remote collaboration.

“Lovely seeing you in person,” Mahmood said. “It's brilliant to get to spend some quality time at an event that's really started to hit its stride—especially in terms of content.”

Mahmood noted a welcome evolution in conference programming: a shift away from vendor-heavy pitches and toward deeper, mission-driven dialogue about the sector's true challenges and future trajectory. “Events like these were getting overloaded by vendor speak,” he said. “We need to talk about core challenges, advancements, and what we're doing to improve and move forward.”

A standout example of this renewed focus was a panel on disruptive sustainability, in which Mahmood joined representatives from Microsoft, AWS, and a former longtime lieutenant of Elon Musk's sustainability operations. “It's not just about e-cycling or carbon,” Mahmood emphasized. “We have to build muscle memory. We've got to do things for the right reasons—and start early.”

That starting point, he argued, is education—but not in the traditional sense. Instead, Mahmood called for a multi-layered approach that spans K–12, higher education, and workforce reskilling. “We've come out from behind the Wizard of Oz curtain,” he said. “Now we're in the boardroom. We need to teach people not just how technology works, but why we use it—and how to design platforms with real intention.”

Mahmood's remarks highlighted a growing consensus among forward-thinking leaders: data is no longer a support function. It is foundational. “There is no business, no government, no economy that can operate today—or in the future—without data,” he said. “So let's measure what we do. That's the KPI. That's the minimum threshold.”

Drawing a memorable parallel, Mahmood compared this kind of education to swimming lessons. “Sure, you might not swim for 20 years,” he said. “But if you learned as a kid, you'll still be able to make it back to shore.”

Inside-Out Sustainability and Building the Data Center Workforce of Tomorrow

As our conversation continued, we circled back to Mahmood's earlier analogy of swimming as a foundational skill—like technology fluency, it stays with you for life. I joked that I could relate, recalling long-forgotten golf lessons from middle school. “I'm a terrible golfer,” I said. “But I still go out and do it. It's muscle memory.”

“Exactly,” Mahmood replied. “There's a social element. You're able to enjoy it. But you still know your handicap—and that's part of it too. You know your limits.”

Limits and possibilities are central to today's discourse around sustainability, especially as the industry's most powerful players—the hyperscalers—increasingly self-regulate in the absence of comprehensive mandates. I asked Mahmood whether sustainability had truly become “chapter and verse” for major cloud operators, or if it remained largely aspirational, despite high-profile initiatives.

His answer was candid. “Yes and no,” he said. “No one's following a perfect process. There are some who use it for market optics—buying carbon credits and doing carbon accounting to claim carbon neutrality. But there are others genuinely trying to meet their own internal expectations.”

The real challenge, Mahmood noted, lies in the absence of uniform metrics and definitions around terms like “circularity” or “carbon neutrality.” In his view, too much of today's sustainability push is “still monetarily driven… keeping shareholders happy and share value rising.”

He laid out two possible futures. “One is that the government forces us to comply—and that could create friction, because the mandates may come from people who don't understand what our industry really needs. The other is that we educate from within, define our own standards, and eventually shape compliance bodies from the inside out.”

Among the more promising developments Mahmood cited was the work of Rob Lawson-Shanks, whose innovations in automated disassembly and robotic circularity are setting a high bar for operational sustainability. “What Rob is doing is amazing,” Mahmood said. “His interest is to give back. But we need thousands of Robs—people who understand how it works and can repurpose that knowledge back into the tech ecosystem.”

That call for deeper education led us to the second major theme of our conversation: preparing the next generation of data center professionals. With its hands-on community initiatives, Nomad Futurist is making significant strides in that direction. Mahmood described his foundation as “connective tissue” between industry stakeholders and emerging talent, partnering with organizations like Open Compute, Infrastructure Masons, and the iMasons Climate Accord.

Earlier this year, Nomad Futurist launched an online Academy that now features five training modules, with over 200 hours of content development in the pipeline. Just as importantly, the foundation has built a community collaboration platform—native to the Academy itself—that allows learners to directly engage with content creators. “If a student has a question and the instructor was me or someone like you, they can just ask it directly within the platform,” Mahmood explained. “It creates comfort and accessibility.”

In parallel, the foundation has beta-launched a job board, in partnership with Infrastructure Masons, and is developing a career pathways platform. The goal: to create clear entry points into the data center industry for people of all backgrounds and education levels—and to help them grow once they're in. “Those old jobs, like the town whisperer, they don't exist anymore,” Mahmood quipped. “Now it's Facebook, Twitter, social media. That's how people get jobs. So we're adapting to that.”

By providing tools for upskilling, career matching, and community-building, Mahmood sees Nomad Futurist playing a key role in preparing the sector for the inevitable generational shift ahead. “As we start aging out of this industry over the next 10 to 20 years,” he said, “we need to give people a foundation—and a reason—to take it forward.”

Gestalt IT Rundown
Exciting Developments from Open Compute Summit | The Gestalt IT Rundown: October 23, 2024

Oct 23, 2024 · 33:47


At Open Compute Summit this past week, key trends shaping the future of computing and infrastructure were discussed. One major concern is global data center energy consumption, which is projected to triple by 2030, highlighting the urgent need for more efficient energy solutions. As technology advances, the shift from a 3nm process to a 2nm process is proving costly, with design costs estimated to reach a staggering $725 million, according to ARM. In response to both power demands and design challenges, liquid cooling is gaining momentum, emerging as a vital technology to improve efficiency and manage the increasing heat output from advanced computing systems.

Time Stamps:
0:00 - Welcome to the Rundown
1:36 - BMC Starts Two New Companies
4:06 - CEO Indicted for Fraud
7:10 - Microsoft Goes Agentic AI
10:37 - Amazon Teams Up with US Department of Justice
14:30 - Perplexity Is Getting Sued by Media Giants
16:44 - Sophos Acquires Secureworks
20:00 - Exciting Developments from Open Compute Summit
31:41 - The Weeks Ahead
32:56 - Thanks for Watching

Hosts:
Tom Hollingsworth: https://www.linkedin.com/in/networkingnerd/
Jon Swartz: https://www.linkedin.com/in/jonswartz/

Follow Gestalt IT
Website: https://www.GestaltIT.com/
Twitter: https://www.twitter.com/GestaltIT
LinkedIn: https://www.linkedin.com/company/Gestalt-IT

#Rundown, #OCPSummit24, #AgenticAI, @NetworkingNerd, @JSwartz, @GestaltIT, @TechstrongGroup, @TechstrongTV, @TheFuturumGroup, @BMCSoftware, @Microsoft, @AWSCloud, @Sophos, @Secureworks, @perplexity_ai, @OpenComputePrj

Data Center Revolution
Ep 64: The Open Compute Project

Oct 19, 2023 · 50:38


Kirk sits down with Rob Coyle and Dirk Van Slyke of the Open Compute Project to discuss the origins of the OCP and what they are doing to drive change in the future.

Screaming in the Cloud
Building Computers for the Cloud with Steve Tuck

Sep 21, 2023 · 42:18


Steve Tuck, Co-Founder & CEO of Oxide Computer Company, joins Corey on Screaming in the Cloud to discuss his work to make modern computers cloud-friendly. Steve describes what it was like going through early investment rounds, and the difficult but important decision he and his co-founder made to build their own switch. Corey and Steve discuss the demand for on-prem computers that are built for cloud capability, and Steve reveals how Oxide approaches their product builds to ensure the masses can adopt their technology wherever they are.

About Steve
Steve is the Co-founder & CEO of Oxide Computer Company. He previously was President & COO of Joyent, a cloud computing company acquired by Samsung. Before that, he spent 10 years at Dell in a number of different roles.

Links Referenced:
Oxide Computer Company: https://oxide.computer/
On The Metal Podcast: https://oxide.computer/podcasts/on-the-metal

Transcript
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.
Corey: This episode is brought to us in part by our friends at Red Hat. As your organization grows, so does the complexity of your IT resources. You need a flexible solution that lets you deploy, manage, and scale workloads throughout your entire ecosystem. The Red Hat Ansible Automation Platform simplifies the management of applications and services across your hybrid infrastructure with one platform. Look for it on the AWS Marketplace.
Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. 
You know, I often say it—but not usually on the show—that Screaming in the Cloud is a podcast about the business of cloud, which is intentionally overbroad so that I can talk about basically whatever the hell I want to with whoever the hell I'd like. Today's guest is, in some ways of thinking, about as far in the opposite direction from Cloud as it's possible to go and still be involved in the digital world. Steve Tuck is the CEO at Oxide Computer Company. You know, computers, the things we all pretend aren't underpinning those clouds out there that we all use and pay by the hour, gigabyte, second-month-pound or whatever it works out to. Steve, thank you for agreeing to come back on the show after a couple years, and once again suffer my slings and arrows.Steve: Much appreciated. Great to be here. It has been a while. I was looking back, I think three years. This was like, pre-pandemic, pre-interest rates, pre… Twitter going totally sideways.Corey: And I have to ask to start with that, it feels, on some level, like toward the start of the pandemic, when everything was flying high and we'd had low interest rates for a decade, that there was a lot of… well, lunacy lurking around in the industry, my own business saw it, too. It turns out that not giving a shit about the AWS bill is in fact a zero interest rate phenomenon. And with all that money or concentrated capital sloshing around, people decided to do ridiculous things with it. I would have thought, on some level, that, “We're going to start a computer company in the Bay Area making computers,” would have been one of those, but given that we are a year into the correction, and things seem to be heading up into the right for you folks, that take was wrong. How'd I get it wrong?Steve: Well, I mean, first of all, you got part of it right, which is there were just a litany of ridiculous companies and projects and money being thrown in all directions at that time.Corey: An NFT of a computer. 
We're going to have one of those. That's what you're selling, right? Then you had to actually hard pivot to making the real thing.Steve: That's it. So, we might as well cut right to it, you know. This is—we went through the crypto phase. But you know, our—when we started the company, it was yes, a computer company. It's on the tin. It's definitely kind of the foundation of what we're building. But you know, we think about what a modern computer looks like through the lens of cloud.I was at a cloud computing company for ten years prior to us founding Oxide, so was Bryan Cantrill, CTO, co-founder. And, you know, we are huge, huge fans of cloud computing, which was an interesting kind of dichotomy. Instead of conversations when we were raising for Oxide—because of course, Sand Hill is terrified of hardware. And when we think about what modern computers need to look like, they need to be in support of the characteristics of cloud, and cloud computing being not that you're renting someone else's computers, but that you have fully programmable infrastructure that allows you to slice and dice, you know, compute and storage and networking however software needs. And so, what we set out to go build was a way for the companies that are running on-premises infrastructure—which, by the way, is almost everyone and will continue to be so for a very long time—access to the benefits of cloud computing. And to do that, you need to build a different kind of computing infrastructure and architecture, and you need to plumb the whole thing with software.Corey: There are a number of different ways to view cloud computing. And I think that a lot of the, shall we say, incumbent vendors over in the computer manufacturing world tend to sound kind of like dinosaurs, on some level, where they're always talking in terms of, you're a giant company and you already have a whole bunch of data centers out there. 
But one of the magical pieces of cloud is you can have a ridiculous idea at nine o'clock tonight and by morning, you'll have a prototype, if you're of that bent. And if it turns out it doesn't work, you're out, you know, 27 cents. And if it does work, you can keep going and not have to stop and rebuild on something enterprise-grade.So, for the small-scale stuff and rapid iteration, cloud providers are terrific. Conversely, when you wind up in the giant fleets of millions of computers, in some cases, there begin to be economic factors that weigh in, and for some on workloads—yes, I know it's true—going to a data center is the economical choice. But my question is, is starting a new company in the direction of building these things, is it purely about economics or is there a capability story tied in there somewhere, too?Steve: Yeah, it's actually economics ends up being a distant third, fourth, in the list of needs and priorities from the companies that we're working with. When we talk about—and just to be clear we're—our demographic, that kind of the part of the market that we are focused on are large enterprises, like, folks that are spending, you know, half a billion, billion dollars a year in IT infrastructure, they, over the last five years, have moved a lot of the use cases that are great for public cloud out to the public cloud, and who still have this very, very large need, be it for latency reasons or cost reasons, security reasons, regulatory reasons, where they need on-premises infrastructure in their own data centers and colo facilities, et cetera. And it is for those workloads in that part of their infrastructure that they are forced to live with enterprise technologies that are 10, 20, 30 years old, you know, that haven't evolved much since I left Dell in 2009. 
And, you know, when you think about, like, what are the capabilities that are so compelling about cloud computing, one of them is yes, what you mentioned, which is you have an idea at nine o'clock at night and swipe a credit card, and you're off and running. And that is not the case for an idea that someone has who is going to use the on-premises infrastructure of their company. And this is where you get shadow IT and 16 digits to freedom and all the like.Corey: Yeah, everyone with a corporate credit card winds up being a shadow IT source in many cases. If your processes as a company don't make it easier to proceed rather than doing it the wrong way, people are going to be fighting against you every step of the way. Sometimes the only stick you've got is that of regulation, which in some industries, great, but in other cases, no, you get to play Whack-a-Mole. I've talked to too many companies that have specific scanners built into their mail system every month looking for things that look like AWS invoices.Steve: [laugh]. Right, exactly. And so, you know, but if you flip it around, and you say, well, what if the experience for all of my infrastructure that I am running, or that I want to provide to my software development teams, be it rented through AWS, GCP, Azure, or owned for economic reasons or latency reasons, I had a similar set of characteristics where my development team could hit an API endpoint and provision instances in a matter of seconds when they had an idea and only pay for what they use, back to kind of corporate IT. And what if they were able to use the same kind of developer tools they've become accustomed to using, be it Terraform scripts and the kinds of access that they are accustomed to using? 
How do you make those developers just as productive across the business, instead of just through public cloud infrastructure?At that point, then you are in a much stronger position where you can say, you know, for a portion of things that are, as you pointed out, you know, more unpredictable, and where I want to leverage a bunch of additional services that a particular cloud provider has, I can rent that. And where I've got more persistent workloads or where I want a different economic profile or I need to have something in a very low latency manner to another set of services, I can own it. And that's where I think the real chasm is because today, you just don't—we take for granted the basic plumbing of cloud computing, you know? Elastic Compute, Elastic Storage, you know, networking and security services. And us in the cloud industry end up wanting to talk a lot more about exotic services and, sort of, higher-up stack capabilities. None of that basic plumbing is accessible on-prem.Corey: I also am curious as to where exactly Oxide lives in the stack because I used to build computers for myself in 2000, and it seems like having gone down that path a bit recently, yeah, that process hasn't really improved all that much. The same off-the-shelf components still exist and that's great. We always used to disparagingly call spinning hard drives as spinning rust in racks. You named the company Oxide; you're talking an awful lot about the Rust programming language in public a fair bit of the time, and I'm starting to wonder if maybe words don't mean what I thought they meant anymore. Where do you folks start and stop, exactly?Steve: Yeah, that's a good question. And when we started, we sort of thought the scope of what we were going to do and then what we were going to leverage was smaller than it has turned out to be. 
And by that I mean, man, over the last three years, we have hit a bunch of forks in the road where we had questions about do we take something off the shelf or do we build it ourselves. And we did not try to build everything ourselves. So, to give you a sense of kind of where the dotted line is, around the Oxide product, what we're delivering to customers is a rack-level computer. So, the minimum size comes in rack form. And I think your listeners are probably pretty familiar with this. But, you know, a rack is—Corey: You would be surprised. It's basically, what are they about seven feet tall?Steve: Yeah, about eight feet tall.Corey: Yeah, yeah. Seven, eight feet, weighs a couple thousand pounds, you know, make an insulting joke about—Steve: Two feet wide.Corey: —NBA players here. Yeah, all kinds of these things.Steve: Yeah. And big hunk of metal. And in the cases of on-premises infrastructure, it's kind of a big hunk of metal shell, and then a bunch of 1U and 2U boxes crammed into it. What the hyperscalers have done is something very different. They started looking at, you know, at the rack level, how can you get much more dense, power-efficient designs, doing things like using a DC bus bar down the back, instead of having 64 power supplies with cables hanging all over the place in a rack, which I'm sure is what you're more familiar with.Corey: Tremendous amount of weight as well because you have the metal chassis for all of those 1U things, which in some cases, you wind up with, what, 46U in a rack, assuming you can even handle the cooling needs of all that.Steve: That's right.Corey: You have so much duplication, and so much of the weight is just metal separating one thing from the next thing down below it. And there are opportunities for massive improvement, but you need to be at a certain point of scale to get there.Steve: You do. You do. And you also have to be taking on the entire problem. You can't pick at parts of these things. And that's really what we found. 
So, we started at this sort of—the rack level as sort of the design principle for the product itself and found that that gave us the ability to get to the right geometry, to get as much CPU horsepower and storage and throughput and networking into that kind of chassis for the least amount of wattage required, kind of the most power-efficient design possible.So, it ships at the rack level and it ships complete with both our server sled systems in Oxide, a pair of Oxide switches. This is—when I talk about, like, design decisions, you know, do we build our own switch, it was a big, big, big question early on. We were fortunate even though we were leaning towards thinking we needed to go do that, we had this prospective early investor who was early at AWS and he had asked a very tough question that none of our other investors had asked to this point, which is, “What are you going to do about the switch?”And we knew that the right answer to an investor is like, “No. We're already taking on too much.” We're redesigning a server from scratch in, kind of, the mold of what some of the hyperscalers have learned, doing our own Root of Trust, we're doing our own operating system, hypervisor control plane, et cetera. Taking on the switch could be seen as too much, but we told them, you know, we think that to be able to pull through all of the value of the security benefits and the performance and observability benefits, we can't have then this [laugh], like, obscure third-party switch rammed into this rack.Corey: It's one of those things that people don't think about, but it's the magic of cloud with AWS's network, for example, it's magic. 
You can get line rate—or damn near it—between any two points, sustained.Steve: That's right.Corey: Try that in the data center, you wind into massive congestion with top-of-rack switches, where, okay, we're going to parallelize this stuff out over, you know, two dozen racks and we're all going to have them seamlessly transfer information between each other at line rate. It's like, “[laugh] no, you're not because those top-of-rack switches will melt and become side-of-rack switches, and then bottom-puddle-of-rack switches. It doesn't work that way.”Steve: That's right.Corey: And you have to put a lot of thought and planning into it. That is something that I've not heard a traditional networking vendor addressing because everyone loves to hand-wave over it.Steve: Well so, and this particular prospective investor, we told him, “We think we have to go build our own switch.” And he said, “Great.” And we said, “You know, we think we're going to lose you as an investor as a result, but this is what we're doing.” And he said, “If you're building your own switch, I want to invest.” And his comment really stuck with us, which is AWS did not stand on their own two feet until they threw out their proprietary switch vendor and built their own.And that really unlocked, like you've just mentioned, like, their ability, both in hardware and software to tune and optimize to deliver that kind of line rate capability. And that is one of the big findings for us as we got into it. Yes, it was really, really hard, but based on a couple of design decisions, P4 being the programming language that we are using as the surround for our silicon, tons of opportunities opened up for us to be able to do similar kinds of optimization and observability. And that has been a big, big win.But to your question of, like, where does it stop? So, we are delivering this complete with a baked-in operating system, hypervisor, control plane. 
And so, the endpoint of the system, where the customer meets is either hitting an API or a CLI or a console that delivers and kind of gives you the ability to spin up projects. And, you know, if one is familiar with EC2 and EBS and VPC, that VM level of abstraction is where we stop.Corey: That, I think, is a fair way of thinking about it. And a lot of cloud folks are going to pooh-pooh it as far as saying, “Oh well, just virtual machines. That's old cloud. That just treats the cloud like a data center.” And in many cases, yes, it does because there are ways to build modern architectures that are event-driven on top of things like Lambda, and API Gateway, and the rest, but you take a look at what my customers are doing and what drives the spend, it is invariably virtual machines that are largely persistent.Sometimes they scale up, sometimes they scale down, but there's always a baseline level of load that people like to hand-wave away the fact that what they're fundamentally doing in a lot of these cases, is paying the cloud provider to handle the care and feeding of those systems, which can be expensive, yes, but also delivers significant innovation beyond what almost any company is going to be able to deliver in-house. There is no way around it. AWS is better than you are—whoever you happen to—be at replacing failed hard drives. That is a simple fact. They have teams of people who are the best in the world of replacing failed hard drives. You generally do not. They are going to be better at that than you. But that's not the only axis. There's not one calculus that leads to, is cloud a scam or is cloud a great value proposition for us? The answer is always a deeply nuanced, “It depends.”Steve: Yeah, I mean, I think cloud is a great value proposition for most and a growing amount of software that's being developed and deployed and operated. 
And I think, you know, one of the myths that is out there is, hey, turn over your IT to AWS—or, you know, a cloud provider—because we have such higher-caliber personnel that are really good at swapping hard drives and dealing with networks and operationally keeping this thing running in a highly available manner that delivers good performance. That is certainly true, but a lot of the operational value in an AWS has been delivered via software, the automation, the observability, and not actual people putting hands on things. And it's an important point because that's been a big part of what we're building into the product. You know, just because you're running infrastructure in your own data center, it does not mean that you should have to spend, you know, 1,000 hours a month across a big team to maintain and operate it. And so, part of that, kind of, cloud, hyperscaler innovation that we're baking into this product is so that it is easier to operate with much, much, much lower overhead in a highly available, resilient manner.Corey: So, I've worked in a number of data center facilities, but the companies I was working with, were always at a scale where these were co-locations, where they would, in some cases, rent out a rack or two, in other cases, they'd rent out a cage and fill it with their own racks. They didn't own the facilities themselves. Those were always handled by other companies. So, my question for you is, if I want to get a pile of Oxide racks into my environment in a data center, what has to change? What are the expectations? I mean, yes, there's obviously going to be power requirements that the data center colocation provider is very conversant with, but Open Compute, for example, had very specific requirements—to my understanding—around things like the airflow construction of the environment that they're placed within. 
How prescriptive is what you've built, in terms of doing a building retrofit to start using you folks?Steve: Yeah, definitely not. And this was one of the tensions that we had to balance as we were designing the product. For all of the benefits of hyperscaler computing, some of the design center for you know, the kinds of racks that run in Google and Amazon and elsewhere are hyperscaler-focused, which is unlimited power, in some cases, data centers designed around the equipment itself. And where we were headed, which was basically making hyperscaler infrastructure available to, kind of, the masses, the rest of the market, these folks don't have unlimited power and they aren't going to go be able to go redesign data centers. And so no, the experience should be—with exceptions for folks maybe that have very, very limited access to power—that you roll this rack into your existing data center. It's on standard floor tile, that you give it power, and give it networking and go.And we've spent a lot of time thinking about how we can operate in the wide-ranging environmental characteristics that are commonplace in data centers that focus on themselves, colo facilities, and the like. So, that's really on us so that the customer is not having to go to much work at all to kind of prepare and be ready for it.Corey: One of the challenges I have is how to think about what you've done because you are rack-sized. But what that means is that my own experimentation at home recently with on-prem stuff for smart home stuff involves a bunch of Raspberries Pi and a [unintelligible 00:19:42], but I tend to more or less categorize you the same way that I do AWS Outposts, as well as mythical creatures, like unicorns or giraffes, where I don't believe that all these things actually exist because I haven't seen them. 
And in fact, to get them in my house, all four of those things would theoretically require a loading dock if they existed, and that's a hard thing to fake on a demo signup form, as it turns out. How vaporware is what you've built? Is this all on paper and you're telling amazing stories or do they exist in the wild?Steve: So, last time we were on, it was all vaporware. It was a couple of napkin drawings and a seed round of funding.Corey: I do recall you not using that description at the time, for what it's worth. Good job.Steve: [laugh]. Yeah, well, at least we were transparent where we were going through the race. We had some napkin drawings and we had some good ideas—we thought—and—Corey: You formalize those and that's called Microsoft PowerPoint.Steve: That's it. A hundred percent.Corey: The next generative AI play is take the scrunched-up, stained napkin drawing, take a picture of it, and convert it to a slide.Steve: Google Docs, you know, one of those. But no, it's got a lot of scars from the build and it is real. In fact, next week, we are going to be shipping our first commercial systems. So, we have got a line of racks out in our manufacturing facility in lovely Rochester, Minnesota. Fun fact: Rochester, Minnesota, is where the IBM AS/400s were built.Corey: I used to work in that market, of all things.Steve: Really?Corey: Selling tape drives in the AS/400. I mean, I still maintain there's no real mainframe migration to the cloud play because there's no AWS/400. A joke that tends to sail over an awful lot of people's heads because, you know, most people aren't as miserable in their career choices as I am.Steve: Okay, that reminds me. So, when we were originally pitching Oxide and we were fundraising, we [laugh]—in a particular investor meeting, they asked, you know, “What would be a good comp? 
Like how should we think about what you are doing?” And fortunately, we had about 20 investor meetings to go through, so burning one on this was probably okay, but we may have used the AS/400 as a comp, talking about how [laugh] mainframe systems did such a good job of building hardware and software together. And as you can imagine, there were some blank stares in that room.

But you know, there are some good analogs historically in the computing industry, when, you know, the major players in the industry were thinking about how to deliver holistic systems to support end customers. And, you know, we see this in what Apple has done with the iPhone, and you're seeing this as a lot of stuff in the automotive industry is being pulled in-house. I was listening to a good podcast. Jim Farley from Ford was talking about how the automotive industry historically outsourced all of the software that controls cars, right? So, like, Bosch would write the software for the controls for your seats.

And they had all these suppliers that were writing the software, and what it meant was that innovation was not possible because you'd have to go out to suppliers to get software changes for any little change you wanted to make. And in the computing industry, in the 80s, you saw this blow apart where, like, firmware got outsourced. In the IBM and the clones, kind of, race, everyone started outsourcing firmware and outsourcing software. Microsoft started taking over operating systems. And then VMware emerged and was doing a virtualization layer.

And this, kind of, fragmented ecosystem is the landscape today that every single on-premises infrastructure operator has to struggle with. It's a kit car. And so, pulling it back together, designing things in a vertically integrated manner is what the hyperscalers have done. And so, you mentioned Outposts.
And, like, it's a good example of—I mean, the most public cloud of public cloud companies created a way for folks to get their system on-prem.

I mean, if you need anything to underscore the draw and the demand for cloud computing-like infrastructure on-prem, just the fact that that emerged at all tells you that there is this big need. Because you've got, you know, I don't know, a trillion dollars' worth of IT infrastructure out there and you have maybe 10% of it in the public cloud. And that's up from 5% when Jassy was on stage in '21, talking about 95% of stuff living outside of AWS, but there's going to be a giant market of customers that need to own and operate infrastructure. And again, things have not improved much in the last 10 or 20 years for them.

Corey: They have taken a tone onstage about how, “Oh, those workloads that aren't in the cloud yet, yeah, those people are legacy idiots.” And I don't buy that for a second because, believe it or not—I know that this cuts against what people commonly believe in public—but company execs are generally not morons, and they make decisions with context and constraints that we don't see. Things are the way they are for a reason. And I promise that 90% of corporate IT workloads that still live on-prem are not being managed or run by people who've never heard of the cloud. There was a decision made when some other things were migrating of, do we move this thing to the cloud or don't we? And the answer at the time was no, we're going to keep this thing on-prem where it is now for a variety of reasons of varying validity. But I don't view that as a bug.
I also, frankly, don't want to live in a world where all the computers are basically run by three different companies.

Steve: You're spot on, which is, like, it does a total disservice to these smart and forward-thinking teams in every one of the Fortune 1000-plus companies who are taking the constraints that they have—and some of those constraints are not monetary or entirely workload-based. If you want to flip it around, we were talking to a large cloud SaaS company, and their reason for wanting to extend beyond the public cloud is because they want to improve latency for their e-commerce platform. And navigating their way through the complex layers of the networking stack at GCP to get to where the customer assets are that are in colo facilities adds lag time on the platform that can cost them hundreds of millions of dollars. And so, we need to get beyond this notion of, like, “Oh, well, the dark ages are for software that can't run in the cloud, and that's on-prem. And it's just a matter of time until everything moves to the cloud.”

In the forward-thinking models of public cloud, it should be both. I mean, you should have a consistent experience, from a certain level of the stack down, everywhere. And then it's like, do I want to rent or do I want to own for this particular use case? In my vast set of infrastructure needs, do I want this to run in a data center that Amazon runs, or do I want this to run in a facility that is close to this other provider of mine? And I think that's best for all. And then it's not this kind of false dichotomy of quality infrastructure or ownership.

Corey: I find that there are also workloads where people will come to me and say, “Well, we don't think this is going to be economical in the cloud”—because again, I focus on AWS bills. That is the lens I view things through, and—“The AWS sales rep says it will be.
What do you think?” And I look at what they're doing, and especially if it involves high volumes of data transfer, I laugh a good hearty laugh and say, “Yeah, keep that thing in the data center where it is right now. You will thank me for it later.”

It's, “Well, can we run this in an economical way in AWS?” As long as you're okay with economical meaning six times what you're paying a year right now for the same thing, yeah, you can. I wouldn't recommend it. And the numbers sort of speak for themselves. But it's not just an economic play.

There's also the story of, does this increase their capability? Does it let them move faster toward their business goals? And in a lot of cases, the answer is no, it doesn't. It's one of those business process things that has to exist for a variety of reasons. You don't get to reimagine it for funsies, and even if you did, it doesn't advance the company in what they're trying to do any, so focus on something that differentiates as opposed to this thing that you're stuck on.

Steve: That's right. And what we see today is, it is easy to be in that mindset of running things on-premises is kind of backwards-facing, because the experience of it today is still very, very difficult. I mean, talking to folks, they're sharing with us that it takes a hundred days from the time all the different boxes land in their warehouse to actually having usable infrastructure that developers can use. And our goal, and what we intend to go hit with Oxide, is you can roll in this complete rack-level system, plug it in, and within an hour, you have developers that are accessing cloud-like services out of the infrastructure. And that—God, countless stories of firmware bugs that would send all the fans in the data center nonlinear and soak up 100 kW of power.

Corey: Oh, God. And the problems that you had with the out-of-band management systems. For a long time, I thought DRAC stood for, “Dell, RMA Another Computer.” It was awful having to deal with those things.
There was so much room for innovation in that space, which no one really grabbed onto.

Steve: There was a really, really interesting talk at DEF CON that we just stumbled upon yesterday. The NVIDIA folks are giving a talk on BMC exploits… and like, a very, very serious BMC exploit. And again, what most people don't know is, like, first of all, the BMC, the Baseboard Management Controller, is like the brainstem of the computer. It has access to—it's a backdoor into all of your infrastructure. It's a computer inside a computer, and it's got software and hardware that your server OEM didn't build and doesn't understand very well.

And firmware is even worse because, you know, firmware written by, you know, an American Megatrends or other is a big blob of software that gets loaded into these systems that is very hard to audit and very hard to ascertain what's happening. And it's no surprise when, you know, back when we were running all the data centers at a cloud computing company, that you'd run into these issues, and you'd go to the server OEM and they'd kind of throw their hands up. Well, first they'd gaslight you and say, “We've never seen this problem before,” but when you thought you'd root-caused something down to firmware, it was anyone's guess. And this is kind of the current condition today. And back to, like, the journey to get here, we kind of realized that you had to blow away that old extant firmware layer, and we rewrote our own firmware in Rust. Yes [laugh], I've done a lot in Rust.

Corey: No, it was in Rust, but, on some level, that's what Nitro is, as best I can tell, on the AWS side. But it turns out that you don't tend to have the same resources as a one-and-a-quarter—at the moment—trillion-dollar company. That keeps [valuing 00:30:53]. At one point, they lost a comma and that was sad and broke all my logic for that and I haven't fixed it since. Unfortunate stuff.

Steve: Totally.
I think that was another, kind of, question early on, certainly from a lot of investors: “Hey, how are you going to pull this off with a smaller team, and there's a lot of surface area here?” Certainly a reasonable question. Definitely was hard. The one advantage—among others—is, when you are designing something kind of in a vertical holistic manner, those design integration points are narrowed down to just your equipment.

And when someone's writing firmware, when AMI is writing firmware, they're trying to do it to cover hundreds and hundreds of components across dozens and dozens of vendors. And we have the advantage of having this, like, purpose-built system, kind of, end-to-end from the lowest level, from first boot instruction, all the way up through the control plane, and from rack to switch to server. That definitely helped narrow the scope.

Corey: This episode has been fake sponsored by our friends at AWS with the following message: Graviton, Graviton, Graviton, Graviton, Graviton, Graviton, Graviton, Graviton, Graviton. Thank you for your l-, lack of support for this show. Now, AWS has been talking about Graviton an awful lot, which is their custom in-house ARM processor. Apple moved over to ARM, and instead of talking about benchmarks they won't publish and marketing campaigns with words that don't mean anything, they've let the results speak for themselves. In time, I found that almost all of my workloads have moved over to ARM architecture for a variety of reasons, and my laptop now gets 15 hours of battery life when all is said and done. You're building these things on top of x86. What is the deal there? I do not accept that you hadn't heard of ARM until just now because, as mentioned, Graviton, Graviton, Graviton.

Steve: That's right. Well, so why x86, to start? And I say to start because we have just launched our first-generation products.
And our first-generation, or now second-generation, products that we are underway working on are going to be x86 as well. We've built this system on AMD Milan silicon; we are going to be launching a Genoa sled.

But when you're thinking about what silicon to use, obviously, there's a bunch of parts that go into the decision. You're looking at the kind of applicability to workload, performance, power management, for sure, and if you carve up what you are trying to achieve, x86 is still a terrific fit for the broadest set of workloads that our customers are trying to solve for. And choosing which x86 architecture was certainly an easier choice, come 2019. At this point, AMD had made a bunch of improvements in performance and energy efficiency in the chip itself. We've looked at other architectures, and I think as we are incorporating those in the future roadmap, it's just going to be a question of what are you trying to solve for.

You mentioned power management, and, you know, low-power systems is commonly where folks have gone beyond x86. As we're looking forward to hardware acceleration products and future products, we'll certainly look beyond x86, but x86 has a long, long road to go. It still is kind of the foundation for what, again, is a general-purpose cloud infrastructure for being able to slice and dice for a variety of workloads.

Corey: True. I have to look around my environment and realize that Intel is not going anywhere. And that's not just an insult to their lack of progress on committed roadmaps that they consistently miss. But—

Steve: [sigh].

Corey: Enough on that particular topic because we want to keep this, you know, polite.

Steve: Intel has definitely had some struggles, for sure. They're very public ones, I think. We were really excited and continue to be very excited about their Tofino silicon line. And this came by way of the Barefoot Networks acquisition.
I don't know how much you had paid attention to Tofino, but what was really, really compelling about Tofino is the focus on both hardware and software and programmability.

So, great chip. And P4 is the programming language that surrounds that. And we have gotten very, very deep on P4, and that is some of the best tech to come out of Intel lately. But from a core silicon perspective for the rack, we went with AMD. And again, that was a pretty straightforward decision at the time. And we're planning on having this anchored around AMD silicon for a while now.

Corey: One last question I have before we wind up calling it an episode. It seems—at least as of this recording, it's still embargoed, but we're not releasing this until that winds up changing—you folks have just raised another round, which means that your napkin doodles have apparently drawn more folks in, and now that you're shipping, you're also not just bringing in customers, but also additional investor money. Tell me about that.

Steve: Yes, we just completed our Series A. So, when we last spoke three years ago, we had just raised our seed and had raised $20 million at the time, and we had expected that it was going to take about that to be able to build the team and build the product and be able to get to market, and [unintelligible 00:36:14] tons of technical risk along the way. I mean, there was technical risk up and down the stack around this [De Novo 00:36:21] server design, the switch design. And software is still the kind of disproportionate majority of what this product is, from hypervisor up through kind of control plane, the cloud services, et cetera. So—

Corey: We just view it as software with a really, really confusing hardware dongle.

Steve: [laugh]. Yeah. Yes.

Corey: Super heavy. We're talking enterprise and government-grade here.

Steve: That's right. There's a lot of software to write.
And so, we had a bunch of milestones, and as we got through them, one of the big ones was getting Milan silicon booting on our firmware. It was funny—this was the thing that, clearly, the industry was most suspicious of, us doing our own firmware, and you could see it when we demonstrated booting this, like, a year-and-a-half ago, and AMD all of a sudden just lit up, from kind of arm's length to, like, “How can we help? This is amazing.” You know? And they could start to see the benefits of when you can tie low-level silicon intelligence up through a hypervisor. There's just—

Corey: No, I love the existing firmware I have. Looks like it was written in 1984 and winds up having terrible user ergonomics that hasn't been updated at all, and every time something comes through, it's a 50/50 shot as to whether it fries the box or not. Yeah. No, I want that.

Steve: That's right. And you look at these hyperscale data centers, and it's like, no. I mean, you've got intelligence from that first boot instruction through a Root of Trust, up through the software of the hyperscaler, and up to the user level. And so, as we were going through and kind of knocking down each one of these layers of the stack, doing our own firmware, doing our own hardware Root of Trust, getting that all the way plumbed up into the hypervisor and the control plane, number one, on the customer side, folks moved from, “This is really interesting. We need to figure out how we can bring cloud capabilities to our data centers. Talk to us when you have something,” to, “Okay.” We actually—back to the earlier question on vaporware, you know, it was great having customers out here in Emeryville where they can put their hands on the rack and they can, you know, put their hands on software, being able to, like, look at real running software and that end cloud experience.

And that led to getting our first couple of commercial contracts.
So, we've got some great first customers, including a large department of the federal government and a leading firm on Wall Street, that we're going to be shipping systems to in a matter of weeks. And as you can imagine, along with that, that drew a bunch of renewed interest from the investor community. Certainly a different climate today than it was back in 2019, but what was great to see is, you still have great investors that understand the importance of making bets in the hard tech space and in companies that are looking to reinvent certain industries. And so, our existing investors all participated, and we added a bunch of terrific new investors, both strategic and institutional.

And you know, this capital is going to be super important now that we are headed into market and we are beginning to scale up the business, and make sure that we have a long road to go. And of course, maybe as importantly, this was a real confidence boost for our customers. They're excited to see that Oxide is going to be around for a long time and that they can invest in this technology as an important part of their infrastructure strategy.

Corey: I really want to thank you for taking the time to speak with me about, well, how far you've come in a few years. If people want to learn more and have the requisite loading dock, where should they go to find you?

Steve: So, we try to put everything up on the site. So, oxidecomputer.com or oxide.computer. We also—if you remember, we did [On the Metal 00:40:07]. So, we had a Tales from the Hardware-Software Interface podcast that we did when we started. We have shifted that to Oxide and Friends, and the shift there is we're spending a little bit more time talking about the guts of what we built and why. So, if folks are interested in, like, why the heck did you build a switch, and what does it look like to build a switch, we actually go to depth on that.
And you know, what does bring-up on a new server motherboard look like? And it's got some episodes out there that might be worth checking out.

Corey: We will definitely include a link to that in the [show notes 00:40:36]. Thank you so much for your time. I really appreciate it.

Steve: Yeah, Corey. Thanks for having me on.

Corey: Steve Tuck, CEO at Oxide Computer Company. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this episode, please leave a five-star review on your podcast platform of choice, along with an angry ranting comment because you are in fact a zoology major, and you're telling me that some animals do in fact exist. But I'm pretty sure of the two of them, it's the unicorn.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.

Tech Disruptors
The Open Compute Project - Unlocking The Cloud Infrastructure For All

Tech Disruptors

Play Episode Listen Later Jan 9, 2023 47:12


Bloomberg Intelligence Technology Analyst Woo Jin Ho hosts Open Compute Project (OCP) CEO George Tchaparian to discuss the current and future innovations introduced by the consortium. Since its inception in 2011, the consortium has grown to over 300 members and boasts $18 billion in OCP-recognized vendor revenues. We discuss OCP's latest projects, such as disaggregating motherboards and chips, bringing cloud technologies to telecom and enterprise users, and tackling climate change.

Cyber Security Headlines
Ransom Cartel linked to REvil, Gen Z security awareness, Open Compute Project's Caliptra

Cyber Security Headlines

Play Episode Listen Later Oct 20, 2022 7:06


Ransom Cartel linked to REvil
Do we need cybersecurity training for Gen Z?
Open Compute Project announces Caliptra

Thanks to this week's episode sponsor, SafeBase

Security questionnaires. If those two words sent a shiver down your spine, you need to check out SafeBase. SafeBase's Smart Trust Center is a centralized source of truth for your organization's security and compliance information. After implementing SafeBase, many companies see a 90% reduction in custom questionnaires. Imagine how much time you'd save. Visit safebase.com to find out more.

From Research to Reality: The Hewlett Packard Labs Podcast
Upcoming Hewlett Packard Labs Podcast: Silicon Design 101

From Research to Reality: The Hewlett Packard Labs Podcast

Play Episode Listen Later Sep 20, 2022 1:06


In next week's episode of the Hewlett Packard Labs podcast “From Research to Reality,” Dejan Milojicic hosts Jim Greener, Director of the Silicon Design Lab at Hewlett Packard Labs. Jim takes us through the history of silicon design and how the landscape has changed from his early career days until today. He discusses a couple of successful projects and hints at the possible future of silicon design. We also discuss how the whole industry, not only components, has been transformed through disaggregation. On a personal note, Jim is the first interviewee who grew up, went to school, and spent his whole career in one town. He explains the wonderful outdoor opportunities in and around Fort Collins, Colorado.

Sustain
Episode 82: Steve Helvie and the Open Compute Project

Sustain

Play Episode Listen Later Jun 25, 2021 40:33


Guest
Steve Helvie

Panelists
Eric Berry | Justin Dorfman | Richard Littauer

Show Notes
Hello and welcome to Sustain! The podcast where we talk about sustaining open source for the long haul. Our guest today is exceptional in many ways, so you don't want to miss this episode! On this episode, we have Steve Helvie, VP of Channel Development for the Open Compute Project (OCP). He helps to educate organizations on the benefits of open hardware designs and the value of “community-driven” engineering for the data center. Today, Steve tells us how the Open Compute Project started, how he got involved, how it generates revenue, what open hardware is, and the challenges he sees with open hardware. We also learn why Europe is always at the forefront of regulations when it comes to sustainability and designs. Download this episode now to find out much more!

[00:00:39] Eric, Richard, and Justin tell us about their backgrounds since Steve was curious.
[00:03:26] Steve tells us his background, what he does at the Open Compute Project, and explains more about open hardware.
[00:06:41] Steve mentions there are 200 projects in the Open Compute Project, and Richard wonders what the minimum entry is, what you need to be one of these projects, and how much money is needed to think about having open hardware in his company.
[00:12:04] Justin asks for Steve's insight on a supply chain attack when it comes to hardware and how the OCP fixes it.
[00:14:56] Steve talks about sustainability with “save the earth and save money,” and how Europe is always at the forefront of regulations when it comes to sustainability and designs.
[00:17:00] Steve had mentioned that he's invested in helping people have hardware and run hardware better for their own companies, and Richard sees this to be at odds with Cloud Native, so he asks Steve to talk about how he sees that conflict.
[00:18:13] Richard wonders, if Steve is helping to improve Uber's private cloud and partially the public cloud by allowing them to do work with OCP and with other managers, how has that not led towards a non-sustainable earth, and how does he reckon with that conflict.
[00:20:51] In talking about refreshing hardware, Justin tells us about a book he read called _Flash Boys_. He also tells us about how he talked to an ex-Googler when GCP was getting built, who told him that Google was importing thirty tons of hard drives every single day, and asks Steve if this is a normal thing.
[00:22:43] Richard wonders if a large amount of Steve's clients are Crypto.
[00:23:37] Eric brings up Steve's background and wonders if he had an a-ha moment, or was there a point in time where he thought this is bigger than just hardware.
[00:26:00] Steve tells us how, besides memberships, the OCP generates revenue. He talks about having to switch to virtual summits during COVID. The guys all chat about whether they've seen memberships and activities increasing in the last year since going virtual. Steve shares a staggering number of virtual attendees at his recent event.
[00:30:37] Richard wonders what challenges Steve sees for the entire field of open hardware. Steve mentions a great course he took on Open Source Technology Management that's worth checking out, provided by Brandeis University.
[00:35:29] Find out where you can follow Steve online.
Quotes

[00:08:02] “There is such a huge fear that someone's going to take my designs and copy them.”
[00:08:28] “So, what big companies like, in any company really, is they like a dual sourcing strategy.”
[00:08:40] “They like that one skew, give me consistency across the board that I can deploy in Asia, Europe, or America, but give me multiple suppliers that mitigates my supply chain risk.”
[00:10:48] “The types of companies that are looking at Open Compute are companies that have an open source mindset, they have a Cloud Native mindset where software is going to define everything.”
[00:11:26] “And that's the point of when that happens in industries you start to see this customer pull. It's happening now in Telcos. Fintech gets it, gaming gets it, traditional banking, traditional healthcare, insurance companies do not get it yet, but they will. It's going to come.”
[00:14:32] “So, there's this second user economy or what we call circular economy that's happening now within what Google, Microsoft, Facebook, all the Hyperscalers now have a second use plan because they need to for sustainability.”
[00:15:03] “What's happening in Europe is you have Europe is always at the forefront of regulations when it comes to sustainability and designs.”
[00:15:21] “There are heat reuse out of data center initiatives. For example, the Netherlands, you cannot build a new data center in the Netherlands unless you have a heat reuse.”
[00:19:11] “So, the only part that I can see that's redeeming about this fact is that OCP designs use a lot less energy, between 30-50% less energy than a normal standard server.”
[00:19:53] “We have large enterprises that are taking the hardware coming out of these Hyperscale Data Centers that oftentimes is less than three years old.”
[00:20:02] “A lot of these Hyperscalers don't even keep their hardware for more than three years and they're out of it.
That still has a lot of life; if I'm a small or medium-sized business anywhere else in the world, they can still use that hardware for five years.”
[00:34:28] “Open software, you can crank through it, iterations, sprints. Open hardware, it's very dependent on chip cycles, product cycles, and yeah, it's a lot of hurry up and wait in hardware.”

Spotlight

[00:36:32] Eric's spotlight is Gitpod.
[00:38:30] Justin's spotlights are Episodes 1-16 of Sustain the podcast are back home, and Orbit.
[00:38:59] Richard's spotlight is Strange Parts.
[00:39:21] Steve's spotlight is Jason Mauck and his podcast called Mauck Me.

Links

Steve Helvie Twitter (https://twitter.com/stevehelvie)
Steve Helvie LinkedIn (https://www.linkedin.com/in/steve-helvie-37935712)
Steve@opencompute.org (mailto:steve@opencompute.org)
Open Compute Project (https://www.opencompute.org/)
Open Compute Project Membership Tiers (https://www.opencompute.org/membership)
Open Compute Project Open System Firmware (https://www.opencompute.org/projects/open-system-firmware)
Flash Boys: A Wall Street Revolt by Michael Lewis (https://www.amazon.com/Flash-Boys-Wall-Street-Revolt/dp/0393351599)
Committing To Cloud Native podcast-Google Cloud, Hay-doop, Mars Rover, AWS and more with Miles Ward of SADA-Episode 3 (https://podcast.curiefense.io/3)
Brandeis University-Certificate in Open Source Technology Management micro courses (https://www.brandeis.edu/gps/professional-development/micro-courses/ostm/index.html)
Sustain podcast-What OpenUK Does with Amanda Brock and Andrew Katz-Episode 49 (https://podcast.sustainoss.org/49)
Gitpod (https://www.gitpod.io/)
Sustain podcast-Episodes 1-5 (https://podcast.sustainoss.org/page/7)
Sustain podcast-Episodes 6-16 (https://podcast.sustainoss.org/page/6)
Orbit (https://orbit.love/)
Strange Parts (https://strangeparts.com/)
Jason Mauck Twitter (https://twitter.com/jasonmauck1)
Mauck Me podcast (https://mauckme.podbean.com/)
Credits

Produced by Richard Littauer (https://www.burntfen.com/)
Edited by Paul M. Bahr at Peachtree Sound (https://www.peachtreesound.com/)
Show notes by DeAnn Bahr at Peachtree Sound (https://www.peachtreesound.com/)
Special Guest: Steve Helvie.

Exchanges on Exchangers
Open Compute Project Foundation (OCP)

Exchanges on Exchangers

Play Episode Listen Later Apr 26, 2021 43:18


Check out our latest podcast with Archna Haylock and Donald Mitchell about the Open Compute Project Foundation. From PUE, sustainability, and liquid cooling to... thread standards... we covered it all!

Packet Pushers - Full Podcast Feed
Network Break 287: Open Compute Infrastructure Makes Its Mark; Cisco Live Postponed

Packet Pushers - Full Podcast Feed

Play Episode Listen Later Jun 8, 2020 39:07


Today's Network Break podcast discusses the growth of the infrastructure market based on Open Compute specs, the decline in switch and routing revenues, Cisco postponing its 2020 Cisco Live virtual event, VMware's latest acquisition, and more tech news.


Ask Noah Show
Episode 180: The Open Compute Project with Bill Carter

Ask Noah Show

Play Episode Listen Later May 12, 2020 56:30


Bill Carter, the CTO of the Open Compute Project, joins us to discuss a new way of building a rack for a data center. The OCP uses a tool-less, modern, efficient design, and best of all, the plans are open and available! -- The Extra Credit Section -- For links to the articles and material referenced in this week's episode, check out this week's page from our podcast dashboard! This Episode's Podcast Dashboard (http://podcast.asknoahshow.com/180) Phone systems for Ask Noah provided by Voxtelesys (http://www.voxtelesys.com/asknoah) Join us in our dedicated chatroom #AskNoahShow on Freenode! -- Stay In Touch -- Find all the resources for this show on the Ask Noah Dashboard Ask Noah Dashboard (http://www.asknoahshow.com) Need more help than a radio show can offer? Altispeed provides commercial IT services, and they're excited to offer you a great deal for listening to the Ask Noah Show. Call today and ask about the discount for listeners of the Ask Noah Show! Altispeed Technologies (http://www.altispeed.com/) Contact Noah live [at] asknoahshow.com -- Twitter -- Noah - Kernellinux (https://twitter.com/kernellinux) Ask Noah Show (https://twitter.com/asknoahshow) Altispeed Technologies (https://twitter.com/altispeed)

Ask Noah HD Video
The Open Compute Project with Bill Carter

Ask Noah HD Video

Play Episode Listen Later May 12, 2020


Bill Carter, the CTO of the Open Compute Project joins us to discuss a new way of building a rack for a data center. The OCP uses a tool-less, modern, efficient design and best of all - the plans are open and available!

IT Talks
31 Servers and network components (no)

IT Talks

Play Episode Listen Later May 7, 2020 23:47


Automation and open networking: a journey in time, with challenges and possibilities, together with our expert on the subject, Tore Anderson. A comprehensive talk about the development of automation for servers and networks. Servers today are highly automated, with plenty of administration tools such as Puppet, Chef, Ansible, Salt, and so on. With networks it is a different story: progress has not been as fast, and devices have traditionally been proprietary. However, with projects like Open Compute and white-box switches, things have started to happen.
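The server-side automation the episode describes treats configuration as data rendered from templates. A minimal Python sketch of that idea (the inventory, VLAN numbers, and config template are invented for illustration; tools like Ansible or Puppet do this at much larger scale and push the result to real devices):

```python
# Render per-switch configs from one template, the way server-style
# automation tools treat network devices as data rather than hand-typed CLI.
from string import Template

# Hypothetical inventory: hostname -> management VLAN
INVENTORY = {"leaf01": 10, "leaf02": 20}

CONFIG_TEMPLATE = Template(
    "hostname $hostname\n"
    "vlan $vlan\n"
    " name mgmt\n"
)

def render_configs(inventory):
    """Return {hostname: config_text} for every device in the inventory."""
    return {
        host: CONFIG_TEMPLATE.substitute(hostname=host, vlan=vlan)
        for host, vlan in inventory.items()
    }

for host, cfg in render_configs(INVENTORY).items():
    print(f"--- {host} ---")
    print(cfg)
```

The point of the sketch is the separation the episode hints at: once device state is data, the same diffing, review, and rollout workflows used for servers apply to switches too.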

Bits vs Bytes
080 - Community-Driven Engineering with the Open Compute Project

Bits vs Bytes

Play Episode Listen Later May 4, 2020 38:30


Steve Helvie is the VP of Channel for the Open Compute Project (OCP). In this role he helps educate organisations on the benefits of open hardware designs and the value of “community-driven” engineering for the data centre. In this podcast we discuss how open-sourcing hardware is driving innovation and the creation of better, more efficient data centres.

Kernel of Truth
Open Compute Project Summit 2020

Kernel of Truth

Play Episode Listen Later Feb 25, 2020 21:55


Subscribe to Kernel of Truth on iTunes, Google Play, Spotify, Castbox and Stitcher! Click here for our previous episode. Early March is a busy time here at Cumulus Networks, and part of the reason is the Open Compute Project Summit. Kernel of Truth hosts Brian O’Sullivan and Roopa Prabhu are joined by Scott Emery, project lead at …

David Bombal
#27: What is Whitebox / Bare Metal switching? Open Compute Project? OpenStack?

David Bombal

Play Episode Listen Later Jun 1, 2019 12:59


Chuck's Python course (discounted at $10): https://bit.ly/2lsDZeo Chuck's SDN book: https://amzn.to/2lCp6WN Chuck's SDN startup: http://www.tallac.com Connect with Chuck on LinkedIn here: https://www.linkedin.com/in/chuck-black-1017676/ Connect with David on LinkedIn here: https://www.linkedin.com/in/davidbombal/ In this series of videos, David Bombal, CCIE, and Chuck Black, developer, discuss the future of networking: SDN, network automation, network programmability, APIs, NETCONF, REST APIs, and lots of other technologies! What is whitebox / bare-metal switching? The Open Compute Project? OpenStack? Cumulus Linux: https://cumulusnetworks.com/products/cumulus-linux/ Open Compute Project: https://www.opencompute.org/ Open vSwitch: http://www.openvswitch.org/ OpenStack: https://www.openstack.org/ Learn from someone who wrote SDN code, who wrote SDN books, who understands how SDN code actually works. David's details: YouTube: www.youtube.com/davidbombal Twitter: twitter.com/davidbombal Instagram: www.instagram.com/davidbombal/ LinkedIn: www.linkedin.com/in/davidbombal/ #Whitebox #Bare_Metal_Switching #Whitebox_Switching

DataCentric Podcast
April 2, 2019: NVIDIA in the DataCenter, Nutanix Everywhere, Open Compute, & more on Huawei

DataCentric Podcast

Play Episode Listen Later Apr 1, 2019 44:44


This week Matt & Steve talk through some of what they learned at recent industry events, including the moves NVIDIA is making into the data center, some discussion of containers in the enterprise, a brief wrap-up from Nutanix's investor conference in New York, some light thinking following Matt's first missed Open Compute Summit in a decade, and a bit of non-political discussion on Huawei. If you follow data center technology, as well as the players behind what's happening, you'll be in good company with Matt Kimball and Steve McDowell from Moor Insights & Strategy and the DataCentric Podcast. Timeline 00:30 - 08:00 NVIDIA GTC Wrap-Up: NVIDIA heads into the enterprise 08:00 - 11:00 Impact of containers on data center management frameworks 11:00 - 21:20 Nutanix Investor Day Wrap-Up: NTNX takes aim 21:20 - 30:00 Open Compute Summit: Why it still matters 30:00 - 42:30 Huawei: A serious player, despite all the geopolitical noise around the company 42:30 - 44:44 Senseless Banter

En Liten Podd Om It
En Liten Podd Om IT - Avsnitt 206 - De klassade sig själv som skadlig kod

En Liten Podd Om It

Play Episode Listen Later Mar 18, 2019 68:29


If the show notes look odd, they are also available on the web here: https://www.enlitenpoddomint.se/e/en-liten-podd-om-it-avsnitt-206  This is episode 205, recorded on March 10, and since married men leave bigger tips than single men (according to this book), today's episode is about:   FEEDBACK AND BACKLOG * Mats is "happier" than usual, David is just as "cheerful", Björn is working in sales as usual, and Johan has had no sick-child leave this week. * Microsoft denies any collaboration with an app that tracks Muslims * Facebook has removed 1.5 million copies of the video of the massacre in New Zealand    * BONUS LINK: Facebook describes how it wants to stop "revenge porn"    * BONUS LINK: a REALLY grim article about how Facebook intends this to work    * BONUS LINK: a former Facebook employee describes what it is like to review reported posts * There are old bugs in WinRAR. Make sure to update     * BONUS LINK: An example of an application that can be used for patching. (NOTE: we are NOT sponsored or anything like that)      MICROSOFT: * Thought: Is Microsoft's future enterprise and gaming? (The consumer market is not really one of them) * The Open Compute Project. Something for ordinary mortals to care about? Microsoft seems to care quite a lot  * You can specialize in anything!! Go to a conference in San Jose, Sunday 17/3 - Thursday 21/3 * Win10 19H1 is pretty much done now?  The watermark is gone.        * BONUS LINK: a SUPER-long list of what's new * 800 million Windows 10 computers…    * BONUS LINK: Global market share for Windows 7    * BONUS LINK: Shipments of Chromebooks 2014-2023    * BONUS LINK: Global Apple Mac sales 2002-2018    * BONUS LINK: market-share split between operating systems     APPLE: * A bigger iPad? Do I have to care?     
* BONUS LINK: Apple will hold an event at the end of the month  * Nice new TV from LG (with HomeKit) * Spotify files a complaint against Apple   GOOGLE: * Antivirus on phones is good… but only if you picked the right antivirus    * BONUS LINK: The test itself: https://www.av-comparatives.org/tests/android-test-2019-250-apps/   * Android Q gets privacy features   * Nice for us iPhone people! The Google keyboard on iOS gets support for Google Translate       TIPS: * Where in the World Is Carmen Sandiego * Low Tech Magazine     GADGET LIST: * David: https://www.sfbok.se/produkt/illuminati-2nd-edition-184099 * Björn: A gaming PC for his son * Mats: Pixel C! * Johan: https://www.engadget.com/2019/03/15/electric-mustang-teaser/     THINGS WE DIDN'T GET TO: * https://www.theverge.com/2019/3/15/18266998/microsoft-skype-group-video-calls-50-participants * WWDC will be in San Jose, June 3-7: https://developer.apple.com/wwdc19/      OUR LINKS: * En Liten Podd Om IT on the web * En Liten Podd Om IT on Facebook     WHERE TO FIND THE PODCAST: * Apple Podcasts (iTunes) * Overcast * Acast * Spotify * Stitcher

Cloud Engineering – Software Engineering Daily
Open Compute Project with Steve Helvie

Cloud Engineering – Software Engineering Daily

Play Episode Listen Later Aug 14, 2017 59:10


Facebook was rapidly outgrowing its infrastructure in 2009. Classic data center design was not up to the task of the rapid influx of new users and data, photos, and streaming video hitting Facebook's servers. A small team of engineers spent the next two years designing a data center from the ground up to be cheaper… The post Open Compute Project with Steve Helvie appeared first on Software Engineering Daily.

Tech Café
Sac à puces !

Tech Café

Play Episode Listen Later Mar 23, 2017 123:25


CPNews: The D-Wave shifts into second gear, but lacks interconnects… And IBM is preparing its 50-qubit general-purpose quantum computer. PowerVR announces its new generation of GPUs: Furian. Xiaomi also launches its in-house chip: the S1. Preserving interconnects with graphene, a solution for the future? Finally, a CPU that resists the charms of Venus... A tick-tock, Emile? Intel goes overboard and launches a Xeon at $8,898. 24 cores, but still! Process - Architecture - Optimization - Snooze? After a pointless Kaby Lake on the desktop, the Core 8xxx parts will still be on 14nm... The return of AMD: Ryzen is hurting badly in PCs… and in servers too! An ARM CPU in MacBooks, the beginning of the end for Intel? Microsoft launches its Open Compute project "Olympus" with Intel / AMD / ARM inside... Sac à puces (a bag of chips): the SoC dossier. From bit slices to "systems on chip": 40 years of integration. The major CPU families: ARM, x86, MIPS (yes, really!), PowerPC, etc. The chip designers: ARM, Apple, Intel, AMD, PowerVR, Freescale, NVIDIA, Samsung, MediaTek… The device vendors: Apple, Sony, Samsung, Xiaomi, HP, Dell, Nintendo, Microsoft, etc. The designers' implementations: Cortex, Denver, Mongoose, Kryo, Twister, Atom, Core, Jaguar, Ryzen, and the others… The major GPU families: GeForce, Radeon, Adreno, Mali, Intel GT, PowerVR… The other components: DSPs (Hexagon…), video codecs (Quick Sync), audio chips, memory / USB / Ethernet / modem controllers... The internal buses... The foundries: Intel, GloFo (which recently absorbed IBM's semiconductor business), TSMC, Samsung, STMicro, NXP, Texas Instruments… The Lego bricks: the systems-on-chip that end up in finished products: Apple A7, Exynos 8xxx, Core i3-XXXX, Atom whatever, AMD A10-XXX, Snapdragon 835, Tegra X1, etc.
SoCking: plenty of examples in one BIG TABLE:
Product | Brand | SoC | Family | Architecture | GPU | Process | Foundry
iPhone 6 | Apple | A9 | ARMv8 | Apple Twister | Imagination PowerVR | 16nm | TSMC
G5 | LG | Snapdragon 820 | ARMv8 | Qualcomm Kryo | Adreno 530 | 14nm | Samsung
Galaxy S7 | Samsung | Exynos 8890 | ARMv8 | Samsung M1 (Mongoose) | ARM Mali T760 | 14nm | Samsung
Mate 8 | Huawei | HiSilicon Kirin 950 | ARMv8 | ARM Cortex A72 | ARM Mali T880 | 16nm | TSMC
Surface Pro 4 | Microsoft | Core i5-6300U | Intel | Intel Core ("Skylake") | Intel HD 520 | 14nm | Intel
Surface 3 | Microsoft | Atom X7 | Intel | Intel Atom ("Cherry Trail") | Intel HD | 14nm | Intel
Transformer T100 | Asus | Atom Z3740 | Intel | Intel Atom ("Bay Trail") | Intel | 22nm | Intel
Switch | Nintendo | NVIDIA Tegra X1 | ARMv8 | ARM Cortex A57 | NVIDIA GeForce | 20nm | TSMC?
Wii U | Nintendo | Espresso | PowerPC | IBM PowerPC | AMD Radeon ("Latte") | 45nm | IBM
Xbox 360 | Microsoft | Xenon | PowerPC | IBM PowerPC | Xenos (Radeon X1900) | 90 then 65nm | IBM then Chartered
PS3 | Sony | Cell | PowerPC | Cell | NVIDIA RSX | 90, 65 and 45nm | IBM
PS Vita | Sony | CXD5315GG | ARM | ARM Cortex A9 | Imagination PowerVR | ?? | Samsung?
PS4 | Sony | CXD90026G | Intel | AMD Jaguar | AMD Radeon | 28nm | GloFo
Xbox One | Microsoft | X887732 | Intel | AMD Jaguar | AMD Radeon | 28nm | GloFo
NES Mini | Nintendo | Allwinner R16 | ARM | ARM Cortex A7 | ARM Mali 400 | 28nm? | ?
Raspberry Pi 3 | Raspberry | Broadcom BCM2837 | ARMv8 | ARM Cortex A53 | Broadcom VideoCore IV | 40nm | Sony?
3DS | Nintendo | Nintendo 10480H | ARM | ARM11 | DMP PICA200 | 45nm | ??
Aura HD (e-reader) | Kobo | Freescale iMX507 | ARM | ARM Cortex A8 | N/A | ?? | NXP
SmartWatch 3 | Sony | Snapdragon 400 | ARM | ARM Cortex A7 | Adreno 305 | 28nm | TSMC
Chromecast 2 | Google | Marvell Armada 1500 | ARM | ARM Cortex A7 | ?? | ?? | ??
R7000 (router) | Netgear | BCM4709A0 (Broadcom) | ARMv7 | ARM Cortex A9 | N/A | 40nm | ??
ES8000 (TV) | Samsung | Samsung Echo-P | ARMv7 | ARM Cortex A9 | ARM Mali 400 | ??? | Samsung
Mindstorm (Lego toy) | Lego | EV3 | ARM | ARM9 | N/A | ?? | ??
Bebop 2 drone | Parrot | Parrot P7 | ARM | ARM Cortex A9 | ARM Mali 400 | ??? | ???
Pepper Robot (1.6) | Aldebaran Robotics | Atom E3845 | Intel | Intel Atom | Intel HD | 22nm | Intel
Time to zap: Math! How does a machine compute a square root? It happens even to the best: the Pentium bug…
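The episode's closing question, how a machine computes a square root, can be sketched with Newton's method, the classic iteration behind many library implementations. A minimal Python illustration (not the actual hardware algorithm discussed in the episode):

```python
def newton_sqrt(x: float, tol: float = 1e-12) -> float:
    """Approximate sqrt(x) with Newton's method: g <- (g + x/g) / 2."""
    if x < 0:
        raise ValueError("square root of a negative number")
    if x == 0:
        return 0.0
    guess = x if x >= 1 else 1.0  # any positive starting point converges
    while abs(guess * guess - x) > tol * x:
        guess = (guess + x / guess) / 2
    return guess

print(newton_sqrt(2.0))  # prints approximately 1.41421356
```

Each iteration roughly doubles the number of correct digits, which is why a handful of steps suffice even at double precision.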

Power Systems Design PSDCast
PSDcast - Nathan Tracy of TE Connectivity on Open Compute Project hardware

Power Systems Design PSDCast

Play Episode Listen Later Dec 1, 2016


Power Systems Design, Information to Power Your Designs

Packet Pushers - Fat Pipe
PQ Show 94: The State Of Open Compute Networking

Packet Pushers - Fat Pipe

Play Episode Listen Later Oct 6, 2016


The Packet Pushers catch up on the latest developments from the Open Compute Project on networking, including new efforts that target the campus and WLANs, with guest Carlos Cardenas. The post PQ Show 94: The State Of Open Compute Networking appeared first on Packet Pushers.

Packet Pushers - Full Podcast Feed
PQ Show 94: The State Of Open Compute Networking

Packet Pushers - Full Podcast Feed

Play Episode Listen Later Oct 6, 2016


The Packet Pushers catch up on the latest developments from the Open Compute Project on networking, including new efforts that target the campus and WLANs, with guest Carlos Cardenas.

Packet Pushers - Priority Queue
PQ Show 94: The State Of Open Compute Networking

Packet Pushers - Priority Queue

Play Episode Listen Later Oct 6, 2016


The Packet Pushers catch up on the latest developments from the Open Compute Project on networking, including new efforts that target the campus and WLANs, with guest Carlos Cardenas.

RCR Wireless News
Innovation and the Open Compute Project - Hetnet Happenings: Episode 52

RCR Wireless News

Play Episode Listen Later Apr 4, 2016 7:50


On this episode of HetNet Happenings, host Sean Kinney, the managing editor for RCR Wireless News, takes a look at how the Open Compute Project is fostering innovation in the telecom industry with the adoption of white box servers and other hyperscale data center practices.

The Cloudcast
The Cloudcast #194 - DevOps Down to the Rack Level

The Cloudcast

Play Episode Listen Later Jun 10, 2015 23:09


Aaron talks to Cole Crawford (CEO/Founder of Vapor.io, founding executive director of the Open Compute Project and co-founder of OpenStack) about momentum for Open Compute, rethinking how data center racks are designed, and the Vapor.io stack - OpenMist OS, Open DCRE and CORE. Interested in the O'Reilly OSCON? Want to register for OSCON now? Use promo code 20CLOUD for 20% off. Details to win an OSCON pass coming soon! Check out the OSCON Schedule Free eBook from O'Reilly Media for Cloudcast Listeners! Check out an excerpt from the upcoming Docker Cookbook Links from the show: Vapor Homepage - http://www.vapor.io/ Topic 1 - Tell us about your background. It’s very extensive in both open source (software) and open hardware. Topic 2 - The company is described as “the first hyper converged and truly data defined data center solution”. Please translate that for us :) Topic 3 - For a small company, you have some large (conceptual) offerings - common hardware, rack-level provisioning, and this unique new rack model. Just how ambitious are you guys? (hardware with APIs!) Topic 4 - OpenMist OS (just launched). Let’s talk about each of the core pieces - Open DCRE (Data Center Runtime Environment). Is this an open BMC (Board Management Controller)? Topic 5 - Vapor CORE - This seems like RAID (storage) meets BGP / HSRP (networking) and compute scheduling (vCenter) all mashed together, with APIs to higher-level services (e.g. Mesosphere or Docker) Topic 6 - Vapor Chamber - at first glance, this seems like the Big Green Egg (grill) for data center equipment. Fair analogy? Music Credit: Nine Inch Nails (nin.com)

Intel Chip Chat
Making the Open Compute Vision a Reality – Intel® Chip Chat episode 373

Intel Chip Chat

Play Episode Listen Later Mar 10, 2015 10:04


Raejeanne Skillern, General Manager of the Cloud Service Provider Organization within the Data Center Group at Intel, explains Intel’s involvement in the Open Compute Project and the technologies Intel will be highlighting at the 2015 Open Compute Summit in San Jose, California. She discusses the launch of the new Intel® Xeon® Processor D-1500 Product Family, as well as how Intel will be demoing Rack Scale Architecture and other solutions at the Summit that are aligned with OCP specifications.

Intel: Intelligent Storage
Open Compute Storage Servers built by Wiwynn with Intel Atom processor C2000 product family

Intel: Intelligent Storage

Play Episode Listen Later Oct 16, 2013


Intelligent Storage: Learn about modularized, high-availability, high-density storage servers built by Wiwynn, featuring 30 individually hot-pluggable SAS/SATA HDDs and based on the Intel Atom processor C2000 product family.