Podcasts about Fibre Channel

data transfer protocol

  • 26 podcasts
  • 63 episodes
  • 37m average episode duration
  • 1 new episode per month
  • Latest episode: Feb 3, 2025
Fibre Channel popularity trend, 2017–2024


Best podcasts about Fibre Channel

Latest podcast episodes about Fibre Channel

Packet Pushers - Full Podcast Feed
NB512: US Objects to HPE-Juniper Wedding; Cheeky DeepSeek Freaks VCs

Packet Pushers - Full Podcast Feed

Play Episode Listen Later Feb 3, 2025 29:55


Take a Network Break! The US Justice Department blocks the HPE-Juniper merger with a surprise lawsuit, DeepSeek shakes up the AI world, and Broadcom rolls out quantum-safe Fibre Channel controllers. Sweden seizes a vessel suspected of tampering with a subsea cable, a code update could make Linux significantly more power-efficient, and the WLAN market gets...

Packet Pushers - Network Break
NB512: US Objects to HPE-Juniper Wedding; Cheeky DeepSeek Freaks VCs

Packet Pushers - Network Break

Play Episode Listen Later Feb 3, 2025 29:55


Take a Network Break! The US Justice Department blocks the HPE-Juniper merger with a surprise lawsuit, DeepSeek shakes up the AI world, and Broadcom rolls out quantum-safe Fibre Channel controllers. Sweden seizes a vessel suspected of tampering with a subsea cable, a code update could make Linux significantly more power-efficient, and the WLAN market gets...

Packet Pushers - Fat Pipe
NB512: US Objects to HPE-Juniper Wedding; Cheeky DeepSeek Freaks VCs

Packet Pushers - Fat Pipe

Play Episode Listen Later Feb 3, 2025 29:55


Take a Network Break! The US Justice Department blocks the HPE-Juniper merger with a surprise lawsuit, DeepSeek shakes up the AI world, and Broadcom rolls out quantum-safe Fibre Channel controllers. Sweden seizes a vessel suspected of tampering with a subsea cable, a code update could make Linux significantly more power-efficient, and the WLAN market gets...

Gestalt IT
Ethernet Won't Replace InfiniBand for AI Networking in 2024

Gestalt IT

Play Episode Listen Later Jan 16, 2024 26:14


InfiniBand is the king of AI networking today. Ethernet is making a big leap to take some of that market share, but it's not going to dethrone the incumbent any time soon. In this episode, join Jody Lemoine, David Peñaloza, and Chris Grundemann along with Tom Hollingsworth as they debate the merits of using Ethernet in place of InfiniBand. They discuss the paradigm shift, the suitability of each protocol to the workloads, and how Ultra Ethernet resembles an earlier shift to a converged protocol: Fibre Channel over Ethernet.

Packet Pushers - Full Podcast Feed
Network Break 443: Nuclear DCs, Mobile Cars, Fibrechannel, Open Source And Cheese

Packet Pushers - Full Podcast Feed

Play Episode Listen Later Aug 21, 2023 43:35


We've got more Sturm und Drang in the open source license debate, cars that don't work without a network, something mumble something Fibre Channel, a security acquisition by Check Point, cheesy microchips, and more.

Packet Pushers - Network Break
Network Break 443: Nuclear DCs, Mobile Cars, Fibrechannel, Open Source And Cheese

Packet Pushers - Network Break

Play Episode Listen Later Aug 21, 2023 43:35


We've got more Sturm und Drang in the open source license debate, cars that don't work without a network, something mumble something Fibre Channel, a security acquisition by Check Point, cheesy microchips, and more.

Packet Pushers - Fat Pipe
Network Break 443: Nuclear DCs, Mobile Cars, Fibrechannel, Open Source And Cheese

Packet Pushers - Fat Pipe

Play Episode Listen Later Aug 21, 2023 43:35


We've got more Sturm und Drang in the open source license debate, cars that don't work without a network, something mumble something Fibre Channel, a security acquisition by Check Point, cheesy microchips, and more.

StorageReview.com - Storage Reviews
Podcast#117: Fibre Channel Still Tops for Virtualization

StorageReview.com - Storage Reviews

Play Episode Listen Later Mar 14, 2023


Brian invited his friend Nishant Lodha, Marvell’s Director of Emerging Technologies, to sit for…

The Marvell Essential Technology Podcast
S1 EP16 - The Algebra of Storage Fabrics

The Marvell Essential Technology Podcast

Play Episode Listen Later May 25, 2022 20:57


Nishant Lodha, Director of Product Marketing – Emerging Technologies, and Brian Beeler, Editor in Chief at Storage Review, discuss the Algebra of Storage Fabrics. Brian is one of the most renowned voices in enterprise storage and the force behind StorageReview.com, a world-leading independent storage authority providing in-depth news coverage, hands-on evaluation, detailed reviews, and consulting on everything enterprise storage: arrays, hard drives, SSDs, networking, and storage fabrics. Join their conversation on some of the latest trends in NVMe-oF, Fibre Channel, and much more! Read the Storage Review article, Marvell Doubles Down On FC-NVMe: https://bit.ly/3KKo8C6. Learn more: https://bit.ly/3N4JRpz

The Marvell Essential Technology Podcast
S1 EP10 - The Revolution in Fibre Channel

The Marvell Essential Technology Podcast

Play Episode Listen Later Feb 15, 2022 11:06 Transcription Available


Todd Owens, Field Marketing Director, and Nishant Lodha, Director of Product Marketing – Emerging Technologies, continue discussing Fibre Channel as the gold standard protocol for connecting shared storage to servers. End customers, channel partners, and OEM customers are particularly focused on virtualization, security, and the transition from SCSI to NVMe, all while demanding low cost, persistent storage, and low latency. Hear Todd and Nishant's perspectives and the Marvell solutions that enable customers and partners to increase storage workloads with optimized security.

Screaming in the Cloud
The Proliferation of Ways to Learn with Serena (@shenetworks)

Screaming in the Cloud

Play Episode Listen Later Feb 3, 2022 34:31


About Serena Serena is a Network Engineer who specializes in Data Center Compute and Virtualization. She has degrees in Computer Information Systems with a concentration on networking and information security and is currently pursuing a master's in Data Center Systems Engineering. She is most known for her content on TikTok and Twitter as Shenetworks. Serena's content focuses on networking and security for beginners which has included popular videos on bug bounties, switch spoofing, VLAN hoping, and passing the Security+ certification in 24 hours.Links: Cisco cert Discord study group:https://discord.com/invite/uXQ8yWnN8a Beacons:https://beacons.page/shenetworks TikTok:https://www.tiktok.com/@shenetworks sysengineer's TikTok:https://www.tiktok.com/@sysengineer Twitter:https://twitter.com/notshenetworks TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: This episode is sponsored in part by our friends at Sysdig. Sysdig is the solution for securing DevOps. They have a blog post that went up recently about how an insecure AWS Lambda function could be used as a pivot point to get access into your environment. They've also gone deep in-depth with a bunch of other approaches to how DevOps and security are inextricably linked. To learn more, visit sysdig.com and tell them I sent you. That's S-Y-S-D-I-G dot com. My thanks to them for their continued support of this ridiculous nonsense.Corey: Today's episode is brought to you in part by our friends at MinIO the high-performance Kubernetes native object store that's built for the multi-cloud, creating a consistent data storage layer for your public cloud instances, your private cloud instances, and even your edge instances, depending upon what the heck you're defining those as, which depends probably on where you work. It's getting that unified is one of the greatest challenges facing developers and architects today. It requires S3 compatibility, enterprise-grade security and resiliency, the speed to run any workload, and the footprint to run anywhere, and that's exactly what MinIO offers. With superb read speeds in excess of 360 gigs and 100 megabyte binary that doesn't eat all the data you've gotten on the system, it's exactly what you've been looking for. Check it out today at min.io/download, and see for yourself. That's min.io/download, and be sure to tell them that I sent you.Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. Today's guest was on relatively recently, but it turns out that when I have people on the show to talk about things, invariably I tend to continue talking to them about things and that leads down really interesting rabbit holes. Today is a stranger rabbit hole than most. Joining me once again is @SheNetworks or Serena [DiPenti 00:00:51]. Thanks for coming back and subjecting yourself to, basically, my nonsense all over again in the same month.Serena: Thanks for having me back. Excited.Corey: So, you have a, I think study group is the term that you're using. I don't know how to describe it in a way that doesn't make me sound ridiculous and describing and speaking with my hands and the rest. It's a Discord, as the kids of today tend to use. 
There are some private channels on an existing Discord group, and we'll get to the mechanics of that in a second. But it's a study group for various Cisco certifications, which it's been a while since I had one; my CCNA is something I took back in 2009. I've checked, it's expired to the point where they can't even look it up anymore to figure out who I might have been, once upon a time. What is this group and where did it come from?Serena: Yeah, so the Discord itself is kind of a collective of a bunch of people that are creators on TikTok. And it's just, like, a cool place to connect, especially people from TikTok join, people from Twitter join, they want to interact, you know, a great place to get resources if you're early in your career. I—you know, new year, new me resolution was [laugh] I wanted to start studying for the CCNP a little bit, and I've been doing it pretty loosely for a while, but I kind of was like, all right, time to actually sit down and dedicate some real time to this. And I put on Twitter, you know, if anybody else was interested—I know there's other various study groups out there and things like that, but I was just like, hey, you know, it was anyone interested and a study group and I got really good response. Of course, a lot of people are at the CCNA level, so I made a channel for CCNA and CCNP, so whatever level you're at, you can come in and ask questions. It's really great.Corey: One thing that irked me when I first joined, as well, there's no CCENT which was sort of the entry-level Cisco cert, the first half of the CCNA, and I did a bit of googling before shooting my mouth off. And it turns out that Cisco sunset that cert a while back, so CCNA is now the entry-level cert, as I understand it.Serena: Yeah. So, when I did my CCNA, I did the C-C-E-N-T—the CCENT, and then the ICND2, and that's how I got my CCNA. And then I went and got the Data Center CCNA, which was two exams… two? Or maybe it was just one. I can't remember fully. But they basically got rid of all of their CCNAs and created one new one that's just the CCNA Enterprise.Corey: What I found worked out for me when I was going through the process of getting the CCNA—the CCENT, I forget how at the time, came along for the ride. And it was the CCENT, the baseline stuff that really added value to my entire career. That piece of advice that I would give anyone in the technical space is when your hand-waving over a thing you don't really understand. Maybe stop doing that one afternoon when you don't have anything else going on, dig into it.For me, it was always, “What the hell is a subnet mask?” “I don't know. It's the thing that I put the right numbers in, the box stops turning gray and will turn black and let me click the button; life goes on.” Figuring out what that meant and how it was calculated was interesting and it made me understand what's going on at a deeper level. Which means that invariably when things break as—they're computers; they break—I could have a better understanding of the holistic system and ideally have a better chance of getting to an outcome of fixing it.So, I'm not sitting here suggesting that anyone who wants to, “Oh, you want to work in the cloud and go and build things out on top of AWS or GCP. 
Great, go and get a Cisco certification is the first stop along your journey.” But understanding how the network works is absolutely going to serve you well for the rest of your technical career because not a lot has changed in the networking sense over the past 13 years since I sat the certification exam. It turns out that the TCP handshake still works the same way: Badly.Serena: [laugh]. Yeah, and to your point, the troubleshooting part is really where you need that depth of knowledge, right? And that's typically when it's crunch time and things are gone awry. And you really need to have an understanding of okay, is it the subnet mask? And the quicker that you can identify that outage, that problem, the quicker you get a resolution. And you do need depth of knowledge for that, and understanding that kind of underlying infrastructure is so helpful.Corey: And that was always the useful part of the certification—and the exam that went along with it—to me was, “Okay, with a subnet mask of whatever you're talking about here, great. How many usable IP addresses are there in the network?” And yeah, that's the kind of thing that we really care about.The stuff that drove me nuts was the other half of it, where it's the, “Ah, what is the proper syntactical command on the Cisco command line to display this thing?” And it's, “First, I can probably look that up or tab-complete it or whatnot. Secondly, I get it's a Cisco exam, but this is a world where interoperability is very much a thing and it is incredibly likely that the thing I need to find that out on is not going to ultimately be a Cisco device, once I'm working in enterprise.”Serena: Yeah, I do have similar feedback when it comes to that because right now, I've been trying to do kind of a chapter a day out of the Cisco Press book, and that's my main source of studying right now. I like to read a lot, so reading is usually my main method of studying, I guess. But I'm in a chapter right now that's, like, 100 pages of just hardware specifics. And we're talking about, like, PCIe cards and VICs and the different models and which ports are unified and you can configure for Fibre Channel, and which are uplink on the different generations. And I'm like, “Ohh.”I hate that. It's my least favorite part of studying because for that, I mean, I always just pull up the documentation. And it's like, “Okay, here's the ports that can be, you know, configured as Fibre Channel over Ethernet or Fibre Channel,” or whatever. Remembering it off the top of my head, which model, which year, which ports, I'm not great with that. And I don't think it's, honestly, that valuable when it comes to certification exams because you really should be using the documentation when you are doing those types of configurations between hardware and generations and compatibility.Corey: We sort of see the same thing in the development space, where, okay, the job we're hiring you to do is to work on some front end work and change how things are rendered, but when we're doing the job interview for that role, oh, now we have an empty whiteboard, we want you to write syntactically valid code that will implement some sorting algorithm or whatnot, while some condescending jerk sits there. And, “Nope, that's not it,” in the background in a high-pressure environment because for that jackwagon, it's any given Thursday, but for you, it determines the next phase of your career. And I hated that stuff. 
Whereas in the real world, I'm not going to be implementing an algorithm like that in any realistic sense; I'll be using the one built into whatever language I'm using. It's important from a computer science perspective to know it, but from a day-to-day job environment, not so much.And I can't recall the last time that I had to fix a technical issue where I did not have the internet as a resource while I was fixing that issue, even when it's the internet is down because it turns out without the network, I just have a whole bunch of expensive space heaters here, great, my phone still worked. I could check, “Oh, what is the command to get back into that firewall?” That it turns out, I just locked myself out of by—yeah, it turns out when you close a port and you're using that port, mistakes show.Serena: Yeah, I agree with that. And I mean, that goes into the much broader conversation of technical interviews because even as a network engineer, one time I had a whiteboard technical interview where they were asking, like, routing questions, but I didn't have access to any equipment, and so it was just basically asking them questions. And I'm a very visual person, so for me to not be able to, like, kind of put my hands on something and, like, run some commands and look over it myself. I did so horribly in that interview, and I left feeling just, like—I left feeling really bad about myself, honestly, because I had done so bad. And for me, I was assuming they were using some routing protocol. And they're like, “No, it's actually all statically configured.” And I was like, I would be able to know that if I could run commands and, like, actually look. But it was so bad.Corey: Right. And it's stressful working in front of people. I know that whatever I'm typing in front of an audience, I don't do it, but it feels like what I did first is, all right, let me put my mittens on, and then I—because I can't type to save my life, and I look incompetent across five different levels at that point. And yeah, it's these contrived problems. One of the things I like about the study group is when there's a question that is, I guess, not the answer, I would expect, it's okay, we can talk about that. Give me more context behind why.I thought it was this. Clearly, I'm missing something—or the bot is broken—so what is going on here? Help me understand why this is the way that it is? And back when I was learning how this stuff all worked, I went through originally a class at a community college and then finished it up with apparently with sort of a brain dump style boot camp, which I didn't really realize was a thing until after the fact. It was just memorization of these things.Which okay, great. I could memorize my way through some things I would never use again like EIGRP, one of Cisco's proprietary routing protocols that I've never heard of anyone using in the real world before, but I'm sure it's a thing and they're trying to push it. Great. I can skate past that well enough to hang a cert, but it didn't feel like the way to learn it because there was no context. It was just the rote memorization.Serena: Mm-hm. Yeah, and that is very difficult. I'm a big fan of theory, so you know, when we're talking about VIC cards, I was going through each generation, and which you would use for a blade or a rack server, whatever. 
I think that your time is better spent understanding what a VIC card is, why it's important, maybe, like, the history, and all that instead of being, like, “This version isn't compatible with this UCS blade server,” or whatever. Because I am studying for the Data Center flavor of the CCNP right now, so it's a little bit of a different path. I think most people take the enterprise, that's the more traditional route, switch, IOS. Mine's more UCS, Nexus, HyperFlex type questions.Corey: One thing that I always appreciate is, for example, take subnet mask [crosstalk 00:10:57] calculations. Yeah, I can figure that out on a whiteboard now. But here in the real world, everyone uses a subnet calculator. It's the way that things work. And there's a lot of discussion back and forth about things like that, without talking about the real-world implications, such as, if you're building out two subnets inside of a larger range, don't put them right next to each other because if you need to expand the network later, you're in a world of pain compared to if you had given them some significant breathing room.And okay, great. You probably don't need to use all the [10.0.0.0/8 00:11:30] network in your small-scale environment, and even some larger-scale ones you're hard-pressed to use all those things.It's just the real-world experience, and you understand that you don't want to do that. The second time. The first time you do it because why not? It's easy to remember for humans. And then you run into weird issues with oh, well, why would I ever have more than 254 servers sitting in a subnet—or 253, whatever the number is these days, don't yell at me—great.What about containers running on top of those things? Oh, right, the worst answer to so many architectural patterns, we'll throw some containers at it. And you're back into those problems.Serena: Yeah.Corey: It's the real-world scars you get.Serena: Yeah. And I think that there is such a difference between when you're studying and learning versus—and taking certifications or tests—than in the real world. And that was very discouraging for me when I was first learning because I would take these exams—and we had a Cisco academy where I went to college—and I would take these exams, and my professor was just known for her very difficult test, so I think her advanced routing course, maybe only 30% of the people who took it passed it their first try. And so I would take these exams, I'd walk away being like, “I don't know anything. I'm never going to be a good network engineer, I'm never going to be able to get a job or anything,” because I couldn't regurgitate which show command was showing me errors on a switch, right?And then now in the real world, I'm like, okay, relieved because I was like, I can look this up, like, I can take my time. And then you know, with getting your hands on—I mean, you learned so much within your first year; that is probably more than I learned in all four years of school. But saying that, it was really great for me to have that base of all of that underlying networking and already kind of understanding the terminology alone is such a big… barrier, I would say, like, just being able to sit in a room and listen to these conversations and understand what's going on. That's half the battle in the beginning. [laugh].Corey: I have never heard anyone be prouder of being bad at their job than a professor saying, “I have a 30% pass rate.” Isn't your whole ethos of that role to be someone who teaches people how to do a thing? 
So, if two-thirds of your class is not learning that thing, it doesn't mean you're a hard grader, it means you're bad at conveying the concept and/or testing for understanding of the thing that you've just taught them. If you're a teacher listening to this, please don't email me until you fix your problem first.Serena: [laugh]. See, and… she would come in and say on the first day class—I took multiple classes with her and she was like, “If you read everything in the book, and pay attention to all the slides, you're still going to fail.” She wanted you to really go above and beyond, and commit and run all these labs and do all these things, and in college, I hated it. I was so resentful and angry because it really did make me feel bad. But at the same time, there was one point someone had asked her a question, and she was like, “Why don't you ask Serena? She has the highest grade in the class.”And I was shocked because I had, like, a C in the [laugh] class. And I was like, “Me? I'm the one that has the highest grade in the class?” And I would definitely do things a little bit differently if I were teaching that course because it, I think, turned off a lot of people into the field. But me passing those grades, I mean, I really could have probably taken the CCNP right when I was done with those courses and passed with flying colors. But I didn't have the money to take the CCNP exams until much later when I had a job. And now it's like so much has changed. The exams have changed. I'm in Data Center now. So, a little bit different. But yeah. [laugh].Corey: I never understood the idea of charging for certs. If people are spending the time and energy to learn about your company's specific technology well enough to take the exam, they're probably going to want to use it in their career as they move forward, so charging a few 100 bucks to sit the test has never struck me as a good idea. And the cloud companies do the exact same things as well. And every company that attains some level of success launches a certification exam, but then they charge a few 100 bucks for it, which… does that money really matter because either you're an engineer, and your company is going to be paying for it, or you're making engineering money these days, and it's just an irritant, but it feels to me like the people that really get disadvantaged by that are the early learners, the students, the folks who are planning to have a career in this, but a few 100 bucks becomes a barrier.Serena: Oh, it's a huge barrier. I mean, it was a big barrier for me. I didn't have money to go to college, so I took out student loans. I worked my way through college and constantly had a job, which then was difficult because my grades suffered because I didn't have the same amount of time.Corey: You did have the highest grade in class, I recall.Serena: [laugh]. For that one course. For the one course. [laugh]. But I didn't have the same amount of time in a day to study as some of my classmates who didn't have to have a job in college.But then also, I couldn't afford $300 to take one exam out of the three that you needed at the time for the CCNP. And that's when I was early in my career. The CCNA, too, like, I didn't have the money to take that exam either. And I think a lot of people are in that position because they are trying to better their knowledge. They're trying to achieve a new job.That's what those certifications are geared towards, right? 
And so putting that $300—I mean, that person might be working a minimum wage job, and they're trying to get out of that minimum wage job into a higher—paying tech job. And $300 is a lot of money. It is a lot of money. My rent in college was $300. That's a whole month's rent for me, right, to put it in perspective. So yeah.Corey: Yeah. We'll be throwing a bunch of credit codes your way for folks who are learning and [unintelligible 00:17:10] the financial burden because it's important that people be able to not have money being the obstacle to learning a technical field. I am curious, though, as to the genesis of this whole Discord because I heard you talking about it, I joined, but there are a lot of other people talking about different things. Most notably and importantly, there's an Ohio slander channel—Serena: [laugh].Corey: —in there, which is just spot-on perfect from where I sit. But it's not just you, and it's not just networking stuff. It's a systems engineering Slack. Where did it come from?Serena: Yeah so sysengineer, my friend [Chris Lynd 00:17:43]—she's also a TikTok creator—and she set up her own Discord server, which I have kind of like inserted myself into. It's very hard to run your own server, right, so it's kind of more of a collective at this point. But she's sysengineer on TikTok, and so her server is just sysengineer. And there's a lot of memes, right? Because we have a lot of, like, Gen Z—I mean, who doesn't love a good meme? And Chris Lynd, sysengineer, is from Ohio, I'm from Ohio. So, the Ohio slander thing is kind of funny because we're just like always talking crap about Ohio. [laugh].Corey: Which it deserves, let's be very clear here. I have family in Ohio, myself. Every time I visited them, my favorite part was leaving Ohio. I mean, data transfer between AWS regions, the least expensive one is the one cent instead of two cents between Ohio and Virginia because even data wants to get out of Ohio.Serena: It was like, 11 of the astronauts are from Ohio. And it was like, “What about Ohio makes me want to leave the Earth?” [laugh].Corey: Yeah, “How far can I get from Ohio, the absolute furthest place away?” “Well, here's the furthest place on earth.” “Not far enough.” I know, if you're from Ohio, I know you're going to be very upset. You're going to be listening to this and angrily riding your horse to Pennsylvania to send an angry email my way, but that's okay. You'll get there eventually.Serena: But yeah, there's a lot of memes and stuff from TikTok. It's funny because we love to joke; we love to keep it light-hearted; we want to attract people who are younger, a lot of the memes come from TikTok. And so it's a fun, good time. And there's developers on there, there's tons of people that work other jobs that aren't systems engineering, or network engineering. So, we have a bunch of different opportunities and channels for other people to kind of ask questions and connect with other people in the field. Especially with everyone being remote for the most part now, and Covid, you don't have a ton of social interaction, so it's a good place to go get some social interaction.Corey: This episode is sponsored by our friends at Oracle HeatWave is a new high-performance query accelerator for the Oracle MySQL Database Service, although I insist on calling it “my squirrel.” While MySQL has long been the worlds most popular open source database, shifting from transacting to analytics required way too much overhead and, ya know, work. 
With HeatWave you can run your OLAP and OLTP—don't ask me to pronounce those acronyms again—workloads directly from your MySQL database and eliminate the time-consuming data movement and integration work, while also performing 1100X faster than Amazon Aurora and 2.5X faster than Amazon Redshift, at a third of the cost. My thanks again to Oracle Cloud for sponsoring this ridiculous nonsense.Corey: It's also great because when I was early in my career, I was a traveling consultant, and periodically I would find myself, well, working 40 hours a week and then in a hotel room for the rest of it. That's sort of depressing; I would go to local meetups. I'll never forget going to one Linux user group meeting. In this town, apparently, Linux wasn't really a thing, so the big conversational topic is how to sneak Linux into your Windows job. And I'm sitting around here going, “I don't know if that's necessarily the best way to go about it.”But I checked; there were no reasonable Linux jobs in that community. So, all of their focus in these user groups was about doing it as a side project, as this aspirational thing. And I'm sitting here visiting from out of town, I'm thinking, “Well, I have a job in the Linux environment. And how did I find it? I just went online and looked for jobs that had the word Linux in the title, and there you go.”That option is not open to everyone in every geography, so being able to get exposed to folks who aren't all in your neighborhood is one of the big benefits I found online forums like this.Serena: Yeah. One of the things that I think was positive that came out of Covid is, if you are in a smaller region—one of the reasons I left Ohio was because of a lack of jobs, right. And because there was more opportunity in other areas. And now I wouldn't have had to move. Not that say—I mean, I would have probably moved out of Ohio anyway.But if you don't want to, if your whole family's there now, you're luckily not really stuck with just the jobs that are in your local area. There's tons of remote jobs now. I think that's fantastic, and like I said, one of the positive things that did come out of Covid.Corey: The thing that I don't fully understand is folks who are working for remote companies—we're a distributed companies outside The Duckbill Group, and we pay the same for a role, regardless of where on—or off—the planet you happen to be sitting, just because the value you're adding makes zero difference to me based upon where you happen to be. And there are a number of companies out there who are being very particular about well, where are you geographically because then we need to adjust your comp so you're appropriate for that market. And it's, really? Is the work you're doing this month materially different than the work you're doing next month, as far as value goes, based upon where you're sitting? I don't buy it. But it's also challenging at giant companies to wind up paying the same across the board for all of your staff in one fell swoop.Serena: I think it's particularly bad. I had seen some companies that were basically saying if they're already employed and already getting some salary, and then, like, if you move, we're going to lower your salary. And I was like, it just to me seems so greedy, especially coming from these massive companies that charge huge profits, that you're going to be concerned over a ten, twenty, thirty-thousand dollar difference, right? 
And it's like, it just seems greedy to me because it's like, well, you had no problem paying that while I was living there, but now it's a problem that I move closer to family or something like that? I luckily was not in that position, but it would have put a distaste in my mouth towards that company, I think, as an employee in that position.Corey: We want to know where people are for tax purposes, we have this whole thing about not committing tax fraud, but aside from that, we don't care where you happen to be. We've had people take a month in Costa Rica, for example. Great. Have fun. Let us know what you think. As long as you have internet there and you make the scheduled meetings you've committed to make, great.But that's part of the benefit of having a company has been distributed since before the pandemic. What I really have sympathy for is folks who had built companies that depended on an in-office culture, and suddenly you're forced into remote during a very stressful time.Serena: Mm-hm. Yeah. Luckily, I mean, most of my jobs are very easily remote, but I can see that. I don't know. The whole—I don't ever want to work in an office again, personally. It's just not for me. I have done really well transitioning to work from home and still keeping up with all my coworkers, and reaching out to them, having meetings.I think, at this point, after two years in, companies are going to have a really hard time justifying to their employees, like, oh, we have to be back in office. And it's like, well, why? Is productivity down? Are we not as profitable? Like, what happened within these last two years that is making you think, like, we need to go back into the office? And they don't really have anything besides, “Culture?” And it's like, yeah, you're going to need to do more than that. [laugh].Corey: It's important for us to see our co-workers from time to time, and once it's safe to do so we're going to be doing quarterly meetups in various places, but that's also… it's not every day.Serena: Right.Corey: The technology problems, I have less sympathy for it now than I did at the start of the pandemic, where network engineers were basically calling the data center and, “Yeah, can you go reboot the VPN concentrator?” “Uh, okay. Which server is that? Probably the one that's glowing white-hot right now.” Because they aren't designed for the entire company to be using it simultaneously all the time. Two years later, we have mostly fixed those problems.Serena: Yeah, yeah. Two years later, it's like, okay, you're going to really have to convince me to go back into the office. [laugh]. And I like the flexibility. Like, I really do. If I want to move, I can move. If I want to, like you said, go to Costa Rica for a month, I could do that. But there's a lot of options, flexibility. I've been having a great time work from home.Corey: And I've been having a lot of fun exploring the bounds of this new Discord group, and I'll throw a link to it in the [show notes 00:24:49] because anyone who wants to show up and can validate that their human being is welcome to join until they turn into a jerk which is basically the [audio break 00:24:57] the community these days, let's be clear, but I found there are a couple of Discord bots—and yeah, it's all the same thing now—that ask test questions, and you can give an answer and it tells you in a DM whether you got it right or not, which is always fun when the bot is broken, and you're sitting there going well, that doesn't make much sense. 
But what other stuff has been built into this? For those of us who spend all of our time in Slack these days, what is the advantage of the Discord way of doing things?Serena: I guess for me, I'm not, like, a huge Discord person. This is really the only one that I participate in. I'm in a couple of my friends Discord as well, but there's a lot of stickers that are customizable, that relate back to memes a lot of the times. But yeah, the bot that you had mentioned is a great feature that Discord has where @terranovatech, who's also another TikTok content creator—his name's Anthony—he created from Python a practice question bot for CCNA and CCNP. And so, uploaded some questions to those.The bot is in beta guys, so you know, just like, [laugh] be aware of that. We are trying to constantly improve it and add new features. I have been adding a ton of questions for [D core 00:26:05] as I go through my book studying; I'll, you know, create practice questions. And that's typically a part of my normal studying routine, is creating practice questions that I can then go back to after I've read something to solidify it in my mind. And you know, you can use those questions, too, you can suggest questions. If you're like, “Hey, I was doing studying and I think this would be a cool question to add to the Discord bot.” We can do that as well. And so that's great. I love that feature.Corey: One last question before we wind up calling it an episode. Recently, you have caused a bit of TikTok controversy, for lack of a better term. And sure enough, we've had people swing in from all over the planet that chime in and yell at you in the comments. What's going on there?Serena: Okay. Yeah, so that's not unusual for me to cause some TikTok drama in the tech space. Okay, so there's a TikTok trend right now where it's a song and the song lyrics are, “You look so dumb right now.” Okay? And the other videos, like, if you click the sound, you can see, like, some of the videos will say, like, “They told me I needed to rotate my tires, but they rotate every time I drive.”And someone was like, “My girlfriend said she needs new foundation, but our house is just fine.” And so in the background, you hear the song that says, like, “You look so dumb right now.” So, it's just, like, a funny… funny joke. I did it, and I was like, I knew some people were going to miss the joke. And I said, you know, “When they say you need a backup, but you use RAID.” [laugh]. And so the sound is, “You look so dumb right now.”And I was definitely expecting people to miss the joke. And so I even tweeted at the same time, I was like, “I posted a new video, like, about that joke.” And so I was like, “Be prepared for the comments.” Because I knew even someone would be, like, she's just backtracking now. Like, she just is embarrassed. But I was like, “It's the joke guys.”I even put in the caption #thisisajoke. And, like, 90% of people that commented on it just completely missed that joke and were very upset that I made that—that I said that.Corey: Anyone who believes RAID is a backup only has to make one mistake deleting the wrong thing or overwriting something important before they realize that is very much not the case. And if you've been in tech for longer than about 20 minutes, you probably made a mistake like that at one point. It's not one of those things that could reasonably be expected that someone would take seriously. 
But yet, here we are with entire legions of people with no sense of humor.Serena: Yeah, it ended up in, like, Facebook groups and stuff, too, where these people thought I was being serious. And in the comments, I started making more jokes because someone's like, well, what if your data center catches on fire? And I was like, “Well, don't have a fire at your data center. Like, I don't understand. Obviously.” And so I just tried to, like, you know, make more jokes back to, kind of, keep it up and people were very upset. [laugh].Corey: That's why you're not allowed to smoke in them. Problem solved. Where would the fire come from? Yeah.Serena: There was, like, someone was like, “Well, what if you get ransomware?” And I was like, “We have Norton.” Like, what—[laugh] like, just, like, making the most red—and I was trying to really go outlandish with some of them because they're like, “RAID is not a replacement for cold storage.” And I was like, “Well, we have a lot of fans, so our RAID is very cold.” [laugh]. And, like, just kept it going. Some people were not happy.Corey: I love that. They just keep doubling down on the dumb. The problem is some people are lifelong experts at it, and they're always going to beat you with experience when you try it. It's…Serena: [laugh]. Yeah.Corey: Honestly, the hardest thing to learn, one it was valuable, least from my perspective, is learning when to just ignore the comments and keep going.Serena: Yeah. I definitely get some that I ignore. I mean, if they're, like, overly mean, I'll block somebody or something like that. You know, for someone just missing a joke, it's like, “Okay, whatever.” But yeah, some people—even after they're like, “Hey, man. This is just a joke.” They're like, “Well, this isn't a funny joke.” And I was like, “I will never make a joke about RAID as a backup again. I promise.” [laugh].Corey: No, you already told that joke. There are better ones you can explore.Serena: Yeah. For sure.Corey: So, if people want to come and hang out in this Discord, what's the best way for them to find it? We'll put it in the [show notes 00:30:05], but sometimes people listen rather than read.Serena: Yeah, I think if you even just Google ‘sysengineer Discord' it should come up like that; it's on the Google returned searches. It's a link in my Beacons on my TikTok. It's in a link in sysengineer's TikTok. So, there's a couple different places that you can find and join.Corey: And of course, in the [show notes 00:30:27] for this podcast, as well.Serena: And the [show notes 00:30:30] of this podcast, of course. [laugh].Corey: Thank you so much for taking the time to talk to me about all this. If people want to follow you beyond just the Discord, where's the best place for them to find you?Serena: So, I'm @SheNetworks on TikTok and then I'm @notshenetworks on Twitter. So, you can find me in both of those locations.Corey: Fantastic. Thanks so much for taking the time to speak with me today. I appreciate it.Serena: Thanks for having me on.Corey: Serena DiPenti, network engineer and of course@SheNetworks on the internet. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. 
If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with an angry comment telling me which RAID level makes the best backup.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.Announcer: This has been a HumblePod production. Stay humble.

The Marvell Essential Technology Podcast
S1 EP6 - Still the One! Why Fibre Channel is Here to Stay

The Marvell Essential Technology Podcast

Play Episode Listen Later Nov 18, 2021 11:06 Transcription Available


Fibre Channel has been the gold standard protocol for connecting shared storage to servers for several decades. Recently, new technologies like flash storage, SSDs, storage-class memory, and NVMe have been deployed to make servers and storage devices faster and deliver more performance. This might lead one to think that the time has come for Fibre Channel to be put out to pasture and for new protocols to connect servers and storage. In reality, however, Fibre Channel technology keeps advancing as well and is more than capable of serving the high-performance servers and storage arrays of today and tomorrow. In this podcast, we explain some of the latest innovations in Fibre Channel connectivity and why Fibre Channel will remain the gold standard for shared storage connectivity. Read the blog: https://bit.ly/30wX2vQ

The Pure Report
Purity 6.1 and FlashArray//C Launch Update

The Pure Report

Play Episode Listen Later Feb 16, 2021 23:36


Catch up with the latest announcements on Purity for FlashArray, with new data services that extend the Modern Data Experience. Product Marketing Director and frequent friend of the program James Gallegos joins to talk about the latest Purity and FlashArray//C enhancements that help customers store, safeguard, manage, access, and mobilize data, eliminating the frustrations of legacy storage and making it easier for users to achieve digital transformation initiatives. We cover it all: new ActiveCluster replication capabilities, advances in NVMe over Fibre Channel, new FlashArray//C options, SafeMode protection against ransomware, and a new Pure Validated Design for VMware vSphere Tanzu. Get all the news in one place with this episode of the Pure Report. For more information: https://www.purestorage.com/products/storage-software/purity.html

DataCentric Podcast
Architecting a Flexible & Resilient Data Center

DataCentric Podcast

Play Episode Listen Later Nov 8, 2020 49:12


The events of the past year have taught us a valuable lesson: unexpected change is going to happen, and that change requires IT organizations to quickly respond. Data centers built on a solid foundation of fast, flexible, secure, and resilient infrastructure are the ones poised to adapt and keep the business moving forward. In this episode, Moor Insights & Strategy technology analyst Steve McDowell is joined by a group of storage infrastructure experts to talk about the fundamentals of a flexible cyber-resilient infrastructure. AJ Casamento and Brian Larsen join from Broadcom, while Brian Sherman comes to us from IBM's Storage division. The group talks about cyber-resilient architectures, autonomous infrastructure, and building a fast and flexible storage architecture to support it all. Fibre Channel Gen 7, NVMe-over-FC, Intelligent Storage Systems, tiered storage, disaster recovery... it seems like the group hit on most of the topics that an IT practitioner needs to be aware of as we all look forward to a disruptive future. This episode is sponsored by Broadcom. Special Guests: AJ Casamento, Brian Larsen, and Brian Sherman.

DataCentric Podcast
Inside Modern Flash Storage Technologies

DataCentric Podcast

Play Episode Listen Later Nov 8, 2020 35:07


Flash storage continues to evolve, and that evolution is changing how we think about and deliver enterprise storage. Andy Walls, IBM Fellow and CTO for IBM's flash storage products, takes us through the big factors affecting the flash storage industry. We hit on everything: QLC, NVMe-over-fabric, Server-Class Memory, and much more.
00:00 Introductions
02:07 The role of QLC flash in enterprise storage and overcoming its inherent limitations
12:55 Where does Server-Class Memory fit?
17:44 NVMe-over-Fabric changes the interconnect game
21:21 Disaggregated storage: are the days of traditional block storage arrays numbered?
23:55 How is storage for mainframes different? (Steve asks from a position of ignorance)
26:28 Impactful emerging technologies: the CXL bus and computational storage
32:00 Does storage need to change to support quantum computing?
Special Guest: Andy Walls.

Packet Pushers - Full Podcast Feed
Network Break 300: Cisco Mixes Microservices And SD-WAN; Broadcom Rolls Out Gen7 Fibre Channel Switches

Packet Pushers - Full Podcast Feed

Play Episode Listen Later Sep 8, 2020 48:11


Network Break dives into a new Cisco project that ties microservices to SD-WAN, a CenturyLink outage, new vulnerabilities in IOS-XR, Broadcom's new Gen7 Fibre Channel switches, and more IT news.

Packet Pushers - Network Break
Network Break 300: Cisco Mixes Microservices And SD-WAN; Broadcom Rolls Out Gen7 Fibre Channel Switches

Packet Pushers - Network Break

Play Episode Listen Later Sep 8, 2020 48:11


Network Break dives into a new Cisco project that ties microservices to SD-WAN, a CenturyLink outage, new vulnerabilities in IOS-XR, Broadcom's new Gen7 Fibre Channel switches, and more IT news.

Packet Pushers - Fat Pipe
Network Break 300: Cisco Mixes Microservices And SD-WAN; Broadcom Rolls Out Gen7 Fibre Channel Switches

Packet Pushers - Fat Pipe

Play Episode Listen Later Sep 8, 2020 48:11


Network Break dives into a new Cisco project that ties microservices to SD-WAN, a CenturyLink outage, new vulnerabilities in IOS-XR, Broadcom's new Gen7 Fibre Channel switches, and more IT news.

Cisco學習資訊分享
Cisco MDS Zoning: Basic Mode, CFS, and Enhanced Mode

Cisco學習資訊分享

Play Episode Listen Later May 2, 2020


[Cisco MDS Zoning: Basic Mode, CFS, and Enhanced Mode]

(Image: Cisco MDS 9000 Family, screenshot from Cisco.com)

Fibre Channel is the most widely used protocol in storage area networking (SAN), and Cisco's SAN switch product line is the Cisco MDS series. The access-control mechanism of the Fibre Channel protocol (hereafter FC) is called zoning. Here we discuss three tools on Cisco MDS products that help with FC zoning configuration: Basic Zoning, Cisco Fabric Services (CFS), and Enhanced Zoning. To make this easier to follow, I have also recorded a demo video that can be studied alongside this article. We start by setting up a clean VSAN on three MDS switches; in the demo I use VSAN 4.

Basic Zoning

The zoning mode that a Cisco MDS uses by default when a VSAN has just been created is called Basic Zoning. Basic Zoning is part of the Fibre Channel standard and is supported across vendors. Let's observe how zoning information propagates on Cisco MDS. We first create the zone sets "set4A" and "set4B". After a zone set is created but before it is activated, none of the MDS switches receive any zoning information. Next we run "zoneset activate name set4A vsan 4". Besides activating the set4A zoning configuration on the local MDS, this command uses the underlying Fibre Channel protocol to copy the activated zoning configuration to all MDS switches and activate it there as well. We use "show zoneset active vsan 4" to check the active zone set on every MDS and can see that "set4A" is active everywhere. At any given time, only one zone set can be active. In practice we usually keep more than just the active zone set on an MDS: we often maintain several zone sets for different scenarios and activate them in turn as needed. All defined zone sets, whether active or not, are collectively called the full zone set. We can check the full zone set on each MDS with "show zoneset vsan 4". However, because the information distributed by Basic Zoning does not include the full zone set, no MDS other than the one we configured can see the full zone set at all.

Cisco Fabric Services (CFS)

Under Basic Zoning we cannot distribute the full zone set through the standard Fibre Channel protocol; we can only distribute it through Cisco's proprietary protocol, Cisco Fabric Services (CFS). CFS is enabled by default, but we must manually ask it to send the full zone set. We use "zoneset distribute vsan 4" to distribute the full zone set immediately. The limitation of CFS is that unless we manually issue "zoneset distribute vsan 4", it will not automatically synchronize changes to the full zone set to all MDS switches. We can add a new zone set "set4C" and then confirm with "show zoneset vsan 4" that the full zone set containing "set4C" was not distributed automatically.

Enhanced Zoning

If we want the full zone set to synchronize automatically to all MDS switches, and we also want the MDS to lock zone set editing so that only one administrator can edit at a time (avoiding the risk, present in Basic Zoning, of administrators overwriting each other's changes), we need the improved Enhanced Zoning. To enable Enhanced Zoning we only need to pick one MDS and issue "zone mode enhanced vsan 4"; the change propagates automatically and takes effect on all MDS switches. We can confirm with "show zone status vsan 4" that every MDS has switched to Enhanced Zoning. At the moment Enhanced Zoning is enabled, the previously added zone set "set4C" is automatically distributed to all MDS switches, which we can again confirm with "show zoneset vsan 4".

Editing and activating zone sets under Enhanced Zoning

We add "set4D" and can again verify that the full zone set containing "set4D" is automatically synchronized to all MDS switches. The first thing to notice is that under Enhanced Zoning, editing a zone set on any MDS locks zone set editing on all MDS switches until we issue the commit command that applies the changes and releases the lock: "zoneset commit vsan 4". After this command completes, we can confirm with "show zoneset vsan 4" that synchronization succeeded. Suppose we now need to activate "set4D": we can go to any MDS and issue "zoneset activate name set4D vsan 4". Finally, "show zoneset active vsan 4" confirms that the active zone set has indeed changed to "set4D". Enhanced Zoning combines the activation behavior of basic mode with CFS's distribution of the complete zoning configuration, and adds edit locking and automatic synchronization, making it the better mode of operation. A brand-new SAN installation should be configured for Enhanced Zoning from the start.

One more thing…

FC is a storage networking protocol. Although it is not a TCP/IP packet protocol, it shares similar characteristics, so with a little extra time TCP/IP practitioners can quickly become proficient in SAN protocols as well. Best of all, Cisco offers hardware for both SAN and TCP/IP, and the MDS command set is essentially the NX-OS environment, so the transition involves few obstacles.

(Photo: 東方寺 temple and cherry blossoms in full bloom, Shilin District, Taipei)

I am 洪李吉, and my website is「Cisco學習資訊分享」(Cisco Learning Information Sharing). See you next time!
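For quick reference, the commands this episode steps through can be consolidated into one sequence. This is a minimal sketch, not a complete configuration: it reuses the same VSAN 4 and zone set names from the post, the "!" lines are explanatory comments summarizing the behavior described above, and the commit syntax is the one given in the post.

    ! Basic Zoning: activation distributes only the active zone set fabric-wide
    zoneset activate name set4A vsan 4
    show zoneset active vsan 4          ! every MDS shows set4A as active
    show zoneset vsan 4                 ! full zone set is visible only where it was defined

    ! CFS: push the full zone set on demand (not automatic)
    zoneset distribute vsan 4

    ! Enhanced Zoning: automatic full zone set sync plus a fabric-wide edit lock
    zone mode enhanced vsan 4
    show zone status vsan 4             ! confirm enhanced mode on every MDS
    ! ... edit zone sets as needed (e.g., add set4D); editing locks all switches ...
    zoneset commit vsan 4               ! apply the edits and release the fabric lock
    zoneset activate name set4D vsan 4
    show zoneset active vsan 4          ! confirm set4D is now the active zone set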

Storage Developer Conference
#92: Fibre Channel – The Most Trusted Fabric Delivers NVMe

Storage Developer Conference

Play Episode Listen Later Apr 16, 2019 32:36


Storage Unpacked Podcast
#76 – Fibre Channel and NVMe with Mark Jones

Storage Unpacked Podcast

Play Episode Listen Later Nov 23, 2018 28:58


In this week’s podcast, Chris and Martin talk to Mark Jones from the Fibre Channel Industry Association.  This recording is an introduction to running NVMe over Fibre Channel, setting the scene on how Fibre Channel has evolved and will continue to be a storage protocol for many years. Fibre Channel reached an important milestone in […] The post #76 – Fibre Channel and NVMe with Mark Jones appeared first on Storage Unpacked Podcast.

BSD Now
Episode 261: FreeBSDcon Flashback | BSD Now 261

BSD Now

Play Episode Listen Later Aug 30, 2018 109:13


Insight into TrueOS and Trident, stop evildoers with pf-badhost, Flashback to FreeBSDcon ‘99, OpenBSD’s measures against TLBleed, play Morrowind on OpenBSD in 5 steps, DragonflyBSD developers shocked at Threadripper performance, and more. ##Headlines An Insight into the Future of TrueOS BSD and Project Trident Last month, TrueOS announced that they would be spinning off their desktop offering. The team behind the new project, named Project Trident, have been working furiously towards their first release. They did take a few minutes to answer some of our question about Project Trident and TrueOS. I would like to thank JT and Ken for taking the time to compile these answers. It’s FOSS: What is Project Trident? Project Trident: Project Trident is the continuation of the TrueOS Desktop. Essentially, it is the continuation of the primary “TrueOS software” that people have been using for the past 2 years. The continuing evolution of the entire TrueOS project has reached a stage where it became necessary to reorganize the project. To understand this change, it is important to know the history of the TrueOS project. Originally, Kris Moore created PC-BSD. This was a Desktop release of FreeBSD focused on providing a simple and user-friendly graphical experience for FreeBSD. PC-BSD grew and matured over many years. During the evolution of PC-BSD, many users began asking for a server focused version of the software. Kris agreed, and TrueOS was born as a scaled down server version of PC-BSD. In late 2016, more contributors and growth resulted in significant changes to the PC-BSD codebase. Because the new development was so markedly different from the original PC-BSD design, it was decided to rebrand the project. TrueOS was chosen as the name for this new direction for PC-BSD as the project had grown beyond providing only a graphical front to FreeBSD and was beginning to make fundamental changes to the FreeBSD operating system. One of these changes was moving PC-BSD from being based on each FreeBSD Release to TrueOS being based on the active and less outdated FreeBSD Current. Other major changes are using OpenRC for service management and being more aggressive about addressing long-standing issues with the FreeBSD release process. TrueOS moved toward a rolling release cycle, twice a year, which tested and merged FreeBSD changes directly from the developer instead of waiting months or even years for the FreeBSD review process to finish. TrueOS also deprecated and removed obsolete technology much more regularly. As the TrueOS Project grew, the developers found these changes were needed by other FreeBSD-based projects. These projects began expressing interest in using TrueOS rather than FreeBSD as the base for their project. This demonstrated that TrueOS needed to again evolve into a distribution framework for any BSD project to use. This allows port maintainers and source developers from any BSD project to pool their resources and use the same source repositories while allowing every distribution to still customize, build, and release their own self-contained project. The result is a natural split of the traditional TrueOS team. There were now naturally two teams in the TrueOS project: those working on the build infrastructure and FreeBSD enhancements – the “core” part of the project, and those working on end-user experience and utility – the “desktop” part of the project. When the decision was made to formally split the projects, the obvious question that arose was what to call the “Desktop” project. 
As TrueOS was already positioned to be a BSD distribution platform, the developers agreed the desktop side should pick a new name. There were other considerations too, one notable being that we were concerned that if we continued to call the desktop project “TrueOS Desktop”, it would prevent people from considering TrueOS as the basis for their distribution because of misconceptions that TrueOS was a desktop-focused OS. It also helps to “level the playing field” for other desktop distributions like GhostBSD so that TrueOS is not viewed as having a single “blessed” desktop version. It’s FOSS: What features will TrueOS add to the FreeBSD base? Project Trident: TrueOS has already added a number of features to FreeBSD: OpenRC replaces rc.d for service management LibreSSL in base Root NSS certificates out-of-box Scriptable installations (pc-sysinstall) The full list of changes can be seen on the TrueOS repository (https://github.com/trueos/trueos/blob/trueos-master/README.md). This list does change quite regularly as FreeBSD development itself changes. It’s FOSS: I understand that TrueOS will have a new feature that will make creating a desktop spin of TrueOS very easy. Could you explain that new feature? Project Trident: Historically, one of the biggest hurdles for creating a desktop version of FreeBSD is that the build options for packages are tuned for servers rather than desktops. This means a desktop distribution cannot use the pre-built packages from FreeBSD and must build, use, and maintain a custom package repository. Maintaining a fork of the FreeBSD ports tree is no trivial task. TrueOS has created a full distribution framework so now all it takes to create a custom build of FreeBSD is a single JSON manifest file. There is now a single “source of truth” for the source and ports repositories that is maintained by the TrueOS team and regularly tagged with “stable” build markers. All projects can use this framework, which makes updates trivial. It’s FOSS: Do you think that the new focus of TrueOS will lead to the creation of more desktop-centered BSDs? Project Trident: That is the hope. Historically, creating a desktop-centered BSD has required a lot of specialized knowledge. Not only do most people not have this knowledge, but many do not even know what they need to learn until they start troubleshooting. TrueOS is trying to drastically simplify this process to enable the wider Open Source community to experiment, contribute, and enjoy BSD-based projects. It’s FOSS: What is going to happen to TrueOS Pico? Will Project Trident have ARM support? Project Trident: Project Trident will be dependent on TrueOS for ARM support. The developers have talked about the possibility of supporting ARM64 and RISC-V architectures, but it is not possible at the current time. If more Open Source contributors want to help develop ARM and RISC-V support, the TrueOS project is definitely willing to help test and integrate that code. It’s FOSS: What does this change (splitting Trus OS into Project Trident) mean for the Lumina desktop environment? Project Trident: Long-term, almost nothing. Lumina is still the desktop environment for Project Trident and will continue to be developed and enhanced alongside Project Trident just as it was for TrueOS. Short-term, we will be delaying the release of Lumina 2.0 and will release an updated version of the 1.x branch (1.5.0) instead. This is simply due to all the extra overhead to get Project Trident up and running. 
When things settle down into a rhythm, the development of Lumina will pick up once again. It’s FOSS: Are you planning on including any desktop environments besides Lumina? Project Trident: While Lumina is included by default, all of the other popular desktop environments will be available in the package repo exactly as they had been before. It’s FOSS: Any plans to include Steam to increase the userbase? Project Trident: Steam is still unavailable natively on FreeBSD, so we do not have any plans to ship it out of the box currently. In the meantime, we highly recommend installing the Windows version of Steam through the PlayOnBSD utility. It’s FOSS: What will happen to the AppCafe? Project Trident: The AppCafe is the name of the graphical interface for the “pkg” utility integrated into the SysAdm client created by TrueOS. This hasn’t changed. SysAdm, the graphical client, and by extension AppCafe are still available for all TrueOS-based distributions to use. It’s FOSS: Does Project Trident have any corporate sponsors lined up? If not, would you be open to it or would you prefer that it be community supported? Project Trident: iXsystems is the first corporate sponsor of Project Trident and we are always open to other sponsorships as well. We would prefer smaller individual contributions from the community, but we understand that larger project needs or special-purpose goals are much more difficult to achieve without allowing larger corporate sponsorships as well. In either case, Project Trident is always looking out for the best interests of the community and will not allow intrusive or harmful code to enter the project even if a company or individual tries to make that code part of a sponsorship deal. It’s FOSS: BSD always seems to be lagging in terms of support for newer devices. Will TrueOS be able to remedy that with a quicker release cycle? Project Trident: Yes! That was a primary reason for TrueOS to start tracking the CURRENT branch of FreeBSD in 2016. This allows for the changes that FreeBSD developers are making, including new hardware support, to be available much sooner than if we followed the FreeBSD release cycle. It’s FOSS: Do you have any idea when Project Trident will have its first release? Project Trident: Right now we are targeting a late August release date. This is because Project Trident is “kicking the wheels” on the new TrueOS distribution system. We want to ensure everything is working smoothly before we release. Going forward, we plan on having regular package updates every week or two for the end-user packages and a new release of Trident with an updated OS version every 6 months. This will follow the TrueOS release schedule with a small time offset. ###pf-badhost: Stop the evil doers in their tracks! pf-badhost is a simple, easy to use badhost blocker that uses the power of the pf firewall to block many of the internet’s biggest irritants. Annoyances such as ssh bruteforcers are largely eliminated. Shodan scans and bots looking for webservers to abuse are stopped dead in their tracks. When used to filter outbound traffic, pf-badhost blocks many seedy, spooky malware containing and/or compromised webhosts. Filtering performance is exceptional, as the badhost list is stored in a pf table. To quote the OpenBSD FAQ page regarding tables: “the lookup time on a table holding 50,000 addresses is only slightly more than for one holding 50 addresses.” pf-badhost is simple and powerful. The blocklists are pulled from quality, trusted sources. 
The ‘Firehol’, ‘Emerging Threats’ and ‘Binary Defense’ block lists are used as they are popular, regularly updated lists of the internet’s most egregious offenders. The pf-badhost.sh script can easily be expanded to use additional or alternate blocklists. pf-badhost works best when used in conjunction with unbound-adblock for the ultimate badhost blocking. Notes: If you are trying to run pf-badhost on a LAN or are using NAT, you will want to add a rule to your pf.conf appearing BEFORE the pf-badhost rules allowing traffic to and from your local subnet so that you can still access your gateway and any DNS servers. Conversely, adding a line to pf-badhost.sh that removes your subnet range from the table should also work. Just make sure you choose a subnet range / CIDR block that is actually in the list. 192.168.0.0/16, 172.16.0.0/12 and 10.0.0.0/8 are the most common home/office subnet ranges. DigitalOcean https://do.co/bsdnow ###FLASHBACK: FreeBSDCon’99: Fans of Linux’s lesser-known sibling gather for the first time FreeBSD, a port of BSD Unix to Intel, has been around almost as long as Linux has – but without the media hype. Its developer and user community recently got a chance to get together for the first time, and they did it in the city where BSD – the Berkeley Software Distribution – was born some 25 years ago. October 17, 1999 marked a milestone in the history of FreeBSD – the first FreeBSD conference was held in the city where it all began, Berkeley, CA. Over 300 developers, users, and interested parties attended from around the globe. This was easily 50 percent more people than the conference organizers had expected. This first conference was meant to be a gathering mostly for developers and FreeBSD advocates. The turnout was surprisingly (and gratifyingly) large. In fact, attendance exceeded expectations so much that, for instance, Kirk McKusick had to add a second, identical tutorial on FreeBSD internals, because it was impossible for everyone to attend the first! But for a first-ever conference, I was impressed by how smoothly everything seemed to go. Sessions started on time, and the sessions I attended were well-run; nothing seemed to be too cold, dark, loud, late, or off-center. Of course, the best part about a conference such as this one is the opportunity to meet with other people who share similar interests. Lunches and breaks were a good time to meet people, as was the Tuesday night beer bash. The Wednesday night reception was of a type unusual for the technical conferences I usually attend – a three-hour Hornblower dinner cruise on San Francisco Bay. Not only did we all enjoy excellent food and company, but we all got to go up on deck and watch the lights of San Francisco and Berkeley as we drifted by. Although it’s nice when a conference attracts thousands of attendees, there are some things that can only be done with smaller groups of people; this was one of them. In short, this was a tiny conference, but a well-run one. Sessions Although it was a relatively small conference, the number and quality of the sessions belied the size. Each of the three days of the conference featured a different keynote speaker. In addition to Jordan Hubbard, Jeremy Allison spoke on “Samba Futures” on day two, and Brian Behlendorf gave a talk on “FreeBSD and Apache: A Perfect Combo” to start off the third day. The conference sessions themselves were divided into six tracks: advocacy, business, development, networking, security, and panels. 
The panels track featured three different panels, made up of three different slices of the community: the FreeBSD core team, a press panel, and a prominent user panel with representatives from such prominent commercial users as Yahoo! and USWest. I was especially interested in Apple Computer’s talk in the development track. Wilfredo Sanchez, technical lead for open source projects at Apple (no, that’s not an oxymoron!) spoke about Apple’s Darwin project, the company’s operating system road map, and the role of BSD (and, specifically, FreeBSD) in Apple’s plans. Apple and Unix have had a long and uneasy history, from the Lisa through the A/UX project to today. Personally, I’m very optimistic about the chances for the Darwin project to succeed. Apple’s core OS kernel team has chosen FreeBSD as its reference platform. I’m looking forward to what this partnership will bring to both sides. Other development track sessions included in-depth tutorials on writing device drivers, basics of the Vinum Volume Manager, Fibre Channel, development models (the open repository model), and the FreeBSD Documentation Project (FDP). If you’re interested in contributing to the FreeBSD project, the FDP is a good place to start. Advocacy sessions included “How One Person Can Make a Difference” (a timeless topic that would find a home at any technical conference!) and “Starting and Managing A User Group” (trials and tribulations as well as rewards). The business track featured speakers from three commercial users of FreeBSD: Cybernet, USWest, and Applix. Applix presented its port of Applixware Office for FreeBSD and explained how Applix has taken the core services of Applixware into open source. Commercial applications and open source were once a rare combination; we can only hope the trend away from that state of affairs will continue. Commercial use of FreeBSD The use of FreeBSD in embedded applications is increasing as well – and it is increasing at the same rate that hardware power is. These days, even inexpensive systems are able to run a BSD kernel. The BSD license and the solid TCP/IP stack prove significant enticements to this market as well. (Unlike the GNU Public License, the BSD license does not require that vendors make derivative works open source.) Companies such as USWest and Verio use FreeBSD for a wide variety of different Internet services. Yahoo! and Hotmail are examples of companies that use FreeBSD extensively for more specific purposes. Yahoo!, for example, has many hundreds of FreeBSD boxes, and Hotmail has almost 2000 FreeBSD machines at its data center in the San Francisco Bay area. Hotmail is owned by Microsoft, so the fact that it runs FreeBSD is a secret. Don’t tell anyone… When asked to comment on the increasing commercial interest in BSD, Hubbard said that FreeBSD is learning the Red Hat lesson. “Walnut Creek and others with business interests in FreeBSD have learned a few things from the Red Hat IPO,” he said, “and nobody is just sitting around now, content with business as usual. It’s clearly business as unusual in the open source world today.” Hubbard had also singled out some of BSD’s commercial partners, such as Whistle Communications, for praise in his opening day keynote. These partners play a key role in moving the project forward, he said, by contributing various enhancements and major new systems, such as Netgraph, as well as by contributing paid employee time spent on FreeBSD. Even short FreeBSD-related contacts can yield good results, Hubbard said. 
An example of this is the new jail() security code introduced in FreeBSD 3.x and 4.0, which was contributed by R & D Associates. A number of ISPs are also now donating the hardware and bandwidth that allows the project to provide more resource mirrors and experimental development sites. See you next year And speaking of corporate sponsors, thanks go to Walnut Creek for sponsoring the conference, and to Yahoo! for covering all the expenses involved in bringing the entire FreeBSD core team to Berkeley. As a fan of FreeBSD, I’m happy to see that the project has finally produced a conference. It was time: many of the 16 core team members had been working together on a regular basis for nearly seven years without actually meeting face to face. It’s been an interesting year for open source projects. I’m looking forward to the next year – and the next BSD conference – to be even better. ##News Roundup OpenBSD Recommends: Disable SMT/Hyperthreading in all Intel BIOSes Two recently disclosed hardware bugs affected Intel cpus: - TLBleed - T1TF (the name "Foreshadow" refers to 1 of 3 aspects of this bug, more aspects are surely on the way) Solving these bugs requires new cpu microcode, a coding workaround, *AND* the disabling of SMT / Hyperthreading. SMT is fundamentally broken because it shares resources between the two cpu instances and those shared resources lack security differentiators. Some of these side channel attacks aren't trivial, but we can expect most of them to eventually work and leak kernel or cross-VM memory in common usage circumstances, even such as javascript directly in a browser. There will be more hardware bugs and artifacts disclosed. Due to the way SMT interacts with speculative execution on Intel cpus, I expect SMT to exacerbate most of the future problems. A few months back, I urged people to disable hyperthreading on all Intel cpus. I need to repeat that: DISABLE HYPERTHREADING ON ALL YOUR INTEL MACHINES IN THE BIOS. Also, update your BIOS firmware, if you can. OpenBSD -current (and therefore 6.4) will not use hyperthreading if it is enabled, and will update the cpu microcode if possible. But what about 6.2 and 6.3? The situation is very complex, continually evolving, and is taking too much manpower away from other tasks. Furthermore, Intel isn't telling us what is coming next, and are doing a terrible job by not publically documenting what operating systems must do to resolve the problems. We are having to do research by reading other operating systems. There is no time left to backport the changes -- we will not be issuing a complete set of errata and syspatches against 6.2 and 6.3 because it is turning into a distraction. Rather than working on every required patch for 6.2/6.3, we will re-focus manpower and make sure 6.4 contains the best solutions possible. So please try take responsibility for your own machines: Disable SMT in the BIOS menu, and upgrade your BIOS if you can. I'm going to spend my money at a more trustworthy vendor in the future. ###Get Morrowind running on OpenBSD in 5 simple steps This article contains brief instructions on how to get one of the greatest Western RPGs of all time, The Elder Scrolls III: Morrowind, running on OpenBSD using the OpenMW open source engine recreation. These instructions were tested on a ThinkPad X1 Carbon Gen 3. 
The information was adapted from this OpenMW forum thread: https://forum.openmw.org/viewtopic.php?t=3510 Purchase and download the DRM-free version from GOG (also considered the best version due to the high quality PDF guide that it comes with): https://www.gog.com/game/theelderscrollsiiimorrowindgotyedition Install the required packages built from the ports tree as root. openmw is the recreated game engine, and innoextract is how we will get the game data files out of the win32 executable. pkgadd openmw innoextract Move the file from GOG setuptesmorrowindgoty2.0.0.7.exe into its own directory morrowind/ due to innoextract’s default behaviour of extracting into the current directory. Then type: innoextract setuptesmorrowindgoty2.0.0.7.exe Type openmw-wizard and follow the straightforward instructions. Note that you have a pre-existing installation, and select the morrowind/app/Data Files folder that innoextract extracted. Type in openmw-launcher, toggle the settings to your preferences, and then hit play! iXsystems https://twitter.com/allanjude/status/1034647571124367360 ###My First Clang Bug Part of the role of being a packager is compiling lots (and lots) of packages. That means compiling lots of code from interesting places and in a variety of styles. In my opinion, being a good packager also means providing feedback to upstream when things are bad. That means filing upstream bugs when possible, and upstreaming patches. One of the “exciting” moments in packaging is when tools change. So each and every major CMake update is an exercise in recompiling 2400 or more packages and adjusting bits and pieces. When a software project was last released in 2013, adjusting it to modern tools can become quite a chore (e.g. Squid Report Generator). CMake is excellent for maintaining backwards compatibility, generally accommodating old software with new policies. The most recent 3.12 release candidate had three issues filed from the FreeBSD side, all from fallout with older software. I consider the hours put into good bug reports, part of being a good citizen of the Free Software world. My most interesting bug this week, though, came from one line of code somewhere in Kleopatra: QUNUSED(gpgagentdata); That one line triggered a really peculiar link error in KDE’s FreeBSD CI system. Yup … telling the compiler something is unused made it fall over. Commenting out that line got rid of the link error, but introduced a warning about an unused function. Working with KDE-PIM’s Volker Krause, we whittled the problem down to a six-line example program — two lines if you don’t care much for coding style. I’m glad, at that point, that I could throw it over the hedge to the LLVM team with some explanatory text. Watching the process on their side reminds me ever-so-strongly of how things work in KDE (or FreeBSD for that matter): Bugzilla, Phabricator, and git combine to be an effective workflow for developers (perhaps less so for end-users). Today I got a note saying that the issue had been resolved. So brief a time for a bug. Live fast. Get squashed young. ###DragonFlyBSD Now Runs On The Threadripper 2990WX, Developer Shocked At Performance Last week I carried out some tests of BSD vs. Linux on the new 32-core / 64-thread Threadripper 2990WX. I tested FreeBSD 11, FreeBSD 12, and TrueOS – those benchmarks will be published in the next few days. I tried DragonFlyBSD, but at the time it wouldn’t boot with this AMD HEDT processor. 
But now the latest DragonFlyBSD development kernel can handle the 2990WX and the lead DragonFly developer calls this new processor “a real beast” and is stunned by its performance potential. When I tried last week, the DragonFlyBSD 5.2.2 stable release nor DragonFlyBSD 5.3 daily snapshot would boot on the 2990WX. But it turns out Matthew Dillon, the lead developer of DragonFlyBSD, picked up a rig and has it running now. So in time for the next 5.4 stable release or those using the daily snapshots can have this 32-core / 64-thread Zen+ CPU running on this operating system long ago forked from FreeBSD. In announcing his success in bringing up the 2990WX under DragonFlyBSD, which required a few minor changes, he shared his performance thoughts and hopes for the rig. “The cpu is a real beast, packing 32 cores and 64 threads. It blows away our dual-core Xeon to the tune of being +50% faster in concurrent compile tests, and it also blows away our older 4-socket Opteron (which we call ‘Monster’) by about the same margin. It’s an impressive CPU. For now the new beast is going to be used to help us improve I/O performance through the filesystem, further SMP work (but DFly scales pretty well to 64 threads already), and perhaps some driver to work to support the 10gbe on the mobo.” Dillon shared some results on the system as well. " The Threadripper 2990WX is a beast. It is at least 50% faster than both the quad socket opteron and the dual socket Xeon system I tested against. The primary limitation for the 2990WX is likely its 4 channels of DDR4 memory, and like all Zen and Zen+ CPUs, memory performance matters more than CPU frequency (and costs almost no power to pump up the performance). That said, it still blow away a dual-socket Xeon with 3x the number of memory channels. That is impressive!" The well known BSD developer also added, “This puts the 2990WX at par efficiency vs a dual-socket Xeon system, and better than the dual-socket Xeon with slower memory and a power cap. This is VERY impressive. I should note that the 2990WX is more specialized with its asymetric NUMA architecture and 32 cores. I think the sweet spot in terms of CPU pricing and efficiency is likely going to be with the 2950X (16-cores/32-threads). It is clear that the 2990WX (32-cores/64-threads) will max out 4-channel memory bandwidth for many workloads, making it a more specialized part. But still awesome…This thing is an incredible beast, I’m glad I got it.” While I have the FreeBSD vs. Linux benchmarks from a few days ago, it looks like now on my ever growing TODO list will be re-trying out the newest DragonFlyBSD daily snapshot for seeing how the performance compares in the mix. Stay tuned for the numbers that should be in the next day or two. ##Beastie Bits X11 on really small devices mandoc-1.14.4 released The pfSense Book is now available to everyone MWL: Burn it down! Burn it all down! 
Configuring OpenBSD: System and user config files for a more pleasant laptop FreeBSD Security Advisory: Resource exhaustion in TCP reassembly OpenBSD Foundation gets first 2018 Iridium donation New ZFS commit solves issue a few users reported in the feedback segment Project Trident should have a beta release by the end of next week Reminder about Stockholm BUG: September 5, 17:30-22:00 BSD-PL User Group: September 13, 18:30-21:00 Tarsnap ##Feedback/Questions Malcom - Having different routes per interface Bostjan - ZFS and integrity of data Michael - Suggestion for Monitoring Barry - Feedback Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv
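One practical detail from the pf-badhost notes above: hosts behind NAT still need to reach their gateway and DNS before the block rules kick in. A minimal pf.conf sketch of that idea follows; the table name, the file path, and the 192.168.0.0/16 range are assumptions, so match them to whatever your pf-badhost installation actually writes out.
```
# /etc/pf.conf fragment - assumes pf-badhost maintains a table loaded from a file
table <badhosts> persist file "/etc/pf-badhost.txt"

# Keep the local subnet reachable even if a blocklist happens to include private ranges
pass quick from 192.168.0.0/16 to any
pass quick from any to 192.168.0.0/16

# Drop traffic to and from the listed bad hosts
block in quick from <badhosts>
block out quick to <badhosts>
```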
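The Morrowind walkthrough quoted above also collapses into one short shell session; the GOG installer filename below is a placeholder, and the commands otherwise follow the steps as listed.
```
# As root: install the OpenMW engine recreation and innoextract
pkg_add openmw innoextract

# As your user: unpack the GOG installer into its own directory
mkdir morrowind && cd morrowind
cp ~/Downloads/setup_tes_morrowind_*.exe .   # placeholder name for the GOG installer
innoextract setup_tes_morrowind_*.exe

# Point the wizard at the extracted "app/Data Files" directory, then play
openmw-wizard
openmw-launcher
```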

BSD Now
Episode 243: Understanding The Scheduler | BSD Now 243

BSD Now

Play Episode Listen Later Apr 25, 2018 85:24


OpenBSD 6.3 and DragonflyBSD 5.2 are released, bug fix for disappearing files in OpenZFS on Linux (and only Linux), understanding the FreeBSD CPU scheduler, NetBSD on RPI3, thoughts on being a committer for 20 years, and 5 reasons to use FreeBSD in 2018. Headlines OpenBSD 6.3 released Punctual as ever, OpenBSD 6.3 has been releases with the following features/changes: Improved HW support, including: SMP support on OpenBSD/arm64 platforms vmm/vmd improvements: IEEE 802.11 wireless stack improvements Generic network stack improvements Installer improvements Routing daemons and other userland network improvements Security improvements dhclient(8) improvements Assorted improvements OpenSMTPD 6.0.4 OpenSSH 7.7 LibreSSL 2.7.2 DragonFlyBSD 5.2 released Big-ticket items Meltdown and Spectre mitigation support Meltdown isolation and spectre mitigation support added. Meltdown mitigation is automatically enabled for all Intel cpus. Spectre mitigation must be enabled manually via sysctl if desired, using sysctls machdep.spectremitigation and machdep.meltdownmitigation. HAMMER2 H2 has received a very large number of bug fixes and performance improvements. We can now recommend H2 as the default root filesystem in non-clustered mode. Clustered support is not yet available. ipfw Updates Implement state based "redirect", i.e. without using libalias. ipfw now supports all possible ICMP types. Fix ICMPMAXTYPE assumptions (now 40 as of this release). Improved graphics support The drm/i915 kernel driver has been updated to support Intel Coffeelake GPUs Add 24-bit pixel format support to the EFI frame buffer code. Significantly improve fbio support for the "scfb" XOrg driver. This allows EFI frame buffers to be used by X in situations where we do not otherwise support the GPU. Partly implement the FBIOBLANK ioctl for display powersaving. Syscons waits for drm modesetting at appropriate places, avoiding races. + For more details, check out the “All changes since DragonFly 5.0” section. ZFS on Linux bug causes files to disappear A bug in ZoL 0.7.7 caused 0.7.8 to be released just 3 days after the release The bug only impacts Linux, the change that caused the problem was not upstreamed yet, so does not impact ZFS on illumos, FreeBSD, OS X, or Windows The bug can cause files being copied into a directory to not be properly linked to the directory, so they will no longer be listed in the contents of the directory ZoL developers are working on a tool to allow you to recover the data, since no data was actually lost, the files were just not properly registered as part of the directory The bug was introduced in a commit made in February, that attempted to improve performance of datasets created with the case insensitivity option. In an effort to improve performance, they introduced a limit to cap to give up (return ENOSPC) if growing the directory ZAP failed twice. The ZAP is the key-value pair data structure that contains metadata for a directory, including a hash table of the files that are in a directory. When a directory has a large number of files, the ZAP is converted to a FatZAP, and additional space may need to be allocated as additional files are added. Commit cc63068 caused ENOSPC error when copy a large amount of files between two directories. The reason is that the patch limits zap leaf expansion to 2 retries, and return ENOSPC when failed. Finding the root cause of this issue was somewhat hampered by the fact that many people were not able to reproduce the issue. 
It turns out this was caused by an entirely unrelated change to GNU coreutils. On later versions of GNU Coreutils, the files were returned in a sorted order, resulting in them hitting different buckets in the hash table, and not tripping the retry limit Tools like rsync were unaffected, because they always sort the files before copying If you did not see any ENOSPC errors, you were likely not impacted The intent for limiting retries is to prevent pointlessly growing table to max size when adding a block full of entries with same name in different case in mixed mode. However, it turns out we cannot use any limit on the retry. When we copy files from one directory in readdir order, we are copying in hash order, one leaf block at a time. Which means that if the leaf block in source directory has expanded 6 times, and you copy those entries in that block, by the time you need to expand the leaf in destination directory, you need to expand it 6 times in one go. So any limit on the retry will result in error where it shouldn't. Recommendations for Users from Ryan Yao: The regression makes it so that creating a new file could fail with ENOSPC after which files created in that directory could become orphaned. Existing files seem okay, but I have yet to confirm that myself and I cannot speak for what others know. It is incredibly difficult to reproduce on systems running coreutils 8.23 or later. So far, reports have only come from people using coreutils 8.22 or older. The directory size actually gets incremented for each orphaned file, which makes it wrong after orphan files happen. We will likely have some way to recover the orphaned files (like ext4’s lost+found) and fix the directory sizes in the very near future. Snapshots of the damaged datasets are problematic though. Until we have a subcommand to fix it (not including the snapshots, which we would have to list), the damage can be removed from a system that has it either by rolling back to a snapshot before it happened or creating a new dataset with 0.7.6 (or another release other than 0.7.7), moving everything to the new dataset and destroying the old. That will restore things to pristine condition. It should also be possible to check for pools that are affected, but I have yet to finish my analysis to be certain that no false negatives occur when checking, so I will avoid saying how for now. Writes to existing files cannot trigger this bug, only adding new files to a directory in bulk News Roundup des@’s thoughts on being a FreeBSD committer for 20 years Yesterday was the twentieth anniversary of my FreeBSD commit bit, and tomorrow will be the twentieth anniversary of my first commit. I figured I’d split the difference and write a few words about it today. My level of engagement with the FreeBSD project has varied greatly over the twenty years I’ve been a committer. There have been times when I worked on it full-time, and times when I did not touch it for months. The last few years, health issues and life events have consumed my time and sapped my energy, and my contributions have come in bursts. Commit statistics do not tell the whole story, though: even when not working on FreeBSD directly, I have worked on side projects which, like OpenPAM, may one day find their way into FreeBSD. My contributions have not been limited to code. 
I was the project’s first Bugmeister; I’ve served on the Security Team for a long time, and have been both Security Officer and Deputy Security Officer; I managed the last four Core Team elections and am doing so again this year. In return, the project has taught me much about programming and software engineering. It taught me code hygiene and the importance of clarity over cleverness; it taught me the ins and outs of revision control; it taught me the importance of good documentation, and how to write it; and it taught me good release engineering practices. Last but not least, it has provided me with the opportunity to work with some of the best people in the field. I have the privilege today to count several of them among my friends. For better or worse, the FreeBSD project has shaped my career and my life. It set me on the path to information security in general and IAA in particular, and opened many a door for me. I would not be where I am now without it. I won’t pretend to be able to tell the future. I don’t know how long I will remain active in the FreeBSD project and community. It could be another twenty years; or it could be ten, or five, or less. All I know is that FreeBSD and I still have things to teach each other, and I don’t intend to call it quits any time soon. iXsystems unveils new TrueNAS M-Series Unified Storage Line San Jose, Calif., April 10, 2018 — iXsystems, the leader in Enterprise Open Source servers and software-defined storage, announced the TrueNAS M40 and M50 as the newest high-performance models in its hybrid, unified storage product line. The TrueNAS M-Series harnesses NVMe and NVDIMM to bring all-flash array performance to the award-winning TrueNAS hybrid arrays. It also includes the Intel® Xeon® Scalable Family of Processors and supports up to 100GbE and 32Gb Fibre Channel networking. Sitting between the all-flash TrueNAS Z50 and the hybrid TrueNAS X-Series in the product line, the TrueNAS M-Series delivers up to 10 Petabytes of highly-available and flash-powered network attached storage and rounds out a comprehensive product set that has a capacity and performance option for every storage budget. Designed for On-Premises & Enterprise Cloud Environments As a unified file, block, and object sharing solution, TrueNAS can meet the needs of file serving, backup, virtualization, media production, and private cloud users thanks to its support for the SMB, NFS, AFP, iSCSI, Fibre Channel, and S3 protocols. At the heart of the TrueNAS M-Series is a custom 4U, dual-controller head unit that supports up to 24 3.5” drives and comes in two models, the M40 and M50, for maximum flexibility and scalability. The TrueNAS M40 uses NVDIMMs for write cache, SSDs for read cache, and up to two external 60-bay expansion shelves that unlock up to 2PB in capacity. The TrueNAS M50 uses NVDIMMs for write caching, NVMe drives for read caching, and up to twelve external 60-bay expansion shelves to scale upwards of 10PB. The dual-controller design provides high-availability failover and non-disruptive upgrades for mission-critical enterprise environments. By design, the TrueNAS M-Series unleashes cutting-edge persistent memory technology for demanding performance and capacity workloads, enabling businesses to accelerate enterprise applications and deploy enterprise private clouds that are twice the capacity of previous TrueNAS models. 
It also supports replication to the Amazon S3, BackBlaze B2, Google Cloud, and Microsoft Azure cloud platforms and can deliver an object store using the ubiquitous S3 object storage protocol at a fraction of the cost of the public cloud. Fast As a true enterprise storage platform, the TrueNAS M50 supports very demanding performance workloads with up to four active 100GbE ports, 3TB of RAM, 32GB of NVDIMM write cache and up to 15TB of NVMe flash read cache. The TrueNAS M40 and M50 include up to 24/7 and global next-business-day support, putting IT at ease. The modular and tool-less design of the M-Series allows for easy, non-disruptive servicing and upgrading by end-users and support technicians for guaranteed uptime. TrueNAS has US-Based support provided by the engineering team that developed it, offering the rapid response that every enterprise needs. Award-Winning TrueNAS Features Enterprise: Perfectly suited for private clouds and enterprise workloads such as file sharing, backups, M&E, surveillance, and hosting virtual machines. Unified: Utilizes SMB, AFP, NFS for file storage, iSCSI, Fibre Channel and OpenStack Cinder for block storage, and S3-compatible APIs for object storage. Supports every common operating system, hypervisor, and application. Economical: Deploy an enterprise private cloud and reduce storage TCO by 70% over AWS with built-in enterprise-class features such as in-line compression, deduplication, clones, and thin-provisioning. Safe: The OpenZFS file system ensures data integrity with best-in-class replication and snapshotting. Customers can replicate data to the rest of the iXsystems storage lineup and to the public cloud. Reliable: High Availability option with dual hot-swappable controllers for continuous data availability and 99.999% uptime. Familiar: Provision and manage storage with the same simple and powerful WebUI and REST APIs used in all iXsystems storage products, as well as iXsystems’ FreeNAS Software. Certified: TrueNAS has passed the Citrix Ready, VMware Ready, and Veeam Ready certifications, reducing the risk of deploying a virtualized infrastructure. Open: By using industry-standard sharing protocols, the OpenZFS Open Source enterprise file system and FreeNAS, the world’s #1 Open Source storage operating system (and also engineered by iXsystems), TrueNAS is the most open enterprise storage solution on the market. Availability The TrueNAS M40 and M50 will be generally available in April 2018 through the iXsystems global channel partner network. The TrueNAS M-Series starts at under $20,000 USD and can be easily expanded using a linear “per terabyte” pricing model. With typical compression, a Petabtye can be stored for under $100,000 USD. TrueNAS comes with an all-inclusive software suite that provides NFS, Windows SMB, iSCSI, snapshots, clones and replication. For more information, visit www.ixsystems.com/TrueNAS TrueNAS M-Series What's New Video Understanding and tuning the FreeBSD Scheduler ``` Occasionally I noticed that the system would not quickly process the tasks i need done, but instead prefer other, longrunning tasks. I figured it must be related to the scheduler, and decided it hates me. 
A closer look shows the behaviour as follows (single CPU): Lets run an I/O-active task, e.g, postgres VACUUM that would continuously read from big files (while doing compute as well [1]): pool alloc free read write read write cache - - - - - - ada1s4 7.08G 10.9G 1.58K 0 12.9M 0 Now start an endless loop: while true; do :; done And the effect is: pool alloc free read write read write cache - - - - - - ada1s4 7.08G 10.9G 9 0 76.8K 0 The VACUUM gets almost stuck! This figures with WCPU in "top": PID USERNAME PRI NICE SIZE RES STATE TIME WCPU COMMAND 85583 root 99 0 7044K 1944K RUN 1:06 92.21% bash 53005 pgsql 52 0 620M 91856K RUN 5:47 0.50% postgres Hacking on kern.sched.quantum makes it quite a bit better: sysctl kern.sched.quantum=1 kern.sched.quantum: 94488 -> 7874 pool alloc free read write read write cache - - - - - - ada1s4 7.08G 10.9G 395 0 3.12M 0 PID USERNAME PRI NICE SIZE RES STATE TIME WCPU COMMAND 85583 root 94 0 7044K 1944K RUN 4:13 70.80% bash 53005 pgsql 52 0 276M 91856K RUN 5:52 11.83% postgres Now, as usual, the "root-cause" questions arise: What exactly does this "quantum"? Is this solution a workaround, i.e. actually something else is wrong, and has it tradeoff in other situations? Or otherwise, why is such a default value chosen, which appears to be ill-deceived? The docs for the quantum parameter are a bit unsatisfying - they say its the max num of ticks a process gets - and what happens when they're exhausted? If by default the endless loop is actually allowed to continue running for 94k ticks (or 94ms, more likely) uninterrupted, then that explains the perceived behaviour - buts thats certainly not what a scheduler should do when other procs are ready to run. 11.1-RELEASE-p7, kern.hz=200. Switching tickless mode on or off does not influence the matter. Starting the endless loop with "nice" does not influence the matter. [1] A pure-I/O job without compute load, like "dd", does not show this behaviour. Also, when other tasks are running, the unjust behaviour is not so stongly pronounced. ``` aarch64 support added I have committed about adding initial support for aarch64. booting log on RaspberryPI3: ``` boot NetBSD/evbarm (aarch64) Drop to EL1...OK Creating VA=PA tables Creating KSEG tables Creating KVA=PA tables Creating devmap tables MMU Enable...OK VSTART = ffffffc000001ff4 FDT devmap cpufunc bootstrap consinit ok uboot: args 0x3ab46000, 0, 0, 0 NetBSD/evbarm (fdt) booting ... FDT /memory [0] @ 0x0 size 0x3b000000 MEM: add 0-3b000000 MEM: res 0-1000 MEM: res 3ab46000-3ab4a000 Usable memory: 1000 - 3ab45fff 3ab4a000 - 3affffff initarm: kernel phys start 1000000 end 17bd000 MEM: res 1000000-17bd000 bootargs: root=axe0 1000 - ffffff 17bd000 - 3ab45fff 3ab4a000 - 3affffff ------------------------------------------ kern_vtopdiff = 0xffffffbfff000000 physical_start = 0x0000000000001000 kernel_start_phys = 0x0000000001000000 kernel_end_phys = 0x00000000017bd000 physical_end = 0x000000003ab45000 VM_MIN_KERNEL_ADDRESS = 0xffffffc000000000 kernel_start_l2 = 0xffffffc000000000 kernel_start = 0xffffffc000000000 kernel_end = 0xffffffc0007bd000 kernel_end_l2 = 0xffffffc000800000 (kernel va area) (devmap va area) VM_MAX_KERNEL_ADDRESS = 0xffffffffffe00000 ------------------------------------------ Copyright (c) 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018 The NetBSD Foundation, Inc. All rights reserved. Copyright (c) 1982, 1986, 1989, 1991, 1993 The Regents of the University of California. 
All rights reserved. NetBSD 8.99.14 (RPI64) #11: Fri Mar 30 12:34:19 JST 2018 ryo@moveq:/usr/home/ryo/tmp/netbsd-src-ryo-wip/sys/arch/evbarm/compile/RPI64 total memory = 936 MB avail memory = 877 MB … Starting local daemons:. Updating motd. Starting sshd. Starting inetd. Starting cron. The following components reported failures: /etc/rc.d/swap2 See /var/run/rc.log for more information. Fri Mar 30 12:35:31 JST 2018 NetBSD/evbarm (rpi3) (console) login: root Last login: Fri Mar 30 12:30:24 2018 on console rpi3# uname -ap NetBSD rpi3 8.99.14 NetBSD 8.99.14 (RPI64) #11: Fri Mar 30 12:34:19 JST 2018 ryo@moveq:/usr/home/ryo/tmp/netbsd-src-ryo-wip/sys/arch/evbarm/compile/RPI64 evbarm aarch64 rpi3# ``` Now, multiuser mode works stably on fdt based boards (RPI3,SUNXI,TEGRA). But there are still some problems, more time is required for release. also SMP is not yet. See sys/arch/aarch64/aarch64/TODO for more detail. Especially the problems around TLS of rtld, and C++ stack unwindings are too difficult for me to solve, I give up and need someone's help (^o^)/ Since C++ doesn't work, ATF also doesn't work. If the ATF works, it will clarify more issues. sys/arch/evbarm64 is gone and integrated into sys/arch/evbarm. One evbarm/conf/GENERIC64 kernel binary supports all fdt (bcm2837,sunxi,tegra) based boards. While on 32bit, sys/arch/evbarm/conf/GENERIC will support all fdt based boards...but doesn't work yet. (WIP) My deepest appreciation goes to Tohru Nishimura (nisimura@) whose writes vector handlers, context switchings, and so on. and his comments and suggestions were innumerably valuable. I would also like to thank Nick Hudson (skrll@) and Jared McNeill (jmcneill@) whose added support FDT and integrated into evbarm. Finally, I would like to thank Matt Thomas (matt@) whose commited aarch64 toolchains and preliminary support for aarch64. Beastie Bits 5 Reasons to Use FreeBSD in 2018 Rewriting Intel gigabit network driver in Rust Recruiting to make Elastic Search on FreeBSD better Windows Server 2019 Preview, in bhyve on FreeBSD “SSH Mastery, 2nd ed” in hardcover Feedback/Questions Jason - ZFS Transfer option Luis - ZFS Pools ClonOS Michael - Tech Conferences anonymous - BSD trash on removable drives Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv
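The scheduler discussion above experiments with kern.sched.quantum interactively; a small sketch of how you might reproduce and then persist such a change on FreeBSD is below. The value 7874 is simply the one the poster's kernel clamped to, not a recommendation.
```
# Inspect the current quantum (the time slice a runnable thread is given)
sysctl kern.sched.quantum

# Try a much smaller quantum while an I/O-heavy job and a CPU-bound loop compete
sysctl kern.sched.quantum=1        # the kernel clamps this to its minimum (7874 in the post)

# Watch the effect on the competing processes
top -P
zpool iostat 1                     # assumes the I/O job runs against a ZFS pool

# If the behaviour is what you want, persist it across reboots
echo 'kern.sched.quantum=7874' >> /etc/sysctl.conf
```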

Virtually Speaking Podcast
Episode 61: Storage Networking Futures

Virtually Speaking Podcast

Play Episode Listen Later Nov 3, 2017 41:40


The benefits of NVMe over SATA SSDs are pretty compelling. Not only do NVMe drives offer more bandwidth, but the updated interface and protocols improve latency and better handle heavy workloads. Now, as NVMe approaches price parity with SATA SSDs, organizations are making the switch, but what about the storage fabric? Are we just shifting the bottleneck to the network? This week we bring in Dr J Metz to discuss storage networking technologies, protocols, and standards. J shares his thoughts on what's near the end of the road and what is gaining momentum. Dr J Metz is a Data Center Technologist in the Office of the CTO at Cisco, focusing on storage. Topics discussed: Ethernet, Fibre Channel, PCI Express, Omni-Path (a new Intel high-performance communications architecture), and InfiniBand; NVMe's impact on datacenter design; technology transitions (the game of whack-a-mole); is tape going away? Links mentioned in this episode: Dr J Metz's Blog, Episode 46: NVM Express, NVM Express and VMware, SNIA, The Virtually Speaking Podcast. The Virtually Speaking Podcast is a weekly technical podcast dedicated to discussing VMware topics related to storage and availability. Each week Pete Flecha and John Nicholson bring in various subject matter experts from VMware and within the industry to discuss their respective areas of expertise. If you're new to the Virtually Speaking Podcast, check out all episodes on vSpeakingPodcast.com.

IT Manager Podcast (DE, german) - IT-Begriffe einfach und verständlich erklärt

What does virtualization mean, and how does it relate to VMware? Daniel Ascencao explains this and more in our new episode today. Enjoy listening!   More information on the VMware fundamentals training: In this 2-day VMware fundamentals course, IT staff build up and extend their VMware know-how. The topics covered over the two days include: - VMware product portfolio (T) - vSphere: features, architecture, licensing model (T) - ESXi: architecture, installation, basic configuration with the vSphere Client (P) - ESXi networking: internal network structure, virtual standard switches, port groups, physical uplinks (P) - ESX storage: overview of storage technologies, Fibre Channel and iSCSI, SAN, vSphere VMFS datastores (T + P) - Virtual machines (VMs): virtual hardware, creating VMs, installing guest operating systems, VMware Tools (P) - vCenter Server (VC): architecture, installation, VC database, vCenter inventories, policies, vCenter Appliance, Web Client (P) - VM management: templates and automated provisioning of VMs, cloning, guest OS customization (P) - Access control: vSphere security model & user authentication, MS Active Directory (P) - Management: vMotion migration, VMware EVC (Enhanced vMotion Compatibility) (P) - vSphere DRS (Distributed Resource Scheduler) and Storage DRS: how they work, load-balancing cluster setup, automation levels, best practices (P) - VMware HA (High Availability): failover clusters (P) - Backup: backup strategies (T) - VMware Update Manager (P) - VMware vShield, Syslog Collector, Dump Collector (P). 447 euros net for ITleague partners (otherwise 547 euros net). ITleague partners interested in the event can register by email to veranstaltungen@itleague.de. Otherwise please use the online registration: https://itleague.javis.de/onlineregistration/2

Björeman // Melin
Episode 90: Supercomputers with built-in sofas

Björeman // Melin

Play Episode Listen Later Aug 26, 2017 91:09


Fredrik sounds a bit choppy at times; that is entirely his own fault. Jocke quotes the wrong person; that is entirely his own fault. 0: Örnsköldsvik, supercomputers, fibre and SAN 21:47: Datormagazin has a BBS again! 30:17: IKEA Trådfri 35:58: Apple has changed the icon for Maps 36:25: Possible new CMSes for Macpro 44:11: An important email, and ways to sponsor the podcast. If you don't want to use Patreon but still want to donate money, get in touch with us for Swish details 47:32: Fredrik has finally seen Mr Robot! Spoiler warning from 48:49. 52:59: Fredrik listens to talk about blockchains and sees the potential for bubbles 1:01:50: Discord kicks out nazis, Trump is terrible 1:10:19: Chris Lattner moves to Google Brain, and app sizes are ridiculous 1:15:05: Jocke tries to build a new web cluster 1:21:29: Jocke reviews his new USB hub. Links: Nationellt superdatorcentrum (the Swedish national supercomputer centre) SGI Origin 3200 Silicon Graphics Cray Seymour Cray "The first computer worth criticizing" appears to be a quote from Alan Kay Be and BeOS Infiniband Fibre Channel The R12000 processor Ernie Bayonne Jocke's supercomputer loot. Craylink - also known as NUMAlink Promise Thunderbolt to Fibre Channel adapter Ali - AliExpress Datormagazin BBS is back! Fabbes BBS SUGA A590 Terrible Fire The Vampire accelerators FPGA Plipbox SD2IEC - the 1541 emulator Jocke ordered from Poland. Satandisk IKEA Trådfri The article on MacRumors AAPL's newsletter Apple has changed the icon for Maps Grav Jekyll Bloxsom Sourceforge - where some people used to keep their code Ilir - a thousand thanks, dear Oneplus 5 sponsor You can support the podcast on Patreon, but only if you want to Mr Robot The Incomparable episode about Mr Robot We talked a little about blockchains in episode 67 Discord kicks out nazis Cloudflare too Tim Cook's letter to the employees The video clip where Anderson Cooper calmly takes Trump apart Chris Lattner starts working at Google Brain App sizes are still ridiculous Code is a depressingly large part of the Facebook app's file size Acorn Alpine Linux PHP-FPM Nginx WP Super Cache Varnish Docker OpenBSD Ballmer peak Jocke reviews his USB hub Henge Dock Jocke's USB graphics card. Full episode information is available here: https://www.bjoremanmelin.se/podcast/avsnitt-90-superdatorer-med-inbyggda-soffor.html.

BSD Now
195: I don't WannaCry

BSD Now

Play Episode Listen Later May 24, 2017 75:15


A pledge of love to OpenBSD, combating ransomware like WannaCry with OpenZFS, and using pfSense to maximize your non-gigabit Internet connection. This episode was brought to you by Headlines: ino64 project committed to FreeBSD 12-CURRENT (https://svnweb.freebsd.org/base?view=revision&revision=318736) The ino64 project has been completed and merged into FreeBSD 12-CURRENT. Extend the ino_t, dev_t, nlink_t types to 64-bit ints. Modify struct dirent layout to add d_off, increase the size of d_fileno to 64-bits, increase the size of d_namlen to 16-bits, and change the required alignment. Increase struct statfs f_mntfromname[] and f_mntonname[] array length MNAMELEN to 1024. This means the length of a mount point (MNAMELEN) has been increased from 88 bytes to 1024 bytes. This allows longer ZFS dataset names and more nesting, and generally improves the usefulness of nested jails. It also allows more than 4 billion files to be stored in a single file system (both UFS and ZFS). It also deals with a number of NFS problems, such as Amazon's EFS (cloud NFS), which uses 64-bit IDs even with small numbers of files. ABI breakage is mitigated by providing compatibility using versioned symbols, ingenious use of the existing padding in structures, and by employing other tricks. Unfortunately, not everything can be fixed, especially outside the base system. For instance, third-party APIs which pass struct stat around are broken in backward and forward incompatible ways. A bug in poudriere that may cause some packages to not rebuild is being fixed. Many packages like perl will need to be rebuilt after this change. Update note: strictly follow the instructions in UPDATING. Build and install the new kernel with the COMPAT_FREEBSD11 option enabled, then reboot, and only then install new world. So you need the new GENERIC kernel with the COMPAT_FREEBSD11 option, so that your old userland will work with the new kernel, and you need to build, install, and reboot onto the new kernel before attempting to install world. The usual process of installing both and then rebooting will NOT WORK. Credits: The 64-bit inode project, also known as ino64, started life many years ago as a project by Gleb Kurtsou (gleb). Kirk McKusick (mckusick) then picked up and updated the patch, and acted as a flag-waver. Feedback, suggestions, and discussions were carried by Ed Maste (emaste), John Baldwin (jhb), Jilles Tjoelker (jilles), and Rick Macklem (rmacklem). Kris Moore (kmoore) performed an initial ports investigation followed by an exp-run by Antoine Brodin (antoine). Essential and all-embracing testing was done by Peter Holm (pho). The heavy lifting of coordinating all these efforts and bringing the project to completion was done by Konstantin Belousov (kib). Sponsored by: The FreeBSD Foundation (emaste, kib) Why I love OpenBSD (https://medium.com/@h3artbl33d/why-i-love-openbsd-ca760cf53941) Jeroen Janssen writes: I do love open source software. Oh boy, I really do love open source software. It's extendable, auditable, and customizable. What's not to love? I'm astonished by the idea that tens, hundreds, and sometimes even thousands of enthusiastic, passionate developers collaborate on an idea. Together, they make the world a better place, bit by bit. And this leads me to one of my favorite open source projects: the 22-year-old OpenBSD operating system.
The origins of my love affair with OpenBSD From Linux to *BSD The advantages of OpenBSD It's extremely secure It's well documented It's open source It's neat and clean My take on OpenBSD Combating WannaCry and Other Ransomware with OpenZFS Snapshots (https://www.ixsystems.com/blog/combating-ransomware/) Ransomware attacks that hold your data hostage using unauthorized data encryption are spreading rapidly and are particularly nefarious because they do not require any special access privileges to your data. A ransomware attack may be launched via a sophisticated software exploit as was the case with the recent “WannaCry” ransomware, but there is nothing stopping you from downloading and executing a malicious program that encrypts every file you have access to. If you fail to pay the ransom, the result will be indistinguishable from your simply deleting every file on your system. To make matters worse, ransomware authors are expanding their attacks to include just about any storage you have access to. The list is long, but includes network shares, Cloud services like DropBox, and even “shadow copies” of data that allow you to open previous versions of files. To make matters even worse, there is little that your operating system can do to prevent you or a program you run from encrypting files with ransomware just as it can't prevent you from deleting the files you own. Frequent backups are touted as one of the few effective strategies for recovering from ransomware attacks but it is critical that any backup be isolated from the attack to be immune from the same attack. Simply copying your files to a mounted disk on your computer or in the Cloud makes the backup vulnerable to infection by virtue of the fact that you are backing up using your regular permissions. If you can write to it, the ransomware can encrypt it. Like medical workers wearing hazmat suits for isolation when combating an epidemic, you need to isolate your backups from ransomware. OpenZFS snapshots to the rescue OpenZFS is the powerful file system at the heart of every storage system that iXsystems sells and of its many features, snapshots can provide fast and effective recovery from ransomware attacks at both the individual user and enterprise level as I talked about in 2015. As a copy-on-write file system, OpenZFS provides efficient and consistent snapshots of your data at any given point in time. Each snapshot only includes the precise delta of changes between any two points in time and can be cloned to provide writable copies of any previous state without losing the original copy. Snapshots also provide the basis of OpenZFS replication or backing up of your data to local and remote systems. Because an OpenZFS snapshot takes place at the block level of the file system, it is immune to any file-level encryption by ransomware that occurs over it. A carefully-planned snapshot, replication, retention, and restoration strategy can provide the low-level isolation you need to enable your storage infrastructure to quickly recover from ransomware attacks. OpenZFS snapshots in practice While OpenZFS is available on a number of desktop operating systems such as TrueOS and macOS, the most effective way to bring the benefits of OpenZFS snapshots to the largest number of users is with a network of iXsystems TrueNAS, FreeNAS Certified and FreeNAS Mini unified NAS and SAN storage systems.
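As a concrete illustration of the snapshot-and-rollback recovery just described, here is a minimal sketch using plain OpenZFS commands (the pool and dataset names are hypothetical):

# Take block-level snapshots before (or on a schedule during) normal use
zfs snapshot tank/home@2017-05-24-hourly
# After a ransomware hit, see what is available and roll the dataset back
zfs list -t snapshot -r tank/home
zfs rollback -r tank/home@2017-05-24-hourly    # -r discards any newer snapshots
# Or clone the snapshot to inspect the old state without touching the live dataset
zfs clone tank/home@2017-05-24-hourly tank/home-recovered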
All of these can provide OpenZFS-backed SMB, NFS, AFP, and iSCSI file and block storage to the smallest workgroups up through the largest enterprises and TrueNAS offers available Fibre Channel for enterprise deployments. By sharing your data to your users using these file and block protocols, you can provide them with a storage infrastructure that can quickly recover from any ransomware attack thrown at it. To mitigate ransomware attacks against individual workstations, TrueNAS and FreeNAS can provide snapshotted storage to your VDI or virtualization solution of choice. Best of all, every iXsystems TrueNAS, FreeNAS Certified, and FreeNAS Mini system includes a consistent user interface and the ability to replicate between one another. This means that any topology of individual offices and campuses can exchange backup data to quickly mitigate ransomware attacks on your organization at all levels. Join us for a free webinar (http://www.onlinemeetingnow.com/register/?id=uegudsbc75) with iXsystems Co-Founder Matt Olander and learn more about why businesses everywhere are replacing their proprietary storage platforms with TrueNAS then email us at info@ixsystems.com or call 1-855-GREP-4-IX (1-855-473-7449), or 1-408-493-4100 (outside the US) to discuss your storage needs with one of our solutions architects. Interview - Michael W. Lucas - mwlucas@michaelwlucas.com (mailto:mwlucas@michaelwlucas.com) / @twitter (https://twitter.com/mwlauthor) Books, conferences, and how these two combine + BR: Welcome back. Tell us what you've been up to since the last time we interviewed you regarding books and such. + AJ: Tell us a little bit about relayd and what it can do. + BR: What other books do you have in the pipeline? + AJ: What are your criteria that qualifies a topic for a mastery book? + BR: Can you tell us a little bit about these writing workshops that you attend and what happens there? + AJ: Without spoiling too much: How did you come up with the idea for git commit murder? + BR: Speaking of BSDCan, can you tell the first timers about what to expect in the http://www.bsdcan.org/2017/schedule/events/890.en.html (Newcomers orientation and mentorship) session on Thursday? + AJ: Tell us about the new WIP session at BSDCan. Who had the idea and how much input did you get thus far? + BR: Have you ever thought about branching off into a new genre like children's books or medieval fantasy novels? + AJ: Is there anything else before we let you go? News Roundup Using LLDP on FreeBSD (https://tetragir.com/freebsd/networking/using-lldp-on-freebsd.html) LLDP, or Link Layer Discovery Protocol allows system administrators to easily map the network, eliminating the need to physically run the cables in a rack. LLDP is a protocol used to send and receive information about a neighboring device connected directly to a networking interface. It is similar to Cisco's CDP, Foundry's FDP, Nortel's SONMP, etc. It is a stateless protocol, meaning that an LLDP-enabled device sends advertisements even if the other side cannot do anything with it. In this guide the installation and configuration of the LLDP daemon on FreeBSD as well as on a Cisco switch will be introduced. If you are already familiar with Cisco's CDP, LLDP won't surprise you. It is built for the same purpose: to exchange device information between peers on a network. While CDP is a proprietary solution and can be used only on Cisco devices, LLDP is a standard: IEEE 802.3AB. 
Therefore it is implemented on many types of devices, such as switches, routers, various desktop operating systems, etc. LLDP helps a great deal in mapping the network topology, without spending hours in cabling cabinets to figure out which device is connected with which switchport. If LLDP is running on both the networking device and the server, it can show which port is connected where. Besides physical interfaces, LLDP can be used to exchange a lot more information, such as IP Address, hostname, etc. In order to use LLDP on FreeBSD, net-mgmt/lldpd has to be installed. It can be installed from ports using portmaster: #portmaster net-mgmt/lldpd Or from packages: #pkg install net-mgmt/lldpd By default lldpd sends and receives all the information it can gather , so it is advisable to limit what we will communicate with the neighboring device. The configuration file for lldpd is basically a list of commands as it is passed to lldpcli. Create a file named lldpd.conf under /usr/local/etc/ The following configuration gives an example of how lldpd can be configured. For a full list of options, see %man lldpcli To check what is configured locally, run #lldpcli show chassis detail To see the neighbors run #lldpcli show neighbors details Check out the rest of the article about enabling LLDP on a Cisco switch experiments with prepledge (http://www.tedunangst.com/flak/post/experiments-with-prepledge) Ted Unangst takes a crack at a system similar to the one being designed for Capsicum, Oblivious Sandboxing (See the presentation at BSDCan), where the application doesn't even know it is in the sandbox MP3 is officially dead, so I figure I should listen to my collection one last time before it vanishes entirely. The provenance of some of these files is a little suspect however, and since I know one shouldn't open files from strangers, I'd like to take some precautions against malicious malarkey. This would be a good use for pledge, perhaps, if we can get it working. At the same time, an occasional feature request for pledge is the ability to specify restrictions before running a program. Given some untrusted program, wrap its execution in a pledge like environment. There are other system call sandbox mechanisms that can do this (systrace was one), but pledge is quite deliberately designed not to support this. But maybe we can bend it to our will. Our pledge wrapper can't be an external program. This leaves us with the option of injecting the wrapper into the target program via LD_PRELOAD. Before main even runs, we'll initialize what needs initializing, then lock things down with a tight pledge set. Our eventual target will be ffplay, but hopefully the design will permit some flexibility and reuse. So the new code is injected to override the open syscall, and reads a list of files from an environment variable. Those files are opened and the path and file descriptor are put into a linked list, and then pledge is used to restrict further access to the file system. The replacement open call now searches just that linked list, returning the already opened file descriptors. So as long as your application only tries to open files that you have preopened, it can function without modification within the sandbox. Or at least that is the goal... 
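For the LD_PRELOAD approach just described, a hypothetical usage sketch; the wrapper filename and the environment variable are made-up names, since the article does not specify them:

# Build the preload shim and run ffplay inside it, listing the files it is allowed to pre-open
cc -fPIC -shared -o preopen.so preopen.c
env LD_PRELOAD=$PWD/preopen.so PREOPEN_FILES="one.mp3:two.mp3" ffplay one.mp3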
ffplay tries to dlopen() some things, and because of the way dlopen() works, it doesn't go via the libc open() wrapper, so it doesn't get overridden ffplay also tries to call a few ioctl's, not allowed After stubbing both of those out, it still doesn't work and it is just getting worse Ted switches to a new strategy, using ffmpeg to convert the .mp3 to a .wav file and then just cat it to /dev/audio A few more stubs for ffmpeg, including access(), and adding tty access to the list of pledges, and it finally works This point has been made from the early days, but I think this exercise reinforces it, that pledge works best with programs where you understand what the program is doing. A generic pledge wrapper isn't of much use because the program is going to do something unexpected and you're going to have a hard time wrangling it into submission. Software is too complex. What in the world is ffplay doing? Even if I were working with the source, how long would it take to rearrange the program into something that could be pledged? One can try using another program, but I would wager that as far as multiformat media players go, ffplay is actually on the lower end of the complexity spectrum. Most of the trouble comes from using SDL as an abstraction layer, which performs a bunch of console operations. On the flip side, all of this early init code is probably the right design. Once SDL finally gets its screen handle setup, we could apply pledge and sandbox the actual media decoder. That would be the right way to things. Is pledge too limiting? Perhaps, but that's what I want. I could have just kept adding permissions until ffplay had full access to my X socket, but what kind of sandbox is that? I don't want naughty MP3s scraping my screen and spying on my keystrokes. The sandbox I created had all the capabilities one needs to convert an MP3 to audible sound, but the tool I wanted to use wasn't designed to work in that environment. And in its defense, these were new post hoc requirements. Other programs, even sed, suffer from less than ideal pledge sets as well. The best summary might be to say that pledge is designed for tomorrow's programs, not yesterday's (and vice versa). There were a few things I could have done better. In particular, I gave up getting audio to work, even though there's a nice description of how to work with pledge in the sio_open manual. Alas, even going back and with a bit more effort I still haven't succeeded. The requirements to use libsndio are more permissive than I might prefer. How I Maximized the Speed of My Non-Gigabit Internet Connection (https://medium.com/speedtest-by-ookla/engineer-maximizes-internet-speed-story-c3ec0e86f37a) We have a new post from Brennen Smith, who is the Lead Systems Engineer at Ookla, the company that runs Speedtest.net, explaining how he used pfSense to maximize his internet connection I spend my time wrangling servers and internet infrastructure. My daily goals range from designing high performance applications supporting millions of users and testing the fastest internet connections in the world, to squeezing microseconds from our stack —so at home, I strive to make sure that my personal internet performance is running as fast as possible. I live in an area with a DOCSIS ISP that does not provide symmetrical gigabit internet — my download and upload speeds are not equal. 
Instead, I have an asymmetrical plan with 200 Mbps download and 10 Mbps upload — this nuance considerably impacted my network design because asymmetrical service can more easily lead to bufferbloat. We will cover bufferbloat in a later article, but in a nutshell, it's an issue that arises when an upstream network device's buffers are saturated during an upload. This causes immense network congestion, latency to rise above 2,000 ms., and overall poor quality of internet. The solution is to shape the outbound traffic to a speed just under the sending maximum of the upstream device, so that its buffers don't fill up. My ISP is notorious for having bufferbloat issues due to the low upload performance, and it's an issue prevalent even on their provided routers. They walk through a list of router devices you might consider, and what speeds they are capable of handling, but ultimately ended up using a generic low power x86 machine running pfSense 2.3 In my research and testing, I also evaluated IPCop, VyOS, OPNSense, Sophos UTM, RouterOS, OpenWRT x86, and Alpine Linux to serve as the base operating system, but none were as well supported and full featured as PFSense. The main setting to look at is the traffic shaping of uploads, to keep the pipe from getting saturated and having a large buffer build up in the modem and further upstream. This build up is what increases the latency of the connection As with any experiment, any conclusions need to be backed with data. To validate the network was performing smoothly under heavy load, I performed the following experiment: + Ran a ping6 against speedtest.net to measure latency. + Turned off QoS to simulate a “normal router”. + Started multiple simultaneous outbound TCP and UDP streams to saturate my outbound link. + Turned on QoS to the above settings and repeated steps 2 and 3. As you can see from the plot below, without QoS, my connection latency increased by ~1,235%. However with QoS enabled, the connection stayed stable during the upload and I wasn't able to determine a statistically significant delta. That's how I maximized the speed on my non-gigabit internet connection. What have you done with your network? FreeBSD on 11″ MacBook Air (https://www.geeklan.co.uk/?p=2214) Sevan Janiyan writes in his tech blog about his experiences running FreeBSD on an 11'' MacBook Air This tiny machine has been with me for a few years now, It has mostly run OS X though I have tried OpenBSD on it (https://www.geeklan.co.uk/?p=1283). Besides the screen resolution I'm still really happy with it, hardware wise. Software wise, not so much. I use an external disk containing a zpool with my data on it. Among this data are several source trees. CVS on a ZFS filesystem on OS X is painfully slow. I dislike that builds running inside Terminal.app are slow at the expense of a responsive UI. The system seems fragile, at the slightest push the machine will either hang or become unresponsive. Buggy serial drivers which do not implement the break signal and cause instability are frustrating. Last week whilst working on Rump kernel (http://rumpkernel.org/) builds I introduced some new build issues in the process of fixing others, I needed to pick up new changes from CVS by updating my copy of the source tree and run builds to test if issues were still present. I was let down on both counts, it took ages to update source and in the process of cross compiling a NetBSD/evbmips64-el release, the system locked hard. That was it, time to look what was possible elsewhere. 
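Going back to the bufferbloat experiment above, a rough way to reproduce it from any Unix shell, assuming access to some iperf3 server (the host names are placeholders):

# Watch latency while the uplink is saturated; repeat with the shaper off and on
ping -c 120 speedtest.net > ping-during-upload.txt &
iperf3 -c iperf.example.net -P 8 -t 60    # eight parallel TCP streams upstream
wait
# A large jump in round-trip times during the transfer points to bufferbloat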
While I have been using OS X for many years, I'm not tied to anything exclusive on it, maybe tweetbot, perhaps, but that's it. On the BSDnow podcast they've been covering changes coming in to TrueOS (formerly PC-BSD – a desktop focused distro based on FreeBSD), their experiments seemed interesting, the project now tracks FreeBSD-CURRENT, they've replaced rcng with OpenRC as the init system and it comes with a pre-configured desktop environment, using their own window manager (Lumina). Booting the USB flash image it made it to X11 without any issue. The dock has a widget which states the detected features, no wifi (Broadcom), sound card detected and screen resolution set to 1366×768. I planned to give it a try on the weekend. Friday, I made backups and wiped the system. TrueOS installed without issue, after a short while I had a working desktop, resuming from sleep worked out of the box. I didn't spend long testing TrueOS, switching out NetBSD-HEAD only to realise that I really need ZFS so while I was testing things out, might as well give stock FreeBSD 11-STABLE a try (TrueOS was based on -CURRENT). Turns out sleep doesn't work yet but sound does work out of the box and with a few invocations of pkg(8) I had xorg, dwm, firefox, CVS and virtuabox-ose installed from binary packages. VirtualBox seems to cause the system to panic (bug 219276) but I should be able to survive without my virtual machines over the next few days as I settle in. I'm considering ditching VirtualBox and converting the vdi files to raw images so that they can be written to a new zvol for use with bhyve. As my default keyboard layout is Dvorak, OS X set the EFI settings to this layout. The first time I installed FreeBSD 11-STABLE, I opted for full disk encryption but ran into this odd issue where on boot the keyboard layout was Dvorak and password was accepted, the system would boot and as it went to mount the various filesystems it would switch back to QWERTY. I tried entering my password with both layout but wasn't able to progress any further, no bug report yet as I haven't ruled myself out as the problem. Thunderbolt gigabit adapter –bge(4) (https://www.freebsd.org/cgi/man.cgi?query=bge) and DVI adapter both worked on FreeBSD though the gigabit adapter needs to be plugged in at boot to be detected. The trackpad bind to wsp(4) (https://www.freebsd.org/cgi/man.cgi?query=wsp), left, right and middle clicks are available through single, double and tripple finger tap. Sound card binds to snd_hda(4) (https://www.freebsd.org/cgi/man.cgi?query=snd_hda) and works out of the box. For wifi I'm using a urtw(4) (https://www.freebsd.org/cgi/man.cgi?query=urtw) Alfa adapter which is a bit on the large side but works very reliably. A copy of the dmesg (https://www.geeklan.co.uk/files/macbookair/freebsd-dmesg.txt) is here. Beastie Bits OPNsense - call-for-testing for SafeStack (https://forum.opnsense.org/index.php?topic=5200.0) BSD 4.4: cat (https://www.rewritinghistorycasts.com/screencasts/bsd-4.4:-cat) Continuous Unix commit history from 1970 until today (https://github.com/dspinellis/unix-history-repo) Update on Unix Architecture Evolution Diagrams (https://www.spinellis.gr/blog/20170510/) “Relayd and Httpd Mastery” is out! 
(https://blather.michaelwlucas.com/archives/2951) Triangle BSD User Group Meeting -- libxo (https://www.meetup.com/Triangle-BSD-Users-Group/events/240247251/) *** Feedback/Questions Carlos - ASUS Tinkerboard (http://dpaste.com/1GJHPNY#wrap) James - Firewall question (http://dpaste.com/0QCW933#wrap) Adam - ZFS books (http://dpaste.com/0GMG5M2#wrap) David - Managing zvols (http://dpaste.com/2GP8H1E#wrap) ***

On The Box | The TV Podcast
On The Box 06 - The Fibre Channel Keeps You Regular

On The Box | The TV Podcast

Play Episode Listen Later Apr 29, 2017 65:32


So the former Top Gear presenters want to take their show on the road, BBC's Atlantis was deemed a failure, Neil Patrick Harris will host Best Night Ever - the U.S. version of Saturday Night Takeaway, The Daily Show leaves Comedy Central and other stuff. All sorts of things happened this week so join us as we provide our own rather odd perspective on comedy panel shows and recent events in Televisionland. This week we've been watching Jonathan Strange & Mr. Norrell, Shark - BBC's recent nature documentary series, and Gogglebox. #OnTheBox #Television #TheGeekShow #News #Reviews #TV

BSD Now
186: The Fast And the Firewall: Tokyo Drift

BSD Now

Play Episode Listen Later Mar 22, 2017 174:07


This week on BSDNow, reports from AsiaBSDcon, TrueOS and FreeBSD news, Optimizing IllumOS Kernel, your questions and more. This episode was brought to you by Headlines AsiaBSDcon Reports and Reviews () AsiaBSDcon schedule (https://2017.asiabsdcon.org/program.html.en) Schedule and slides from the 4th bhyvecon (http://bhyvecon.org/) Michael Dexter's trip report on the iXsystems blog (https://www.ixsystems.com/blog/ixsystems-attends-asiabsdcon-2017) NetBSD AsiaBSDcon booth report (http://mail-index.netbsd.org/netbsd-advocacy/2017/03/13/msg000729.html) *** TrueOS Community Guidelines are here! (https://www.trueos.org/blog/trueos-community-guidelines/) TrueOS has published its new Community Guidelines The TrueOS Project has existed for over ten years. Until now, there was no formally defined process for interested individuals in the TrueOS community to earn contributor status as an active committer to this long-standing project. The current core TrueOS developers (Kris Moore, Ken Moore, and Joe Maloney) want to provide the community more opportunities to directly impact the TrueOS Project, and wish to formalize the process for interested people to gain full commit access to the TrueOS repositories. These describe what is expected of community members and committers They also describe the process of getting commit access to the TrueOS repo: Previously, Kris directly handed out commit bits. Now, the Core developers have provided a small list of requirements for gaining a TrueOS commit bit: Create five or more pull requests in a TrueOS Project repository within a single six month period. Stay active in the TrueOS community through at least one of the available community channels (Gitter, Discourse, IRC, etc.). Request commit access from the core developers via core@trueos.org OR Core developers contact you concerning commit access. Pull requests can be any contribution to the project, from minor documentation tweaks to creating full utilities. At the end of every month, the core developers review the commit logs, removing elements that break the Project or deviate too far from its intended purpose. Additionally, outstanding pull requests with no active dissension are immediately merged, if possible. For example, a user submits a pull request which adds a little-used OpenRC script. No one from the community comments on the request or otherwise argues against its inclusion, resulting in an automatic merge at the end of the month. In this manner, solid contributions are routinely added to the project and never left in a state of “limbo”. The page also describes the perks of being a TrueOS committer: Contributors to the TrueOS Project enjoy a number of benefits, including: A personal TrueOS email alias: @trueos.org Full access for managing TrueOS issues on GitHub. Regular meetings with the core developers and other contributors. Access to private chat channels with the core developers. Recognition as part of an online Who's Who of TrueOS developers. The eternal gratitude of the core developers of TrueOS. A warm, fuzzy feeling. 
Intel Donates 250.000 $ to the FreeBSD Foundation (https://www.freebsdfoundation.org/news-and-events/latest-news/new-uranium-level-donation-and-collaborative-partnership-with-intel/) More details about the deal: Systems Thinking: Intel and the FreeBSD Project (https://www.freebsdfoundation.org/blog/systems-thinking-intel-and-the-freebsd-project/) Intel will be more actively engaging with the FreeBSD Foundation and the FreeBSD Project to deliver more timely support for Intel products and technologies in FreeBSD. Intel has contributed code to FreeBSD for individual device drivers (i.e. NICs) in the past, but is now seeking a more holistic “systems thinking” approach. Intel Blog Post (https://01.org/blogs/imad/2017/intel-increases-support-freebsd-project) We will work closely with the FreeBSD Foundation to ensure the drivers, tools, and applications needed on Intel® SSD-based storage appliances are available to the community. This collaboration will also provide timely support for future Intel® 3D XPoint™ products. Thank you very much, Intel! *** Applied FreeBSD: Basic iSCSI (https://globalengineer.wordpress.com/2017/03/05/applied-freebsd-basic-iscsi/) iSCSI is often touted as a low-cost replacement for fibre-channel (FC) Storage Area Networks (SANs). Instead of having to set up a separate fibre-channel network for the SAN, or invest in the infrastructure to run Fibre-Channel over Ethernet (FCoE), iSCSI runs on top of standard TCP/IP. This means that the same network equipment used for routing user data on a network could be utilized for the storage as well. This article will cover a very basic setup where a FreeBSD server is configured as an iSCSI Target, and another FreeBSD server is configured as the iSCSI Initiator. The iSCSI Target will export a single disk drive, and the initiator will create a filesystem on this disk and mount it locally. Advanced topics, such as multipath, ZFS storage pools, failover controllers, etc. are not covered. The real magic is the /etc/ctl.conf file, which contains all of the information necessary for ctld to share disk drives on the network. Check out the man page for /etc/ctl.conf for more details; below is the configuration file that I created for this test setup. Note that on a system that has never had iSCSI configured, there will be no existing configuration file, so go ahead and create it. Then, enable ctld and start it: sysrc ctld_enable=”YES” service ctld start You can use the ctladm command to see what is going on: root@bsdtarget:/dev # ctladm lunlist (7:0:0/0): Fixed Direct Access SPC-4 SCSI device (7:0:1/1): Fixed Direct Access SPC-4 SCSI device root@bsdtarget:/dev # ctladm devlist LUN Backend Size (Blocks) BS Serial Number Device ID 0 block 10485760 512 MYSERIAL 0 MYDEVID 0 1 block 10485760 512 MYSERIAL 1 MYDEVID 1 Now, let's configure the client side: In order for a FreeBSD host to become an iSCSI Initiator, the iscsid daemon needs to be started. sysrc iscsid_enable=”YES” service iscsid start Next, the iSCSI Initiator can manually connect to the iSCSI target using the iscsictl tool. While setting up a new iSCSI session, this is probably the best option. Once you are sure the configuration is correct, add the configuration to the /etc/iscsi.conf file (see man page for this file).
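The article's actual /etc/ctl.conf is not reproduced in these show notes; the sketch below is a hypothetical example of the usual shape of such a file, with the target IQN taken from the walkthrough and made-up zvol backing paths (see ctl.conf(5) for the real reference):

# Write a hypothetical /etc/ctl.conf for a single target with two zvol-backed LUNs
cat > /etc/ctl.conf <<'EOF'
portal-group pg0 {
    discovery-auth-group no-authentication
    listen 0.0.0.0
}
target iqn.2017-02.lab.testing:basictarget {
    auth-group no-authentication
    portal-group pg0
    lun 0 {
        path /dev/zvol/tank/iscsi0
    }
    lun 1 {
        path /dev/zvol/tank/iscsi1
    }
}
EOF
sysrc ctld_enable="YES"
service ctld start
ctladm devlist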
For iscsictl, pass the IP address of the target as well as the iSCSI IQN for the session: + iscsictl -A -p 192.168.22.128 -t iqn.2017-02.lab.testing:basictarget You should now have a new device (check dmesg), in this case, da1 The guide then walks through partitioning the disk, laying down a UFS file system, and mounting it. It then walks through how to disconnect iSCSI, in case you don't want it anymore. This all looked nice and easy, and it works very well. Now let's see what happens when you try to mount the iSCSI from Windows. Ok, that wasn't so bad. Now, instead of sharing an entire spare disk on the host via iSCSI, share a zvol. Now your Windows machine can be backed by ZFS. All of your problems are solved. Interview - Philipp Buehler - pbuehler@sysfive.com (mailto:pbuehler@sysfive.com) Technical Lead at SysFive, and Former OpenBSD Committer News Roundup Half a dozen new features in mandoc -T html (http://undeadly.org/cgi?action=article&sid=20170316080827) mandoc (http://man.openbsd.org/mandoc.1)'s HTML output mode got some new features Even though mdoc(7) is a semantic markup language, traditionally none of the semantic annotations were communicated to the reader. [...] Now, at least in -T html output mode, you can see the semantic function of marked-up words by hovering your mouse over them. In terminal output modes, we have the ctags(1)-like internal search facility built around the less(1) tag jump (:t) feature for quite some time now. We now have a similar feature in -T html output mode. To jump to (almost) the same places in the text, go to the address bar of the browser, type a hash mark ('#') after the URI, then the name of the option, command, variable, error code etc. you want to jump to, and hit enter. Check out the full report by Ingo Schwarze (schwarze@) and try out these new features *** Optimizing IllumOS Kernel Crypto (http://zfs-create.blogspot.com/2014/05/optimizing-illumos-kernel-crypto.html) Sašo Kiselkov, of ZFS fame, looked into the performance of the OpenSolaris kernel crypto framework and found it lacking. The article also spends a few minutes on the different modes and how they work. Recently I've had some motivation to look into the KCF on Illumos and discovered that, unbeknownst to me, we already had an AES-NI implementation that was automatically enabled when running on Intel and AMD CPUs with AES-NI support. This work was done back in 2010 by Dan Anderson. This was great news, so I set out to test the performance in Illumos in a VM on my Mac with a Core i5 3210M (2.5GHz normal, 3.1GHz turbo). The initial tests of “what the hardware can do” were done in OpenSSL. So now comes the test for the KCF. I wrote a quick'n'dirty crypto test module that just performed a bunch of encryption operations and timed the results. KCF got around 100 MB/s for each algorithm, except half that for AES-GCM. OpenSSL had done over 3000 MB/s for CTR mode, 500 MB/s for CBC, and 1000 MB/s for GCM What the hell is that?! This is just plain unacceptable. Obviously we must have hit some nasty performance snag somewhere, because this is comical. And sure enough, we did. When looking around in the AES-NI implementation I came across this bit in aes_intel.s that performed the CLTS instruction. This is a problem: 3.1.2 Instructions That Cause VM Exits Conditionally: CLTS. The CLTS instruction causes a VM exit if the bits in position 3 (corresponding to CR0.TS) are set in both the CR0 guest/host mask and the CR0 read shadow.
The CLTS instruction signals to the CPU that we're about to use FPU registers (which is needed for AES-NI), which in VMware causes an exit into the hypervisor. And we've been doing it for every single AES block! Needless to say, performing the equivalent of a very expensive context switch every 16 bytes is going to hurt encryption performance a bit. The reason why the kernel is issuing CLTS is because for performance reasons, the kernel doesn't save and restore FPU register state on kernel thread context switches. So whenever we need to use FPU registers inside the kernel, we must disable kernel thread preemption via a call to kpreemptdisable() and kpreemptenable() and save and restore FPU register state manually. During this time, we cannot be descheduled (because if we were, some other thread might clobber our FPU registers), so if a thread does this for too long, it can lead to unexpected latency bubbles The solution was to restructure the AES and KCF block crypto implementations in such a way that we execute encryption in meaningfully small chunks. I opted for 32k bytes, for reasons which I'll explain below. Unfortunately, doing this restructuring work was a bit more complicated than one would imagine, since in the KCF the implementation of the AES encryption algorithm and the block cipher modes is separated into two separate modules that interact through an internal API, which wasn't really conducive to high performance (we'll get to that later). Anyway, having fixed the issue here and running the code at near native speed, this is what I get: AES-128/CTR: 439 MB/s AES-128/CBC: 483 MB/s AES-128/GCM: 252 MB/s Not disastrous anymore, but still, very, very bad. Of course, you've got keep in mind, the thing we're comparing it to, OpenSSL, is no slouch. It's got hand-written highly optimized inline assembly implementations of most of these encryption functions and their specific modes, for lots of platforms. That's a ton of code to maintain and optimize, but I'll be damned if I let this kind of performance gap persist. Fixing this, however, is not so trivial anymore. It pertains to how the KCF's block cipher mode API interacts with the cipher algorithms. It is beautifully designed and implemented in a fashion that creates minimum code duplication, but this also means that it's inherently inefficient. ECB, CBC and CTR gained the ability to pass an algorithm-specific "fastpath" implementation of the block cipher mode, because these functions benefit greatly from pipelining multiple cipher calls into a single place. ECB, CTR and CBC decryption benefit enormously from being able to exploit the wide XMM register file on Intel to perform encryption/decryption operations on 8 blocks at the same time in a non-interlocking manner. The performance gains here are on the order of 5-8x.CBC encryption benefits from not having to copy the previously encrypted ciphertext blocks into memory and back into registers to XOR them with the subsequent plaintext blocks, though here the gains are more modest, around 1.3-1.5x. After all of this work, this is how the results now look on Illumos, even inside of a VM: Algorithm/Mode 128k ops AES-128/CTR: 3121 MB/s AES-128/CBC: 691 MB/s AES-128/GCM: 1053 MB/s So the CTR and GCM speeds have actually caught up to OpenSSL, and CBC is actually faster than OpenSSL. On the decryption side of things, CBC decryption also jumped from 627 MB/s to 3011 MB/s. Seeing these performance numbers, you can see why I chose 32k for the operation size in between kernel preemption barriers. 
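For reference, the OpenSSL figures used for comparison throughout this piece come from OpenSSL's own benchmark; a quick sketch of how to reproduce that userland baseline on any machine (the numbers will naturally vary with the CPU):

# Userland AES throughput via the EVP interface, for comparison with the in-kernel results
openssl speed -evp aes-128-ctr
openssl speed -evp aes-128-cbc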
Even on the slowest hardware with AES-NI, we can expect at least 300-400 MB/s/core of throughput, so even in the worst case, we'll be hogging the CPU for at most ~0.1ms per run. Overall, we're even a little bit faster than OpenSSL in some tests, though that's probably down to us encrypting 128k blocks vs 8k in the "openssl speed" utility. Anyway, having fixed this monstrous atrocity of a performance bug, I can now finally get some sleep. To make these tests repeatable, and to ensure that the changes didn't break the crypto algorithms, Saso created a crypto_test kernel module. I have recently created a FreeBSD version of crypto_test.ko, for much the same purposes. Initial performance on FreeBSD is not as bad, if you have the aesni.ko module loaded, but it is not up to speed with OpenSSL. You cannot directly compare to the benchmarks Saso did, because the CPUs are vastly different. Performance results (https://wiki.freebsd.org/OpenCryptoPerformance) I hope to do some more tests on a range of different sized CPUs in order to determine how the algorithms scale across different clock speeds. I also want to look at, or get help and have someone else look at, implementing some of the same optimizations that Saso did. It currently seems like there isn't a way to perform additional crypto operations in the same session without regenerating the key table. Processing additional buffers in an existing session might offer a number of optimizations for bulk operations, although in many cases, each block is encrypted with a different key and/or IV, so it might not be very useful. *** Brendan Gregg's special freeware tools for sysadmins (http://www.brendangregg.com/specials.html) These tools need to be in every (not so) serious sysadmin's toolbox. Triple ROT13 encryption algorithm (beware: export restrictions may apply) /usr/bin/maybe, in case true and false don't provide too little choice... The bottom command lists all the processes using the least CPU cycles. Check out the rest of the tools. You wrote similar tools and want us to cover them in the show? Send us an email to feedback@bsdnow.tv (mailto:feedback@bsdnow.tv) *** A look at 2038 (http://www.lieberbiber.de/2017/03/14/a-look-at-the-year-20362038-problems-and-time-proofness-in-various-systems/) I remember the Y2K problem quite vividly. The world was going crazy for years, paying insane amounts of money to experts to fix critical legacy systems, and there was a neverending stream of predictions from the media on how it's all going to fail. Most didn't even understand what the problem was, and I remember one magazine writing something like the following: Most systems store the current year as a two-digit value to save space. When the value rolls over on New Year's Eve 1999, those two digits will be “00”, and “00” means “halt operation” in the machine language of many central processing units. If you're in an elevator at this time, it will stop working and you may fall to your death. I still don't know why they thought a computer would suddenly interpret data as code, but people believed them. We could see a nearby hydropower plant from my parents' house, and we expected it to go up in flames as soon as the clock passed midnight, while at least two airplanes crashed in our garden at the same time. Then nothing happened.
I think one of the most “severe” problems was the police not being able to open their car garages the next day because their RFID tokens had both a start and end date for validity, and the system clock had actually rolled over to 1900, so the tokens were “not yet valid”. That was 17 years ago. One of the reasons why Y2K wasn't as bad as it could have been is that many systems had never used the “two-digit-year” representation internally, but use some form of “timestamp” relative to a fixed date (the “epoch”). The actual problem with time and dates rolling over is that systems calculate timestamp differences all day. Since a timestamp derived from the system clock seemingly only increases with each query, it is very common to just calculate diff = now - before and never care about the fact that now could suddenly be lower than before because the system clock has rolled over. In this case diff is suddenly negative, and if other parts of the code make further use of the suddenly negative value, things can go horribly wrong. A good example was a bug in the generator control units (GCUs) aboard Boeing 787 “Dreamliner” aircraft, discovered in 2015. An internal timestamp counter would overflow roughly 248 days after the system had been powered on, triggering a shut down to “safe mode”. The aircraft has four generator units, but if all were powered up at the same time, they would all fail at the same time. This sounds like an overflow caused by a signed 32-bit counter counting the number of centiseconds since boot, overflowing after 248.55 days, and luckily no airline had been using their Boeing 787 models for such a long time between maintenance intervals. The “obvious” solution is to simply switch to 64-Bit values and call it a day, which would push overflow dates far into the future (as long as you don't do it like the IBM S/370 mentioned before). But as we've learned from the Y2K problem, you have to assume that computer systems, computer software and stored data (which often contains timestamps in some form) will stay with us for much longer than we might think. The years 2036 and 2038 might be far in the future, but we have to assume that many of the things we make and sell today are going to be used and supported for more than just 19 years. Also many systems have to store dates which are far in the future. A 30 year mortgage taken out in 2008 could have already triggered the bug, and for some banks it supposedly did. sys_gettimeofday() is one of the most used system calls on a generic Linux system and returns the current time in the form of a UNIX timestamp (time_t data type) plus fraction (suseconds_t data type). Many applications have to know the current time and date to do things, e.g. displaying it, using it in game timing loops, invalidating caches after their lifetime ends, perform an action after a specific moment has passed, etc. In a 32-Bit UNIX system, time_t is usually defined as a signed 32-Bit Integer. When kernel, libraries and applications are compiled, the compiler will turn this assumption into machine code and all components later have to match each other. So a 32-Bit Linux application or library still expects the kernel to return a 32-Bit value even if the kernel is running on a 64-Bit architecture and has 32-Bit compatibility. The same holds true for applications calling into libraries. This is a major problem, because there will be a lot of legacy software running in 2038.
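Both the 248-day figure and the classic 2038 boundary mentioned above can be checked with a couple of shell one-liners (BSD date(1) syntax shown):

# A signed 32-bit centisecond counter, as suspected in the 787 GCUs, wraps after about 248 days
echo $(( 2147483647 / (100 * 60 * 60 * 24) ))    # prints 248
# The last second representable in a signed 32-bit time_t
date -u -r 2147483647                            # Tue Jan 19 03:14:07 UTC 2038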
Systems which used an unsigned 32-Bit Integer for time_t push the problem back to 2106, but I don't know about many of those. The developers of the GNU C library (glibc), the default standard C library for many GNU/Linux systems, have come up with a design for year 2038 proofness for their library. Besides the time_t data type itself, a number of other data structures have fields based on time_t or the combined struct timespec and struct timeval types. Many methods beside those intended for setting and querying the current time use timestamps. 32-Bit Windows applications, or Windows applications defining _USE_32BIT_TIME_T, can be hit by the year 2038 problem too if they use the time_t data type. The __time64_t data type had been available since Visual C 7.1, but only Visual C 8 (default with Visual Studio 2015) expanded time_t to 64 bits by default. The change will only be effective after a recompilation; legacy applications will continue to be affected. If you live in a 64-Bit world and use a 64-Bit kernel with 64-Bit only applications, you might think you can just ignore the problem. In such a constellation all instances of the standard time_t data type for system calls, libraries and applications are signed 64-Bit Integers which will overflow in around 292 billion years. But many data formats, file systems and network protocols still specify 32-Bit time fields, and you might have to read/write this data or talk to legacy systems after 2038. So solving the problem on your side alone is not enough. Then the article goes on to describe how all of this will break your file systems. Not to mention your databases and other file formats. Also see Theo De Raadt's EuroBSDCon 2013 Presentation (https://www.openbsd.org/papers/eurobsdcon_2013_time_t/mgp00001.html) *** Beastie Bits Michael Lucas: Get your name in “Absolute FreeBSD 3rd Edition” (https://blather.michaelwlucas.com/archives/2895) ZFS compressed ARC stats to top (https://svnweb.freebsd.org/base?view=revision&revision=r315435) Matthew Dillon discovered HAMMER was repeating itself when writing to disk. Fixing that issue doubled write speeds (https://www.dragonflydigest.com/2017/03/14/19452.html) TedU on Meaningful Short Names (http://www.tedunangst.com/flak/post/shrt-nms-fr-clrty) vBSDcon and EuroBSDcon Call for Papers are open (https://www.freebsdfoundation.org/blog/submit-your-work-vbsdcon-and-eurobsdcon-cfps-now-open/) Feedback/Questions Craig asks about BSD server management (http://pastebin.com/NMshpZ7n) Michael asks about jails as a router between networks (http://pastebin.com/UqRwMcRk) Todd asks about connecting jails (http://pastebin.com/i1ZD6eXN) Dave writes in with an interesting link (http://pastebin.com/QzW5c9wV) > applications crash more often due to errors than corruptions. In the case of corruption, a few applications (e.g., Log-Cabin, ZooKeeper) can use checksums and redundancy to recover, leading to a correct behavior; however, when the corruption is transformed into an error, these applications crash, resulting in reduced availability. ***

The Hot Aisle
The Hot Aisle – Fibre Channel is Dead, Right? With Dr. J Metz – Episode 40

The Hot Aisle

Play Episode Listen Later Jun 14, 2016 64:21


Dr. J Metz (@drjmetz), R&D Engineer for the Office of the CTO at Cisco (@cisco), joins us this week on The Hot Aisle and expertly educates us on a number of storage (and not-so-storage) related topics including but not limited to: Fibre Channel, FCoE, Ethernet, iSCSI, PCIe, NVMe, NVMf, RDMA over Ethernet, and more! […]

RunAs Radio
Next Generation Storage with Stephen Foskett

RunAs Radio

Play Episode Listen Later Oct 2, 2013 40:57


Richard chats with Stephen Foskett about where storage is at these days. The conversation spans far and wide, talking about how Microsoft's latest products (like Exchange 2013) are rather SAN-hostile, why we're all happy to get away from FibreChannel, our indifference toward iSCSI and the impact of NFS and SMB3 on file systems. Stephen also talks about just how fast fast is these days - whether it's SSDs, PCI-E based storage or USB3 thumb drives! It's all about the iOPs! Make sure you check out Tech Field Day!

RunAs Radio
Stephen Foskett Talks Storage!

RunAs Radio

Play Episode Listen Later Jul 13, 2011 35:04


Richard and Greg talk to Stephen Foskett about storage technologies. Stephen gives a history of storage technologies from RAID arrays to SANs, including Microsoft's Storage Server. The conversation ranges over a huge number of storage technologies, including the new generation of low cost iSCSI targets, lamenting the death of Microsoft Home Server and even diving into obscure concepts like (get this) Fibre Channel over Token Ring! The show ends exploring the prospects of storage in the cloud and the possibilities going forward for data storage.

NetApp TV Studios
NetApp and Brocade Executives Discuss Fibre Channel over Ethernet

NetApp TV Studios

Play Episode Listen Later Aug 18, 2009 5:22


NetApp's Chief Marketing Officer, Jay Kidd, and Brocade's Senior Vice President of Worldwide Sales, Ian Whiting, discuss NetApp's introduction of Brocade's FCoE product portfolio, which includes a top-of-rack switch, converged network adapters, and an FCoE blade for the DCX backbone.

RunAs Radio
Alan Sugano Digs Into Storage Technologies!

RunAs Radio

Play Episode Listen Later Mar 18, 2009 44:38


Richard and Greg talk to Alan Sugano of ADS Consulting about everything storage. The conversation ranges over the different types of RAID, direct attached storage, iSCSI, Fibre Channel, Fibre Channel Over Ethernet, SANs... you name it!

Technical Council Podcast
Technical Council Podcast FAST 09 Paper - CA-NFS

Technical Council Podcast

Play Episode Listen Later Mar 12, 2009 14:05


From the FAST 09 best paper authors of "A Congestion-Aware Network File System" (CA-NFS), an interview with Alexandros Batsakis, Randal Burns, Arkady Kanevsky, James Lentini, and Thomas Talpey.

Technical Council Podcast
Bob Snively on FCoE

Technical Council Podcast

Play Episode Listen Later Mar 11, 2009 20:19


Bob Snively is interviewed by TC member Steve Wilson on Fibre Channel over Ethernet Technology and Standards

NetApp TV Studios
Brocade: Fibre Channel Data Encryption

NetApp TV Studios

Play Episode Listen Later Dec 18, 2008 6:32


NetApp and Brocade experts discuss ways to strengthen data centers with storage security solutions that protect critical data.

Technical Council Podcast
Interview with Rich Ramos on Object-based Storage Devices

Technical Council Podcast

Play Episode Listen Later Aug 11, 2008 11:52


Rich Ramos, an active member of the SNIA OSD Technical Work Group talks about the OSD standards and latest developments.

Technical Council Podcast
Interview with Jim Williams on Data Integrity

Technical Council Podcast

Play Episode Listen Later Jun 9, 2008 14:33


Jim Williams, SNIA TC member from Oracle, talks about Data Integrity and the work being done by the new Data Integrity TWG.

NetApp TV Studios
FCOE: A New Standard Emerging for Network Convergence

NetApp TV Studios

Play Episode Listen Later Apr 8, 2008 8:52


Joel Reich discusses the technology, anticipated costs, and benefits of Fibre Channel over Ethernet (FCoE).

Technical Council Podcast
Interview with Don Deel on Management Frameworks

Technical Council Podcast

Play Episode Listen Later Apr 2, 2008 18:04


Don Deel, one of the original contributors to the SMI-S standard, talks about the new work going on in standardizing Management Frameworks.

RunAs Radio
Tom Clark Connects Us With iSCSI!

RunAs Radio

Play Episode Listen Later Sep 12, 2007 34:22


Tom Clark talks to Richard and Greg about iSCSI and how it's bringing SAN solutions to the medium-scale enterprise. iSCSI uses TCP and Ethernet to provide SAN features and connectivity at a substantially lower cost than Fibre Channel.

Technical Council Podcast
Interview with Mike Walker on SMI

Technical Council Podcast

Play Episode Listen Later Sep 6, 2007 17:23


Mike Walker, Chair of the SNIA Storage Management Initiative Technical Steering Group, talks about new developments and future work of the Storage Management Initiative and SMI-S.

Technical Council Podcast
Interview with Paul Strong of EBay

Technical Council Podcast

Play Episode Listen Later Sep 6, 2007 11:22


Paul Strong, from eBay, talks about his company's use of storage and their own storage management development.

Technical Council Podcast
Interview with Tom Clark on Storage

Technical Council Podcast

Play Episode Listen Later Sep 6, 2007 22:38


Tom Clark, SNIA's newest Board of Directors member, talks about new developments in the storage industry, including FCoE.

Technical Council Podcast
Interview with Dave Thiel on SNIA Software

Technical Council Podcast

Play Episode Listen Later Sep 6, 2007 13:58


Dave Thiel, Chair of the SNIA Technical Council, talks about new developments in SNIA Technical Work Groups and SNIA's ability to do software.

Black Hat Briefings, Las Vegas 2006 [Video] Presentations from the security conference

A fundamental of many SAN solutions is to use metadata to provide shared access to a SAN. This is true in iSCSI or FibreChannel and across a wide variety of products. Metadata can offer a way around the built-in security features provided that attackers have FibreChannel connectivity. SAN architecture represents a symbol of choosing speed over security. Metadata, the vehicle that provides speed, is a backdoor into the system built around it. In this session we will cover using Metadata to DoS or gain unauthorized access to an Xsan over the FibreChannel network."

Black Hat Briefings, Las Vegas 2006 [Audio] Presentations from the security conference

"A fundamental of many SAN solutions is to use metadata to provide shared access to a SAN. This is true in iSCSI or FibreChannel and across a wide variety of products. Metadata can offer a way around the built-in security features provided that attackers have FibreChannel connectivity. SAN architecture represents a symbol of choosing speed over security. Metadata, the vehicle that provides speed, is a backdoor into the system built around it. In this session we will cover using Metadata to DoS or gain unauthorized access to an Xsan over the FibreChannel network."

Black Hat Briefings, Las Vegas 2005 [Video] Presentations from the security conference

Himanshu Dwivedi's presentation will discuss the severe security issues that exist in the default implementations of iSCSI storage networks/products. The presentation will cover iSCSI storage as it pertains to the basic principals of security, including enumeration, authentication, authorization, and availability. The presentation will contain a short overview of iSCSI for security architects and basic security principals for storage administrators. The presentation will continue into a deep discussion of iSCSI attacks that are capable of compromising large volumes of data from iSCSI storage products/networks. The iSCSI attacks section will also show how simple attacks can make the storage network unavailable, creating a devastating problem for networks, servers, and applications. The presenter will also follow-up each discussion of iSCSI attacks with a demonstration of large data compromise. iSCSI attacks will show how a large volume of data can be compromised or simply made unavailable for long periods of time without a single root or administrator password. The presentation will concluded with existing solutions from responsible vendors that can protect iSCSI storage networks/products. Each iSCSI attack/defense described by the presenter will contain deep discussions and visual demonstrations, which will allow the audience to fully understand the security issues with iSCSI as well as the standard defenses. Himanshu Dwivedi is a founding partner of iSEC Partners, LLC. a strategic security organization. Himanshu has 11 years experience in security and information technology. Before forming iSEC, Himanshu was the Technical Director for @stake's bay area practice, the leading provider for digital security services. His professional experiences includes application programming, infrastructure security, secure product design, and is highlighted with deep research and testing on storage security for the past 5 years. Himanshu has focused his security experience towards storage security, specializing in SAN and NAS security. His research includes iSCSI and Fibre Channel (FC) Storage Area Networks as well as IP Network Attached Storage. Himanshu has given numerous presentations and workshops regarding the security in SAN and NAS networks, including conferences such as BlackHat 2004, BlackHat 2003, Storage Networking World, Storage World Conference, TechTarget, the Fibre Channel Conference, SAN-West, SAN-East, SNIA Security Summit, Syscan 2004, and Bellua 2005. Himanshu currently has a patent pending on a storage design architecture that he co-developed with other @stake professionals. The patent is for a storage security design that can be implemented on enterprise storage products deployed in Fibre Channel storage networks. Additionally, Himanshu has published three books, including "The Complete Storage Reference" - Chapter 25 Security Considerations (McGraw-Hill/Osborne), "Implementing SSH" (Wiley Publishing), and "Securing Storage" (Addison Wesley Publishing), which is due out in the fall of 2005. Furthermore, Himanshu has also published two white papers. The first white paper Himanshu wrote is titled "Securing Intellectual Property", which provides insight and recommendations on how to protect an organization's network from the inside out. Additionally, Himanshu has written a second white paper titled Storage Security, which provides the basic best practices and recommendations in order to secure a SAN or a NAS storage network.

Black Hat Briefings, Las Vegas 2005 [Audio] Presentations from the security conference

Himanshu Dwivedi's presentation will discuss the severe security issues that exist in the default implementations of iSCSI storage networks/products. The presentation will cover iSCSI storage as it pertains to the basic principals of security, including enumeration, authentication, authorization, and availability. The presentation will contain a short overview of iSCSI for security architects and basic security principals for storage administrators. The presentation will continue into a deep discussion of iSCSI attacks that are capable of compromising large volumes of data from iSCSI storage products/networks. The iSCSI attacks section will also show how simple attacks can make the storage network unavailable, creating a devastating problem for networks, servers, and applications. The presenter will also follow-up each discussion of iSCSI attacks with a demonstration of large data compromise. iSCSI attacks will show how a large volume of data can be compromised or simply made unavailable for long periods of time without a single root or administrator password. The presentation will concluded with existing solutions from responsible vendors that can protect iSCSI storage networks/products. Each iSCSI attack/defense described by the presenter will contain deep discussions and visual demonstrations, which will allow the audience to fully understand the security issues with iSCSI as well as the standard defenses. Himanshu Dwivedi is a founding partner of iSEC Partners, LLC. a strategic security organization. Himanshu has 11 years experience in security and information technology. Before forming iSEC, Himanshu was the Technical Director for @stake's bay area practice, the leading provider for digital security services. His professional experiences includes application programming, infrastructure security, secure product design, and is highlighted with deep research and testing on storage security for the past 5 years. Himanshu has focused his security experience towards storage security, specializing in SAN and NAS security. His research includes iSCSI and Fibre Channel (FC) Storage Area Networks as well as IP Network Attached Storage. Himanshu has given numerous presentations and workshops regarding the security in SAN and NAS networks, including conferences such as BlackHat 2004, BlackHat 2003, Storage Networking World, Storage World Conference, TechTarget, the Fibre Channel Conference, SAN-West, SAN-East, SNIA Security Summit, Syscan 2004, and Bellua 2005. Himanshu currently has a patent pending on a storage design architecture that he co-developed with other @stake professionals. The patent is for a storage security design that can be implemented on enterprise storage products deployed in Fibre Channel storage networks. Additionally, Himanshu has published three books, including "The Complete Storage Reference" - Chapter 25 Security Considerations (McGraw-Hill/Osborne), "Implementing SSH" (Wiley Publishing), and "Securing Storage" (Addison Wesley Publishing), which is due out in the fall of 2005. Furthermore, Himanshu has also published two white papers. The first white paper Himanshu wrote is titled "Securing Intellectual Property", which provides insight and recommendations on how to protect an organization's network from the inside out. Additionally, Himanshu has written a second white paper titled Storage Security, which provides the basic best practices and recommendations in order to secure a SAN or a NAS storage network.

NetApp TV Studios
Virtualization for Enterprise Environments with Jeff Hornung

NetApp TV Studios

Play Episode Listen Later Jan 30, 2006 3:50


Virtualization is a concept that's been in and out of fashion. However, as more and more companies can attest, there are compelling reasons to include it in today's enterprise environments. Here to talk with Elisa Steele about some of these reasons is Jeff Hornung, VP and GM of the Virtualization Network at NetApp.

IT-Pod
Der Cisco-Podcast Nr. 10 - Unterwegs im Technik-Zelt der Cisco-Expo

IT-Pod

Play Episode Listen Later Dec 31, 1969 9:34


In the technology tent, the networking specialist presents brand-new technologies that are either not yet on the market at all or have only recently arrived there. Among them are the latest switches from the Nexus family for the next-generation data center. They bring the Ethernet and Fibre Channel networking technologies together, which saves cabling. Other topics in the technology tent are mobility and WLAN solutions. Silke Thole had everything explained to her in detail.