Podcasts about iSCSI

Internet Protocol-based storage networking standard for linking data storage facilities

  • 44 podcasts
  • 83 episodes
  • 39m average duration
  • Infrequent episodes
  • Latest episode: Sep 20, 2023



Best podcasts about iSCSI

Latest podcast episodes about iSCSI

Matt Brown Show
MBS734- Innovating Data Centers: The StoneFly Story Since 1996 (Secrets of #Fail 117)


Sep 20, 2023 · 11:15


Welcome to "Secrets of #Fail," a pod storm series hosted by Matt Brown. In this 2023 series, Matt dives deep into the world of failures, and the lessons learned along the way, with high-net-worth individuals.

Series: Secrets of #Fail

Founded in 1996 and headquartered in Castro Valley, StoneFly, Inc. was established with the vision to simplify, optimize, and deliver high-performance, budget-friendly data center solutions for SMBs, SMEs, and large enterprises. Beginning with its registration of the iSCSI.com Internet domain name in March 1996, StoneFly helped make iSCSI a standard that is now used by IT professionals around the world.

Get an interview on the Matt Brown Show: www.mattbrownshow.com

Support the show

CISSP Cyber Training Podcast - CISSP Training Program
CCT 057: CISSP Exam Questions (Domain 4)


Jul 27, 2023 · 11:49 · Transcription available


Ever wondered how to ace domain four of the CISSP exam? Or perhaps you're merely intrigued by the intricate world of Voice over IP (VoIP)? Either way, this episode is packed with the insights you've been seeking! Join me, Sean Gerber, as we dissect the key protocols that VoIP uses for multimedia transmissions. Together, we'll unravel the intricacies of Session Initiation Protocol (SIP) messages and how sessions kick off in a VoIP implementation. You'll also gain an understanding of the differences between Real-Time Transport Protocol (RTP) and Real-Time Transport Control Protocol (RTCP) and how they're applied.

As we journey deeper into this episode, we'll explore the world of the Internet Small Computer Systems Interface (iSCSI), focusing on its functions and default ports. Fear not, SCSI command encapsulation will no longer be a mystery to you! We'll then shift our attention to the security aspects of SIP-based VoIP traffic, scrutinizing SIP-aware firewalls and the implementation of Transport Layer Security (TLS). Finally, we'll round off our discussion by examining RTCP's role in providing quality-of-service feedback in a VoIP implementation, and wrap up with an understanding of block-level transport in iSCSI. Prepare to expand your cybersecurity knowledge in a way you never thought possible!

Gain access to 30 free CISSP exam questions each and every month by going to FreeCISSPQuestions.com and signing up to join the team for free.
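One concrete detail from the episode: iSCSI targets listen on TCP port 3260 by default (the IANA-registered port). A minimal Python sketch of a reachability check against a target is below; note it only tests whether something answers on the TCP port, it does not speak the iSCSI login protocol itself, and any hostname you pass is your own.

```python
import socket

ISCSI_DEFAULT_PORT = 3260  # IANA-registered TCP port for iSCSI targets


def iscsi_port_open(host: str, port: int = ISCSI_DEFAULT_PORT,
                    timeout: float = 2.0) -> bool:
    """Return True if a TCP listener answers on the given iSCSI port.

    This is a plain TCP connect test, not an iSCSI login; a True result
    only means *something* is listening there.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Usage would look like `iscsi_port_open("my-san.example.internal")`, where the hostname is of course a placeholder for your own storage target.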

CISSP Cyber Training Podcast - CISSP Training Program
CCT 056: Unraveling the Intricacies of VOIP and iSCSI in Cybersecurity - CISSP Domain 4


Jul 24, 2023 · 39:51 · Transcription available


Ever wish you could decrypt the mysteries of cybersecurity and ace your CISSP exam? This episode is your treasure map to success, guiding you through the labyrinthine layers of the OSI model, starting with the physical transmission of data and the crucial role of physical access controls. We also enlighten you about MAC address filtering and how it fortifies network security. As we move deeper, we unlock the secrets of encryption, digital signatures, and secure coding practices.

We delve into the heart of the session and presentation layers, spotlighting the importance of input validation and secure API design. Get to appreciate the role of protocols like Session Initiation Protocol and Real-Time Transport Protocol in VoIP. We also bring to light the security risks associated with VoIP and iSCSI, introducing you to the sinister world of call hijacking, eavesdropping, and toll fraud.

Finally, we don our armor and arm you with the best security controls for VoIP, such as encryption, authentication, and access control. And just when you thought it couldn't get better, we guide you on how to hit the bullseye in your CISSP exam, exploring the benefits of a CISSP Cyber Training membership and how it sets you up for a triumphant win. So, gear up for a thrilling voyage into the captivating realm of cybersecurity.

Gain access to 30 free CISSP exam questions each and every month by going to FreeCISSPQuestions.com and signing up to join the team for free.

BSD Now
504: Release the BSD


Apr 27, 2023 · 36:06


FreeBSD 13.2 release, using DTrace to find block sizes of ZFS, NFS, and iSCSI, MidnightBSD 3.0.1, closing a stale SSH connection, how to automatically add an identity to the SSH authentication agent, pros and cons of FreeBSD for virtual servers, and more.

NOTES

This episode of BSDNow is brought to you by Tarsnap (https://www.tarsnap.com/bsdnow) and the BSDNow Patreon (https://www.patreon.com/bsdnow)

Headlines

  • FreeBSD 13.2 Release Announcement (https://www.freebsd.org/releases/13.2R/announce/)
  • Using DTrace to find block sizes of ZFS, NFS, and iSCSI (https://axcient.com/blog/using-dtrace-to-find-block-sizes-of-zfs-nfs-and-iscsi/)

News Roundup

  • MidnightBSD 3.0.1 (https://www.phoronix.com/news/MidnightBSD-3.0.1)
  • Closing a stale SSH connection (https://davidisaksson.dev/posts/closing-stale-ssh-connections/)
  • How to automatically add identity to the SSH authentication agent (https://sleeplessbeastie.eu/2023/04/10/how-to-automatically-add-identity-to-the-ssh-authentication-agent/)

Tarsnap

This week's episode of BSDNow was sponsored by our friends at Tarsnap, the only secure online backup you can trust your data to. Even paranoids need backups.

Feedback/Questions

  • Dan - ZFS question (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/504/feedback/Dan%20-%20ZFS%20question.md)
  • Matt - Thanks (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/504/feedback/Matt%20-%20Thanks.md)

Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv

ScanNetSecurity 最新セキュリティ情報
A technique enabling UAC bypass via DLL hijacking of the WOW64 Microsoft iSCSI service binary on Microsoft Windows (Scan Tech Report)


Aug 8, 2022 · 0:10


In July 2022, a technique that can bypass UAC privilege controls on the Microsoft Windows OS was published.

Screaming in the Cloud
Developing Storage Solutions Before the Rest with AB Periasamy


Feb 2, 2022 · 38:54


About AB

AB Periasamy is the co-founder and CEO of MinIO, an open-source provider of high-performance object storage software. In addition to this role, AB is an active investor in and advisor to a wide range of technology companies, from H2O.ai and Manetu, where he serves on the board, to advisor or investor roles with Humio, Isovalent, Starburst, Yugabyte, Tetrate, Postman, Storj, Procurify, and Helpshift. Successful exits include Gitter.im (GitLab), Treasure Data (Arm), and Fastor (SMART).

AB co-founded Gluster in 2005 to commoditize scalable storage systems. As CTO, he was the primary architect and strategist for the development of the Gluster file system, a pioneer in software-defined storage. After the company was acquired by Red Hat in 2011, AB joined Red Hat's Office of the CTO. Prior to Gluster, AB was CTO of California Digital Corporation, where his work led to the scaling of commodity cluster computing to supercomputing-class performance. His work there resulted in the development of Lawrence Livermore Laboratory's "Thunder" supercomputer, which at the time was the second fastest in the world. AB holds a Computer Science Engineering degree from Annamalai University, Tamil Nadu, India.

AB is one of the leading proponents and thinkers on the subject of open-source software, articulating the difference between the philosophy and the business model. An active contributor to a number of open-source projects, he is a board member of India's Free Software Foundation.

Links:

  • MinIO: https://min.io/
  • Twitter: https://twitter.com/abperiasamy
  • MinIO Slack channel: https://minio.slack.com/join/shared_invite/zt-11qsphhj7-HpmNOaIh14LHGrmndrhocA
  • LinkedIn: https://www.linkedin.com/in/abperiasamy/

Transcript

Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn.
This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey: This episode is sponsored in part by our friends at Sysdig. Sysdig is the solution for securing DevOps. They have a blog post that went up recently about how an insecure AWS Lambda function could be used as a pivot point to get access into your environment. They've also gone in-depth with a bunch of other approaches to how DevOps and security are inextricably linked. To learn more, visit sysdig.com and tell them I sent you. That's S-Y-S-D-I-G dot com. My thanks to them for their continued support of this ridiculous nonsense.

Corey: This episode is sponsored in part by our friends at Rising Cloud, which I hadn't heard of before, but they're doing something vaguely interesting here. They are using AI, which is usually where my eyes glaze over and I lose attention, but they're using it to help developers be more efficient by reducing repetitive tasks. So, the idea being that you can run stateless things without having to worry about scaling, placement, et cetera, and the rest. They claim significant cost savings, and they're able to wind up taking what you're running as it is, in AWS, with no changes, and run it inside of their data centers that span multiple regions. I'm somewhat skeptical, but their customers seem to really like them, so that's one of those areas where I really have a hard time being too snarky about it because when you solve a customer's problem, and they get out there in public and say, "We're solving a problem," it's very hard to snark about that. Multus Medical, Construx.ai, and Stax have seen significant results by using them, and it's worth exploring. So, if you're looking for a smarter, faster, cheaper alternative to EC2, Lambda, or Batch, consider checking them out.
Visit risingcloud.com/benefits. That's risingcloud.com/benefits, and be sure to tell them that I sent you, because watching people wince when you mention my name is one of the guilty pleasures of listening to this podcast.

Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. I'm joined this week by someone who's doing something a bit off the beaten path when we talk about cloud. I've often said that S3 is sort of a modern wonder of the world. It was the first AWS service brought into general availability. Today's promoted guest is the co-founder and CEO of MinIO, Anand Babu Periasamy, or AB as he often goes, depending upon who's talking to him. Thank you so much for taking the time to speak with me today.

AB: It's wonderful to be here, Corey. Thank you for having me.

Corey: So, I want to start with the obvious thing, where you take a look at what is the cloud and you can talk about AWS's ridiculous high-level managed services, like Amazon Chime. Great, we all see how that plays out. And those are the higher-level offerings, ideally aimed at problems customers have, but then they also have the baseline building block services, and it's hard to think of a more baseline building block than an object store. That's something every cloud provider has, regardless of how many scare quotes there are around the word cloud; everyone offers the object store. And your solution is to look at this and say, "Ah, that's a market ripe for disruption. We're going to build, through an open-source community, software that emulates an object store." I would be sitting here, more or less poking fun at the idea, except for the fact that you're a billion-dollar company now.

AB: Yeah.

Corey: How did you get here?

AB: So, when we started, right, we did not actually think about cloud that way, right? "Cloud, it's a hot trend, and let's go disrupt is like that.
It will lead to a lot of opportunity." Certainly, it's true, it led to the M&A, right, but that's not how we looked at it, right? It's a bad idea to build startups for M&A.

When we looked at the problem, when we got back into this—my previous background, some may not know, is actually a distributed file system background in the open-source space.

Corey: Yeah, you were one of the co-founders of Gluster—

AB: Yeah.

Corey: —which I have only begrudgingly forgiven you for. But please continue.

AB: [laugh]. And back then we got the idea right, but the timing was wrong. And—while the data was beginning to grow at a crazy rate—at the end of the day, GlusterFS still has to look like an FS, it has to look like a file system like NetApp or EMC, and it was hugely limiting what we could do with it. The biggest problem for me was legacy systems. If I have to build a modern system that is compatible with a legacy architecture, I cannot innovate.

And that is where, when Amazon introduced S3—back then, when S3 came, cloud was not big at all, right? When I look at it, the most important message of the cloud was that Amazon basically threw away everything that is legacy. It's not [iSCSI 00:03:21] as a Service; it's not even FTP as a Service, right? They came up with a simple, RESTful API to store your blobs, whether it's a JavaScript, Android, iOS, or [AAML 00:03:30] application, or even a Snowflake-type application.

Corey: Oh, we spent ten years rewriting our apps to speak object store, and then they released EFS, which is NFS in the cloud. It's—

AB: Yeah.

Corey: —I didn't realize I could have just been stubborn and waited, and the whole problem would solve itself. But here we are. You're quite right.

AB: Yeah. And even EFS and EBS are more so legacy stock can come in and buy some time, but that's not how you should stay on AWS, right? When Amazon did that, for me, that was the opportunity.
I saw that… while the world is going to continue to produce lots and lots of data, if I built a brand around that, I'm not going to go wrong.

The problem is data at scale. And what do I do there? The opportunity I saw was, Amazon solved one of the largest problems for a long time. All the legacy systems, legacy protocols—they convinced the industry to throw them away and then start all over from scratch with the new API. While it's not compatible, it's not standard, it is ridiculously simple compared to anything else.

No fstabs, no [unintelligible 00:04:27], no [root 00:04:28], nothing, right? That you could access it from any application anywhere was a big deal. When I saw that, I was like, "Thank you, Amazon." And I also knew Amazon would convince the industry that rewriting their applications was going to be better and faster and cheaper than retrofitting legacy applications.

Corey: I wonder how much that's retconned, because talking to some of the people involved in the early days, they were not at all convinced they [laugh] would be able to convince the industry to do this.

AB: Actually, if you talk to the analysts, reporters, the IDCs, Gartners of the world, to the enterprise IT, the VMware community, they would say, "Hell no." But if you talk to the actual application developers, data infrastructure, data architects, the actual consumers of data, for them, it was so obvious. They actually did not know how to write an fstab. iSCSI and NFS you can't even access across the internet, and the modern applications, they ran across the globe, in JavaScript, and all kinds of apps on the device. From [Snap 00:05:21] to Snowflake, today is built on object store. It was more natural for the application teams, but not for the infrastructure team. So, who you asked mattered.

But nevertheless, Amazon convinced the rest of the world, and our bet was that if this is going to be the future, then this is also our opportunity. S3 is going to be limited because it only runs inside AWS.
The bulk of the world's data is produced everywhere, and only a tiny fraction will go to AWS. And where will the rest of the data go? Not SAN, NAS, HDFS, or other blob stores, Azure Blob or GCS; it's not going to be fragmented. And if we built a better object store—lightweight, faster, simpler, but fully compatible with the S3 API—we could sweep and consolidate the market. And that's what happened.

Corey: And there is a lot of validity to that. We take a look across the industry, when we look at various standards—I mean, one of the big problems with multi-cloud in many respects is the APIs are not quite similar enough. And worse, the failure patterns are very different: I don't just need to know how the load balancer works, I need to know how it breaks so I can detect and plan for that. And then you've got the whole identity problem as well, where you're trying to manage across different frames of reference as you go between providers, and it leads to a bit of a mess. What is it that makes MinIO something that has not just endured since it was created, but clearly been thriving?

AB: The real reason, actually, is not the multi-cloud compatibility and all that, right? Like, while today it is a big deal for the users—because the deployments have grown into 10-plus petabytes, and now the infrastructure team is taking it over and consolidating across the enterprise—so now they are talking about which key management server for storing the encrypted keys, which key management server should I talk to? Look at AWS, Google, or Azure: everyone has their own proprietary API. Outside, they have [YAML2 00:07:18], HashiCorp Vault, and, like, there is no standard here. It is supposed to be a [KMIP 00:07:23] standard, but in reality, it is not. Even between different versions of Vault, there are incompatibilities for us.

That is where—like, from key management server, identity management server, right, like, everything that you speak to around—how do you talk to the different ecosystems?
That, actually, MinIO provides connectors for; having the large ecosystem support and a large community, we are able to address all that. Once you bring MinIO into your application stack, like you would bring Elasticsearch or MongoDB or anything else as a container, your application stack is just a Kubernetes YAML file, and you roll it out on any cloud. It becomes easier for them; they're able to go to any cloud they want. But the real reason why it succeeded was not that. They actually wrote their applications as containers on Minikube, then they would push it to a CI/CD environment.

They never wrote code on EC2 or ECS writing objects on S3, and they don't like the idea of [past 00:08:15], where someone is telling you just—like you saw, Google App Engine never took off, right? They liked the idea of: here are my building blocks, and I stitch them together and build my application. We were part of their application development since the early days, and when the application matured, it was hard to remove. It is very much like Microsoft Windows: when it grew, even though the desktop was Microsoft Windows, the server was NetWare, and NetWare lost the game, right?

We got the ecosystem, and it was actually developer productivity and convenience that really helped. The simplicity of MinIO—today, they are arguing that deploying MinIO inside AWS is easier through their YAML and containers than going to the AWS Console and figuring out how to do it.

Corey: As you take a look at how customers are adopting this, it's clear that there is some shift in this, because I could see the story for something like MinIO making an awful lot of sense in a data center environment, because otherwise it's, "Great. I need to make this app work with my SAN as well as an object store." And that's sort of a non-starter for obvious reasons.
But now you're available through cloud marketplaces directly.

AB: Yeah.

Corey: How are you seeing adoption patterns and interactions from customers changing as the industry continues to evolve?

AB: Yeah, actually, that is how my thinking was when I started. If you are inside AWS, I would myself tell them, why don't you use AWS S3? And it made a lot of sense if it's in a colo or on your own infrastructure—then there is an object store. It even made a lot of sense if you were deploying on Google Cloud, Azure, Alibaba Cloud, or Oracle Cloud, because you wanted an S3-compatible object store. Inside AWS, why would you do it, if there is AWS S3?

Nowadays, I hear funny arguments, too. They're like, "Oh, I didn't know that I could use S3. Is S3 MinIO-compatible?" Because it came along with GitLab or GitHub Enterprise as part of the application stack. They didn't even know that they could actually switch it over.

And otherwise, most of the time, they developed it on MinIO, and now they are too lazy to switch over. That also happens. But the real reason it became serious for me—I ignored the public cloud commercialization; I encouraged the community adoption. And it grew to more than a million instances across the cloud, small and large, but when they started talking about paying us serious dollars, then I took it seriously. And then, when I started asking them why they would do it, I got to know the real reason: they want to be detached from the cloud infrastructure provider.

They want to look at cloud as CPU, network, and drive as a service. And running their own enterprise IT was more expensive than adopting public cloud; it was productivity for them; reducing the infrastructure and people cost was a lot. It made economic sense.

Corey: Oh, people always cost more than the infrastructure itself does.

AB: Exactly right. 70, 80%, like, goes into people, right? And enterprise IT is too slow.
They cannot innovate fast, and all of those problems. But what I found was, for us—while we actually built the community and customers—if you're on AWS, if you're running MinIO on EBS, EBS is three times more expensive than S3.

Corey: For a single copy of it, too; if you're trying to go multi-AZ and you have the replication traffic, not to mention you have to over-provision it, it's a bit of a different story as well. So, it winds up being something on the order of 30 times more expensive, in many cases, to do it right. So, I'm looking at this going, the economics of running this purely by itself in AWS don't make sense to me—but long experience teaches me the next question is, "What am I missing?" Not, "That's ridiculous and you're doing it wrong." There's clearly something I'm not getting. What am I missing?

AB: I was telling them that until we made some changes, right—because we saw a couple of things happen. I was initially like, [unintelligible 00:12:00] does not make 30 copies. It makes, like, 1.4x, 1.6x.

But still, the underlying block storage is not only three times more expensive than S3, it's also slow. It's network storage. Trying to put an object store on top of it—another, like, software-defined SAN, like EBS—made no sense to me. For smaller deployments it's okay, but you should never scale that on EBS. So, it did not make economic sense. I would never take it seriously because it would never help them grow to scale.

But what changed in recent times? Amazon saw that this was not only a problem for MinIO-type players. Every database out there today—every modern database, even message queues like Kafka—they have all gone scale-out. And they all depend on local block storage, and putting a scale-out distributed database or data processing engine on top of EBS would not scale. And Amazon introduced storage-optimized instances.
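The 1.4x–1.6x figure AB cites is the usual raw-to-usable ratio of erasure coding, versus 3x for triple replication. A small sketch of that arithmetic follows; the specific shard counts are illustrative examples, not MinIO's exact defaults.

```python
def storage_overhead(data_shards: int, parity_shards: int) -> float:
    """Raw-to-usable storage ratio for Reed-Solomon-style erasure coding.

    Each object is split into `data_shards` data pieces plus
    `parity_shards` redundant pieces, so the raw footprint is
    (data + parity) / data times the logical object size, and the
    layout survives the loss of up to `parity_shards` drives.
    """
    return (data_shards + parity_shards) / data_shards


# Illustrative layouts (assumed for the example):
# 10 data + 4 parity -> 1.4x raw storage, tolerates 4 failed drives
# 10 data + 6 parity -> 1.6x raw storage, tolerates 6 failed drives
# Contrast with 3-way replication, which is a 3.0x footprint.
```

This is why erasure-coded object storage on local drives can undercut replicated block storage: similar durability for roughly half the raw capacity.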
Essentially, that removed the need for the data infrastructure guy, data engineer, or application developer to ask IT, "I want a SuperMicro or Dell server, or even virtual machines." That's too slow, too inefficient.

They can provision these storage machines on demand, and then I can do it through Kubernetes. These two changes—all the public cloud players have now adopted Kubernetes as the standard, and they have to stick to the Kubernetes API standard. If they are incompatible, they won't get adopted. And storage-optimized, that is, local drives—these are machines, like [I3 EN 00:13:23], like, 24 drives, they have SSDs and fast networks—like, 25-gigabit, 200-gigabit-type networks. The availability of these machines, which would typically run any database, HDFS cluster, or MinIO—those machines are now available just like any other EC2 instance.

They are efficient. You can actually put MinIO side by side with S3 and still be price competitive. And Amazon wants to—just like their retail marketplace, they want to compete and be open. They have enabled it. In that sense, Amazon is actually helping us. And it turned out that now I can help customers build multi-petabyte infrastructure on Amazon and still stay efficient, still stay price competitive.

Corey: I would have said for a long time that if you were to ask me to build out the lingua franca of all the different cloud providers into a common API, the S3 API would be one of them. Now, you are building this out multi-cloud, you're in all three of the major cloud marketplaces, and the way that you do those deployments seems like it is the modern multi-cloud API of Kubernetes. When you first started building this, Kubernetes was very early on. What was the evolution of getting there? Or were you one of the early-adoption customers in the Kubernetes space?

AB: So, when we started, there was no Kubernetes. But we saw the problem very clearly.
And there were containers, and then came Docker Compose and Swarm. Then there was Mesos, Cloud Foundry, you name it, right? Like, there were many solutions, all the way up to even VMware trying to get into that space.

And what did we do? Early on, I couldn't choose. It's not in our hands, right, who is going to be the winner, so we just simply embraced everybody. It was also tiring to implement native connectors to all the different orchestrators—like, Pivotal Cloud Foundry alone has its own standard, Open Service Broker, that's only popular inside their system. Go outside, elsewhere, everybody was incompatible.

And beyond that, even Chef, Ansible, Puppet scripts, too. We just simply embraced everybody until the dust settled down. When it settled down, clearly the declarative model of Kubernetes became easier. Also, the Kubernetes developers understood the community well. And coming from Borg, I think they understood the right architecture. And it's written in Go, unlike Java, right?

It actually matters—these minute details resonate with the infrastructure community. It took off, and that helped us immensely. Now, not only is Kubernetes popular, it has become the standard, from VMware to OpenShift to all the public cloud providers—GKE, AKS, EKS, whatever, right? All of them are now basically Kubernetes standard. It made not only our life easier, it made every other [ISV 00:16:11] and open-source project—everybody can now finally write one codebase that can be operated portably.

It is a big shift. It is not because we chose; we just watched all this, and we were riding along the way. And because we resonated with the infrastructure community—modern infrastructure is dominated by open-source.
We were also the leading open-source object store, and as the Kubernetes community adopted us, we were naturally embraced by the community.

Corey: Back when AWS first launched with S3 as its first offering, there were a bunch of folks who were super excited, but object stores didn't make a lot of sense to them intrinsically, so they looked into this and said, "Ah, I can build a file system for users on top of S3." And the reaction was, "Holy God, don't do that." And the way that AWS decided to discourage that behavior is a per-request charge, which for most workloads is fine, whatever, but for some it causes a significant burden. With running something like MinIO in a self-hosted way, suddenly that cost doesn't exist in the same way. Does that open the door again to, now I can use it as a file system again? In which case, that just seems like using the local file system, only with extra steps.

AB: Yeah.

Corey: Do you see patterns emerging with customers' use of MinIO that you would not see with the quote-unquote "provider's" quote-unquote "native" object storage option, or do the patterns mostly look the same?

AB: Yeah, if you took an application that ran on file and block and brought it over to object storage, that makes sense. But something that is competing with the object store, or a layer below the object store—at the end of the day, that drives our block devices; you have a block interface, right—trying to bring SAN or NAS on top of an object store is actually a step backwards. They completely missed the message Amazon sent: if you bring a file system interface on top of an object store, you've missed the point, because you are now bringing back the legacy things that Amazon intentionally removed from the infrastructure. Trying to bring them on top doesn't make it any better.
If you are arguing from a compatibility standpoint for some legacy applications, sure, but writing a file system on top of an object store will never be better than NetApp, EMC—like EMC Isilon—or anything else. Or even GlusterFS, right?

But if you want a file system—I always tell the community when they ask us, "Why don't you add an FS option and do a multi-protocol system?"—the whole point of S3 is to remove all those legacy APIs. If I added POSIX, then I'd be a mediocre object store and a terrible file system. I would never do that. But why not write a FUSE file system, right? Like, S3Fs is there.

In fact, initially, for legacy compatibility, we wrote MinFS, and I had to hide it. We actually archived the repository because immediately people started using it. Even simple things like, at the end of the day, can I use Unix [Coreutils 00:19:03] like [cp, ls 00:19:04]—all these tools I'm familiar with? If it's not a file system, object storage tooling like S3 [CMD 00:19:08] or the AWS CLI feels, like, bloated. And it's not really a Unix-like feeling.

So I told them, "I'll give you a BusyBox-like single static binary, and it will give you all the Unix tools that work for the local filesystem as well as object store." That's where the [MC tool 00:19:23] came from; it gives you all the Unix-like programmability, all the core tools, object-storage compatible, speaking native object store. But if I had to make an object store look like a file system so Unix tools would run, it would not only be inefficient—Unix tools never scaled for this kind of capacity.

So, it would be a bad idea to take a step backwards and bring the legacy stuff back inside. For some very small cases—simple POSIX calls using [ObjectiveFs 00:19:49], S3Fs, and a few others—it makes sense for legacy compatibility reasons, but in general, I tell the community: don't bring file and block.
If you want file and block, leave those on virtual machines, leave that infrastructure in a silo, and gradually phase them out.

Corey: This episode is sponsored in part by our friends at Vultr. Spelled V-U-L-T-R, because they're all about helping save money, including on things like, you know, vowels. So, what they do is they are a cloud provider that provides surprisingly high-performance cloud compute at a price that—while sure, they claim it's better than AWS pricing—and when they say that, they mean it is less money. Sure, I don't dispute that, but what I find interesting is that it's predictable. They tell you in advance on a monthly basis what it's going to cost. They have a bunch of advanced networking features. They have nineteen global locations and scale things elastically. Not to be confused with openly, because apparently elastic and open can mean the same thing sometimes. They have had over a million users. Deployments take less than sixty seconds across twelve pre-selected operating systems. Or, if you're one of those nutters like me, you can bring your own ISO and install basically any operating system you want. Starting with pricing as low as $2.50 a month for Vultr cloud compute, they have plans for developers and businesses of all sizes, except maybe Amazon, who stubbornly insists on having something to scale all on their own. Try Vultr today for free by visiting vultr.com/screaming, and you'll receive $100 in credit. That's v-u-l-t-r dot com, slash screaming.

Corey: So, my big problem, when I look at what S3 has done, is in its name—because of course, naming is hard. It's "Simple Storage Service." The problem I have is with the word simple, because over time, S3 has gotten more and more complex under the hood. It automatically tiers data the way that customers want.
And integrated with things like Athena, you can now query it directly; whenever an object appears, you can wind up automatically firing off Lambda functions and the rest. And this is increasingly looking a lot less like a place to just dump my unstructured data, and increasingly, a lot like this is sort of a database, in some respects. Now, understand my favorite database is Route 53; I have a long and storied history of misusing services as databases. Is this one of those scenarios, or is there some legitimacy to the idea of turning this into a database? AB: Actually, there is now an S3 Select API: if you're storing unstructured data like CSV, JSON, Parquet, without downloading even a compressed CSV, you can actually send a SQL query into the system. In MinIO particularly, the S3 Select is [SIMD 00:21:16] optimized. We can load, like, every 64k worth of CSV lines into registers and do SIMD operations. It's the fastest SQL filter out there. Now, bringing these kinds of capabilities, we are just a little bit away from a database; should we do database? I would say definitely no. The very strength of the S3 API is to actually limit all the mutations, right? Particularly if you look at databases, they're dealing with metadata and querying; the biggest value they bring is indexing the metadata. But if I'm dealing with that, then I'm dealing with really small blocks, lots of mutations; object storage should be dealing with persistence and not mutations. Mutations are [AWS 00:21:57] problem. Separation of database work function and persistence function is where object storage got the storage right. Otherwise, they will make the mistake of doing POSIX-like behavior, and then not only bringing back all those capabilities, doing IOPS-intensive workloads across HTTP, it wouldn't make sense, right? So, object storage got the API right. But now should it be a database? So, it definitely should not be a database. 
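As an aside not in the original audio: a rough sketch of what an S3 Select request like the one AB describes can look like with boto3. The bucket, key, and endpoint below are hypothetical, so the network call is left commented out; the same WHERE filter is also applied locally, to show what the server evaluates before sending rows back.

```python
import csv
import io

# Shape of an S3 Select call with boto3 against an S3-compatible endpoint
# (hypothetical bucket/key/endpoint; needs credentials and network, so it
# stays commented out in this sketch):
#
#   import boto3
#   s3 = boto3.client("s3", endpoint_url="http://localhost:9000")  # e.g. a local MinIO
#   resp = s3.select_object_content(
#       Bucket="logs", Key="events.csv",
#       ExpressionType="SQL",
#       Expression="SELECT s.user, s.bytes FROM s3object s "
#                  "WHERE CAST(s.bytes AS INT) > 100",
#       InputSerialization={"CSV": {"FileHeaderInfo": "USE"}},
#       OutputSerialization={"CSV": {}},
#   )
#   for event in resp["Payload"]:
#       if "Records" in event:
#           print(event["Records"]["Payload"].decode())

def select_local(csv_text: str, min_bytes: int):
    """Apply the same WHERE predicate locally, to illustrate the semantics:
    only matching rows ever cross the wire with server-side Select."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return [(r["user"], int(r["bytes"])) for r in rows if int(r["bytes"]) > min_bytes]

sample = "user,bytes\nalice,50\nbob,300\ncarol,120\n"
print(select_local(sample, 100))  # [('bob', 300), ('carol', 120)]
```

The point of the filter living server-side is exactly the one made above: a compressed multi-gigabyte CSV never has to be downloaded just to extract a handful of rows.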
In fact, I actually hate the idea of Amazon yielding to the file system developers and giving a [file tree 00:22:29] hierarchical namespace so they can write nice file managers. That was a terrible idea. Writing a hierarchical namespace that's also sorted now puts a tax on how the metadata is indexed and organized. Amazon should have left the core API very simple and told them to solve these problems outside the object store. Many application developers don't need it. Amazon was trying to satisfy everybody's needs. Saying no to some of these file system-type, file manager-type users would have been the right way. But nevertheless, adding those capabilities, eventually, now you can see, S3 is no longer simple. And we had to keep that compatibility, and I hate that part. I actually don't mind compatibility, but then doing all the wrong things that Amazon is adding, now I have to add because it's compatible. I kind of hate that, right? But now going to a database would be pushing it to a whole new level. Here is the simple reason why that's a bad idea. The right way to do database—in fact, the database industry is already going in the right direction. Unstructured data, the key-value or graph, different types of data, you cannot possibly solve all that even in a single database. They are trying to be multimodal databases; even they are struggling with it. You can never be a Redis, Cassandra, like, a SQL all-in-one. They tried to say that but in reality, you will never be better than any one of those focused database solutions out there. Trying to bring that into object store would be a mistake. Instead, let the databases focus on query language implementation and query computation, and leave the persistence to object store. 
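A minimal, hypothetical sketch of that split, persistence in the object store and query/indexing in the database, using an in-memory dict as a stand-in for an S3-compatible store. All names and formats here are illustrative, not MinIO's or any database's actual internals.

```python
import json

class ObjectStore:
    """Stand-in for an S3-compatible store: immutable puts, whole-object gets."""
    def __init__(self):
        self._objects = {}
    def put(self, key, data: bytes):
        self._objects[key] = data
    def get(self, key) -> bytes:
        return self._objects[key]

class SegmentedTable:
    """Table segments persist in the object store; the key -> segment index
    lives in the database's memory and is snapshotted once in a while."""
    def __init__(self, store: ObjectStore, name: str):
        self.store, self.name = store, name
        self.index = {}        # key -> segment object name (in memory)
        self._seg_no = 0
    def write_segment(self, rows: dict):
        seg = f"{self.name}/segment-{self._seg_no:06d}"
        self._seg_no += 1
        self.store.put(seg, json.dumps(rows).encode())  # immutable segment
        for k in rows:
            self.index[k] = seg
    def lookup(self, key):
        seg = self.index[key]                         # fast: in-memory index
        return json.loads(self.store.get(seg))[key]   # slow: object fetch
    def snapshot_index(self):
        snap = {"index": self.index, "seg_no": self._seg_no}
        self.store.put(f"{self.name}/index.snapshot", json.dumps(snap).encode())
    @classmethod
    def recover(cls, store, name):
        t = cls(store, name)
        snap = json.loads(store.get(f"{name}/index.snapshot"))
        t.index, t._seg_no = snap["index"], snap["seg_no"]
        return t

store = ObjectStore()
t = SegmentedTable(store, "orders")
t.write_segment({"o1": {"amount": 40}, "o2": {"amount": 75}})
t.snapshot_index()
t2 = SegmentedTable.recover(store, "orders")  # database restarts, index recovered
print(t2.lookup("o2"))  # {'amount': 75}
```

The design choice this illustrates: mutations and indexing stay in the database process, while the object store only ever sees immutable writes and streaming reads.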
So, object store can still focus on storing your database segments, the table segments, but the index is still in the memory of the database. Even the index can be snapshotted once in a while to object store, but use object store for persistence and database for query is the right architecture. And almost all the modern databases now, from Elasticsearch to [unintelligible 00:24:21] to even Kafka, like, message queues. They all have gone that route. Even Microsoft SQL Server, Teradata, Vertica, you name it, Splunk, they all have gone the object storage route, too. Snowflake itself is a prime example, BigQuery and all of them. That's the right way. Databases can never be consolidated. There will be many different kinds of databases. Let them specialize on GraphQL or Graph API, or key-value, or SQL. Let them handle the indexing and persistence, they cannot handle petabytes of data. That [unintelligible 00:24:51] to object store is how the industry is shaping up, and it is going in the right direction. Corey: One of the ways I learned the most about various services is by talking to customers. Every time I think I've seen something, this is amazing. This service is something I completely understand. All I have to do is talk to one more customer. And when I was doing a bill analysis project a couple of years ago, I looked into a customer's account and saw a bucket that, okay, has 280 billion objects in it—and wait, was that billion with a B? And I asked them, “So, what's going on over there?” And they're, “Well, we built our own columnar database on top of S3. This may not have been the best approach.” It's, “I'm going to stop you there. 
With no further context, it was not, but please continue.” It's the sort of thing that would never have occurred to me to even try. Do you tend to see similar—I would say they're anti-patterns, except somehow they're made to work—in some of your customer environments, as they are using the service in ways that are very different than the ways encouraged or even allowed by the native object store options? AB: Yeah, when I first started seeing the database-type workloads coming on to MinIO, I was surprised, too. That was exactly my reaction. In fact, they were storing these 256k, sometimes 64k table segments because they need to index it, right, and the table segments were anywhere between 64k to 2MB. And when they started writing table segments, it was more often an [IOPS-type 00:26:22] I/O pattern than a throughput-type pattern. Throughput is an easier problem to solve, and MinIO always saturated these 100-gigabyte NVMe-type drives; they were I/O intensive, throughput optimized. When I started seeing the database workloads, I had to optimize for small-object workloads, too. We actually did all that because eventually I got convinced the right way to build a database was to actually leave the persistence out of the database; they made actually a compelling argument. Historically, I thought of metadata and data: data tends to be very big, so coming to object store makes sense. Metadata should be stored in a database, and that's only the index pages. Take any book, the index pages are only a few; the database can continue to run adjacent to object store, it's a clean architecture. But why would you put the database itself on object store? When I saw a transactional database like MySQL changing the [InnoDB 00:27:14] to [RocksDB 00:27:15], and making changes at that layer to write the SS tables [unintelligible 00:27:19] to MinIO, and then I was like, where do you store the memory, the journal? They said, “That will go to Kafka.” And I was like—I thought that was insane when it started. 
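The pattern just described, mutations absorbed in memory and immutable, sorted table segments committed out to the object store, can be sketched roughly as follows. This is an illustration only: the dict stands in for an S3-compatible bucket, the segment format is made up, and the journal that would go to Kafka is omitted.

```python
import json

class MemtableStore:
    """LSM-style sketch: writes land in an in-memory memtable; past a
    threshold the memtable is flushed as an immutable, sorted segment to
    the (stand-in) object store. Reads check the memtable first, then
    segments from newest to oldest."""
    def __init__(self, flush_at=3):
        self.memtable = {}
        self.segments = []       # segment names, newest last
        self.objects = {}        # stand-in for an S3-compatible bucket
        self.flush_at = flush_at
    def put(self, key, value):
        self.memtable[key] = value
        if len(self.memtable) >= self.flush_at:
            self.flush()
    def flush(self):
        name = f"sst-{len(self.segments):06d}"
        sorted_rows = dict(sorted(self.memtable.items()))
        self.objects[name] = json.dumps(sorted_rows).encode()  # immutable write
        self.segments.append(name)
        self.memtable = {}
    def get(self, key):
        if key in self.memtable:
            return self.memtable[key]
        for name in reversed(self.segments):   # newest segment wins
            rows = json.loads(self.objects[name])
            if key in rows:
                return rows[key]
        raise KeyError(key)

db = MemtableStore(flush_at=2)
db.put("a", 1); db.put("b", 2)    # second put triggers a flush to the store
db.put("a", 9)                     # newer value, still only in the memtable
print(db.get("a"), db.get("b"))    # 9 2
```

Note how the object store only ever receives whole immutable segments, which is exactly why the small-block, mutation-heavy part of the workload has to stay in memory (or in a journal) rather than in the store.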
But it continued to grow and grow. Nowadays, I see most of the databases have gone to object store, but their argument is, the databases also saw explosive growth in data. And they couldn't scale the persistence part. That is where they realized that they were still very good at the indexing part, which object storage would never give. There is no API to do sophisticated query of the data. You cannot peek inside the data, you can just do streaming read and write. And that is where the databases were still necessary. But databases were also growing in data. One thing that triggered this was the use case moved from data that was generated by people to data generated by machines. Machines means applications, all kinds of devices. Now, it's like between seven billion people to a trillion devices is how the industry is changing. And this led to lots of machine-generated, semi-structured, structured data at giant scale, coming into databases. The databases need to handle scale. There was no other way to solve this problem other than leaving the—[unintelligible 00:28:31] if you're looking at columnar data, most of them are machine-generated data, where else would you store it? If they tried to build their own object storage embedded into the database, it would make the database immensely complicated. Let them focus on what they are good at: Indexing and mutations. Pulling the data table segments, which are immutable, mutating in memory, and then committing them back gives the right mix. That was the fastest shift that happened; we saw that consistently across. Now, it is actually the standard. Corey: So, you started working on this in 2014, and here we are—what is it—eight years later now, and you've just announced a Series B of $100 million on a billion-dollar valuation. So, it turns out this is not just one of those things people are using for test labs; there is significant momentum behind using this. 
How did you get there from—because everything you're saying makes an awful lot of sense, but it feels, at least from where I sit, to be a little bit of a niche. It's a bit of an edge case that is not the common case. Obviously, I'm missing something because your investors are not the types of sophisticated investors who see something ridiculous and, “Yep. That's the thing we're going to go for.” They're right more than they're not. AB: Yeah. The reason for that was they saw what we were set out to do. In fact, these are—if you see the lead investor, Intel, they watched us grow. They came into Series A and they saw, every day, how we operated and grew. They believed in our message. And it was actually not about object store, right? Object storage was a means for us to get into the market. When we started, our idea was, ten years from now, what will be a big problem? A lot of times, it's hard to see the future, but if you zoom out, it's hidden in plain sight. These are simple trends. Every major trend pointed to the world producing more data. No one would argue with that. If I solved one important problem that everybody is suffering from, I won't go wrong. And when you solve the problem, it's about building a product with fine craftsmanship, attention to details, connecting with the user, all of that standard stuff. But I picked object storage as the problem because the industry was fragmented across many different data stores, and I knew that won't be the case ten years from now. Applications are not going to adopt different APIs across different clouds; S3 to GCS to Azure Blob to HDFS, everything is incompatible. I saw that if I built a data store for persistence, the industry would consolidate around the S3 API. Amazon S3, when we started, it looked like they were the giant; there was only one cloud, the industry believed in mono-cloud. 
Almost everyone was talking to me like AWS will be the world's data center. I certainly see that possibility; Amazon is capable of doing it, but my bet was the other way, that AWS S3 will be one of many solutions, but not—if it's all incompatible, it's not going to work, the industry will consolidate. Our bet was, if the world is producing so much data, if you build an object store that is S3 compatible and it ends up as the leading data store of the world and owns the application ecosystem, you cannot go wrong. We kept our heads low and focused for the first six years on massive adoption, built the ecosystem to a scale where we can say now our ecosystem is equal to or larger than Amazon's, then we are in business. We didn't focus on commercialization; we focused on convincing the industry that this is the right technology for them to use. Once they are convinced, once you solve business problems, making money is not hard because they are already sold, they are in love with the product, then convincing them to pay is not a big deal because data is such a critical, central part of their business. We didn't worry about commercialization, we worried about adoption. And once we got the adoption, now customers are coming to us and they're like, “I don't want open-source license violations. I don't want data breach or data loss.” They are trying to sell to me, and it's an easy relationship game. And it's about long-term partnership with customers. And so the business started growing, accelerating. That was the reason that now is the time to fill up the gas tank, and investors were quite excited about the commercial traction as well. And all the intangibles, right, how big we grew in the last few years. Corey: It really is an interesting segment, that has always been something that I've mostly ignored, like, “Oh, you want to run your own? Okay, great.” I get it; some people want to cosplay as cloud providers themselves. Awesome. 
There's clearly a lot more to it than that, and I'm really interested to see what the future holds for you folks. AB: Yeah, I'm excited. I think, end of the day, if I solve real problems, every organization is moving from being compute-technology-centric to data-centric, and they're all looking at data warehouses, data lakes, and whatever name they give data infrastructure. Data is now the centerpiece. Software is a commodity. That's how they are looking at it. And it is translating to each of these large organizations—actually, even the mid-size, even startups nowadays have petabytes of data—and I see a huge potential here. The timing is perfect for us. Corey: I'm really excited to see this continue to grow. And I want to thank you for taking so much time to speak with me today. If people want to learn more, where can they find you? AB: I'm always on the community, right. Twitter and, like, I think the Slack channel, it's quite easy to reach out to me. LinkedIn. I'm always excited to talk to our users or community. Corey: And we will of course put links to this in the [show notes 00:33:58]. Thank you so much for your time. I really appreciate it. AB: Again, wonderful to be here, Corey. Corey: Anand Babu Periasamy, CEO and co-founder of MinIO. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with what starts out as an angry comment but eventually turns into you, in your position on the S3 product team, writing a thank you note to MinIO for helping validate your market. Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. 
We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started. Announcer: This has been a HumblePod production. Stay humble.

Enterprise OSS
Ceph and DR - From the EOSS Blog


Dec 8, 2021 5:50


Today it is possible to implement DR (disaster recovery) of virtual machines on both open and closed infrastructure (e.g., VMware) in a simple and efficient way. To address this need, raised by various customers over time, the product choice fell on the open-source distributed storage system Ceph. Today, Ceph is unquestionably the leader in the open-source field and beyond; we have covered it in several blog posts and podcast episodes. For further reading: the technologies mentioned are Ceph, Kvm, Vmware, Proxmox, iSCSI, Rbd.

Tech ONTAP Podcast
Episode 301: NVMe over TCP in ONTAP 9.10.1


Oct 1, 2021 25:56


This week, the SAN guy at NetApp - Mike Peppers - drops by to discuss the latest advancement in NVMe as a protocol and what it means for the other TCP-based SAN protocol - iSCSI.

Mosqueteroweb Tecnología
OMV, blocking ads, and TV series


Nov 4, 2020 24:20


In this episode I talk about: - OMV: OpenMediaVault as a home NAS. - iSCSI: https://diario.mosqueteroweb.eu/2020/04/ampliar-discos-de-tu-mini-pc-o.html - Blocking ads with Adguard - Docker and Portainer - The series The Queen's Gambit - The series The Good Doctor Amazon affiliates: https://amzn.to/2VAfFYz Telegram channel: Mosqueteroweb en canal, https://t.me/mosqueterowebencanal Podcast Pedro el de la suerte: https://anchor.fm/s/1053b00/podcast/rss Podcast license: Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)

BSD Now
372: Slow SSD scrubs


Oct 15, 2020 48:04


Wayland on BSD, My BSD sucks less than yours, Even on SSDs, ongoing activity can slow down ZFS scrubs drastically, OpenBSD on the Desktop, simple shell status bar for OpenBSD and cwm, and more. NOTES This episode of BSDNow is brought to you by Tarsnap (https://www.tarsnap.com/bsdnow) Headlines Wayland on BSD (https://blog.netbsd.org/tnf/entry/wayland_on_netbsd_trials_and) After I posted about the new default window manager in NetBSD I got a few questions, including "when is NetBSD switching from X11 to Wayland?", Wayland being X11's "new" rival. In this blog post, hopefully I can explain why we aren't yet! My BSD sucks less than yours (https://www.bsdfrog.org/pub/events/my_bsd_sucks_less_than_yours-full_paper.pdf) This paper will look at some of the differences between the FreeBSD and OpenBSD operating systems. It is not intended to be solely technical but will also show the different "visions" and design decisions that rule the way things are implemented. It is expected to be a subjective view from two BSD developers and does not pretend to represent these projects in any way. Video + EuroBSDCon 2017 Part 1 (https://www.youtube.com/watch?v=ZhpaKuXKob4) + EuroBSDCon 2017 Part 2 (https://www.youtube.com/watch?v=cYp70KWD824) News Roundup Even on SSDs, ongoing activity can slow down ZFS scrubs drastically (https://utcc.utoronto.ca/~cks/space/blog/solaris/ZFSSSDActivitySlowsScrubs) Back in the days of our OmniOS fileservers, which used HDs (spinning rust) across iSCSI, we wound up changing kernel tunables to speed up ZFS scrubs and saw a significant improvement. When we migrated to our current Linux fileservers with SSDs, I didn't bother including these tunables (or the Linux equivalent), because I expected that SSDs were fast enough that it didn't matter. Indeed, our SSD pools generally scrub like lightning. OpenBSD on the Desktop (Part I) (https://paedubucher.ch/articles/2020-09-05-openbsd-on-the-desktop-part-i.html) Let's install OpenBSD on a Lenovo Thinkpad X270. 
I used this computer for my computer science studies. It has both Arch Linux and Windows 10 installed as dual boot. Now that I'm no longer required to run Windows, I can ditch the dual boot and install an operating system of my choice. A simple shell status bar for OpenBSD and cwm(1) (https://www.tumfatig.net/20200923/a-simple-shell-status-bar-for-cwm/) These days, I try to use simple and stock software as much as possible on my OpenBSD laptop. I’ve been playing with cwm(1) for weeks and I was missing a status bar. After trying things like Tint2, Polybar etc, I discovered @gonzalo’s termbar. Thanks a lot! As I love scripting, I decided to build my own. Beastie Bits DragonFly v5.8.3 released to address two issues (http://lists.dragonflybsd.org/pipermail/commits/2020-September/769777.html) OpenSSH 8.4 released (http://www.openssh.com/txt/release-8.4) Tarsnap This week's episode of BSDNow was sponsored by our friends at Tarsnap, the only secure online backup you can trust your data to. Even paranoids need backups. Feedback/Questions Dane - FreeBSD vs Linux in Microservices and Containters (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/372/feedback/Dane%20-%20FreeBSD%20vs%20Linux%20in%20Microservices%20and%20Containters.md) Mason - questions.md (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/372/feedback/Mason%20-%20questions.md) Michael - Tmux License.md (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/372/feedback/Michael%20-%20Tmux%20License.md) Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv (mailto:feedback@bsdnow.tv) ***

The Business of Open Source
Enabling Cloud Native Environments with Gou Rao


Sep 16, 2020 29:38


The conversation covers:  Gou's role as CTO of Portworx, and how he works with customers on a day-to-day basis. Common pain points that Gou talks about with customers. Gou explains how he helps customers create agile and cost-effective application development and deployment environments. The types of people that Gou talks to when approaching customers about cloud native discussions. Why customers often struggle with infrastructure-related problems during their cloud native journeys, and how Gou and his team help. Common misconceptions that exist among customers when exploring cloud native solutions. For example, Gou mentions moving to Kubernetes for the sake of moving to Kubernetes.  Gou's thoughts on state — including why there is no such thing as an end-to-end stateless architecture. Some cloud native vertical trends that Gou is noticing taking place in the market.  The issue of vendor lock-in, and how data and state fit into lock-in discussions.  Gou's opinion on where he sees the cloud native ecosystem heading. Links Portworx: https://portworx.com/  Portworx Blog: https://portworx.com/blog/ Gou Rao Email: mailto:gou@portworx.com  Transcript Emily: Hi everyone. I'm Emily Omier, your host, and my day job is helping companies position themselves in the cloud-native ecosystem so that their product's value is obvious to end-users. I started this podcast because organizations embark on the cloud native journey for business reasons, but in general, the industry doesn't talk about them. Instead, we talk a lot about technical reasons. I'm hoping that with this podcast, we focus more on the business goals and business motivations that lead organizations to adopt cloud-native and Kubernetes. I hope you'll join me. Emily: Welcome to The Business of Cloud Native, I'm your host Emily Omier, and today I am chatting with Gou Rao. Gou, I want to go ahead and have you introduce yourself. Where do you work? What do you do? Gou: Sure. 
Hi, Emily, and hi to everybody that's listening in. Thanks for having me on this podcast. My name is Gou Rao. I'm the CTO at Portworx. Portworx is a leader in the cloud-native storage space. We help companies run mission-critical stateful applications in production in hybrid, multi-cloud, and cloud-native environments. Emily: So, when you say you're CTO, obviously that's a job title everyone, sort of, understands. But what does that mean you spend your day doing? Gou: Yeah, it is an overloaded term. As a CTO, I think CTOs in different companies wear multiple hats doing different things. Here at Portworx, technically I'm in charge of the company's strategy and technical direction. What does that mean in terms of my day-to-day activities? It's spending a lot of time with customers understanding the problems that they're trying to solve, and then trying to build a pattern around what different people in different industries and companies are doing, and then identifying common problems and trying to bring solutions to market, by working with our engineering teams, that sort of address, holistically, the underlying areas that I see people try and craft solutions around, whether it's enabling an agile development environment for their internal developers, or cost optimization; there's usually some underlying theme, and my job is to identify what that is, and come up with a meaningful solution that addresses a wide segment of the market. Emily: What are the most common pain points that you end up talking to customers about? Gou: Over the past, I think, eight-plus years or so—I think the enterprise software space goes through iterations in the types of problems that are being solved. Over the past eight-plus years or so, it really has been around this—we use this term cloud-native—enabling cloud-native environments. And what does that really mean? 
In talking to customers, what this has really meant recently is enabling an agile application development and deployment environment. And let's even define what that is. Me as an application developer, I have to rely on traditional IT techniques where there's a separate storage department, compute department, networking department, security department, and I have to interact with all of them just to develop and try out an application. But that really is impeding me as a developer from how fast I can iterate and build product and get it out there, so by and large, the common underlying theme has been, “Make that process better for me.” So, if I'm head of infrastructure, how can I enable my developers to build and push product faster? So, getting that agility up in a sense where it makes—cost-wise, too, so it has to make cost sense—how do I enable an efficient, cost-efficient development platform? That has been the underlying theme. That sort of defines a set of technologies that we call cloud-native, and so orchestration tools like Kubernetes, and storage technologies like, hopefully, what we're doing at Portworx, these are all aimed at facilitating that. That's been sort of what we've been focused on over the past couple of years. Emily: And when you talk to customers, do they tend to say, “Hey, we need to figure out a way to increase our development velocity?” Or do they tend to say, “We need a better solution for stateful applications?” What's the type of vocabulary that they're attempting to use to describe their problems, and how high-level do they usually go? Gou: That's a good question. Both. So, the backdrop really is, “Increase my development velocity. Make it easier for me to put product out there faster.” Now, what does it take to get there? So, the second-order problems then become do I run in the public cloud, private cloud? Do I need help running stateful applications? 
So, these are all pillars that support the main theme here, which is increasing development velocity. So, the primary umbrella under which our customers are operating is really around increasing development velocity in a way that makes cost sense. And if you double-click on that and look at the type of problems that they're solving, they would include, “How do I efficiently run my applications in a public cloud? Or a hybrid cloud? How do I enable workflows that need to span multiple clouds?” Again, because maybe they're using cloud provider technologies, like either compute resources, or even services that a cloud provider may be offering, so that, again, all of this so that they can increase their development velocity. Emily: And in the past, and to a certain extent now, storage was somewhat of a siloed area of expertise. When you're talking to customers, who are you talking to in an organization? I mean, is it somebody who's a storage specialist or is it someone who's not? Gou: No, they're not. So, that's been one of the things that have really changed in this ecosystem, which is the shift away from this kind of like, hey, there's a storage admin and a storage architect, and then there's a compute admin or VM admin or a security admin. That's really not who is driving this, because if you look at that—that world really thinks in terms of infrastructure first. Actually, let me just take a step back for a second. One of the things that has actually changed here in the industry is this: a move from a machine-centric organization to an application-centric organization. So, let me explain what that means. Historically, enterprises have been run by data centers that have been run by a machine-centric control plane. This is your typical VMware type of control plane where the most important concept in a data center is a machine. So, if you need storage, it's for a machine. If you need an IP address, it's for a machine. 
If you want to secure something, you're securing a machine. And if you look at it, that really is not what an enterprise or business is trying to solve. What enterprises and businesses are trying to do is put their product out there faster. And so for them, what is more important? It's an application. And so what has actually changed here? One of the things that defines this cloud-native movement, at least in my mind, is this move from a machine-centric control plane to an application-centric control plane, where the first-class citizen, or the most important thing in an enterprise data center, is not a machine, it's an application. And really, that's where technologies like Kubernetes and things like that come in. So, now your question is, who do we talk to? We don't talk to a storage administrator or machine-centric administrator; we talk to the people that are more focused on building and harnessing that application-centric control plane. So, these are people like a cloud architect, or it's a CIO level—or a CTO level—driven decision to enable their application developers to move faster. These are application owners, application architects, it's that kind of people. You had an additional question there, which is, are they storage experts? By definition, these people are not. So, they know they need storage, they know they need to secure their applications, they know they need networking, but they're not experts in any one of those domains. They are more application-level architects and application experts. Emily: Do you find that there tends to be some knowledge gaps? If so, is there anything that you keep sort of repeating over and over again when you have these conversations? Gou: So, it's not about having a knowledge gap; it's more that solving an infrastructure problem—which is what storage is—is not necessarily their primary task. 
So, in a sense, they're not experienced with that; they know that they need infrastructure support, they know they need storage support and networking support, but they expect that to be part of the core platform. So, one of the things that we've had to do—and I suspect others in this cloud-native ecosystem—is to take all of the heavy lifting that storage software would do and then package it in such a way that it's easy to consume by application architects; easy to consume by people that are putting together this cloud-native platform. So, dealing with things like drive failures, or how do I properly [unintelligible] my data or RAID protect my data, or how do I deal with backups? These are things that they kind of know need to be done, but they're not really experienced with, and they expect the cloud-native software platforms to handle that functionality for them. So, in other words, doing the job of a storage admin, but in the form of software, has been really important. Emily: Do you find that there's any common misconceptions out there? Gou: Yeah. With any new technology or evolving space, I think there are bound to be some misconceptions in how you approach things. So, just moving to Kubernetes for the sake of moving to Kubernetes, for instance, is a common mistake—or improperly embracing it is a common mistake that we see. You really need to think about why you're moving to this platform, and if you're really doing it to embrace developer agility. For instance, one mistake we see, and this is especially true with the ecosystem we work in because Portworx is enabling storage technologies in Kubernetes: we see people try and take Kubernetes and leverage legacy infrastructure tools like NFS, or connect their Kubernetes systems to storage arrays—which are really machine-centric storage arrays—over protocols like iSCSI, and you kind of look at that, and what is the problem with that? 
And one way I like to describe it is: if you take an agile platform like Kubernetes, and then you make that platform rely on legacy infrastructure, well, you're bringing the power of Kubernetes down to the lowest common denominator, which is your legacy platform. Your Kubernetes platform is only going to be as agile as the least nimble element in the stack. And so an anti-pattern that we see here is where people try that, and then they say, "Well, geez, I'm not getting the agility out of Kubernetes, I don't really see what all the fuss about this is. I'm still moving IT tickets around, and my developers still rely on storage admins." And then we have to tell them, "Well, that's because you've tied your Kubernetes platform down to legacy infrastructure; let's now think about modernizing that infrastructure." And that's sort of—I do find myself in a spot where we have to educate the partners about that. And they eventually see that, and hopefully that's why they work with us.

Emily: Why do you think they tend to make that mistake?

Gou: It's what's lying around, right? It's the ease of just leveraging the tools and the equipment you already have, not really fully understanding the problem all the way through. It takes time for people to try out a pattern—take Kubernetes, try it with what you have lying around, connect it, learn from mistakes. And so that really takes some time. They look at others to see what others are doing, and it's not immediately evident. I think, with any new technology, there's always some learning that goes along with it.

Emily: Well, let's talk a little bit about stateful apps in general. Why do people need stateful apps? What business problem do stateful apps deal with, or make it possible to solve, that you just can't do with stateless?

Gou: Sure.
Yeah, very rarely is there really something that is a stateless app. Unless you're doing some sort of ephemeral experiment—and there are certain use cases for that—especially in enterprises, you always rely on data. And data is essentially your state in some shape or form; whether it's short-lived state or long-lived state, there's always some state and data involved in an application. People try to think of things in terms of stateless, and what that really means is they're punting the state problem to some other part of their overall solution; it doesn't really go away. What do I mean by that? Well, if you put all your stateless apps on your cloud-native platform, where did the state go? You're probably dealing with it in some VM or some bare-metal machine. It didn't really go away; it's there, and you're still left with trying to connect to it, and access it, and manage it. And then you start wondering what kind of northbound impact that is having on what you think is your stateless side of things. And it has an impact. Anytime you need to make changes to your data structure, it's not like your stateless side of things is not impacted, so it kind of halts the entire pipeline. So, what we see happening as a pattern is people finally understanding that there's really no such thing as an end-to-end stateless architecture—especially in enterprises—and that they need to embrace the state and manage it in a cloud-native way. And so, that's really where a lot of the talk you see these days, around how you manage stateful applications in Kubernetes, is coming from. How do you govern it? How do you enforce things like RBAC on it? How do you manage its accessibility? How do you deal with its portability, because state has gravity?
These are the main topics these days that people have to think about.

Emily: Can you think of any misconceptions other than the ones that you've just mentioned that are specifically related to state?

Gou: Sure. I think cost—well, less a misconception than not fully appreciating or understanding the impact that this would have—and it really is around cost, especially when running in the public cloud. People underestimate the cost associated with state in the public cloud, especially when you're enabling an agile platform: with agility comes a lot of automation, and people tend to do a lot of things, and so when you have a lot of state lying around, the cost can add up. A simple example: if you have a lot of volumes sitting around in, let's say, Amazon, well, you're paying those bills. And density is another related concept, which is, if you're running thousands of applications, and thousands of databases, or thousands of volumes, how do you manage that without spending that much money on resources? How do you virtually manage that? So, cost planning in the public cloud is something that I see is so commonly overlooked or not really fully thought through. And it does come back to bite people down the line, because those bills stack up quickly. So, coming up with an architecture where that is well thought through becomes really important.

Emily: When do people tend to think about state when they're thinking about a digital transformation towards cloud-native?

Gou: I think, Emily, state is something that is front and center of any enterprise architecture, because—look at it this way—any application you're going to build, generally speaking, revolves around the data structures and the data it operates on. There are a few different ways in which you start cracking your application stack, right?
One is you start with the user experience: what end-user experience you want to fulfill, what is the problem you're solving? Another core element is then: what are the data structures needed to solve that problem? So, state becomes an important thing right from day one. Now, in terms of managing it, people can choose to defer that problem, saying, "I'm not going to deal with managing the state in my cloud-native platform." This goes to your comment about stateless architectures. Again, if they do that, then they're punting the problem until the point where they actually need to implement it and manage it in production. Either way, that problem comes up. You're thinking about the data structures at development time for sure—you can't avoid that. In terms of embracing how you're going to run it, it definitely comes into place as you're rolling out your application, as you're getting close to production, because somebody has to manage that.

Emily: And there are probably people out there who would argue, hey, cloud-native means stateless. What would you say to those people?

Gou: Yeah, I think we've had this debate many years ago, too. This notion of 'twelve-factor applications' is kind of where it started, and I think people realized that there really is no such thing as a stateless architecture—unless you're dealing with an architecture where state is a small subset of the overall product offering, where a lot of the value is in maybe your UI, or it's a web app where people are interacting with each other and there's just a small amount of data sitting in a database that's recording a few messages. So, in that case, what you could do is architect your database maybe just once and really not touch it—any changes you're making to your application are more cosmetic—and then you can put your state somewhere else, in an external database, and have an endpoint that your applications interact with.
Keep in mind, though, that that's still not stateless; you've just put your state outside of your cloud-native platform. But to your question—what would I say to somebody who says that's the way to do things—my point is that's not how enterprise applications and architectures work. They're more complex than that, and the state is more central, the data is more central to the application, where changes intimately involve changing how your data structures are organized, changing tables around. In that case, it doesn't make sense to treat that outside of your cloud-native stack: because you're making so many changes to it, you're going to lose the agility that the cloud-native platform can offer, compared to bringing those microservice databases into your Kubernetes platform, letting the developers make the data structure changes they need to, as frequently as they need to, all within the same platform, within the same control plane. That makes a better enterprise architecture.

Emily: Sort of a different type of question. But I'm just curious if there's anything that continues to surprise you about the cloud-native ecosystem, even about your conversations with customers. What continues to surprise you, but also, what continues to surprise them?

Gou: I was a little bit more surprised a couple of years ago, but what does continue to surprise me is the mixing of cloud-native and legacy tools that still happens—and it's not just with storage; I see the same thing happening around security, or even networking. And some of that is not so much a surprise as something that makes less sense to me—and eventually I think people realize it—that they make this change, and then it sort of looks like a lateral move to them, because they didn't get the agility they wanted.
If somebody has to roll out an application and they have to sit through a machine-type security audit, for instance—if they're doing security and they're not leveraging the new tools that do container-native security—those kinds of things do raise a flag. In trying to work with our partners and customers and point these things out, I still do see some of that happening. And I think it has been getting better over time, because people learn from their peers within their industries. Whether it's banking—you'll see banks kind of look at each other, and they develop similar architectures. It's a close-knit ecosystem, for instance.

Emily: Actually, do you see any specific trends that are vertical-specific?

Gou: So, industry-specific, right? For instance, in the financial sector, there are certainly trends, whether it's embracing hybrid-cloud technologies, and they kind of do things similarly. An example: some industries are okay with completely relying on one single cloud provider. Generally speaking—and I'm just giving an example in the financial space—we've seen that not be the case; they want more of a multi-cloud or cloud-neutral platform. They'll probably run on-prem and in the public cloud, but they don't want to be locked into a particular cloud provider. And so that's, for example, a trend in that industry. So, yeah, different industries have different kinds of specific features that they're looking for and architectures that they're circling around.

Emily: You brought up lock-in. Lock-in is a big topic with a lot of people, but often the idea of data gravity doesn't make its way into the primary conversations about lock-in. How does data and state fit into these lock-in discussions?

Gou: That's a really good point, and I'll break it down into two things.
So, data has gravity, and ultimately, if you have petabytes of data sitting around somewhere, it becomes harder and harder to move it. But even before that, one of the most important things that we try and teach people—and I think people kind of realize this themselves—is that lock-in starts even before data gravity has become an issue. It starts with: is your enterprise architecture or platform itself completely reliant on a particular provider's APIs and offerings? And that's where we really caution our customers and partners: that's the layer at which you want to build your—that's your demarcation point—which is, run your services in a cloud provider, but don't lock your architecture into relying on that cloud provider providing those services. So, for instance, databases: if you're running a database, it's better to run your database on your platform in the public cloud, as opposed to putting all of your data in the cloud provider's database, because then you're really locked in, not just by way of data, but even by way of your enterprise architecture. So, that's one of the first things that I think people are looking for, which is: how do I build this cloud-agnostic platform? Forget cloud-native—a cloud-agnostic platform where I can run all of my databases at ease, but I've not locked my database into a cloud provider. Once you do that, then you can start breaking up your applica—especially in enterprises, it's not like you just have one very large database; you typically have many, many, many small databases—so it becomes easy to start with portability, and you can have sets of your applications, if you need to, run in different cloud providers, and you have ways to connect these. And this is the notion of hybrid cloud, which is becoming real.

Emily: What do you see as the future? Where do you see this ecosystem going?

Gou: You know, there's a lot happening in different segments, right?
And 5G, for example, is a huge enabler of Kubernetes, or consumer of cloud-native technologies, and there's just a lot happening over there—around edge-to-core compute, patterns that are emerging for how you move data from the edge to the core or vice versa, and how you distribute data at the edge. So, there's a lot of innovation happening there in the 5G space. Every segment has something interesting going on. From Portworx's standpoint, what we're doing over the next couple of years is helping people in two main areas.

I mentioned 5G, so: enabling workflows that are truly cloud-native. What do I mean by that? Workflows driven by Kubernetes that allow the developers to create higher-level workflows where they can move data between Kubernetes clusters—and it doesn't have to be between cloud providers. Just to take a step back for a second: whenever somebody is running a cloud-native platform, what we find is that it's not like an enterprise has one very large Kubernetes cluster. Typically people are managing fleets of Kubernetes clusters—whether they're doing blue/green deployments, or they just have different clusters for different types of applications, or they're compartmentalized for different departments—so we find that facilitating movement of applications between clusters is very important. So, that's an area we're focusing on; it certainly makes sense in the 5G space.

The other area is around putting more AI into the platform. So, here's an example. Does every enterprise out there, does every developer, need to become a database expert to run stateful applications? Our answer is no.
We want to bring this notion of self-driving technologies into stateful applications, where—hopefully with the Portworx software—if you're running stateful applications, the Portworx software is managing the performance of the database, maybe even the sharding of the database, the vertical and horizontal scaling of it, monitoring the database for performance problems and hotspots, and reacting to them. We've introduced some technologies like that already over the past couple of years. An example is capacity management. What we've found is—and I think you asked this question early on—people that are running stateful applications on their Kubernetes platform are not storage experts. People routinely make mistakes when it comes to capacity planning. One of the things that we found is people had underestimated how much storage they were going to use, and so they ran out of capacity in their cloud-native platform. Why should it be their problem to deal with that? So, we added software that's intelligent enough to detect these kinds of cases and react to them. Similarly, we'll do the same thing around vertical scaling and reducing hotspots. And bringing that kind of intelligence into the platform is something we expect not just Portworx but other people in this ecosystem to solve.

Emily: Do you think there are any business problems that customers talk about that you don't have a good answer to? Or that you don't think the ecosystem has a good answer to, yet?

Gou: Yeah, no, it's a good question. So, making the platform simple—simplicity is one aspect. We do see enterprises struggling with the cloud-native space being complex, with so many technologies out there. Kubernetes itself certainly has its own learning curve. So, making some of those things more simple and cookie-cutter, so that enterprises can simply focus on just their application development—that is an area that still needs to be worked on.
And we see enterprises—I wouldn't say really struggling, but trying to wrap their heads around how to make the platform more simple for their developers. There are projects in the cloud-native space that are focused on this. I think a lot of this is going to come down to establishing a set of patterns for different use cases. The more material, examples, and use cases that are out there, the more that will certainly help.

Emily: Do you have an engineering tool that you can't live without? If so, what is it?

Gou: [laughs]. My go-to is obviously GDB; it's our debugger environment. I certainly wouldn't be able to develop without that. When I'm looking for insights into how a certain customer's environment is doing, Prometheus and Grafana are, sort of, my go-to tools in terms of looking at metrics, application performance, health, and things like that.

Emily: How could listeners connect with you or follow you?

Gou: On our site, portworx.com, P-O-R-T-W-O-R-X-dot-com. We frequently blog on that site; that would be a good way to follow what we're up to and what we're thinking. I certainly respond to people via email, so reach out to me if you have any questions. I'm pretty good at replying to that: Gou—that's G-O-U—at portworx.com. I don't tweet nearly as much as our PR team would like me to, so there's not too much information there, unfortunately.

Emily: Thank you so much for joining us on The Business of Cloud Native.

Gou: Thank you, Emily. Thanks for having me.

Emily: Thanks for listening. I hope you've learned just a little bit more about The Business of Cloud Native. If you'd like to connect with me or learn more about my positioning services, look me up on LinkedIn: I'm Emily Omier, that's O-M-I-E-R, or visit my website, which is emilyomier.com. Thank you, and until next time.

Announcer: This has been a HumblePod production. Stay humble.

Cisco Champion Radio
S7|E34 Cisco HyperFlex with iSCSI Helps Consolidate Workloads

Cisco Champion Radio

Play Episode Listen Later Sep 7, 2020 33:54


IT organizations that get the most out of their technology investments tend to grow faster than their peers. As a result, they are always exploring enhancements to existing products that can improve operational efficiency and reduce costs. Listen in as Cisco Champions sit down with the Cisco expert to discuss using HyperFlex as an iSCSI target to address block storage use cases, such as externalized LUNs for Oracle or shared LUNs for Windows failover clusters—or for bare-metal, virtualized, or containerized applications. Learn more: https://www.cisco.com/c/en/us/products/hyperconverged-infrastructure/index.html?dtid=opdcsnc001469 Cisco Champion Hosts: Joseph Houghes (twitter.com/jhoughes), Veeam Software, Solutions Architect Dan Kelcher, NTT DATA Services, Enterprise Architecture Adviser Michael Rhoades (twitter.com/ciscomikey), North American Hoganas, Manager Information Technology Guest: Ronnie Chan (twitter.com/ronnielchan), Cisco, Product Manager Engineer Moderator: Amilee San Juan (twitter.com/amileesan1), Cisco, Technical Influencer Marketing and Cisco Champion Program

Virtually Speaking Podcast
Back to Basics: iSCSI

Virtually Speaking Podcast

Play Episode Listen Later Aug 7, 2020 39:57


This week on the Virtually Speaking Podcast we welcome Jason Massae, Jacob Hopkinson, and Cody Hosterman to discuss everything you need to know about iSCSI. The Virtually Speaking Podcast is a weekly technical podcast dedicated to discussing VMware topics related to storage and availability. Each week Pete Flecha and John Nicholson bring in various subject matter experts from VMware and within the industry to discuss their respective areas of expertise. If you’re new to the Virtually Speaking Podcast check out all episodes on vSpeakingPodcast.com.

Arrow Bandwidth
Arrow Quick Hits: NetApp AFF All-SAN Solutions

Arrow Bandwidth

Play Episode Listen Later May 28, 2020 3:51


In this episode of Arrow Bandwidth, you’ll hear about NetApp’s new AFF All-SAN (ASA) solution, a dedicated, block-only SAN solution that includes FC and iSCSI protocols (no NFS or SMB) — a perfect fit for customers looking to isolate SAN workloads for mission-critical applications. The AFF ASA is based on the already popular AFF A220, A400 and A700 controller hardware that your customers know and trust, with added benefits that make it a great fit for many SAN customers. With NetApp's unparalleled cloud integration, your customers can expect reliability and performance. Tune in to learn about the added benefits and how you can overcome objections you may have faced in the past, continue to grow your SAN business and offer your customers yet another amazing solution from NetApp. Read the full Quick Hit on Arrow Channel Advisor: https://arrow.com/ecs/na/channeladvisor/channel-advisor-articles/arrow-quick-hits-netapp-aff-all-san-solutions/ SPEAKERS: Ryan Rascop Technical Solutions Engineer, Arrow Ryan Rascop is an Arrow technical solutions engineer who has a combined 11 years at Arrow, focused on multiple storage suppliers. As a dedicated technical resource to a national Arrow partner, he helps support growth in their supplier-specific storage business. He is passionate about creating technical and sales enablement programs such as Arrow’s Quick Hits, creating content with a focus on simplicity and ease of consumption. Rascop was selected as a President’s Club MVP pick for the NetApp line of business for 2019. Walter Lemieux Technical Solutions Engineer, Arrow Walter Lemieux is a NetApp-proficient engineer with more than five years of experience working with customers to attain a plethora of goals and achievements. Lemieux brings his hands-on experience to Arrow as a dedicated technical resource to a national Arrow partner.
He has a zeal for creative solutions and collaboration, and has contributed to Arrow’s technical and sales enablement programs, including Arrow Quick Hits podcasts.

Sysadmin Today Podcast
Sysadmin Today #76: VMware iSCSI Tuning

Sysadmin Today Podcast

Play Episode Listen Later May 17, 2020 40:17


In this episode, I am going to discuss tuning iSCSI for maximum performance, and more. Host: Paul Joyner Email: paul@sysadmintoday.com Facebook: https://www.facebook.com/sysadmintoday Twitter: https://twitter.com/SysadminToday Training: ITPRO.TV - https://go.itpro.tv/sat - For 30% off use code sat30 Show Sponsor Halp: Conversational Ticketing for Ops Teams That Love Slack Mention the Sysadmin Today podcast, and they'll set you up with an extended 6-week trial. Show Links Copy and Paste in Outlook 365 from Remote Desktop Connection https://social.technet.microsoft.com/Forums/en-US/3e3d3923-31bf-477e-b10d-09901afd89e4/copy-and-paste-in-outlook-365-from-remote-desktop-connection?forum=outlook  How-to safely change from LSI logic SAS into VMware Paravirtual https://www.vladan.fr/safely-change-lsi-logic-sas-vmware-paravirtual Tuning VMware vSphere ESXi iSCSI SAN http://www.adnsolutions.com/tuning-vmware-vsphere-esxi-and-equallogic-iscsi-san Adjusting Round Robin IOPS limit from default 1000 to 1 kb.vmware.com/s/article/2069356
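As a rough illustration of the last link above (adjusting the Round Robin IOPS limit from the default 1000 to 1), the change is typically made per device with `esxcli` on the ESXi host; the device identifier below is a placeholder, so verify the exact identifiers and syntax against the linked VMware KB for your ESXi version:

```shell
# Show the current round-robin settings for one device (placeholder device ID)
esxcli storage nmp psp roundrobin deviceconfig get -d naa.600000000000000000000001

# Switch paths after every single I/O instead of the default 1000 IOPS
esxcli storage nmp psp roundrobin deviceconfig set -d naa.600000000000000000000001 \
    --type=iops --iops=1
```

This only applies to devices already claimed by the VMW_PSP_RR path selection policy; test on a non-production LUN first, since the optimal value depends on the array.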

Storage Consortium
How future-proof is iSCSI, or will NVMeoF win out in the long run?

Storage Consortium

Play Episode Listen Later Feb 25, 2020


With the rise of fast NAND flash storage and storage class memory (SCM), latency in the I/O stack is gaining importance in the data center from an application perspective. A (technically oriented) blog post on the advantages and disadvantages of both platforms, with a brief outlook on future developments.

Self-Hosted
5: ZFS Isn’t the Only Option

Self-Hosted

Play Episode Listen Later Nov 7, 2019 44:24


Getting your storage setup just right often takes making painful mistakes first. We share ours, our current storage setups, when ZFS is not the tool for the job, and what you should consider when protecting your data. Plus, we share a few recent project mishaps.

Packet Pushers - Fat Pipe
BiB 074: Replace iSCSI With NVMe/TCP From Lightbits Labs

Packet Pushers - Fat Pipe

Play Episode Listen Later Mar 28, 2019 5:06


Lightbits Labs has announced a software-defined storage product with hardware acceleration. In a nutshell, the product is a global flash translation layer that decouples SSDs and compute. Put your compute wherever, mount a box full of fast storage via Lightbits using the NVMe over TCP protocol, and get storage latency that performs like directly attached storage, but without the waste of space. Lightbits is aiming this offering at folks running their own composable stack who want an API and storage speed. But another big winning use case? iSCSI replacement. Hmm...interesting! The post BiB 074: Replace iSCSI With NVMe/TCP From Lightbits Labs appeared first on Packet Pushers.


DevOps and Docker Talk
Swarm Volume Storage Drivers

DevOps and Docker Talk

Play Episode Listen Later Mar 18, 2019 7:01


Docker Captain Michael Irwin and I go over various storage options for persistent volumes in Swarm, and how you need to think about storage for Docker.

Microsoft Mechanics Podcast
Windows Server 2019 deep dive | Best of Microsoft Ignite 2018

Microsoft Mechanics Podcast

Play Episode Listen Later Oct 11, 2018 22:30


Demo-rich overview of major advances coming to Windows Server 2019 including: managing hyper-converged infrastructure; support for storage class memory; the new Storage Migration Service; deduplication coming to ReFS; new Hybrid integration between Windows Server datacenter and Azure with the Windows Admin Center and more. Session THR2320 - Filmed Monday, September 24, 15:25 EDT at Microsoft Ignite in Orlando, Florida. Subject Matter Expert: Jeff Woolsey is a Principal Program Manager at Microsoft for Windows Server and a leading expert on Virtualization, Private and Hybrid Cloud. Jeff has worked on virtualization and server technology for over eighteen years. He plays a leading role in the Windows Server Engineering team, helping to shape the design requirements for Windows Server 2019, 2016, 2012 / R2, 2008 R2, Microsoft Hyper-V Server 2019, 2016, 2012/R2, 2008 R2, System Center 2012 / R2, and Windows Admin Center. Jeff is an authority on ways customers can benefit from virtualization, management, and cloud computing across public, private, and hybrid environments. Jeff is a sought-after hybrid cloud expert presenting in numerous keynotes and conferences worldwide with Bill Gates, Steve Ballmer, & Satya Nadella.

Storage Consortium
Storage networking trends, using iSCSI Extensions for RDMA (iSER) as an example

Storage Consortium

Play Episode Listen Later Aug 30, 2018


Aug. 30, 2018 - iSCSI versus iSER versus NVMeoF: opposites or complements? A technical overview of the current state of development...

Ask SME Anything
What does GDPR mean to my organization?

Ask SME Anything

Play Episode Listen Later May 31, 2018 52:46


In this episode, we answer these questions:
1. What does GDPR mean to my organization? 2:36
2. What is the difference between Machine Learning and A.I.? 12:10
3. What does the April 2018 CISSP Domain refresh mean for me as an IT professional that may be pursuing certification? 18:08
4. What are the key changes/new features available with the release of vSphere 6.7? 26:03
5. How do I build an iSCSI device that my VMware box can get to? 41:00
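For the last question, one common approach (not necessarily the exact one discussed in the episode) is to export a block device from a Linux host using the kernel's LIO target via `targetcli`, then point the ESXi software iSCSI adapter at that host on the default port 3260. All names, paths, and sizes below are made-up placeholders:

```shell
# Create a 20 GiB file-backed backstore (path and name are placeholders)
targetcli /backstores/fileio create name=vmfs-disk file_or_dev=/srv/iscsi/vmfs-disk.img size=20G

# Create a target IQN and map the backstore as a LUN
targetcli /iscsi create iqn.2018-05.com.example:vmfs-target
targetcli /iscsi/iqn.2018-05.com.example:vmfs-target/tpg1/luns create /backstores/fileio/vmfs-disk

# Allow the ESXi host's initiator IQN to connect (substitute your host's real IQN)
targetcli /iscsi/iqn.2018-05.com.example:vmfs-target/tpg1/acls create iqn.1998-01.com.vmware:esxi-host
targetcli saveconfig
```

From there the LUN can be formatted as a VMFS datastore after a rescan on the ESXi side.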

BSD Now
Episode 243: Understanding The Scheduler | BSD Now 243

BSD Now

Play Episode Listen Later Apr 25, 2018 85:24


OpenBSD 6.3 and DragonflyBSD 5.2 are released, bug fix for disappearing files in OpenZFS on Linux (and only Linux), understanding the FreeBSD CPU scheduler, NetBSD on RPI3, thoughts on being a committer for 20 years, and 5 reasons to use FreeBSD in 2018.

Headlines

OpenBSD 6.3 released
Punctual as ever, OpenBSD 6.3 has been released with the following features/changes:
- Improved HW support, including SMP support on OpenBSD/arm64 platforms
- vmm/vmd improvements
- IEEE 802.11 wireless stack improvements
- Generic network stack improvements
- Installer improvements
- Routing daemons and other userland network improvements
- Security improvements
- dhclient(8) improvements
- Assorted improvements
- OpenSMTPD 6.0.4
- OpenSSH 7.7
- LibreSSL 2.7.2

DragonFlyBSD 5.2 released
Big-ticket items:
- Meltdown and Spectre mitigation support: Meltdown isolation and Spectre mitigation support added. Meltdown mitigation is automatically enabled for all Intel CPUs. Spectre mitigation must be enabled manually via sysctl if desired, using the sysctls machdep.spectre_mitigation and machdep.meltdown_mitigation.
- HAMMER2: H2 has received a very large number of bug fixes and performance improvements. We can now recommend H2 as the default root filesystem in non-clustered mode. Clustered support is not yet available.
- ipfw updates: Implement state-based "redirect", i.e. without using libalias. ipfw now supports all possible ICMP types. Fix ICMP_MAXTYPE assumptions (now 40 as of this release).
- Improved graphics support: The drm/i915 kernel driver has been updated to support Intel Coffeelake GPUs. Add 24-bit pixel format support to the EFI frame buffer code. Significantly improve fbio support for the "scfb" XOrg driver; this allows EFI frame buffers to be used by X in situations where we do not otherwise support the GPU. Partly implement the FBIOBLANK ioctl for display powersaving. Syscons waits for drm modesetting at appropriate places, avoiding races.

For more details, check out the “All changes since DragonFly 5.0” section.
ZFS on Linux bug causes files to disappear

A bug in ZoL 0.7.7 caused 0.7.8 to be released just 3 days later. The bug only impacts Linux; the change that caused the problem had not been upstreamed yet, so it does not affect ZFS on illumos, FreeBSD, OS X, or Windows. The bug can cause files being copied into a directory to not be properly linked to the directory, so they will no longer be listed in its contents. ZoL developers are working on a tool to let you recover the data; no data was actually lost, the files were just not properly registered as part of the directory.

The bug was introduced in a commit made in February that attempted to improve performance of datasets created with the case-insensitivity option. In that effort, a cap was introduced: give up (return ENOSPC) if growing the directory ZAP failed twice. The ZAP is the key-value pair data structure that contains metadata for a directory, including a hash table of the files that are in the directory. When a directory has a large number of files, the ZAP is converted to a FatZAP, and additional space may need to be allocated as additional files are added.

Commit cc63068 caused ENOSPC errors when copying a large number of files between two directories. The reason is that the patch limits ZAP leaf expansion to 2 retries, and returns ENOSPC when that fails. Finding the root cause of this issue was somewhat hampered by the fact that many people were not able to reproduce it. It turns out this was caused by an entirely unrelated change to GNU coreutils.
On later versions of GNU coreutils, the files were returned in sorted order, so they hit different buckets in the hash table and did not trip the retry limit. Tools like rsync were unaffected, because they always sort the files before copying. If you did not see any ENOSPC errors, you were likely not impacted.

The intent of limiting retries was to prevent pointlessly growing the table to max size when adding a block full of entries with the same name in different case in mixed mode. However, it turns out we cannot use any limit on the retry. When we copy files from one directory in readdir order, we are copying in hash order, one leaf block at a time. That means that if a leaf block in the source directory has expanded 6 times, and you copy the entries in that block, by the time you need to expand the leaf in the destination directory, you need to expand it 6 times in one go. So any limit on the retry will produce errors where there shouldn't be any.

Recommendations for users from Ryan Yao: The regression makes it so that creating a new file can fail with ENOSPC, after which files created in that directory can become orphaned. Existing files seem okay, but I have yet to confirm that myself and I cannot speak for what others know. It is incredibly difficult to reproduce on systems running coreutils 8.23 or later; so far, reports have only come from people using coreutils 8.22 or older. The directory size actually gets incremented for each orphaned file, which leaves it wrong after orphan files occur. We will likely have some way to recover the orphaned files (like ext4's lost+found) and fix the directory sizes in the very near future. Snapshots of the damaged datasets are problematic, though.
Until we have a subcommand to fix it (not including the snapshots, which we would have to list), the damage can be removed from an affected system either by rolling back to a snapshot from before it happened, or by creating a new dataset with 0.7.6 (or another release other than 0.7.7), moving everything to the new dataset, and destroying the old one. That will restore things to pristine condition. It should also be possible to check for pools that are affected, but I have yet to finish my analysis to be certain that no false negatives occur when checking, so I will avoid saying how for now. Writes to existing files cannot trigger this bug; only adding new files to a directory in bulk can.

News Roundup

des@'s thoughts on being a FreeBSD committer for 20 years

Yesterday was the twentieth anniversary of my FreeBSD commit bit, and tomorrow will be the twentieth anniversary of my first commit. I figured I'd split the difference and write a few words about it today. My level of engagement with the FreeBSD project has varied greatly over the twenty years I've been a committer. There have been times when I worked on it full-time, and times when I did not touch it for months. The last few years, health issues and life events have consumed my time and sapped my energy, and my contributions have come in bursts. Commit statistics do not tell the whole story, though: even when not working on FreeBSD directly, I have worked on side projects which, like OpenPAM, may one day find their way into FreeBSD.

My contributions have not been limited to code. I was the project's first Bugmeister; I've served on the Security Team for a long time, and have been both Security Officer and Deputy Security Officer; I managed the last four Core Team elections and am doing so again this year. In return, the project has taught me much about programming and software engineering.
It taught me code hygiene and the importance of clarity over cleverness; it taught me the ins and outs of revision control; it taught me the importance of good documentation, and how to write it; and it taught me good release engineering practices. Last but not least, it has provided me with the opportunity to work with some of the best people in the field. I have the privilege today to count several of them among my friends. For better or worse, the FreeBSD project has shaped my career and my life. It set me on the path to information security in general and IAA in particular, and opened many a door for me. I would not be where I am now without it. I won't pretend to be able to tell the future. I don't know how long I will remain active in the FreeBSD project and community. It could be another twenty years; or it could be ten, or five, or less. All I know is that FreeBSD and I still have things to teach each other, and I don't intend to call it quits any time soon.

iXsystems unveils new TrueNAS M-Series Unified Storage Line

San Jose, Calif., April 10, 2018 — iXsystems, the leader in Enterprise Open Source servers and software-defined storage, announced the TrueNAS M40 and M50 as the newest high-performance models in its hybrid, unified storage product line. The TrueNAS M-Series harnesses NVMe and NVDIMM to bring all-flash array performance to the award-winning TrueNAS hybrid arrays. It also includes the Intel® Xeon® Scalable Family of Processors and supports up to 100GbE and 32Gb Fibre Channel networking. Sitting between the all-flash TrueNAS Z50 and the hybrid TrueNAS X-Series in the product line, the TrueNAS M-Series delivers up to 10 Petabytes of highly-available and flash-powered network attached storage and rounds out a comprehensive product set that has a capacity and performance option for every storage budget.
Designed for On-Premises & Enterprise Cloud Environments

As a unified file, block, and object sharing solution, TrueNAS can meet the needs of file serving, backup, virtualization, media production, and private cloud users thanks to its support for the SMB, NFS, AFP, iSCSI, Fibre Channel, and S3 protocols. At the heart of the TrueNAS M-Series is a custom 4U, dual-controller head unit that supports up to 24 3.5" drives and comes in two models, the M40 and M50, for maximum flexibility and scalability. The TrueNAS M40 uses NVDIMMs for write cache, SSDs for read cache, and up to two external 60-bay expansion shelves that unlock up to 2PB in capacity. The TrueNAS M50 uses NVDIMMs for write caching, NVMe drives for read caching, and up to twelve external 60-bay expansion shelves to scale upwards of 10PB. The dual-controller design provides high-availability failover and non-disruptive upgrades for mission-critical enterprise environments.

By design, the TrueNAS M-Series unleashes cutting-edge persistent memory technology for demanding performance and capacity workloads, enabling businesses to accelerate enterprise applications and deploy enterprise private clouds that are twice the capacity of previous TrueNAS models. It also supports replication to the Amazon S3, BackBlaze B2, Google Cloud, and Microsoft Azure cloud platforms and can deliver an object store using the ubiquitous S3 object storage protocol at a fraction of the cost of the public cloud.

Fast

As a true enterprise storage platform, the TrueNAS M50 supports very demanding performance workloads with up to four active 100GbE ports, 3TB of RAM, 32GB of NVDIMM write cache, and up to 15TB of NVMe flash read cache. The TrueNAS M40 and M50 include up to 24/7 and global next-business-day support, putting IT at ease. The modular and tool-less design of the M-Series allows for easy, non-disruptive servicing and upgrading by end-users and support technicians for guaranteed uptime.
TrueNAS has US-based support provided by the engineering team that developed it, offering the rapid response that every enterprise needs.

Award-Winning TrueNAS Features

- Enterprise: Perfectly suited for private clouds and enterprise workloads such as file sharing, backups, M&E, surveillance, and hosting virtual machines.
- Unified: Utilizes SMB, AFP, and NFS for file storage; iSCSI, Fibre Channel, and OpenStack Cinder for block storage; and S3-compatible APIs for object storage. Supports every common operating system, hypervisor, and application.
- Economical: Deploy an enterprise private cloud and reduce storage TCO by 70% over AWS with built-in enterprise-class features such as in-line compression, deduplication, clones, and thin-provisioning.
- Safe: The OpenZFS file system ensures data integrity with best-in-class replication and snapshotting. Customers can replicate data to the rest of the iXsystems storage lineup and to the public cloud.
- Reliable: High Availability option with dual hot-swappable controllers for continuous data availability and 99.999% uptime.
- Familiar: Provision and manage storage with the same simple and powerful WebUI and REST APIs used in all iXsystems storage products, as well as iXsystems' FreeNAS Software.
- Certified: TrueNAS has passed the Citrix Ready, VMware Ready, and Veeam Ready certifications, reducing the risk of deploying a virtualized infrastructure.
- Open: By using industry-standard sharing protocols, the OpenZFS Open Source enterprise file system, and FreeNAS, the world's #1 Open Source storage operating system (also engineered by iXsystems), TrueNAS is the most open enterprise storage solution on the market.

Availability

The TrueNAS M40 and M50 will be generally available in April 2018 through the iXsystems global channel partner network. The TrueNAS M-Series starts at under $20,000 USD and can be easily expanded using a linear "per terabyte" pricing model. With typical compression, a Petabyte can be stored for under $100,000 USD.
TrueNAS comes with an all-inclusive software suite that provides NFS, Windows SMB, iSCSI, snapshots, clones and replication. For more information, visit www.ixsystems.com/TrueNAS. TrueNAS M-Series What's New Video

Understanding and tuning the FreeBSD Scheduler

Occasionally I noticed that the system would not quickly process the tasks I need done, but instead prefer other, long-running tasks. I figured it must be related to the scheduler, and decided it hates me. A closer look shows the behaviour as follows (single CPU). Let's run an I/O-active task, e.g. a postgres VACUUM that would continuously read from big files (while doing compute as well [1]):

```
pool      alloc   free   read  write   read  write
cache         -      -      -      -      -      -
ada1s4    7.08G  10.9G  1.58K      0  12.9M      0
```

Now start an endless loop:

```
while true; do :; done
```

And the effect is:

```
pool      alloc   free   read  write   read  write
cache         -      -      -      -      -      -
ada1s4    7.08G  10.9G      9      0  76.8K      0
```

The VACUUM gets almost stuck! This figures with WCPU in "top":

```
  PID USERNAME  PRI NICE   SIZE    RES STATE   TIME    WCPU COMMAND
85583 root       99    0  7044K  1944K RUN     1:06  92.21% bash
53005 pgsql      52    0   620M 91856K RUN     5:47   0.50% postgres
```

Hacking on kern.sched.quantum makes it quite a bit better:

```
# sysctl kern.sched.quantum=1
kern.sched.quantum: 94488 -> 7874
```

```
pool      alloc   free   read  write   read  write
cache         -      -      -      -      -      -
ada1s4    7.08G  10.9G    395      0  3.12M      0
```

```
  PID USERNAME  PRI NICE   SIZE    RES STATE   TIME    WCPU COMMAND
85583 root       94    0  7044K  1944K RUN     4:13  70.80% bash
53005 pgsql      52    0   276M 91856K RUN     5:52  11.83% postgres
```

Now, as usual, the "root-cause" questions arise: What exactly does this "quantum" do? Is this solution a workaround, i.e. is actually something else wrong, and does it have tradeoffs in other situations? Or otherwise, why was such a default value chosen, which appears to be ill-conceived? The docs for the quantum parameter are a bit unsatisfying: they say it is the max number of ticks a process gets, but what happens when they're exhausted?
If by default the endless loop is actually allowed to continue running for 94k ticks (or 94 ms, more likely) uninterrupted, then that explains the perceived behaviour, but that's certainly not what a scheduler should do when other processes are ready to run. This is on 11.1-RELEASE-p7 with kern.hz=200. Switching tickless mode on or off does not influence the matter, and neither does starting the endless loop with "nice". [1] A pure-I/O job without compute load, like "dd", does not show this behaviour. Also, when other tasks are running, the unjust behaviour is not so strongly pronounced.

aarch64 support added

I have committed initial support for aarch64. Boot log on a Raspberry Pi 3:

```
boot NetBSD/evbarm (aarch64)
Drop to EL1...OK
Creating VA=PA tables
Creating KSEG tables
Creating KVA=PA tables
Creating devmap tables
MMU Enable...OK
VSTART = ffffffc000001ff4
FDT devmap cpufunc bootstrap consinit ok
uboot: args 0x3ab46000, 0, 0, 0
NetBSD/evbarm (fdt) booting ...
FDT /memory [0] @ 0x0 size 0x3b000000
MEM: add 0-3b000000
MEM: res 0-1000
MEM: res 3ab46000-3ab4a000
Usable memory:
  1000 - 3ab45fff
  3ab4a000 - 3affffff
initarm: kernel phys start 1000000 end 17bd000
MEM: res 1000000-17bd000
bootargs: root=axe0
  1000 - ffffff
  17bd000 - 3ab45fff
  3ab4a000 - 3affffff
------------------------------------------
kern_vtopdiff         = 0xffffffbfff000000
physical_start        = 0x0000000000001000
kernel_start_phys     = 0x0000000001000000
kernel_end_phys       = 0x00000000017bd000
physical_end          = 0x000000003ab45000
VM_MIN_KERNEL_ADDRESS = 0xffffffc000000000
kernel_start_l2       = 0xffffffc000000000
kernel_start          = 0xffffffc000000000
kernel_end            = 0xffffffc0007bd000
kernel_end_l2         = 0xffffffc000800000
(kernel va area)
(devmap va area)
VM_MAX_KERNEL_ADDRESS = 0xffffffffffe00000
------------------------------------------
Copyright (c) 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004,
    2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015,
    2016, 2017, 2018 The NetBSD Foundation, Inc.
All rights reserved.
Copyright (c) 1982, 1986, 1989, 1991, 1993
    The Regents of the University of California.  All rights reserved.

NetBSD 8.99.14 (RPI64) #11: Fri Mar 30 12:34:19 JST 2018
    ryo@moveq:/usr/home/ryo/tmp/netbsd-src-ryo-wip/sys/arch/evbarm/compile/RPI64
total memory = 936 MB
avail memory = 877 MB
…
Starting local daemons:.
Updating motd.
Starting sshd.
Starting inetd.
Starting cron.
The following components reported failures:
    /etc/rc.d/swap2
See /var/run/rc.log for more information.
Fri Mar 30 12:35:31 JST 2018

NetBSD/evbarm (rpi3) (console)

login: root
Last login: Fri Mar 30 12:30:24 2018 on console
rpi3# uname -ap
NetBSD rpi3 8.99.14 NetBSD 8.99.14 (RPI64) #11: Fri Mar 30 12:34:19 JST 2018 ryo@moveq:/usr/home/ryo/tmp/netbsd-src-ryo-wip/sys/arch/evbarm/compile/RPI64 evbarm aarch64
rpi3#
```

Now, multiuser mode works stably on FDT-based boards (RPI3, SUNXI, TEGRA), but there are still some problems, and more time is required for a release. SMP is not there yet either. See sys/arch/aarch64/aarch64/TODO for more detail. Especially, the problems around TLS in rtld and C++ stack unwinding are too difficult for me to solve; I give up and need someone's help (^o^)/ Since C++ doesn't work, ATF also doesn't work. If ATF worked, it would clarify more issues. sys/arch/evbarm64 is gone and has been integrated into sys/arch/evbarm. One evbarm/conf/GENERIC64 kernel binary supports all FDT-based (bcm2837, sunxi, tegra) boards, while on 32-bit, sys/arch/evbarm/conf/GENERIC will support all FDT-based boards as well, but doesn't work yet (WIP). My deepest appreciation goes to Tohru Nishimura (nisimura@), who wrote the vector handlers, context switching, and so on; his comments and suggestions were innumerably valuable. I would also like to thank Nick Hudson (skrll@) and Jared McNeill (jmcneill@), who added FDT support and integrated it into evbarm. Finally, I would like to thank Matt Thomas (matt@), who committed the aarch64 toolchains and preliminary support for aarch64.
Beastie Bits
- 5 Reasons to Use FreeBSD in 2018
- Rewriting Intel gigabit network driver in Rust
- Recruiting to make Elastic Search on FreeBSD better
- Windows Server 2019 Preview, in bhyve on FreeBSD
- "SSH Mastery, 2nd ed" in hardcover

Feedback/Questions
- Jason - ZFS Transfer option
- Luis - ZFS Pools ClonOS
- Michael - Tech Conferences
- anonymous - BSD trash on removable drives

Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv
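The coreutils detail in the ZFS story above (readdir order is hash order, so an unsorted copy hammers one destination ZAP leaf at a time, while a sorted copy spreads insertions out) can be illustrated with a toy hash table. The bucket function, bucket count, and file names below are made up for illustration and have nothing to do with the real ZAP layout:

```python
import hashlib
from collections import Counter

def bucket(name, nbuckets=4):
    # Toy stand-in for the directory hash; NOT the real ZFS hash function.
    return int(hashlib.md5(name.encode()).hexdigest(), 16) % nbuckets

def longest_same_bucket_run(names):
    # Longest streak of consecutive files landing in the same bucket:
    # a proxy for how many times one leaf must grow back-to-back.
    best = run = 0
    prev = None
    for n in names:
        b = bucket(n)
        run = run + 1 if b == prev else 1
        prev, best = b, max(best, run)
    return best

files = [f"file{i:04d}" for i in range(64)]
readdir_order = sorted(files, key=bucket)  # source dir iterates in hash order
sorted_order = sorted(files)               # coreutils >= 8.23 sorts names

run_readdir = longest_same_bucket_run(readdir_order)
run_sorted = longest_same_bucket_run(sorted_order)
```

Copying in readdir (hash) order produces one long run per bucket, so a destination leaf may need to expand several times in one go, which is exactly the pattern that tripped the 2-retry cap.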

AWS re:Invent 2017
STG309: Deep Dive: Using Hybrid Storage with AWS Storage Gateway to Solve On-Premises Storage Problems

AWS re:Invent 2017

Play Episode Listen Later Nov 30, 2017 58:16


Enterprises of all sizes face continuing data growth and persistent requirements to back up and recover application data. The pains of recurring storage hardware purchasing, management, and failures are still acute for many IT organizations. Some also need to integrate on-premises datasets with in-cloud workloads, such as big data processing and analytics. Learn how to use AWS Storage Gateway to connect on-premises applications to AWS storage services using standard storage protocols, such as NFS, iSCSI, and VTL. Storage Gateway enables hybrid cloud storage solutions for backup and disaster recovery, file sharing, in-cloud processing, or bulk ingest for migration. We discuss use cases with real-life customer stories, and offer best practices.

Säkerhetspodcasten
Säkerhetspodcasten #103 - Lucas Lundgren

Säkerhetspodcasten

Play Episode Listen Later Oct 23, 2017 16:27


In today's episode of Säkerhetspodcasten we interview Lucas Lundgren after his talk at Sec-T 2017. We talk about how you can take over other people's hard drives on the internet via iSCSI.

BSD Now
215: Turning FreeBSD up to 100 Gbps

BSD Now

Play Episode Listen Later Oct 11, 2017 93:35


We look at how Netflix serves 100 Gbps from an Open Connect Appliance, read through the 2nd quarter FreeBSD status report, show you a freebsd-update speedup via nginx reverse proxy, and customize your OpenBSD default shell.

Headlines

Serving 100 Gbps from an Open Connect Appliance (https://medium.com/netflix-techblog/serving-100-gbps-from-an-open-connect-appliance-cdb51dda3b99)

In the summer of 2015, the Netflix Open Connect CDN team decided to take on an ambitious project. The goal was to leverage the new 100GbE network interface technology just coming to market in order to be able to serve at 100 Gbps from a single FreeBSD-based Open Connect Appliance (OCA) using NVM Express (NVMe)-based storage. At the time, the bulk of our flash storage-based appliances were close to being CPU limited while serving at 40 Gbps using a single-socket Xeon E5-2697v2. The first step was to find the CPU bottlenecks in the existing platform while we waited for newer CPUs from Intel, newer motherboards with PCIe Gen3 x16 slots that could run the new Mellanox 100GbE NICs at full speed, and for systems with NVMe drives.

Fake NUMA

Normally, most of an OCA's content is served from disk, with only 10-20% of the most popular titles being served from memory (see our previous blog, Content Popularity for Open Connect (https://medium.com/@NetflixTechBlog/content-popularity-for-open-connect-b86d56f613b), for details). However, our early pre-NVMe prototypes were limited by disk bandwidth. So we set up a contrived experiment where we served only the very most popular content on a test server. This allowed all content to fit in RAM and therefore avoid the temporary disk bottleneck. Surprisingly, the performance actually dropped from being CPU limited at 40 Gbps to being CPU limited at only 22 Gbps! The ultimate solution we came up with is what we call "Fake NUMA". This approach takes advantage of the fact that there is one set of page queues per NUMA domain.
All we had to do was to lie to the system and tell it that we have one Fake NUMA domain for every 2 CPUs. After we did this, our lock contention nearly disappeared and we were able to serve at 52 Gbps (limited by the PCIe Gen3 x8 slot) with substantial CPU idle time.

After we had newer prototype machines, with an Intel Xeon E5 2697v3 CPU, PCIe Gen3 x16 slots for the 100GbE NIC, and more disk storage (4 NVMe or 44 SATA SSD drives), we hit another bottleneck, also related to a lock on a global list. We were stuck at around 60 Gbps on this new hardware, constrained by pbufs. Our first problem was that the list was too small and we were spending a lot of time waiting for pbufs. This was easily fixed by increasing the number of pbufs allocated at boot time via the kern.nswbuf tunable. However, this update revealed the next problem, which was lock contention on the global pbuf mutex. To solve this, we changed the vnode pager (which handles paging to files, rather than the swap partition, and hence handles all sendfile() I/O) to use the normal kernel zone allocator. This change removed the lock contention, and boosted our performance into the 70 Gbps range.

As noted above, we make heavy use of the VM page queues, especially the inactive queue. Eventually, the system runs short of memory and these queues need to be scanned by the page daemon to free up memory. At full load, this was happening roughly twice per minute. When this happened, all NGINX processes would go to sleep in vm_wait() and the system would stop serving traffic while the pageout daemon worked to scan pages, often for several seconds. This problem is actually made progressively worse as one adds NUMA domains, because there is one pageout daemon per NUMA domain, but the page deficit that it is trying to clear is calculated globally.
So if the vm pageout daemon decides to clean, say, 1GB of memory and there are 16 domains, each of the 16 pageout daemons will individually attempt to clean 1GB of memory. To solve this problem, we decided to proactively scan the VM page queues. In the sendfile path, when allocating a page for I/O, we run the pageout code several times per second on each VM domain. The pageout code is run in its lightest-weight mode in the context of one unlucky NGINX process. Other NGINX processes continue to run and serve traffic while this is happening, so we can avoid bursts of pager activity that block traffic serving. Proactive scanning allowed us to serve at roughly 80 Gbps on the prototype hardware.

Hans Petter Selasky, Mellanox's 100GbE driver developer, came up with an innovative solution to our problem. Most modern NICs will supply a Receive Side Scaling (RSS) hash result to the host. RSS is a standard developed by Microsoft wherein TCP/IP traffic is hashed by source and destination IP address and/or TCP source and destination ports. The RSS hash result will almost always uniquely identify a TCP connection. Hans' idea was that rather than just passing the packets to the LRO engine as they arrive from the network, we should hold the packets in a large batch, and then sort the batch of packets by RSS hash result (and original time of arrival, to keep them in order). After the packets are sorted, packets from the same connection are adjacent even when they arrive widely separated in time. Therefore, when the packets are passed to the FreeBSD LRO routine, it can aggregate them. With this new LRO code, we were able to achieve an LRO aggregation rate of over 2 packets per aggregation, and were able to serve at well over 90 Gbps for the first time on our prototype hardware for mostly unencrypted traffic.

So the job was done. Or was it? The next goal was to achieve 100 Gbps while serving only TLS-encrypted streams.
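The hold-and-sort LRO idea described above can be sketched in a few lines of Python. The packet fields and the aggregation criterion here are simplified stand-ins for what the driver actually does:

```python
from collections import namedtuple

# Simplified model of a received packet: the NIC supplies an RSS hash
# that (almost always) uniquely identifies the TCP connection.
Packet = namedtuple("Packet", ["rss_hash", "payload"])

def sort_and_aggregate(batch):
    """Hold a batch of packets, sort by (RSS hash, arrival order), then
    merge adjacent packets from the same connection into one aggregate."""
    order = sorted(range(len(batch)), key=lambda i: (batch[i].rss_hash, i))
    aggregates = []  # list of (rss_hash, [packets])
    for i in order:
        pkt = batch[i]
        if aggregates and aggregates[-1][0] == pkt.rss_hash:
            aggregates[-1][1].append(pkt)  # same connection: aggregate
        else:
            aggregates.append((pkt.rss_hash, [pkt]))
    return aggregates

# Two interleaved connections: without sorting, no two adjacent packets
# share a connection, so plain LRO could not aggregate them at all.
batch = [Packet(0xaa, b"a1"), Packet(0xbb, b"b1"),
         Packet(0xaa, b"a2"), Packet(0xbb, b"b2"),
         Packet(0xaa, b"a3"), Packet(0xbb, b"b3")]
aggs = sort_and_aggregate(batch)
```

With sorting, the six interleaved packets collapse into two aggregates of three packets each, and because ties are broken by arrival index, each connection's packets stay in their original order.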
By this point, we were using hardware which closely resembles today's 100GbE flash storage-based OCAs: four NVMe PCIe Gen3 x4 drives, 100GbE ethernet, Xeon E5v4 2697A CPU. With the improvements described in the Protecting Netflix Viewing Privacy at Scale blog entry, we were able to serve TLS-only traffic at roughly 58 Gbps.

In the lock contention problems we'd observed above, the cause of any increased CPU use was relatively apparent from normal system level tools like flame graphs, DTrace, or lockstat. The 58 Gbps limit was comparatively strange. As before, the CPU use would increase linearly as we approached the 58 Gbps limit, but then as we neared the limit, the CPU use would increase almost exponentially. Flame graphs just showed everything taking longer, with no apparent hotspots.

We finally had a hunch that we were limited by our system's memory bandwidth. We used the Intel® Performance Counter Monitor Tools to measure the memory bandwidth we were consuming at peak load. We then wrote a simple memory thrashing benchmark that used one thread per core to copy between large memory chunks that did not fit into cache. According to the PCM tools, this benchmark consumed the same amount of memory bandwidth as our OCA's TLS-serving workload. So it was clear that we were memory limited.

At this point, we became focused on reducing memory bandwidth usage. To assist with this, we began using the Intel VTune profiling tools to identify memory loads and stores, and to identify cache misses. Because we are using sendfile() to serve data, encryption is done from the virtual memory page cache into connection-specific encryption buffers. This preserves the normal FreeBSD page cache in order to allow serving of hot data from memory to many connections. One of the first things that stood out to us was that the ISA-L encryption library was using half again as much memory bandwidth for memory reads as it was for memory writes.
From looking at VTune profiling information, we saw that ISA-L was somehow reading both the source and destination buffers, rather than just writing to the destination buffer. We realized that this was because the AVX instructions used by ISA-L for encryption on our CPUs worked on 256-bit (32-byte) quantities, whereas the cache line size was 512 bits (64 bytes), thus triggering the system to do read-modify-writes when data was written. The problem is that the CPU will normally access the memory system in 64-byte cache-line-sized chunks, reading an entire 64 bytes to access even just a single byte. After a quick email exchange with the ISA-L team, they provided us with a new version of the library that used non-temporal instructions when storing encryption results. Non-temporals bypass the cache and allow the CPU direct access to memory. This meant that the CPU was no longer reading from the destination buffers, which increased our bandwidth from 58 Gbps to 65 Gbps.

At 100 Gbps, we're moving about 12.5 GB/s of 4K pages through our system unencrypted. Adding encryption doubles that to 25 GB/s worth of 4K pages. That's about 6.25 million mbufs per second. When you add in the extra 2 mbufs used by the crypto code for TLS metadata at the beginning and end of each TLS record, that works out to another 1.6M mbufs/sec, for a total of about 8M mbufs/second. With roughly 2 cache line accesses per mbuf, that's 128 bytes * 8M, which is 1 GB/s (8 Gbps) of data that is accessed at multiple layers of the stack (alloc, free, crypto, TCP, socket buffers, drivers, etc.).

At this point, we're able to serve 100% TLS traffic comfortably at 90 Gbps using the default FreeBSD TCP stack. However, the goalposts keep moving. We've found that when we use more advanced TCP algorithms, such as RACK and BBR, we are still a bit short of our goal.
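The back-of-the-envelope mbuf arithmetic above can be checked directly (decimal giga units, as in the text):

```python
PAGE = 4096         # 4K pages pushed through sendfile()
CACHE_LINE = 64     # bytes per cache line on these CPUs

unencrypted = 100e9 / 8                  # 100 Gbps -> 12.5 GB/s of pages
encrypted = 2 * unencrypted              # encryption doubles it: 25 GB/s
data_mbufs = encrypted / PAGE            # ~6.1M mbufs/s ("about 6.25 million")
tls_mbufs = 1.6e6                        # extra metadata mbufs per TLS record
total_mbufs = data_mbufs + tls_mbufs     # ~8M mbufs/s
overhead = total_mbufs * 2 * CACHE_LINE  # ~2 cache-line touches per mbuf
gbps = overhead * 8 / 1e9                # ~8 Gbps of pure mbuf overhead
```

The result lands at roughly 1 GB/s (8 Gbps) of memory traffic just for touching mbuf headers, matching the figure in the text.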
We have several ideas that we are currently pursuing, which range from optimizing the new TCP code to increasing the efficiency of LRO to trying to do encryption closer to the transfer of the data (either from the disk, or to the NIC) so as to take better advantage of Intel's DDIO and save memory bandwidth.

FreeBSD April to June 2017 Status Report (https://www.freebsd.org/news/status/report-2017-04-2017-06.html)

FreeBSD Team Reports: FreeBSD Release Engineering Team; Ports Collection; The FreeBSD Core Team; The FreeBSD Foundation; The Postmaster Team

Projects: 64-bit Inode Numbers; Capability-Based Network Communication for Capsicum/CloudABI; Ceph on FreeBSD; DTS Updates

Kernel: Coda revival; FreeBSD Driver for the Annapurna Labs ENA; Intel 10G Driver Update; pNFS Server Plan B

Architectures: FreeBSD on Marvell Armada38x; FreeBSD/arm64

Userland Programs: DTC; Using LLVM's LLD Linker as FreeBSD's System Linker

Ports: A New USES Macro for Porting Cargo-Based Rust Applications; GCC (GNU Compiler Collection); GNOME on FreeBSD; KDE on FreeBSD; New Port: FRRouting; PHP Ports: Help Improving QA; Rust; sndio Support in the FreeBSD Ports Collection; TensorFlow; Updating Port Metadata for non-x86 Architectures; Xfce on FreeBSD

Documentation: Absolute FreeBSD, 3rd Edition; Doc Version Strings Improved by Their Absence; New Xen Handbook Section

Miscellaneous: BSD Meetups at Rennes (France)

Third-Party Projects: HardenedBSD

DPDK, VPP, and the future of pfSense @ the DPDK Summit (https://www.pscp.tv/DPDKProject/1dRKZnleWbmKB?t=5h1m0s)

The DPDK (Data Plane Development Kit) conference included a short update from the pfSense project. The video starts with a quick introduction to pfSense and the company behind it. It covers the issues they ran into trying to scale to 10 Gbps and beyond, and some of the solutions they tried: libuinet, netmap, packet-journey. Then they discovered VPP (Vector Packet Processing). The video then covers the architecture of the new pfSense. pfSense has launched on EC2, will be on Azure soon, and will
launch support for the new Atom C3000 and Xeon hardware with built-in QAT (Quick-Assist crypto offload) in November. The future: 100 Gbps, MPLS, VXLANs, and ARM64 hardware support.

News Roundup

Local nginx reverse proxy cache for freebsd-update (https://wiki.freebsd.org/VladimirKrstulja/Guides/FreeBSDUpdateReverseProxy)

Vladimir Krstulja has created this interesting tutorial on the FreeBSD wiki about a freebsd-update reverse proxy cache. Either because you're a good netizen and don't want to repeatedly hammer the FreeBSD mirrors to upgrade all your systems, or because you want to benefit from the speed of having a local "mirror" (cache, more precisely), running a freebsd-update reverse proxy cache with, say, nginx is dead simple:

1. Install nginx somewhere.
2. Configure nginx for a subdomain, say, freebsd-update.example.com.
3. On all your hosts, in all your jails, configure /etc/freebsd-update.conf with the new ServerName.

And that's it. Running freebsd-update will use the ServerName domain, which is your nginx reverse proxy. Note the comment about using a "nearby" server is not quite true: FreeBSD update mirrors are frequently slow, and running such a reverse proxy cache significantly speeds things up. Caveats: This is a simple cache. That means it doesn't consider the files as a whole repository, which in turn means updates to your cache are not atomic. It is advised to nuke your cache before your update run, as its point is only to retain the files in a local cache for the short period of time required for all your machines to be updated.

ClonOS is a free, open-source FreeBSD-based platform for virtual environment creation and management (https://clonos.tekroutine.com/)

The operating system uses FreeBSD's development branch (12.0-CURRENT) as its base. ClonOS uses ZFS as the default file system and includes web-based administration tools for managing virtual machines and jails.
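Returning to the freebsd-update reverse proxy cache above: a minimal nginx configuration for it might look like the following. The cache path, zone name, sizes, and the freebsd-update.example.com domain are illustrative only; update.FreeBSD.org is the stock ServerName from /etc/freebsd-update.conf:

```nginx
# Cache freebsd-update fetches locally; path and sizing are examples only.
proxy_cache_path /var/cache/nginx/freebsd-update
                 levels=1:2 keys_zone=fbsd_update:10m
                 max_size=10g inactive=48h;

server {
    listen 80;
    server_name freebsd-update.example.com;

    location / {
        proxy_pass http://update.FreeBSD.org;
        proxy_set_header Host update.FreeBSD.org;
        proxy_cache fbsd_update;
        proxy_cache_valid 200 48h;
    }
}
```

Each client would then point at the proxy by setting ServerName freebsd-update.example.com in its /etc/freebsd-update.conf. The short inactive/valid windows match the tutorial's advice that the cache only needs to live long enough for one update run across all machines.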
The project's website also mentions the availability of templates for quickly setting up new containers and web-based VNC access to jails. Puppet, we are told, can be used for configuration management. ClonOS can be downloaded as a disk image file (IMG) or as an optical media image (ISO). I downloaded the ISO file which is 1.6GB in size. Booting from ClonOS's media displays a text console asking us to select the type of text terminal we are using. There are four options and most people can probably safely take the default, xterm, option. The operating system, on the surface, appears to be a full installation of FreeBSD 12. The usual collection of FreeBSD packages is available, including manual pages, a compiler and the typical selection of UNIX command line utilities. The operating system uses ZFS as its file system and uses approximately 3.3GB of disk space. ClonOS requires about 50MB of active memory and 143MB of wired memory before any services or jails are created. Most of the key features of ClonOS, the parts which set it apart from vanilla FreeBSD, can be accessed through a web-based control panel. When we connect to this control panel, over a plain HTTP connection, using our web browser, we are not prompted for an account name or password. The web-based interface has a straightforward layout. Down the left side of the browser window we find categories of options and controls. Over on the right side of the window are the specific options or controls available in the selected category. At the top of the page there is a drop-down menu where we can toggle the displayed language between English and Russian, with English being the default. There are twelve option screens we can access in the ClonOS interface and I want to quickly give a summary of each one: Overview - this page shows a top-level status summary. The page lists the number of jails and nodes in the system. We are also shown the number of available CPU cores and available RAM on the system. 
Jail containers - this page allows us to create and delete jails. We can also change some basic jail settings on this page, adjusting the network configuration and hostname. Plus we can click a button to open a VNC window that allows us to access the jail's command line interface. Template for jails - provides a list of available jail templates. Each template is listed with its name and a brief description. For example, we have a Wordpress template and a bittorrent template. We can click a listed template to create a new jail with a vanilla installation of the selected software included. We cannot download or create new templates from this page. Bhyve VMs - this page is very much like the Jails containers page, but concerns the creation of new virtual machines and managing them. Virtual Private Network - allows for the management of subnets Authkeys - upload security keys for something, but it is not clear for what these keys will be used. Storage media - upload ISO files that will be used when creating virtual machines and installing an operating system in the new virtual environment. FreeBSD Bases - I think this page downloads and builds source code for alternative versions of FreeBSD, but I am unsure and could not find any associated documentation for this page. FreeBSD Sources - download source code for various versions of FreeBSD. TaskLog - browse logs of events, particularly actions concerning jails. SQLite admin - this page says it will open an interface for managing a SQLite database. Clicking link on the page gives a file not found error. Settings - this page simply displays a message saying the settings page has not been implemented yet. While playing with ClonOS, I wanted to perform a couple of simple tasks. I wanted to use the Wordpress template to set up a blog inside a jail. I wanted a generic, empty jail in which I could play and run commands without harming the rest of the operating system. 
I also wanted to try installing an operating system other than FreeBSD inside a Bhyve virtual environment. I thought this would give me a pretty good idea of how quick and easy ClonOS would make common tasks. Conclusions ClonOS appears to be in its early stages of development, more of a feature preview or proof-of-concept than a polished product. A few of the settings pages have not been finished yet, the web-based controls for jails are unable to create jails that connect to the network and I was unable to upload even small ISO files to create virtual machines. The project's website mentions working with Puppet to handle system configuration, but I did not encounter any Puppet options. There also does not appear to be any documentation on using Puppet on the ClonOS platform. One of the biggest concerns I had was the lack of security on ClonOS. The web-based control panel and terminal both automatically login as the root user. Passwords we create for our accounts are ignored and we cannot logout of the local terminal. This means anyone with physical access to the server automatically gains root access and, in addition, anyone on our local network gets access to the web-based admin panel. As it stands, it would not be safe to install ClonOS on a shared network. Some of the ideas present are good ones. I like the idea of jail templates and have used them on other systems. The graphical Bhyve tools could be useful too, if the limitations of the ISO manager are sorted out. But right now, ClonOS still has a way to go before it is likely to be safe or practical to use. Customize ksh display for OpenBSD (http://nanxiao.me/en/customize-ksh-display-for-openbsd/) The default shell for OpenBSD is ksh, and it looks a little monotonous. 
To make its user-experience more friendly, I need to do some customizations: (1) Modify the “Prompt String” to display the user name and current directory: PS1='$USER:$PWD# ' (2) Install the colorls package: pkg_add colorls Use it to replace the shipped ls command: alias ls='colorls -G' (3) Change the LSCOLORS environment variable to set your favorite colors. For example, I don't want directories displayed in the default blue, so I change it to magenta: LSCOLORS=fxexcxdxbxegedabagacad For a detailed explanation of LSCOLORS, please refer to the colorls manual. This is my final modification of .profile: PS1='$USER:$PWD# ' export PS1 LSCOLORS=fxexcxdxbxegedabagacad export LSCOLORS alias ls='colorls -G' DragonFly 5 release candidate (https://www.dragonflydigest.com/2017/10/02/20295.html) Commit (http://lists.dragonflybsd.org/pipermail/commits/2017-September/626463.html) I tagged DragonFly 5.0 (commit message list in that link) over the weekend, and there's a 5.0 release candidate for download (http://mirror-master.dragonflybsd.org/iso-images/). It's RC2 because the recent Radeon changes had to be taken out. (http://lists.dragonflybsd.org/pipermail/commits/2017-September/626476.html) Beastie Bits Faster forwarding (http://www.grenadille.net/post/2017/08/21/Faster-forwarding) DRM-Next-Kmod hits the ports tree (http://www.freshports.org/graphics/drm-next-kmod/) OpenBSD Community Goes Platinum (https://undeadly.org/cgi?action=article;sid=20170829025446) Setting up iSCSI on TrueOS and FreeBSD12 (https://www.youtube.com/watch?v=4myESLZPXBU) *** Feedback/Questions Christopher - Virtualizing FreeNAS (http://dpaste.com/38G99CK#wrap) Van - Tar Question (http://dpaste.com/3MEPD3S#wrap) Joe - Book Reviews (http://dpaste.com/0T623Z6#wrap) ***

IT Manager Podcast (DE, german) - IT-Begriffe einfach und verständlich erklärt

What does virtualization mean, and how is it related to VMware? You will learn this and more today from Daniel Ascencao in our new episode. Enjoy listening!   More information on the VMware fundamentals training: In this 2-day VMware fundamentals course, IT staff build up and extend their VMware know-how. The topics covered over the two days include: - VMware product range (T) - vSphere: features, architecture, licensing model (T) - ESXi: architecture, installation, basic configuration with the vSphere Client (P) - ESXi networking: internal network structure, virtual standard switches, port groups, physical uplinks (P) - ESX storage: overview of storage technologies, Fibre Channel and iSCSI, SAN, vSphere VMFS datastores (T + P) - Virtual machines (VMs): virtual hardware, creating VMs, installing operating systems, VMware Tools (P) - vCenter Server (VC): architecture, installation, VC database, - vCenter inventories, policies, vCenter Appliance, WebClient (P) - VM management: templates and automated provisioning of VMs, cloning, guest OS customization (P) - Access control: vSphere security model & user authentication, MS Active Directory (P) - Management: VMotion migration, VMware EVC (Enhanced VMotion Compatibility) (P) - vSphere DRS (Distributed Resource Scheduler) and Storage DRS: how they work, load-balancing cluster setup, automation levels, best practices (P) - VMware HA (High Availability): failover clusters (P) - Backup: backup strategies (T) - VMware Update Manager (P) - VMware vShield, SyslogCollector, DUMP Collector (P) 447 euros net for ITleague partners (otherwise 547 euros net). ITleague partners interested in the event can register by emailing veranstaltungen@itleague.de. Otherwise, please use the online registration: https://itleague.javis.de/onlineregistration/2

BSD Now
200: Getting Scrubbed to Death

BSD Now

Play Episode Listen Later Jun 28, 2017 94:57


The NetBSD 8.0 release process is underway, we try to measure the weight of an electron, and look at stack clashing. This episode was brought to you by Headlines NetBSD 8.0 release process underway (https://mail-index.netbsd.org/netbsd-announce/2017/06/06/msg000267.html) Soren Jacobsen writes on NetBSD-announce: If you've been reading source-changes@, you likely noticed the recent creation of the netbsd-8 branch. If you haven't been reading source-changes@, here's some news: the netbsd-8 branch has been created, signaling the beginning of the release process for NetBSD 8.0. We don't have a strict timeline for the 8.0 release, but things are looking pretty good at the moment, and we expect this release to happen in a shorter amount of time than the last couple major releases did. At this point, we would love for folks to test out netbsd-8 and let us know how it goes. A couple of major improvements since 7.0 are the addition of USB 3 support and an overhaul of the audio subsystem, including an in-kernel mixer. Feedback about these areas is particularly desired. To download the latest binaries built from the netbsd-8 branch, head to http://daily-builds.NetBSD.org/pub/NetBSD-daily/netbsd-8/ Thanks in advance for helping make NetBSD 8.0 a stellar release! OpenIndiana Hipster 2017.04 is here (https://www.openindiana.org/2017/05/03/openindiana-hipster-2017-04-is-here/) Desktop software and libraries Xorg was updated to 1.18.4, xorg libraries and drivers were updated. 
Mate was updated to 1.16 Intel video driver was updated, the list of supported hardware has significantly extended (https://wiki.openindiana.org/oi/Intel+KMS+driver) libsmb was updated to 4.4.6 gvfs was updated to 1.26.0 gtk3 was updated to 3.18.9 Major text editors were updated (we ship vim 8.0.104, joe 4.4, emacs 25.2, nano 2.7.5 pulseaudio was updated to 10.0 firefox was updated to 45.9.0 thunderbird was updated to 45.8.0 critical issue in enlightenment was fixed, now it's operational again privoxy was updated to 3.0.26 Mesa was updated to 13.0.6 Nvidia driver was updated to 340.102 Development tools and libraries GCC 6 was added. Patches necessary to compile illumos-gate with GCC 6 were added (note, compiling illumos-gate with version other than illumos-gcc-4.4.4 is not supported) GCC 7.1 added to Hipster (https://www.openindiana.org/2017/05/05/gcc-7-1-added-the-hipster-and-rolling-forward/) Bison was updated to 3.0.4 Groovy 2.4 was added Ruby 1.9 was removed, Ruby 2.3 is the default Ruby now Perl 5.16 was removed. 64-bit Perl 5.24 is shipped. 64-bit OpenJDK 8 is the default OpenJDK version now. Mercurial was updated to 4.1.3 Git was updated to 2.12.2 ccache was updated to 3.3.3 QT 5.8.0 was added Valgrind was updated to 3.12.0 Server software PostgreSQL 9.6 was added, PostgreSQL 9.3-9.5 were updated to latest minor versions MongoDB 3.4 was added MariaDB 10.1 was added NodeJS 7 was added Percona Server 5.5/5.6/5.7 and MariaDB 5.5 were updated to latest minor versions OpenVPN was updated to 2.4.1 ISC Bind was updated to 9.10.4-P8 Squid was updated to 3.5.25 Nginx was updated to 1.12.0 Apache 2.4 was updated to 2.4.25. Apache 2.4 is the default Apache server now. Apache 2.2 will be removed before the next snapshot. ISC ntpd was updated to 4.2.8p10 OpenSSH was updated to 7.4p1 Samba was updated to 4.4.12 Tcpdump was updated to 4.9.0 Snort was updated to 2.9.9.0 Puppet was updated to 3.8.6 A lot of other bug fixes and minor software updates included. 
*** PKGSRC at The University of Wisconsin–Milwaukee (https://uwm.edu/hpc/software-management/) This piece is from the University of Wisconsin, Milwaukee Why Use Package Managers? Why Pkgsrc? Portability Flexibility Modernity Quality and Security Collaboration Convenience Growth Binary Packages for Research Computing The University of Wisconsin — Milwaukee provides binary pkgsrc packages for selected operating systems as a service to the research computing community. Unlike most package repositories, which have a fixed prefix and frequently upgraded packages, these packages are available for multiple prefixes and remain unchanged for a given prefix. Additional packages may be added and existing packages may be patched to fix bugs or security issues, but the software versions will not be changed. This allows researchers to keep older software in-place indefinitely for long-term studies while deploying newer software in later snapshots. Contributing to Pkgsrc Building Your Own Binary Packages Check out the full article and consider using pkgsrc for your own research purposes. PKGSrc Con is this weekend! (http://www.pkgsrc.org/pkgsrcCon/2017/) *** Measuring the weight of an electron (https://deftly.net/posts/2017-06-01-measuring-the-weight-of-an-electron.html) An interesting story of the struggles of one person, aided only by their pet Canary, porting Electron to OpenBSD. This is a long rant. A rant intended to document lunacy, hopefully aid others in the future and make myself feel better about something I think is crazy. It may seem like I am making an enemy of electron, but keep in mind that isn't my intention! The enemy here, is complexity! My friend Henry, a canary, is coming along for the ride! Getting the tools At first glance Electron seems like a pretty solid app, it has decent docs, it's consolidated in a single repository, has a lot of visibility, porting it shouldn't be a big deal, right? 
After cloning the repo, trouble starts: Reading through the doc, right off the bat there are a few interesting things: At least 25GB disk space. Huh, OK, somehow this ~47M repository is going to blow up to 25G? Continuing along with the build, I know I have two versions of clang installed on OpenBSD, one from ports and one in base. Hopefully I will be able to tell the build to use one of these versions. Next, it's time to tell the bootstrap that OpenBSD exists as a platform. After that is fixed, the build-script runs. Even though cloning another git repo fails, the build happily continues. Wait. Another repository failed to clone? At least this time the build failed after trying to clone boto.. again. I am guessing it tried twice because something might have changed between now and the last clone? Off in the distance we catch a familiar tune, it almost sounds like Gnarls Barkley's song Crazy, can't tell for sure. As it turns out, if you are using git-fsck, you are unable to clone boto and requests. Obviously the proper fix for this is to not care about the validity of the git objects! So we die a little inside and comment out fsckobjects in our ~/.gitconfig. Next up, chromium-58 is downloaded… Out of curiosity we look at vendor/libchromiumcontent/script/update, it seems its purpose is to download / extract chromium clang and node, good thing we already specified --clang_dir or it might try to build clang again! 544 dots and 45 minutes later, we have an error! The chromium-58.0.3029.110.tar.xz file is mysteriously not there anymore.. Interesting. Wut. “Updating Clang…”. Didn't I explicitly say not to build clang? At this point we have to shift projects, no longer are we working on Electron.. It's libchromiumcontent that needs our attention. Fixing sub-tools Ahh, our old friends the dots! This is the second time waiting 45+ minutes for a 500+ MB file to download. 
We are fairly confident it will fail, delete the file out from under itself and hinder the process even further, so we add an explicit exit to the update script. This way we can copy the file somewhere safe! Another 45 minute chrome build and saving the downloaded executable to a safe space seems in order. Fixing another 50 occurrences of error conditions lets the build continue - to another clang build. We remove the call to update_clang, because.. well.. we have two copies of it already and the Electron doc said everything would be fine if we had >= clang 3.4! More re-builds and updates of clang and chromium are being commented out, just to get somewhere close to the actual electron build. Fixing sub-sub-tools Ninja needs to be built and the script for that needs to be told to ignore this “unsupported OS” to continue. No luck. At this point we are faced with a complex web of python scripts that execute gn on GN files to produce ninja files… which then build the various components and somewhere in that cluster, something doesn't know about OpenBSD… I look at Henry, he is looking at a photo of his wife and kids. They are sitting on a telephone wire, the morning sun illuminating their beautiful faces. Henry looks back at me and says “It's not worth it.” We slam the laptop shut and go outside. Interview - Dan McDonald - allcoms@gmail.com (mailto:allcoms@gmail.com) (danboid) News Roundup g4u 2.6 (ghosting for unix) released 18th birthday (https://mail-index.netbsd.org/netbsd-users/2017/06/08/msg019625.html) Hubert Feyrer writes in his mail to netbsd-users: After a five-year period for beta-testing and updating, I have finally released g4u 2.6. With its origins in 1999, I'd like to say: Happy 18th Birthday, g4u! About g4u: g4u ("ghosting for unix") is a NetBSD-based bootfloppy/CD-ROM that allows easy cloning of PC harddisks to deploy a common setup on a number of PCs using FTP. The floppy/CD offers two functions. 
The first is to upload the compressed image of a local harddisk to an FTP server, the other is to restore that image via FTP, uncompress it and write it back to disk. Network configuration is fetched via DHCP. As the harddisk is processed as an image, any filesystem and operating system can be deployed using g4u. Easy cloning of local disks as well as partitions is also supported. The past: When I started g4u, I had the task to install a number of lab machines with a dual-boot of Windows NT and NetBSD. The hype was about Microsoft's "Zero Administration Kit" (ZAK) then, but that barely worked for the Windows part - file transfers were slow, depended on the clients' hardware a lot (requiring fiddling with MS DOS network driver disks), and on the ZAK server the files for installing happened to disappear for no good reason every now and then. Not working well, and leaving out NetBSD (and everything else), I created g4u. This gave me the (relative) pain of getting things working once, but with the option to easily add network drivers as they appeared in NetBSD (and oh they did!), plus allowed me to install any operating system. The present: We've used g4u successfully in our labs then, booting from CDROM. I also got many donations from public and private institutions plus companies from many sectors, indicating that g4u does make a difference. In the meantime, the world has changed, and CDROMs aren't used that much any more. Network boot and USB sticks are today's devices of choice, cloning of a full disk without knowing its structure has both advantages but also disadvantages, and g4u's user interface is still command-line based with not much space for automation. For storage, FTP servers are nice and fast, but alternatives like SSH/SFTP, NFS, iSCSI and SMB for remote storage plus local storage (back to fun with filesystems, anyone? avoiding this was why g4u was created in the first place!) should be considered these days. 
Further aspects include integrity (checksums), confidentiality (encryption). This leaves a number of open points to address either by future releases, or by other products. The future: At this point, my time budget for g4u is very limited. I welcome people to contribute to g4u - g4u is Open Source for a reason. Feel free to get back to me for any changes that you want to contribute! The changes: Major changes in g4u 2.6 include: Make this build with NetBSD-current sources as of 2017-04-17 (shortly before netbsd-8 release branch), binaries were cross-compiled from Mac OS X 10.10 Many new drivers, bugfixes and improvements from NetBSD-current (see beta1 and beta2 announcements) Go back to keeping the disk image inside the kernel as ramdisk, do not load it as separate module. Less error prone, and allows to boot the g4u (NetBSD) kernel from a single file e.g. via PXE (Testing and documentation updates welcome!) Actually DO provide the g4u (NetBSD) kernel with the embedded g4u disk image from now on, as separate file, g4u-kernel.gz In addition to MD5, add SHA512 checksums Congratulation, g4u. Check out the g4u website (http://fehu.org/~feyrer/g4u/) and support the project if you are using it. *** Fixing FreeBSD Networking on Digital Ocean (https://wycd.net/posts/2017-05-19-fixing-freebsd-networking-on-digital-ocean.html) Most cloud/VPS providers use some form of semi-automated address assignment, rather than just regular static address configuration, so that newly created virtual machines can configure themselves. Sometimes, especially during the upgrade process, this can break. This is the story of one such user: I decided it was time to update my FreeBSD Digital Ocean droplet from the end-of-life version 10.1 (shame on me) to the modern version 10.3 (good until April 2018), and maybe even version 11 (good until 2021). There were no sensitive files on the VM, so I had put it off. 
Additionally, cloud providers tend to have shoddy support for BSDs, so breakages after messing with the kernel or init system are rampant, and I had been skirting that risk. The last straw for me was a broken pkg: /usr/local/lib/libpkg.so.3: Undefined symbol "openat" So the user fires up freebsd-update and upgrades to FreeBSD 10.3 I rebooted, and of course, it happened: no ssh access after 30 seconds, 1 minute, 2 minutes…I logged into my Digital Ocean account and saw green status lights for the instance, but something was definitely wrong. Fortunately, Digital Ocean provides console access (albeit slow, buggy, and crashes my browser every time I run ping). ifconfig revealed that the interfaces vtnet0 (public) and vtnet1 (private) haven't been configured with IP addresses. Combing through files in /etc/rc.*, I found a file called /etc/rc.digitalocean.d/${DROPLET_ID}.conf containing static network settings for this droplet (${DROPLET_ID} was something like 1234567). It seemed that FreeBSD wasn't picking up the Digital Ocean network settings config file. The quick and dirty way would have been to messily append the contents of this file to /etc/rc.conf, but I wanted a nicer way. Reading the script in /etc/rc.d/digitalocean told me that /etc/rc.digitalocean.d/${DROPLET_ID}.conf was supposed to have a symlink at /etc/rc.digitalocean.d/droplet.conf. 
It was broken and pointed to /etc/rc.digitalocean.d/.conf, which could happen when the curl command in /etc/rc.d/digitalocean fails Maybe the curl binary was also in need of an upgrade and so failed to fetch the droplet ID Using grep to fish for files containing droplet.conf, I discovered that it was hacked into the init system via load_rc_config() in /etc/rc.subr I would prefer if Digital Ocean had not customized the version of FreeBSD they ship quite so much I could fix that symlink and restart the services: set DROPLET_ID=$(curl -s http://169.254.169.254/metadata/v1/id) ln -s -f /etc/rc.digitalocean.d/${DROPLET_ID}.conf /etc/rc.digitalocean.d/droplet.conf /etc/rc.d/netif restart /etc/rc.d/routing restart Networking was working again, and I could then ssh into my server and run the following to finish the upgrade: freebsd-update install At this point, I decided that I didn't want to deal with this mess again until at least 2021, so I decided to go for 11.0-RELEASE freebsd-update -r 11.0-RELEASE update freebsd-update install reboot freebsd-update install pkg-static install -f pkg pkg update pkg upgrade uname -a FreeBSD hostname 11.0-RELEASE-p9 FreeBSD 11.0-RELEASE-p9 pkg -v 1.10.1 The problem was solved correctly, and my /etc/rc.conf remains free of generated cruft. The Digital Ocean team can make our lives easier by having their init scripts do more thorough system checking, e.g., catching broken symlinks and bad network addresses. I'm hopeful that collaboration of the FreeBSD team and cloud providers will one day result in automatic fixing of these situations, or at least a correct status indicator. The Digital Ocean team didn't really know many FreeBSD people when they made the first 10.1 images, they have improved a lot, but they of course could always use more feedback from BSD users *** Stack Clash (https://www.qualys.com/2017/06/19/stack-clash/stack-clash.txt) A 12-year-old question: "If the heap grows up, and the stack grows down, what happens when they clash? 
Is it exploitable? How? In 2005, Gael Delalleau presented "Large memory management vulnerabilities" and the first stack-clash exploit in user-space (against mod_php 4.3.0 on Apache 2.0.53) (http://cansecwest.com/core05/memory_vulns_delalleau.pdf) In 2010, Rafal Wojtczuk published "Exploiting large memory management vulnerabilities in Xorg server running on Linux", the second stack-clash exploit in user-space (CVE-2010-2240) (http://www.invisiblethingslab.com/resources/misc-2010/xorg-large-memory-attacks.pdf) Since 2010, security researchers have exploited several stack-clashes in the kernel-space, In user-space, however, this problem has been greatly underestimated; the only public exploits are Gael Delalleau's and Rafal Wojtczuk's, and they were written before Linux introduced a protection against stack-clashes (a "guard-page" mapped below the stack) (https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2010-2240) In this advisory, we show that stack-clashes are widespread in user-space, and exploitable despite the stack guard-page; we discovered multiple vulnerabilities in guard-page implementations, and devised general methods for: "Clashing" the stack with another memory region: we allocate memory until the stack reaches another memory region, or until another memory region reaches the stack; "Jumping" over the stack guard-page: we move the stack-pointer from the stack and into the other memory region, without accessing the stack guard-page; "Smashing" the stack, or the other memory region: we overwrite the stack with the other memory region, or the other memory region with the stack. So this advisory itself, is not a security vulnerability. It is novel research showing ways to work around the mitigations against generic vulnerability types that are implemented on various operating systems. 
While this issue with the mitigation feature has been fixed, even without the fix, successful exploitation requires another application with its own vulnerability in order to be exploited. Those vulnerabilities outside of the OS need to be fixed on their own. FreeBSD-Security post (https://lists.freebsd.org/pipermail/freebsd-security/2017-June/009335.html) The issue under discussion is a limitation in a vulnerability mitigation technique. Changes to improve the way FreeBSD manages stack growth, and mitigate the issue demonstrated by Qualys' proof-of-concept code, are in progress by FreeBSD developers knowledgeable in the VM subsystem. FreeBSD address space guards (https://svnweb.freebsd.org/base?view=revision&revision=320317) HardenedBSD Proof of Concept for FreeBSD (https://github.com/lattera/exploits/blob/master/FreeBSD/StackClash/001-stackclash.c) HardenedBSD implementation: https://github.com/HardenedBSD/hardenedBSD/compare/de8124d3bf83d774b66f62d11aee0162d0cd1031...91104ed152d57cde0292b2dc09489fd1f69ea77c & https://github.com/HardenedBSD/hardenedBSD/commit/00ad1fb6b53f63d6e9ba539b8f251b5cf4d40261 Qualys PoC: freebsd_cve-2017-fgpu.c (https://www.qualys.com/2017/06/19/stack-clash/freebsd_cve-2017-fgpu.c) Qualys PoC: freebsd_cve-2017-fgpe.c (https://www.qualys.com/2017/06/19/stack-clash/freebsd_cve-2017-fgpe.c) Qualys PoC: freebsd_cve-2017-1085.c (https://www.qualys.com/2017/06/19/stack-clash/freebsd_cve-2017-1085.c) Qualys PoC: OpenBSD (https://www.qualys.com/2017/06/19/stack-clash/openbsd_at.c) Qualys PoC: NetBSD (https://www.qualys.com/2017/06/19/stack-clash/netbsd_cve-2017-1000375.c) *** Will ZFS and non-ECC RAM kill your data? (http://jrs-s.net/2015/02/03/will-zfs-and-non-ecc-ram-kill-your-data/) TL;DR: ECC is good, but even without, having ZFS is better than not having ZFS. What's ECC RAM? Is it a good idea? What's ZFS? Is it a good idea? Is ZFS and non-ECC worse than not-ZFS and non-ECC? What about the Scrub of Death? 
The article walks through ZFS folk lore, and talks about what can really go wrong, and what is just the over-active imagination of people on the FreeNAS forums But would using any other filesystem that isn't ZFS have protected that data? ‘Cause remember, nobody's arguing that you can lose data to evil RAM – the argument is about whether evil RAM is more dangerous with ZFS than it would be without it. I really, really want to use the Scrub Of Death in a movie or TV show. How can I make it happen? I don't care about your logic! I wish to appeal to authority! OK. “Authority” in this case doesn't get much better than Matthew Ahrens, one of the cofounders of ZFS at Sun Microsystems and current ZFS developer at Delphix. In the comments to one of my filesystem articles on Ars Technica, Matthew said “There's nothing special about ZFS that requires/encourages the use of ECC RAM more so than any other filesystem.” Beastie Bits EuroBSDcon 2017 Travel Grant Application Now Open (https://www.freebsdfoundation.org/blog/eurobsdcon-2017-travel-grant-application-now-open/) FreeBSD 11.1-BETA3 is out, please give it a test (https://lists.freebsd.org/pipermail/freebsd-stable/2017-June/087303.html) Allan and Lacey let us know the video to the Postgresql/ZFS talk is online (http://dpaste.com/1FE80FJ) Trapsleds (https://marc.info/?l=openbsd-tech&m=149792179514439&w=2) BSD User group in North Rhine-Westphalia, Germany (https://bsd.nrw/) *** Feedback/Questions Joe - Home Server Suggestions (http://dpaste.com/2Z5BJCR#wrap) Stephen - general BSD (http://dpaste.com/1VRQYAM#wrap) Eduardo - ZFS Encryption (http://dpaste.com/2TWADQ8#wrap) Joseph - BGP Kernel Error (http://dpaste.com/0SC0GAC#wrap) ***

BSD Now
195: I don't WannaCry

BSD Now

Play Episode Listen Later May 24, 2017 75:15


A pledge of love to OpenBSD, combating ransomware like WannaCry with OpenZFS, and using PFsense to maximize your non-gigabit Internet connection This episode was brought to you by Headlines ino64 project committed to FreeBSD 12-CURRENT (https://svnweb.freebsd.org/base?view=revision&revision=318736) The ino64 project has been completed and merged into FreeBSD 12-CURRENT Extend the ino_t, dev_t, nlink_t types to 64-bit ints. Modify struct dirent layout to add d_off, increase the size of d_fileno to 64 bits, increase the size of d_namlen to 16 bits, and change the required alignment. Increase struct statfs f_mntfromname[] and f_mntonname[] array length MNAMELEN to 1024 This means the length of a mount point (MNAMELEN) has been increased from 88 bytes to 1024 bytes. This allows longer ZFS dataset names and more nesting, and generally improves the usefulness of nested jails It also allows more than 4 billion files to be stored in a single file system (both UFS and ZFS). It also deals with a number of NFS problems, such as Amazon's EFS (cloud NFS), which uses 64 bit IDs even with small numbers of files. ABI breakage is mitigated by providing compatibility using versioned symbols, ingenious use of the existing padding in structures, and by employing other tricks. Unfortunately, not everything can be fixed, especially outside the base system. For instance, third-party APIs which pass struct stat around are broken in backward and forward incompatible ways. A bug in poudriere that may cause some packages to not rebuild is being fixed. Many packages like perl will need to be rebuilt after this change Update note: strictly follow the instructions in UPDATING. Build and install the new kernel with COMPAT_FREEBSD11 option enabled, then reboot, and only then install new world. 
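The required ordering can be sketched as the following source-upgrade sequence. Treat this as an illustration of the "kernel first, reboot, then world" rule rather than a substitute for the exact steps in /usr/src/UPDATING:

```sh
cd /usr/src
make buildworld
make buildkernel          # make sure COMPAT_FREEBSD11 is in your kernel config
make installkernel
shutdown -r now           # reboot: the old userland keeps working on the
                          # new kernel via COMPAT_FREEBSD11

# after the reboot onto the new kernel:
cd /usr/src
make installworld         # only now replace the userland
```

Installing world before rebooting onto the COMPAT_FREEBSD11 kernel is exactly the failure mode the update note warns about: the new binaries would run against a kernel that does not understand the new 64-bit syscall ABI.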
So you need the new GENERIC kernel with the COMPAT_FREEBSD11 option, so that your old userland will work with the new kernel, and you need to build, install, and reboot onto the new kernel before attempting to install world. The usual process of installing both and then rebooting will NOT WORK Credits: The 64-bit inode project, also known as ino64, started life many years ago as a project by Gleb Kurtsou (gleb). Kirk McKusick (mckusick) then picked up and updated the patch, and acted as a flag-waver. Feedback, suggestions, and discussions were carried by Ed Maste (emaste), John Baldwin (jhb), Jilles Tjoelker (jilles), and Rick Macklem (rmacklem). Kris Moore (kmoore) performed an initial ports investigation followed by an exp-run by Antoine Brodin (antoine). Essential and all-embracing testing was done by Peter Holm (pho). The heavy lifting of coordinating all these efforts and bringing the project to completion were done by Konstantin Belousov (kib). Sponsored by: The FreeBSD Foundation (emaste, kib) Why I love OpenBSD (https://medium.com/@h3artbl33d/why-i-love-openbsd-ca760cf53941) Jeroen Janssen writes: I do love open source software. Oh boy, I really do love open source software. It's extendable, auditable, and customizable. What's not to love? I'm astonished by the idea that tens, hundreds, and sometimes even thousands of enthusiastic, passionate developers collaborate on an idea. Together, they make the world a better place, bit by bit. And this leads me to one of my favorite open source projects: the 22-year-old OpenBSD operating system. 
The origins of my love affair with OpenBSD From Linux to *BSD The advantages of OpenBSD: It's extremely secure It's well documented It's open source It's neat and clean My take on OpenBSD Combating WannaCry and Other Ransomware with OpenZFS Snapshots (https://www.ixsystems.com/blog/combating-ransomware/) Ransomware attacks that hold your data hostage using unauthorized data encryption are spreading rapidly and are particularly nefarious because they do not require any special access privileges to your data. A ransomware attack may be launched via a sophisticated software exploit as was the case with the recent “WannaCry” ransomware, but there is nothing stopping you from downloading and executing a malicious program that encrypts every file you have access to. If you fail to pay the ransom, the result will be indistinguishable from your simply deleting every file on your system. To make matters worse, ransomware authors are expanding their attacks to include just about any storage you have access to. The list is long, but includes network shares, Cloud services like DropBox, and even “shadow copies” of data that allow you to open previous versions of files. To make matters even worse, there is little that your operating system can do to prevent you or a program you run from encrypting files with ransomware just as it can't prevent you from deleting the files you own. Frequent backups are touted as one of the few effective strategies for recovering from ransomware attacks, but it is critical that any backup be isolated from the attack to be immune from the same attack. Simply copying your files to a mounted disk on your computer or in the Cloud makes the backup vulnerable to infection by virtue of the fact that you are backing up using your regular permissions. If you can write to it, the ransomware can encrypt it. Like medical workers wearing hazmat suits for isolation when combating an epidemic, you need to isolate your backups from ransomware. 
OpenZFS snapshots to the rescue OpenZFS is the powerful file system at the heart of every storage system that iXsystems sells and of its many features, snapshots can provide fast and effective recovery from ransomware attacks at both the individual user and enterprise level as I talked about in 2015. As a copy-on-write file system, OpenZFS provides efficient and consistent snapshots of your data at any given point in time. Each snapshot only includes the precise delta of changes between any two points in time and can be cloned to provide writable copies of any previous state without losing the original copy. Snapshots also provide the basis of OpenZFS replication or backing up of your data to local and remote systems. Because an OpenZFS snapshot takes place at the block level of the file system, it is immune to any file-level encryption by ransomware that occurs over it. A carefully-planned snapshot, replication, retention, and restoration strategy can provide the low-level isolation you need to enable your storage infrastructure to quickly recover from ransomware attacks. OpenZFS snapshots in practice While OpenZFS is available on a number of desktop operating systems such as TrueOS and macOS, the most effective way to bring the benefits of OpenZFS snapshots to the largest number of users is with a network of iXsystems TrueNAS, FreeNAS Certified and FreeNAS Mini unified NAS and SAN storage systems. All of these can provide OpenZFS-backed SMB, NFS, AFP, and iSCSI file and block storage to the smallest workgroups up through the largest enterprises and TrueNAS offers available Fibre Channel for enterprise deployments. By sharing your data to your users using these file and block protocols, you can provide them with a storage infrastructure that can quickly recover from any ransomware attack thrown at it. To mitigate ransomware attacks against individual workstations, TrueNAS and FreeNAS can provide snapshotted storage to your VDI or virtualization solution of choice. 
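As a concrete sketch of the snapshot-based recovery workflow described above (the pool, dataset, and snapshot names are hypothetical):

```shell
# Take a snapshot of the shared dataset before the attack window.
zfs snapshot tank/shares/office@hourly-2017-05-24-09

# Ransomware encrypting files over SMB/NFS only rewrites the live data;
# the snapshot is immutable at the block level, so it survives intact.

# Roll the whole dataset back to the pre-attack state:
zfs rollback tank/shares/office@hourly-2017-05-24-09

# Or recover individual files from the hidden .zfs directory:
cp /mnt/tank/shares/office/.zfs/snapshot/hourly-2017-05-24-09/report.docx \
   /mnt/tank/shares/office/report.docx

# Replicate snapshots to an isolated system so the backups stay out of
# reach of the infected clients:
zfs send -i @hourly-2017-05-24-08 tank/shares/office@hourly-2017-05-24-09 | \
    ssh backup-host zfs receive backuppool/office
```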
Best of all, every iXsystems TrueNAS, FreeNAS Certified, and FreeNAS Mini system includes a consistent user interface and the ability to replicate between one another. This means that any topology of individual offices and campuses can exchange backup data to quickly mitigate ransomware attacks on your organization at all levels. Join us for a free webinar (http://www.onlinemeetingnow.com/register/?id=uegudsbc75) with iXsystems Co-Founder Matt Olander and learn more about why businesses everywhere are replacing their proprietary storage platforms with TrueNAS, then email us at info@ixsystems.com or call 1-855-GREP-4-IX (1-855-473-7449), or 1-408-493-4100 (outside the US) to discuss your storage needs with one of our solutions architects. Interview - Michael W. Lucas - mwlucas@michaelwlucas.com (mailto:mwlucas@michaelwlucas.com) / @twitter (https://twitter.com/mwlauthor) Books, conferences, and how these two combine + BR: Welcome back. Tell us what you've been up to since the last time we interviewed you regarding books and such. + AJ: Tell us a little bit about relayd and what it can do. + BR: What other books do you have in the pipeline? + AJ: What are your criteria that qualify a topic for a mastery book? + BR: Can you tell us a little bit about these writing workshops that you attend and what happens there? + AJ: Without spoiling too much: How did you come up with the idea for git commit murder? + BR: Speaking of BSDCan, can you tell the first timers about what to expect in the http://www.bsdcan.org/2017/schedule/events/890.en.html (Newcomers orientation and mentorship) session on Thursday? + AJ: Tell us about the new WIP session at BSDCan. Who had the idea and how much input did you get thus far? + BR: Have you ever thought about branching off into a new genre like children's books or medieval fantasy novels? + AJ: Is there anything else before we let you go? 
News Roundup Using LLDP on FreeBSD (https://tetragir.com/freebsd/networking/using-lldp-on-freebsd.html) LLDP, or Link Layer Discovery Protocol, allows system administrators to easily map the network, eliminating the need to physically trace the cables in a rack. LLDP is a protocol used to send and receive information about a neighboring device connected directly to a networking interface. It is similar to Cisco's CDP, Foundry's FDP, Nortel's SONMP, etc. It is a stateless protocol, meaning that an LLDP-enabled device sends advertisements even if the other side cannot do anything with it. In this guide the installation and configuration of the LLDP daemon on FreeBSD as well as on a Cisco switch will be introduced. If you are already familiar with Cisco's CDP, LLDP won't surprise you. It is built for the same purpose: to exchange device information between peers on a network. While CDP is a proprietary solution and can be used only on Cisco devices, LLDP is a standard: IEEE 802.1AB. Therefore it is implemented on many types of devices, such as switches, routers, various desktop operating systems, etc. LLDP helps a great deal in mapping the network topology, without spending hours in cabling cabinets to figure out which device is connected with which switchport. If LLDP is running on both the networking device and the server, it can show which port is connected where. Besides physical interfaces, LLDP can be used to exchange a lot more information, such as IP Address, hostname, etc. In order to use LLDP on FreeBSD, net-mgmt/lldpd has to be installed. It can be installed from ports using portmaster: #portmaster net-mgmt/lldpd Or from packages: #pkg install net-mgmt/lldpd By default lldpd sends and receives all the information it can gather, so it is advisable to limit what we will communicate with the neighboring device. The configuration file for lldpd is basically a list of commands as it is passed to lldpcli. 
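For illustration, a minimal configuration in that style might look like this (the hostname, description, and address pattern are placeholder values; each line is handed to lldpcli when the daemon starts):

```
# /usr/local/etc/lldpd.conf -- every line below is an lldpcli command.
configure system hostname "fileserver01"              # hypothetical name
configure system description "FreeBSD 11 file server"
configure system ip management pattern 192.168.1.*    # advertise only this address
configure lldp portidsubtype ifname                   # report interface names, not MACs
```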
Create a file named lldpd.conf under /usr/local/etc/. The following configuration gives an example of how lldpd can be configured. For a full list of options, see man lldpcli To check what is configured locally, run #lldpcli show chassis detail To see the neighbors run #lldpcli show neighbors details Check out the rest of the article about enabling LLDP on a Cisco switch experiments with prepledge (http://www.tedunangst.com/flak/post/experiments-with-prepledge) Ted Unangst takes a crack at a system similar to the one being designed for Capsicum, Oblivious Sandboxing (See the presentation at BSDCan), where the application doesn't even know it is in the sandbox. MP3 is officially dead, so I figure I should listen to my collection one last time before it vanishes entirely. The provenance of some of these files is a little suspect however, and since I know one shouldn't open files from strangers, I'd like to take some precautions against malicious malarkey. This would be a good use for pledge, perhaps, if we can get it working. At the same time, an occasional feature request for pledge is the ability to specify restrictions before running a program. Given some untrusted program, wrap its execution in a pledge-like environment. There are other system call sandbox mechanisms that can do this (systrace was one), but pledge is quite deliberately designed not to support this. But maybe we can bend it to our will. Our pledge wrapper can't be an external program. This leaves us with the option of injecting the wrapper into the target program via LD_PRELOAD. Before main even runs, we'll initialize what needs initializing, then lock things down with a tight pledge set. Our eventual target will be ffplay, but hopefully the design will permit some flexibility and reuse. So the new code is injected to override the open syscall, and reads a list of files from an environment variable. 
Those files are opened and the path and file descriptor are put into a linked list, and then pledge is used to restrict further access to the file system. The replacement open call now searches just that linked list, returning the already opened file descriptors. So as long as your application only tries to open files that you have preopened, it can function without modification within the sandbox. Or at least that is the goal... ffplay tries to dlopen() some things, and because of the way dlopen() works, it doesn't go via the libc open() wrapper, so it doesn't get overridden. ffplay also tries to call a few ioctl's, not allowed. After stubbing both of those out, it still doesn't work and it is just getting worse. Ted switches to a new strategy, using ffmpeg to convert the .mp3 to a .wav file and then just cat it to /dev/audio. A few more stubs for ffmpeg, including access(), and adding tty access to the list of pledges, and it finally works. This point has been made from the early days, but I think this exercise reinforces it, that pledge works best with programs where you understand what the program is doing. A generic pledge wrapper isn't of much use because the program is going to do something unexpected and you're going to have a hard time wrangling it into submission. Software is too complex. What in the world is ffplay doing? Even if I were working with the source, how long would it take to rearrange the program into something that could be pledged? One can try using another program, but I would wager that as far as multiformat media players go, ffplay is actually on the lower end of the complexity spectrum. Most of the trouble comes from using SDL as an abstraction layer, which performs a bunch of console operations. On the flip side, all of this early init code is probably the right design. Once SDL finally gets its screen handle setup, we could apply pledge and sandbox the actual media decoder. That would be the right way to do things. Is pledge too limiting? 
Perhaps, but that's what I want. I could have just kept adding permissions until ffplay had full access to my X socket, but what kind of sandbox is that? I don't want naughty MP3s scraping my screen and spying on my keystrokes. The sandbox I created had all the capabilities one needs to convert an MP3 to audible sound, but the tool I wanted to use wasn't designed to work in that environment. And in its defense, these were new post hoc requirements. Other programs, even sed, suffer from less than ideal pledge sets as well. The best summary might be to say that pledge is designed for tomorrow's programs, not yesterday's (and vice versa). There were a few things I could have done better. In particular, I gave up getting audio to work, even though there's a nice description of how to work with pledge in the sio_open manual. Alas, even going back and with a bit more effort I still haven't succeeded. The requirements to use libsndio are more permissive than I might prefer. How I Maximized the Speed of My Non-Gigabit Internet Connection (https://medium.com/speedtest-by-ookla/engineer-maximizes-internet-speed-story-c3ec0e86f37a) We have a new post from Brennen Smith, who is the Lead Systems Engineer at Ookla, the company that runs Speedtest.net, explaining how he used pfSense to maximize his internet connection. I spend my time wrangling servers and internet infrastructure. My daily goals range from designing high performance applications supporting millions of users and testing the fastest internet connections in the world, to squeezing microseconds from our stack — so at home, I strive to make sure that my personal internet performance is running as fast as possible. I live in an area with a DOCSIS ISP that does not provide symmetrical gigabit internet — my download and upload speeds are not equal. 
Instead, I have an asymmetrical plan with 200 Mbps download and 10 Mbps upload — this nuance considerably impacted my network design because asymmetrical service can more easily lead to bufferbloat. We will cover bufferbloat in a later article, but in a nutshell, it's an issue that arises when an upstream network device's buffers are saturated during an upload. This causes immense network congestion, latency to rise above 2,000 ms., and overall poor quality of internet. The solution is to shape the outbound traffic to a speed just under the sending maximum of the upstream device, so that its buffers don't fill up. My ISP is notorious for having bufferbloat issues due to the low upload performance, and it's an issue prevalent even on their provided routers. They walk through a list of router devices you might consider, and what speeds they are capable of handling, but ultimately ended up using a generic low power x86 machine running pfSense 2.3. In my research and testing, I also evaluated IPCop, VyOS, OPNSense, Sophos UTM, RouterOS, OpenWRT x86, and Alpine Linux to serve as the base operating system, but none were as well supported and full featured as pfSense. The main setting to look at is the traffic shaping of uploads, to keep the pipe from getting saturated and having a large buffer build up in the modem and further upstream. This build up is what increases the latency of the connection. As with any experiment, any conclusions need to be backed with data. To validate the network was performing smoothly under heavy load, I performed the following experiment: + Ran a ping6 against speedtest.net to measure latency. + Turned off QoS to simulate a “normal router”. + Started multiple simultaneous outbound TCP and UDP streams to saturate my outbound link. + Turned on QoS to the above settings and repeated steps 2 and 3. As you can see from the plot below, without QoS, my connection latency increased by ~1,235%. 
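That four-step experiment can be sketched with stock tools (the iperf3 server address is a placeholder for any host you control; run it once with the shaper off and once with it on):

```shell
# Step 1: record latency while the upload link is being saturated.
ping -c 120 8.8.8.8 > rtt.txt &

# Step 3: saturate the 10 Mbps upload with parallel TCP streams.
for i in 1 2 3 4; do
    iperf3 -c iperf.example.net -t 60 &
done
wait

# Summarize the recorded round-trip times. Without shaping (step 2) the
# average climbs as the modem's buffer fills; with QoS on (step 4) it
# should stay roughly flat.
awk -F'time=' '/time=/ { sum += $2; n++ } END { print sum / n, "ms average RTT" }' rtt.txt
```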
However with QoS enabled, the connection stayed stable during the upload and I wasn't able to determine a statistically significant delta. That's how I maximized the speed on my non-gigabit internet connection. What have you done with your network? FreeBSD on 11″ MacBook Air (https://www.geeklan.co.uk/?p=2214) Sevan Janiyan writes in his tech blog about his experiences running FreeBSD on an 11″ MacBook Air. This tiny machine has been with me for a few years now. It has mostly run OS X, though I have tried OpenBSD on it (https://www.geeklan.co.uk/?p=1283). Besides the screen resolution I'm still really happy with it, hardware wise. Software wise, not so much. I use an external disk containing a zpool with my data on it. Among this data are several source trees. CVS on a ZFS filesystem on OS X is painfully slow. I dislike that builds running inside Terminal.app are slow at the expense of a responsive UI. The system seems fragile; at the slightest push the machine will either hang or become unresponsive. Buggy serial drivers which do not implement the break signal and cause instability are frustrating. Last week whilst working on Rump kernel (http://rumpkernel.org/) builds I introduced some new build issues in the process of fixing others; I needed to pick up new changes from CVS by updating my copy of the source tree and run builds to test if issues were still present. I was let down on both counts: it took ages to update source and in the process of cross compiling a NetBSD/evbmips64-el release, the system locked hard. That was it, time to look at what was possible elsewhere. While I have been using OS X for many years, I'm not tied to anything exclusive on it, maybe tweetbot, perhaps, but that's it. 
On the BSDnow podcast they've been covering changes coming into TrueOS (formerly PC-BSD – a desktop focused distro based on FreeBSD), and their experiments seemed interesting: the project now tracks FreeBSD-CURRENT, they've replaced rcng with OpenRC as the init system, and it comes with a pre-configured desktop environment, using their own window manager (Lumina). Booting the USB flash image, it made it to X11 without any issue. The dock has a widget which states the detected features: no wifi (Broadcom), sound card detected, and screen resolution set to 1366×768. I planned to give it a try on the weekend. Friday, I made backups and wiped the system. TrueOS installed without issue; after a short while I had a working desktop, and resuming from sleep worked out of the box. I didn't spend long testing TrueOS, switching out NetBSD-HEAD only to realise that I really need ZFS, so while I was testing things out, might as well give stock FreeBSD 11-STABLE a try (TrueOS was based on -CURRENT). Turns out sleep doesn't work yet but sound does work out of the box and with a few invocations of pkg(8) I had xorg, dwm, firefox, CVS and virtualbox-ose installed from binary packages. VirtualBox seems to cause the system to panic (bug 219276) but I should be able to survive without my virtual machines over the next few days as I settle in. I'm considering ditching VirtualBox and converting the vdi files to raw images so that they can be written to a new zvol for use with bhyve. As my default keyboard layout is Dvorak, OS X set the EFI settings to this layout. The first time I installed FreeBSD 11-STABLE, I opted for full disk encryption but ran into this odd issue where on boot the keyboard layout was Dvorak and the password was accepted, the system would boot, and as it went to mount the various filesystems it would switch back to QWERTY. I tried entering my password with both layouts but wasn't able to progress any further; no bug report yet as I haven't ruled myself out as the problem. 
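The vdi-to-zvol conversion he's considering might look something like this (a sketch; the VM name, zvol size, and pool layout are all hypothetical):

```shell
# Convert the VirtualBox disk image to a raw image.
VBoxManage clonehd ~/VirtualBox\ VMs/devbox/devbox.vdi /tmp/devbox.raw --format RAW

# Create a zvol at least as large as the raw image and copy the image over.
zfs create -V 20G -o volmode=dev tank/vm/devbox
dd if=/tmp/devbox.raw of=/dev/zvol/tank/vm/devbox bs=1m

# bhyve (e.g. via vm-bhyve or grub-bhyve) can then boot the zvol directly.
```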
Thunderbolt gigabit adapter – bge(4) (https://www.freebsd.org/cgi/man.cgi?query=bge) and DVI adapter both worked on FreeBSD though the gigabit adapter needs to be plugged in at boot to be detected. The trackpad binds to wsp(4) (https://www.freebsd.org/cgi/man.cgi?query=wsp); left, right and middle clicks are available through single, double and triple finger tap. Sound card binds to snd_hda(4) (https://www.freebsd.org/cgi/man.cgi?query=snd_hda) and works out of the box. For wifi I'm using a urtw(4) (https://www.freebsd.org/cgi/man.cgi?query=urtw) Alfa adapter which is a bit on the large side but works very reliably. A copy of the dmesg (https://www.geeklan.co.uk/files/macbookair/freebsd-dmesg.txt) is here. Beastie Bits OPNsense - call-for-testing for SafeStack (https://forum.opnsense.org/index.php?topic=5200.0) BSD 4.4: cat (https://www.rewritinghistorycasts.com/screencasts/bsd-4.4:-cat) Continuous Unix commit history from 1970 until today (https://github.com/dspinellis/unix-history-repo) Update on Unix Architecture Evolution Diagrams (https://www.spinellis.gr/blog/20170510/) “Relayd and Httpd Mastery” is out! (https://blather.michaelwlucas.com/archives/2951) Triangle BSD User Group Meeting -- libxo (https://www.meetup.com/Triangle-BSD-Users-Group/events/240247251/) *** Feedback/Questions Carlos - ASUS Tinkerboard (http://dpaste.com/1GJHPNY#wrap) James - Firewall question (http://dpaste.com/0QCW933#wrap) Adam - ZFS books (http://dpaste.com/0GMG5M2#wrap) David - Managing zvols (http://dpaste.com/2GP8H1E#wrap) ***

Storage Developer Conference
#38: SPDK - Building Blocks for Scalable, High Performance Storage Application


Mar 27, 2017 (48:54)


BSD Now
186: The Fast And the Firewall: Tokyo Drift


Mar 22, 2017 (174:07)


This week on BSDNow, reports from AsiaBSDcon, TrueOS and FreeBSD news, Optimizing IllumOS Kernel, your questions and more. This episode was brought to you by Headlines AsiaBSDcon Reports and Reviews AsiaBSDcon schedule (https://2017.asiabsdcon.org/program.html.en) Schedule and slides from the 4th bhyvecon (http://bhyvecon.org/) Michael Dexter's trip report on the iXsystems blog (https://www.ixsystems.com/blog/ixsystems-attends-asiabsdcon-2017) NetBSD AsiaBSDcon booth report (http://mail-index.netbsd.org/netbsd-advocacy/2017/03/13/msg000729.html) *** TrueOS Community Guidelines are here! (https://www.trueos.org/blog/trueos-community-guidelines/) TrueOS has published its new Community Guidelines. The TrueOS Project has existed for over ten years. Until now, there was no formally defined process for interested individuals in the TrueOS community to earn contributor status as an active committer to this long-standing project. The current core TrueOS developers (Kris Moore, Ken Moore, and Joe Maloney) want to provide the community more opportunities to directly impact the TrueOS Project, and wish to formalize the process for interested people to gain full commit access to the TrueOS repositories. These describe what is expected of community members and committers. They also describe the process of getting commit access to the TrueOS repo: Previously, Kris directly handed out commit bits. Now, the Core developers have provided a small list of requirements for gaining a TrueOS commit bit: Create five or more pull requests in a TrueOS Project repository within a single six month period. Stay active in the TrueOS community through at least one of the available community channels (Gitter, Discourse, IRC, etc.). Request commit access from the core developers via core@trueos.org OR Core developers contact you concerning commit access. Pull requests can be any contribution to the project, from minor documentation tweaks to creating full utilities. 
At the end of every month, the core developers review the commit logs, removing elements that break the Project or deviate too far from its intended purpose. Additionally, outstanding pull requests with no active dissension are immediately merged, if possible. For example, a user submits a pull request which adds a little-used OpenRC script. No one from the community comments on the request or otherwise argues against its inclusion, resulting in an automatic merge at the end of the month. In this manner, solid contributions are routinely added to the project and never left in a state of “limbo”. The page also describes the perks of being a TrueOS committer: Contributors to the TrueOS Project enjoy a number of benefits, including: A personal TrueOS email alias: @trueos.org Full access for managing TrueOS issues on GitHub. Regular meetings with the core developers and other contributors. Access to private chat channels with the core developers. Recognition as part of an online Who's Who of TrueOS developers. The eternal gratitude of the core developers of TrueOS. A warm, fuzzy feeling. Intel Donates $250,000 to the FreeBSD Foundation (https://www.freebsdfoundation.org/news-and-events/latest-news/new-uranium-level-donation-and-collaborative-partnership-with-intel/) More details about the deal: Systems Thinking: Intel and the FreeBSD Project (https://www.freebsdfoundation.org/blog/systems-thinking-intel-and-the-freebsd-project/) Intel will be more actively engaging with the FreeBSD Foundation and the FreeBSD Project to deliver more timely support for Intel products and technologies in FreeBSD. Intel has contributed code to FreeBSD for individual device drivers (i.e. NICs) in the past, but is now seeking a more holistic “systems thinking” approach. 
Intel Blog Post (https://01.org/blogs/imad/2017/intel-increases-support-freebsd-project) We will work closely with the FreeBSD Foundation to ensure the drivers, tools, and applications needed on Intel® SSD-based storage appliances are available to the community. This collaboration will also provide timely support for future Intel® 3D XPoint™ products. Thank you very much, Intel! *** Applied FreeBSD: Basic iSCSI (https://globalengineer.wordpress.com/2017/03/05/applied-freebsd-basic-iscsi/) iSCSI is often touted as a low-cost replacement for fibre-channel (FC) Storage Area Networks (SANs). Instead of having to setup a separate fibre-channel network for the SAN, or invest in the infrastructure to run Fibre-Channel over Ethernet (FCoE), iSCSI runs on top of standard TCP/IP. This means that the same network equipment used for routing user data on a network could be utilized for the storage as well. This article will cover a very basic setup where a FreeBSD server is configured as an iSCSI Target, and another FreeBSD server is configured as the iSCSI Initiator. The iSCSI Target will export a single disk drive, and the initiator will create a filesystem on this disk and mount it locally. Advanced topics, such as multipath, ZFS storage pools, failover controllers, etc. are not covered. The real magic is the /etc/ctl.conf file, which contains all of the information necessary for ctld to share disk drives on the network. Check out the man page for /etc/ctl.conf for more details; below is the configuration file that I created for this test setup. Note that on a system that has never had iSCSI configured, there will be no existing configuration file, so go ahead and create it. 
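The author's configuration file itself didn't survive the show-notes formatting; a minimal /etc/ctl.conf in the same spirit might look like this (the backing-file paths and auth settings are assumptions; the target IQN and the two 5 GB LUNs match the session shown later — see ctl.conf(5) for the full syntax):

```
# /etc/ctl.conf -- one portal group listening on all interfaces, one
# target exporting two file-backed 5 GB LUNs. Create the backing files
# first (e.g. truncate -s 5G /data/iscsi/lun0.img), then restart ctld.
portal-group pg0 {
    discovery-auth-group no-authentication
    listen 0.0.0.0
}

target iqn.2017-02.lab.testing:basictarget {
    auth-group no-authentication
    portal-group pg0
    lun 0 {
        path /data/iscsi/lun0.img    # hypothetical backing file
        size 5G
    }
    lun 1 {
        path /data/iscsi/lun1.img
        size 5G
    }
}
```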
Then, enable ctld and start it: sysrc ctld_enable="YES" service ctld start You can use the ctladm command to see what is going on: root@bsdtarget:/dev # ctladm lunlist (7:0:0/0): Fixed Direct Access SPC-4 SCSI device (7:0:1/1): Fixed Direct Access SPC-4 SCSI device root@bsdtarget:/dev # ctladm devlist LUN Backend Size (Blocks) BS Serial Number Device ID 0 block 10485760 512 MYSERIAL 0 MYDEVID 0 1 block 10485760 512 MYSERIAL 1 MYDEVID 1 Now, let's configure the client side: In order for a FreeBSD host to become an iSCSI Initiator, the iscsid daemon needs to be started. sysrc iscsid_enable="YES" service iscsid start Next, the iSCSI Initiator can manually connect to the iSCSI target using the iscsictl tool. While setting up a new iSCSI session, this is probably the best option. Once you are sure the configuration is correct, add the configuration to the /etc/iscsi.conf file (see man page for this file). For iscsictl, pass the IP address of the target as well as the iSCSI IQN for the session: + iscsictl -A -p 192.168.22.128 -t iqn.2017-02.lab.testing:basictarget You should now have a new device (check dmesg), in this case, da1 The guide then walks through partitioning the disk, laying down a UFS file system, and mounting it. It then walks through how to disconnect iSCSI, in case you don't want it anymore. This all looked nice and easy, and it works very well. Now let's see what happens when you try to mount the iSCSI from Windows Ok, that wasn't so bad. Now, instead of sharing an entire spare disk on the host via iSCSI, share a zvol. Now your Windows machine can be backed by ZFS. All of your problems are solved. 
Interview - Philipp Buehler - pbuehler@sysfive.com (mailto:pbuehler@sysfive.com) Technical Lead at SysFive, and Former OpenBSD Committer News Roundup Half a dozen new features in mandoc -T html (http://undeadly.org/cgi?action=article&sid=20170316080827) mandoc (http://man.openbsd.org/mandoc.1)'s HTML output mode got some new features Even though mdoc(7) is a semantic markup language, traditionally none of the semantic annotations were communicated to the reader. [...] Now, at least in -T html output mode, you can see the semantic function of marked-up words by hovering your mouse over them. In terminal output modes, we have the ctags(1)-like internal search facility built around the less(1) tag jump (:t) feature for quite some time now. We now have a similar feature in -T html output mode. To jump to (almost) the same places in the text, go to the address bar of the browser, type a hash mark ('#') after the URI, then the name of the option, command, variable, error code etc. you want to jump to, and hit enter. Check out the full report by Ingo Schwarze (schwarze@) and try out these new features *** Optimizing IllumOS Kernel Crypto (http://zfs-create.blogspot.com/2014/05/optimizing-illumos-kernel-crypto.html) Sašo Kiselkov, of ZFS fame, looked into the performance of the OpenSolaris kernel crypto framework (KCF) and found it lacking. The article also spends a few minutes on the different modes and how they work. Recently I've had some motivation to look into the KCF on Illumos and discovered that, unbeknownst to me, we already had an AES-NI implementation that was automatically enabled when running on Intel and AMD CPUs with AES-NI support. This work was done back in 2010 by Dan Anderson. This was great news, so I set out to test the performance in Illumos in a VM on my Mac with a Core i5 3210M (2.5GHz normal, 3.1GHz turbo). The initial tests of “what the hardware can do” were done in OpenSSL. So now comes the test for the KCF. 
I wrote a quick'n'dirty crypto test module that just performed a bunch of encryption operations and timed the results. KCF got around 100 MB/s for each algorithm, except half that for AES-GCM. OpenSSL had done over 3000 MB/s for CTR mode, 500 MB/s for CBC, and 1000 MB/s for GCM. What the hell is that?! This is just plain unacceptable. Obviously we must have hit some nasty performance snag somewhere, because this is comical. And sure enough, we did. When looking around in the AES-NI implementation I came across this bit in aes_intel.s that performed the CLTS instruction. This is a problem: 3.1.2 Instructions That Cause VM Exits Conditionally: CLTS. The CLTS instruction causes a VM exit if the bits in position 3 (corresponding to CR0.TS) are set in both the CR0 guest/host mask and the CR0 read shadow. The CLTS instruction signals to the CPU that we're about to use FPU registers (which is needed for AES-NI), which in VMware causes an exit into the hypervisor. And we've been doing it for every single AES block! Needless to say, performing the equivalent of a very expensive context switch every 16 bytes is going to hurt encryption performance a bit. The reason why the kernel is issuing CLTS is because for performance reasons, the kernel doesn't save and restore FPU register state on kernel thread context switches. So whenever we need to use FPU registers inside the kernel, we must disable kernel thread preemption via a call to kpreempt_disable() and kpreempt_enable() and save and restore FPU register state manually. During this time, we cannot be descheduled (because if we were, some other thread might clobber our FPU registers), so if a thread does this for too long, it can lead to unexpected latency bubbles. The solution was to restructure the AES and KCF block crypto implementations in such a way that we execute encryption in meaningfully small chunks. I opted for 32k bytes, for reasons which I'll explain below. 
Unfortunately, doing this restructuring work was a bit more complicated than one would imagine, since in the KCF the implementation of the AES encryption algorithm and the block cipher modes is separated into two separate modules that interact through an internal API, which wasn't really conducive to high performance (we'll get to that later). Anyway, having fixed the issue here and running the code at near native speed, this is what I get: AES-128/CTR: 439 MB/s AES-128/CBC: 483 MB/s AES-128/GCM: 252 MB/s Not disastrous anymore, but still, very, very bad. Of course, you've got to keep in mind, the thing we're comparing it to, OpenSSL, is no slouch. It's got hand-written highly optimized inline assembly implementations of most of these encryption functions and their specific modes, for lots of platforms. That's a ton of code to maintain and optimize, but I'll be damned if I let this kind of performance gap persist. Fixing this, however, is not so trivial anymore. It pertains to how the KCF's block cipher mode API interacts with the cipher algorithms. It is beautifully designed and implemented in a fashion that creates minimum code duplication, but this also means that it's inherently inefficient. ECB, CBC and CTR gained the ability to pass an algorithm-specific "fastpath" implementation of the block cipher mode, because these functions benefit greatly from pipelining multiple cipher calls into a single place. ECB, CTR and CBC decryption benefit enormously from being able to exploit the wide XMM register file on Intel to perform encryption/decryption operations on 8 blocks at the same time in a non-interlocking manner. The performance gains here are on the order of 5-8x. CBC encryption benefits from not having to copy the previously encrypted ciphertext blocks into memory and back into registers to XOR them with the subsequent plaintext blocks, though here the gains are more modest, around 1.3-1.5x.
After all of this work, this is how the results now look on Illumos, even inside of a VM: Algorithm/Mode 128k ops AES-128/CTR: 3121 MB/s AES-128/CBC: 691 MB/s AES-128/GCM: 1053 MB/s So the CTR and GCM speeds have actually caught up to OpenSSL, and CBC is actually faster than OpenSSL. On the decryption side of things, CBC decryption also jumped from 627 MB/s to 3011 MB/s. Seeing these performance numbers, you can see why I chose 32k for the operation size in between kernel preemption barriers. Even on the slowest hardware with AES-NI, we can expect at least 300-400 MB/s/core of throughput, so even in the worst case, we'll be hogging the CPU for at most ~0.1ms per run. Overall, we're even a little bit faster than OpenSSL in some tests, though that's probably down to us encrypting 128k blocks vs 8k in the "openssl speed" utility. Anyway, having fixed this monstrous atrocity of a performance bug, I can now finally get some sleep. To make these tests repeatable, and to ensure that the changes didn't break the crypto algorithms, Saso created a crypto_test kernel module. I have recently created a FreeBSD version of crypto_test.ko, for much the same purposes Initial performance on FreeBSD is not as bad, if you have the aesni.ko module loaded, but it is not up to speed with OpenSSL. You cannot directly compare to the benchmarks Saso did, because the CPUs are vastly different. Performance results (https://wiki.freebsd.org/OpenCryptoPerformance) I hope to do some more tests on a range of different sized CPUs in order to determine how the algorithms scale across different clock speeds. I also want to look at, or get help and have someone else look at, implementing some of the same optimizations that Saso did. It currently seems like there isn't a way to perform additional crypto operations in the same session without regenerating the key table.
Processing additional buffers in an existing session might offer a number of optimizations for bulk operations, although in many cases, each block is encrypted with a different key and/or IV, so it might not be very useful. *** Brendan Gregg's special freeware tools for sysadmins (http://www.brendangregg.com/specials.html) These tools need to be in every (not so) serious sysadmin's toolbox. Triple ROT13 encryption algorithm (beware: export restrictions may apply) /usr/bin/maybe, in case true and false provide too little choice... The bottom command lists all the processes using the least CPU cycles. Check out the rest of the tools. You wrote similar tools and want us to cover them in the show? Send us an email to feedback@bsdnow.tv (mailto:feedback@bsdnow.tv) *** A look at 2038 (http://www.lieberbiber.de/2017/03/14/a-look-at-the-year-20362038-problems-and-time-proofness-in-various-systems/) I remember the Y2K problem quite vividly. The world was going crazy for years, paying insane amounts of money to experts to fix critical legacy systems, and there was a neverending stream of predictions from the media on how it's all going to fail. Most didn't even understand what the problem was, and I remember one magazine writing something like the following: Most systems store the current year as a two-digit value to save space. When the value rolls over on New Year's Eve 1999, those two digits will be “00”, and “00” means “halt operation” in the machine language of many central processing units. If you're in an elevator at this time, it will stop working and you may fall to your death. I still don't know why they thought a computer would suddenly interpret data as code, but people believed them. We could see a nearby hydropower plant from my parents' house, and we expected it to go up in flames as soon as the clock passed midnight, while at least two airplanes crashed in our garden at the same time. Then nothing happened.
I think one of the most “severe” problems was the police not being able to open their car garages the next day because their RFID tokens had both a start and end date for validity, and the system clock had actually rolled over to 1900, so the tokens were “not yet valid”. That was 17 years ago. One of the reasons why Y2K wasn't as bad as it could have been is that many systems had never used the “two-digit-year” representation internally, but use some form of “timestamp” relative to a fixed date (the “epoch”). The actual problem with time and dates rolling over is that systems calculate timestamp differences all day. Since a timestamp derived from the system clock seemingly only increases with each query, it is very common to just calculate diff = now - before and never care about the fact that now could suddenly be lower than before because the system clock has rolled over. In this case diff is suddenly negative, and if other parts of the code make further use of the suddenly negative value, things can go horribly wrong. A good example was a bug in the generator control units (GCUs) aboard Boeing 787 “Dreamliner” aircraft, discovered in 2015. An internal timestamp counter would overflow roughly 248 days after the system had been powered on, triggering a shut down to “safe mode”. The aircraft has four generator units, but if all were powered up at the same time, they would all fail at the same time. This sounds like an overflow caused by a signed 32-bit counter counting the number of centiseconds since boot, overflowing after 248.55 days, and luckily no airline had been using their Boeing 787 models for such a long time between maintenance intervals. The “obvious” solution is to simply switch to 64-Bit values and call it a day, which would push overflow dates far into the future (as long as you don't do it like the IBM S/370 mentioned before).
But as we've learned from the Y2K problem, you have to assume that computer systems, computer software and stored data (which often contains timestamps in some form) will stay with us for much longer than we might think. The years 2036 and 2038 might be far in the future, but we have to assume that many of the things we make and sell today are going to be used and supported for more than just 19 years. Also many systems have to store dates which are far in the future. A 30 year mortgage taken out in 2008 could have already triggered the bug, and for some banks it supposedly did. sys_gettimeofday() is one of the most used system calls on a generic Linux system and returns the current time in the form of a UNIX timestamp (time_t data type) plus fraction (suseconds_t data type). Many applications have to know the current time and date to do things, e.g. displaying it, using it in game timing loops, invalidating caches after their lifetime ends, perform an action after a specific moment has passed, etc. In a 32-Bit UNIX system, time_t is usually defined as a signed 32-Bit Integer. When kernel, libraries and applications are compiled, the compiler will turn this assumption into machine code and all components later have to match each other. So a 32-Bit Linux application or library still expects the kernel to return a 32-Bit value even if the kernel is running on a 64-Bit architecture and has 32-Bit compatibility. The same holds true for applications calling into libraries. This is a major problem, because there will be a lot of legacy software running in 2038. Systems which used an unsigned 32-Bit Integer for time_t push the problem back to 2106, but I don't know about many of those. The developers of the GNU C library (glibc), the default standard C library for many GNU/Linux systems, have come up with a design for year 2038 proofness for their library.
Besides the time_t data type itself, a number of other data structures have fields based on time_t or the combined struct timespec and struct timeval types. Many methods besides those intended for setting and querying the current time use timestamps. 32-Bit Windows applications, or Windows applications defining _USE_32BIT_TIME_T, can be hit by the year 2038 problem too if they use the time_t data type. The __time64_t data type had been available since Visual C 7.1, but only Visual C 8 (default with Visual Studio 2015) expanded time_t to 64 bits by default. The change will only be effective after a recompilation, legacy applications will continue to be affected. If you live in a 64-Bit world and use a 64-Bit kernel with 64-Bit only applications, you might think you can just ignore the problem. In such a constellation all instances of the standard time_t data type for system calls, libraries and applications are signed 64-Bit Integers which will overflow in around 292 billion years. But many data formats, file systems and network protocols still specify 32-Bit time fields, and you might have to read/write this data or talk to legacy systems after 2038. So solving the problem on your side alone is not enough. Then the article goes on to describe how all of this will break your file systems. Not to mention your databases and other file formats. Also see Theo De Raadt's EuroBSDCon 2013 Presentation (https://www.openbsd.org/papers/eurobsdcon_2013_time_t/mgp00001.html) *** Beastie Bits Michael Lucas: Get your name in “Absolute FreeBSD 3rd Edition” (https://blather.michaelwlucas.com/archives/2895) ZFS compressed ARC stats to top (https://svnweb.freebsd.org/base?view=revision&revision=r315435) Matthew Dillon discovered HAMMER was repeating itself when writing to disk.
Fixing that issue doubled write speeds (https://www.dragonflydigest.com/2017/03/14/19452.html) TedU on Meaningful Short Names (http://www.tedunangst.com/flak/post/shrt-nms-fr-clrty) vBSDcon and EuroBSDcon Call for Papers are open (https://www.freebsdfoundation.org/blog/submit-your-work-vbsdcon-and-eurobsdcon-cfps-now-open/) Feedback/Questions Craig asks about BSD server management (http://pastebin.com/NMshpZ7n) Michael asks about jails as a router between networks (http://pastebin.com/UqRwMcRk) Todd asks about connecting jails (http://pastebin.com/i1ZD6eXN) Dave writes in with an interesting link (http://pastebin.com/QzW5c9wV) > applications crash more often due to errors than corruptions. In the case of corruption, a few applications (e.g., Log-Cabin, ZooKeeper) can use checksums and redundancy to recover, leading to a correct behavior; however, when the corruption is transformed into an error, these applications crash, resulting in reduced availability. ***

Software Defined Talk
Episode 83: I think the word we object to is "DevOps"

Software Defined Talk

Dec 16, 2016 54:22


...Statler and Waldorf talk with Fozzie ...What's the "OpsOps" of DevOps?. ...Never say you're going to spend $1bn on anything What exactly is DevOps? We dare to discuss that at first and then get into Amazon's new managed hosting offering. There's some new container news with containerd from DockerInc land, and some little notes on Azure's features and Cisco's InterCloud shutting down. Also, we find out which Muppet each of us would be played by in The Muppets Take Over Software Defined Talk. Mid-roll Coté: Come see me January 10th in Phoenix (https://www.meetup.com/Arizona-Cloud-Foundry-Meetup/events/236191762/), 5:30pm at the Galvanize Office. Free parking! Coté: check out my interview with Tony at Home Depot about their first year being cloud native, on Pivotal Cloud Foundry (https://blog.pivotal.io/pivotal-conversations/features/045-cloud-native-at-home-depot-with-tony-mcculley). They went from 0 to ~150 apps in their first year. Like, real, business critical apps that you probably end up interacting with (pro tools, paint), plus internal facing apps. Feedback & Follow-up The Doc Martin shoes: Hickmire (http://amzn.to/2hlPnIJ). Thanks to Chris Short (https://twitter.com/ChrisShort/status/808339167604338688). The DevOps App dev vs. IT service delivery. DevOps Kung Fu (https://www.youtube.com/watch?v=_DEToXsgrPc), Adam Jacob's talk on the inclusion of everyone in the org chart in DevOps What is DevOps without Dev? Is there OpsOps? AWS Managed Services Amazon will manage your shit now, with real live peoples (https://aws.amazon.com/blogs/aws/aws-managed-services-infrastructure-operations-management-for-the-enterprise/) "This is actually a thing. It's called managed cloud." (http://venturebeat.com/2016/12/12/amazon-launches-aws-managed-services-to-help-more-big-companies-adopt-cloud/) "This is actually a thing. It's called managed cloud." - this is a good example of the more subtle way of "paying off analysts." 
(https://twitter.com/cote/status/809205833586409472) More like: changing their minds. "Designed for the Fortune 1000 and the Global 2000, this service is designed to accelerate cloud adoption" AKA "We're eating our partners" AKA "RACKSPACE: YOU'RE UP!" Coté: Is this like a service desk and a runbook for spinning up AWS stuff? Plus actual AMZN staff to "manage" the infrastructure like patching and such right? Coté: I was just talking with someone yesterday whose mission was "optimize how we do IT without me telling you what I want to do with IT." That is: lower costs and give us the ability to do whatever we may want in the future in under a year's planning/effort. Bezos doesn't like meetings without a memo (http://static2.businessinsider.com/image/5851aebfca7f0c24018b5b6f-2400/ap16349721408436.jpg) Don't Sleep on Microsoft Damn, that's a monstrous URL (https://pages.email.microsoftemail.com/page.aspx?qs=773ed3059447707dab3a47fc5c2937dcbf750d2a6d7e8feab247991209f258cd86e8606f2837501c341831b6f3896ebcb5673dff86feb6303e458a94181db250c28f58237fd3b737cd39c6339094ff6800649c38da065423db508d0369c1992e) GPUs, HANA, Media Services, Machine Deep Learning, Data Lake, Single-instance virtual machines Coté: I hear data is a thing. And AI. Cisco Shutting Down Their InterCloud Coté's audition for an ElReg headline writer: Cloud InterRUPPTED $1 Billion isn't enough (http://www.businessinsider.com.au/amazon-claims-another-victim-cisco-kills-its-1-billion-cloud-2016-12), "score another body bag win for the unstoppable Amazon Web Services" "Meanwhile, the cloud providers like Amazon, Microsoft, and Google aren't using a lot of Cisco gear. They are increasingly using a new style to build networks that relies more on software and less on high-end, expensive hardware."
Sharwood@ElReg (http://www.theregister.co.uk/2016/12/13/cisco_to_kill_its_intercloud_public_cloud_on_march_31st_2017/): "OpenStack public clouds have an unhappy history: Rackspace felt it could build a business on the platform, but has since changed tack. HP pulled out of its own Helion public cloud. If Cisco is indeed changing direction, the OpenStack Board has some interesting matters to ponder." Theory: AWS means on-premise IT is over-serving. You actually don't need all that. Incumbent vendors succumbed to the strategy aphasia of the disruptor's dilemma (weren't willing to sacrifice/take eye off the ball of existing success and revenue) and lost to Amazon's lower capabilities, lower price approach. WHEN WILL TECH PEOPLE LURN? There was this talk several years ago that was all like: "well, obviously, we shouldn't compete strategy-to-strategy with Amazon. We should provide the enterprise version!" Apparently, that was dead wrong. People confused Apple's ability to sell at an insane premium with the market not caring about x86 &co. Docker Contributes Containerd Docker-engine standardized container runtime for the industry (https://blog.docker.com/2016/12/introducing-containerd/) Engine vs. Machine (https://docs.docker.com/machine/overview/#/whats-the-difference-between-docker-engine-and-docker-machine) Check out this TheNewStack story for a new strategy slide (http://thenewstack.io/docker-spins-containerd-independent-open-source-project/): Containers in Production! Round-up of some container survey poking (http://redmonk.com/fryan/2016/12/01/containers-in-production-is-security-a-barrier-a-dataset-from-anchore/) n=338 respondents Sidenote: Jenkins win. Good job biffing that one Oracle. But then again: is there any money in it? "This leads us to a very difficult operational problem – how do we ensure security, and understand the makeup of an application while still allowing developer velocity to increase."
More Docker usage numbers from DataDog (https://www.datadoghq.com/blog/3-clear-trends-in-ecs-adoption/)! "ECS adoption has climbed steadily from zero to 15 percent of Docker organizations using Datadog. (And more than 10 percent of all Datadog customers are now using Docker.)" How do I read this? Does it mean adoption is fast after an initial tire-kicking? "In the 30 days after an organization starts reporting ECS metrics, we see a 35 percent increase in the number of running containers as compared to the 60-day baseline that came before. Using the same parameters, we see a 27 percent increase in the number of running Docker hosts." CoreOS Tectonic Goes Freemium Erryone's favorite business model (https://coreos.com/blog/tectonic-self-driving.html) Kubernetes 1.5 coming soon Shipping upstream version3 Renamed their distro to Container Linux They have attempted to coin the phrase "self-driving Kubernetes" -- God help us. BONUS LINKS! Not discussed on show. More AWS Followup Missed a talk (https://gist.github.com/stevenringo/5f0f9cc7b329dbaa76f495a6af8241e9)? Open sourced a Deep Learning library (https://github.com/amznlabs/amazon-dsstne/blob/master/FAQ.md): AWS is still really new to contributing to OSS, Cockcroft has been pushing them. Also see the Blox.github.io stuff we didn't talk about last show AWS OpsWorks for Chef Automate Q&A (https://blog.chef.io/2016/12/08/rule-the-cloud-with-chef-automate-and-aws/) AWS Canada & London! 
Strange Brew Region (https://aws.amazon.com/blogs/publicsector/canada-central-region-now-open/) hello hello hello what's all this then region (https://aws.amazon.com/blogs/aws/now-open-aws-london-region/) "brings our global footprint to 16 Regions and 40 Availability Zones, with seven more Availability Zones and three more Regions coming online through the next year" Docker Acquires Distributed Storage Startup Infinit "the Infinit platform provides interfaces for block, object and file storage: NFS, SMB, AWS S3, OpenStack Swift, iSCSI, FUSE etc." (https://blog.docker.com/2016/12/docker-acquires-infinit/) To be open-sourced Extends the stateful application story CA Buys Automic for $635 million "CA fights legacy status with DevOps automation tools buy" (http://searchitoperations.techtarget.com/news/450404297/CA-fights-legacy-status-with-DevOps-automation-tools-buy) - that's not a good headline for your Christmas cards. $635 million, Crunchbase says they were founded in 1985(?) Hey look, it's my man Carl Lehmann at 451! New CEO at BMC Beauchamp goes to board, Polycom dude steps in as CEO (http://www.forbes.com/sites/maribellopez/2016/12/12/bmc-adds-peter-leav-as-ceo-prepares-for-new-growth-chapter/#1b2662bc4651) "Beauchamp said many of BMC's products are achieving double digit growth and double-digit profitability." Red Hat OpenShift on GCE and JBoss on OpenShift In case you need more management on your GCE (http://www.cio.com/article/3148671/cloud-computing/red-hat-brings-openshift-to-google-cloud-platform.html)? AWS is already there, probably Azure soon. I wonder if there's a deficiency in Google's offering that it's more of a consumed resource than a platform a la AWS? Plenty of management in AWS already?
JBoss on it (http://www.zdnet.com/article/red-hat-brings-full-jboss-software-stack-to-openshift/) Dell Q3 "Dell Technologies Posts $2B Loss, But EMC Deal Already Boosting Revenue" (http://austininno.streetwise.co/2016/12/08/dell-technologies-q3-earnings-report-revenues-and-losses/) Stonic, (not) An Ansible Fork? Stonic (https://blog.stonic.io/0000-it-is-not-a-fork-c0b03c33e408) will be licensed under AGPL-3.0 :facepalm: Coté: why is AGPL bad? Australian 2016 Word of the Year: "Democracy Sausage" (saved you a click) Democracy Sausage (http://www.abc.net.au/news/2016-12-14/democracy-sausage-snags-word-of-the-year/8117684) Google Makes So Much Money It Never Had to Worry About Financial Discipline - Until Now Candy, not CREAM (https://www.bloomberg.com/news/features/2016-12-08/google-makes-so-much-money-it-never-had-to-worry-about-financial-discipline) Brandon called this way back when. But what about Google Fiber in my neighborhood? Best shruggie use of the year (https://assets.bwbx.io/images/users/iqjWHBFdfxIU/i2yPkZEJdBec/v0/1000x-1.jpg) NVIDIA $129k computer. "Fewer than 100 companies and organizations have bought DGX-1s since they started shipping in the fall, but early adopters say Nvidia's claims about the system seem to hold up." (https://www.technologyreview.com/s/603075/the-pint-sized-supercomputer-that-companies-are-scrambling-to-get/) Does it pass the Coté AI Test? I.e.: can it fix scheduling meetings across different organizations? Recommendations Brandon: Mobile eating the world (http://ben-evans.com/benedictevans/2016/12/8/mobile-is-eating-the-world). Matt: Jenn Schiffer's "No One Expects The Lady Code Troll" (https://www.youtube.com/watch?v=wewAC5X_CZ8) Coté: Senso bluetooth headphones (http://amzn.to/2hLC0lF). Trapper hats (http://amzn.to/2hAupDi) all winter long (https://www.instagram.com/p/BODJGTPjv2b/).

BSD Now
165: Vote4BSD

BSD Now

Oct 26, 2016 72:52


This week on BSDNow, we've got voting news for you (No not that election), a closer look at This episode was brought to you by Headlines ARIN 38 involvement, vote! (http://lists.nycbug.org/pipermail/talk/2016-October/016878.html) Isaac (.Ike) Levy, one of our interview guests from earlier this year, is running for a seat on the 15 person ARIN Advisory Council His goal is to represent the entire *BSD community at this important body that makes decisions about how IP addresses are allocated and managed Biographies and statements for all of the candidates are available here (https://www.arin.net/participate/elections/candidate_bios.pdf) The election ends Friday October 28th If elected, Ike will be looking for input from the community *** LibreSSL not just available but default (DragonFlyBSD) (https://www.dragonflydigest.com/2016/10/19/18794.html) DragonFly has become the latest BSD to join the growing LibreSSL family. As mentioned a few weeks back, they were in the process of wiring it up as a replacement for OpenSSL. With this latest commit, you can now build the entire base and OpenSSL isn't built at all. Congrats, and hopefully more BSDs (and Linux) jump on the bandwagon Compat_43 is gone (http://lists.dragonflybsd.org/pipermail/commits/2016-October/624734.html) RiP 4.3 Compat support.. Well for DragonFly anyway. This commit finally puts out to pasture the 4.3 support, which has been disabled by default in DragonFly for almost 5 years now. This is a nice cleanup of their tree, removing more than a thousand lines of code and some of the old cruft still lingering from 4.3. *** Create your first FreeBSD kernel module (http://meltmes.kiloreux.me/create-your-first-freebsd-kernel-module/) This is an interesting tutorial from Abdelhadi Khiati, who is currently a master's student in AI and robotics I have been lucky enough to participate in Google Summer of Code with the FreeBSD foundation. 
I was amazed by the community surrounding it which was noob friendly and very helpful (Thank you FreeBSD!) *** We will run two storage controllers (ctrl-a, ctrl-b) and a host (cln-1). A virtual SAS drive (da0) of 256 MB is configured as “shareable” in Virtual Media Manager and simultaneously connected with both storage controllers The basic settings are applied to both controllers One interesting setting is: kern.cam.ctl.ha_role – configures default role for the node. So ctrl-a is set as 0 (primary node), ctrl-b – 1 (secondary node). The role also can be specified on a per-LUN basis which allows distributing LUNs over both controllers evenly. Note, kern.cam.ctl.ha_id and kern.cam.ctl.ha_mode are read-only parameters and must be set only via the /boot/loader.conf file. Once kern.cam.ctl.ha_peer is set, and the peers connect to each other, the log messages should reflect this: CTL: HA link status changed from 0 to 1 CTL: HA link status changed from 1 to 2 The link states can be: 0 – not configured, 1 – configured but not established and 2 – established Then ctld is configured to export /dev/da0 on each of the controllers Then the client is booted, and uses iscsid to connect to each of the exposed targets sysctl kern.iscsi.fail_on_disconnection=1 on the client is needed to drop connection with one of the controllers in case of its failure As we know that da0 and da1 on the client are the same drive, we can put them under multipathing control: gmultipath create -A HA /dev/da0 /dev/da1 The document then shows a file being copied continuously to simulate load. Because the multipath is configured in ‘active/active' mode, the traffic is split between the two controllers Then the secondary controller is turned off, and iscsi disconnects that path, and gmultipath adapts and sends all of the traffic over the primary path.
When the secondary node is brought back up, but the primary is taken down, traffic stops The console on the client is filled with errors: “Logical unit not accessible, asymmetric access state transition” The ctl(4) man page explains: > If there is no primary node (both nodes are secondary, or secondary node has no connection to primary one), secondary node(s) report Transitioning state. > Therefore, it looks like a “normal” behavior of CTL HA cluster in a case of disaster and loss of the primary node. It also means that a very lucky administrator can restore the failed primary controller before timeouts are elapsed. If the primary is down, the secondary needs to be promoted by some other process (CARP maybe?): sysctl kern.cam.ctl.ha_role=0 Then traffic follows again This is a very interesting look at this new feature, and I hope to see more about it in the future *** Is SPF Simply Too Hard for Application Developers? (http://bsdly.blogspot.com/2016/10/is-spf-simply-too-hard-for-application.html) Peter Hansteen asks an interesting question: The Sender Policy Framework (SPF) is unloved by some, because it conflicts with some long-established SMTP email use cases. But is it also just too hard to understand and to use correctly for application developers? He tells a story about trying to file his Norwegian taxes, and running into a bug Then in August 2016, I tried to report a bug via the contact form at Altinn.no, the main tax authorities web site. The report in itself was fairly trivial: The SMS alert I had just received about an invoice for taxes due contained one date, which turned out to be my birth date rather than the invoice due date. Not a major issue, but potentially confusing to the recipient until you actually log in and download the invoice as PDF and read the actual due date and other specifics. 
The next time I checked my mail at bsdly.net, I found this bounce: support@altinn.no: SMTP error from remote mail server after RCPT TO:: host mx.isp.as2116.net [193.75.104.7]: 550 5.7.23 SPF validation failed which means that somebody, somewhere tried to send a message to support@altinn.no, but the message could not be delivered because the sending machine did not match the published SPF data for the sender domain. What happened is actually quite clear even from the part quoted above: the host mx.isp.as2116.net [193.75.104.7] tried to deliver mail on my behalf (I received the bounce, remember), and since I have no agreement for mail delivery with the owners and operators of that host, it is not in bsdly.net's SPF record either, and the delivery fails. After having a bunch of other problems, he finally gets a message back from the tax authority support staff: It looks like you have Sender Policy Framework (SPF) enabled on your mailserver, It is a known weakness of our contact form that mailervers with SPF are not supported. The obvious answer should be, as you will agree if you're still reading: The form's developer should place the user's email address in the Reply-To: field, and send the message as its own, valid local user. That would solve the problem. Yes, I'm well aware that SPF also breaks traditional forwarding of the type generally used by mailing lists and a few other use cases. Just how afraid should we be when those same developers come to do battle with the followup specifications such as DKIM and (shudder) the full DMARC specification? Beastie Bits Looking for a very part-time SysAdmin (https://lists.freebsd.org/pipermail/freebsd-jobs/2016-October/000930.html) If anyone wants to build the latest nodejs on OpenBSD... 
(https://twitter.com/qb1t/status/789610796380598272) IBM considers donating Power8 servers to OpenBSD (https://marc.info/?l=openbsd-misc&m=147680858507662&w=2) Install and configure DNS server in FreeBSD (https://galaxy.ansible.com/vbotka/freebsd-dns/) bhyve vulnerability in FreeBSD 11.0 (https://www.freebsd.org/security/advisories/FreeBSD-SA-16:32.bhyve.asc) Feedback/Questions Larry - Pkg Issue (http://pastebin.com/8hwDVQjL) Larry - Followup (http://pastebin.com/3nswwk90) Jason - TrueOS (http://pastebin.com/pjfYWdXs) Matias - ZFS HALP! (http://pastebin.com/2tAmR5Wz) Robroy - User/Group (http://pastebin.com/7vWvUr8K) ***

BSD Now
149: The bhyve has been disturbed, and a wild Dexter appears!

BSD Now

Jul 6, 2016 140:43


Today on the show, we are going to be chatting with Michael Dexter about a variety of topics, but of course including bhyve! That plus This episode was brought to you by Headlines NetBSD Introduction (https://bsdmag.org/netbsd_intr/) We start off today's episode with a great new NetBSD article! Siju Oommen George has written an article for BSDMag, which provides a great overview of NetBSD's beginnings and what it is today. Of course you can't start an article about NetBSD without mentioning where the name came from: “The four founders of the NetBSD project, Chris Demetriou, Theo de Raadt, Adam Glass, and Charles Hannum, felt that a more open development model would benefit the project: one centered on portable, clean and correct code. They aimed to produce a unified, multi-platform, production-quality, BSD-based operating system. The name “NetBSD” was suggested by de Raadt, based on the importance and growth of networks, such as the Internet at that time, the distributed and collaborative nature of its development.” From there NetBSD has expanded, and keeping in line with its motto “Of course it runs NetBSD” it has grown to over 57 hardware platforms, including “IA-32, Alpha, PowerPC, SPARC, Raspberry Pi 2, SPARC64 and Zaurus” From there topics such as pkgsrc, SMP, embedded and of course virtualization are all covered, which gives the reader a good overview of what to expect in the modern NetBSD today. Lastly, in addition to mentioning some of the vendors using NetBSD in a variety of ways, including Point-Of-Sale systems, routers and thin-clients, you may not have known about the research teams which deploy NetBSD: NASA Lewis Research Center – Satellite Networks and Architectures Branch use NetBSD almost exclusively in their investigation of TCP for use in satellite networks. KAME project – A research group for implementing IPv6, IPsec and other recent TCP/IP related technologies into BSD UNIX kernels, under BSD license. NEC Europe Ltd.
established the Network Laboratories in Heidelberg, Germany in 1997, as NEC's third research facility in Europe. The Heidelberg labs focus on software-oriented research and development for the next generation Internet. SAMS-II Project – Space Acceleration Measurement System II. NASA will be measuring the microgravity environment on the International Space Station using a distributed system, consisting of NetBSD.“ My condolences, you're now the maintainer of a popular open source project (https://runcommand.io/2016/06/26/my-condolences-youre-now-the-maintainer-of-a-popular-open-source-project/) A presentation from a Wordpress conference, about what it is like to be the maintainer of a popular open source project The presentation covers the basics: Open Source is more than just the license, it is about community and involvement The difference between Maintainers and Contributors It covers some of the reasons people do not open up their code, and other common problems people run into: “I'm embarrassed by my code” (Hint: so is everyone else, post it anyway, it is the best way to learn) “I'm discouraged that I can't finish releases on time” “I'm overwhelmed by the PR backlog” “I'm frustrated when issues turn into flamewars” “I'm overcommitted on my open source involvement” “I feel all alone” Each of those points is met with advice and possible solutions So, there you have it. Open up your code, or join an existing project and help maintain it *** FreeBSD Committer Allan Jude Discusses the Advantages of FreeBSD and His Role in Keeping Millions of Servers Running (http://www.hostingadvice.com/blog/freebsd-project-under-the-hood/) An interesting twist on our normal news-stories today, we have an article featuring our very own Allan Jude, talking about why FreeBSD and the advantages of working on an open-source project. “When Allan started his own company hosting websites for video streaming, FreeBSD was the only operating system he had previously used with other hosts. 
Based on his experience and comfort with it, he trusted the system with the future of his budding business.A decade later, the former-SysAdmin went to a conference focused on the open-source operating system, where he ran into some of the folks on its documentation team. “They inspired me,” he told our team in a recent chat. He began writing documentation but soon wanted to contribute improvements beyond the docs.Today, Allan sits as a FreeBSD Project Committer. It's rare that you get to chat with someone involved with a massive-scale open-source project like this — rare and awesome.” From there Allan goes into some of the reasons “Why” FreeBSD, starting with Code Organization being well-maintained and documented: “The FreeBSD Project functions like an extremely well-organized world all its own. Allan explained the environment: “There's a documentation page that explains how the file system's laid out and everything has a place and it always goes in that place.”” + In addition, Allan gives us some insight into his work to bring Boot-Environments to the loader, and other reasons why FreeBSD “just makes sense” + In summary Allan wraps it up quite nicely: “An important take-away is that you don't have to be a major developer with tons of experience to make a difference in the project,” Allan said — and the difference that devs like Allan are making is incredible. If you too want to submit the commit that contributes to the project relied on by millions of web servers, there are plenty of ways to get involved! We're especially talking to SysAdmins here, as Allan noted that they are the main users of FreeBSD. 
“Having more SysAdmins involved in the actual build of the system means we can offer the tools they're looking for — designed the way a SysAdmin would want them designed, not necessarily the way a developer would think makes the most sense” A guide to saving electricity and time with poudriere and bhyve (http://justinholcomb.me/blog/2016/07/03/poudriere-in-bhyve-and-bare-metal.html) “This article goes over running poudriere to built packages for a Raspberry Pi with the interesting twist of running it both as a bhyve guest and then switching to running on bare metal via Fiber Channel via ctld by sharing the same ZFS volume.” “Firstly, poudriere can build packages for different architectures such as ARM. This can save hours of build time compared to building ports from said ARM device.” “Secondly, let's say a person has an always-on device (NAS) running FreeBSD. To save power, this device has a CPU with a low clock-rate and low core count. This low clock-rate and core count is great for saving power but terrible for processor intensive application such as poudriere. 
Let's say a person also has another physical server with fast processors and a high CPU count but draws nearly twice the power and a fan noise to match.” “To get the best of both worlds, the goal is to build the packages on the fast physical server, power it down, and then start the same ZFS volume in a bhyve environment to serve packages from the always-on device.” The tutorial walks through setting up ‘ahost', the always on machine, ‘fhost' the fast but noisy build machine, and a raspberry pi It also includes creating a zvol, configuring iSCSI over fibre channel and exporting the zvol, booting an iSCSI volume in bhyve, plus installing and setting up poudriere This it configures booting over fibre channel, and cross-building armv6 (raspberry pi) packages on the fast build machine Then the fast machine is shut down, and the zvol is booted in bhyve on the NAS Everything you need to know to make a hybrid physical/virtual machine The same setup could also work to run the same bhyve VM from either ahost or fhost bhyve does not yet support live migration, but when it does, having common network storage like the zvol will be an important part of that *** Interview - Michael Dexter - editor@callfortesting.org (mailto:editor@callfortesting.org) / @michaeldexter (https://twitter.com/michaeldexter) The RoloDexter *** iXSystems Children's Minnesota Star Studio Chooses iXsystems' TrueNAS Storage (https://www.youtube.com/watch?v=FFbdQ_05e-0) *** News Roundup FreeBSD Foundation June 2016 Update (https://www.freebsdfoundation.org/wp-content/uploads/2016/06/FreeBSD-Foundation-June-2016-Update.pdf) The FreeBSD Foundation's June newsletter is out Make sure you submit the FreeBSD Community Survey (https://www.surveymonkey.com/r/freebsd2016) by July 7th: In addition to the opening message from the executive director of the foundation, the update includes details to sponsored work on the FreeBSD VM system, reports from a number of conferences the Foundation attended, including BSDCan 
The results of the foundation's yearly board meeting People the foundation recognized for their contributions to FreeBSD at BSDCan And an introduction to their new “Getting Started with FreeBSD” project *** [How-To] Building the FreeBSD OS from scratch (http://www.all-nettools.com/forum/showthread.php?34422-Building-the-FreeBSD-OS-from-scratch) A tutorial over at the All-NetTools.com forums that walks through building FreeBSD from scratch I am not sure why anyone would want to build Xorg from source, but you can It covers everything in quite a bit of detail, from the installation process through adding Xorg and a window manager from source It also includes tweaking some device node permissions for easier operation as a non-root user, and configuring the firewall *** Window Systems Should Be Transparent (http://doc.cat-v.org/bell_labs/transparent_wsys/) + Rob Pike of AT&T Labs writes about why Window Systems should be transparent This is an old paper (undated, but I think from the late 80s), but may contain some timeless insights “UNIX window systems are unsatisfactory. Because they are cumbersome and complicated, they are unsuitable companions for an operating system that is appreciated for its technical elegance” “A good interface should clarify the view, not obscure it” “Mux is one window system that is popular and therefore worth studying as an example of good design. (It is not commercially important because it runs only on obsolete hardware.) This paper uses mux as a case study to illustrate some principles that can help keep a user interface simple, comfortable, and unobtrusive. 
When designing their products, the purveyors of commercial window systems should keep these principles in mind.” There are not many commercial window systems anymore, but “open source” was not really a big thing when this paper was written *** Roger Faulkner, of Solaris fame passed away (http://permalink.gmane.org/gmane.comp.standards.posix.austin.general/12877) “RIP Roger Faulkner: creator of the One and True /proc, slayer of the M-to-N threading model -- and the godfather of post-AT&T Unix” @bcantrill: Another great Roger Faulkner story (https://twitter.com/bcantrill/status/750442169807171584) The story of how pgrep -w saved a monitor -- if not a life (https://news.ycombinator.com/item?id=4306515) @bcantrill: With Roger Faulkner, Tim led an engineering coup inside Sun that saved Solaris circa 2.5 (https://twitter.com/bcantrill/status/750442169807171584) *** Beastie Bits: Developer Ed Maste is requesting information from those who are users of libvgl. (https://lists.freebsd.org/pipermail/freebsd-stable/2016-June/084843.html) HEADS UP: DragonFly 4.5 world reneeds rebuilding (http://lists.dragonflybsd.org/pipermail/users/2016-June/249748.html) Chris Buechler is leaving the pfSense project, the entire community thanks you for your many years of service (https://blog.pfsense.org/?p=2095) GhostBSD 10.3-BETA1 now available (http://ghostbsd.org/10.3_BETA1) DragonFlyBSD adds nvmectl (http://lists.dragonflybsd.org/pipermail/commits/2016-June/500671.html) OPNsense 16.1.18 released (https://opnsense.org/opnsense-16-1-18-released/) bhyve_graphics hit CURRENT (https://svnweb.freebsd.org/base?view=revision&revision=302332) BUG Update FreeBSD Central Twitter account looking for a new owner (https://twitter.com/freebsdcentral/status/750053703420350465) NYCBUG meeting : Meet the Smallest BSDs: RetroBSD and LiteBSD, Brian Callahan (http://lists.nycbug.org/pipermail/talk/2016-July/016732.html) NYCBUG install fest @ HOPE (http://lists.nycbug.org/pipermail/talk/2016-June/016694.html) 
SemiBUG is looking for presentations for September and beyond (http://lists.nycbug.org/pipermail/semibug/2016-June/000107.html) Caleb Cooper is giving a talk on Crytpo at KnoxBUG on July 26th (http://knoxbug.org/content/2016-07-26) Feedback/Questions Leif - ZFS xfer (http://pastebin.com/vvASr64P) Zach - Python3 (http://pastebin.com/SznQHq7n) Dave - Versioning (http://pastebin.com/qkpjKEr0) David - Encrypted Disk Images (http://pastebin.com/yr7BUmv2) Eli - TLF in all the wrong places (http://pastebin.com/xby81NvC) ***
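As a rough illustration of the ctld(8) piece of the poudriere-and-bhyve guide above: exporting a shared zvol so that either the fast build machine or a bhyve guest can boot from it might look something like the fragment below. This is a sketch only: the pool name, target name and addresses are hypothetical, and the article itself exports the volume over Fibre Channel rather than an iSCSI portal.

```
# One-time: create the shared volume on the NAS
#   zfs create -V 64G tank/poudriere
#
# /etc/ctl.conf -- export the zvol via ctld(8)
portal-group pg0 {
	discovery-auth-group no-authentication
	listen 0.0.0.0
}

target iqn.2016-07.net.example:poudriere {
	auth-group no-authentication
	portal-group pg0
	lun 0 {
		path /dev/zvol/tank/poudriere
	}
}
```

With `service ctld start`, the same block device can then be attached by whichever machine (physical or bhyve guest) is doing the building at the moment, as long as only one of them has it mounted at a time.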

The Hot Aisle
The Hot Aisle – Fibre Channel is Dead, Right? With Dr. J Metz – Episode 40

The Hot Aisle

Play Episode Listen Later Jun 14, 2016 64:21


Dr. J Metz (@drjmetz), R&D Engineer for the Office of the CTO at Cisco (@cisco), joins us this week on The Hot Aisle and expertly educates us on a number of storage (and not-so-storage) related topics including but not limited to: Fibre Channel, FCoE, Ethernet, iSCSI, PCIe, NVMe, NVMf, RDMA over Ethernet, and more! […]

BSD Now
136: This is GNN

BSD Now

Play Episode Listen Later Apr 6, 2016 95:56


This week on the show, we will be interviewing GNN of the FreeBSD project to talk about the new TeachBSD initiative. That plus the latest BSD headlines, all coming your way right now!

This episode was brought to you by

Headlines

FreeBSD 10.3-RELEASE Announcement (https://www.freebsd.org/releases/10.3R/announce.html)

FreeBSD 10.3 has landed, with extended support until April 30, 2018. This is likely to be the last extended support release, as starting with 11, the new support model will encourage upgrading to the latest minor version by ending support for the previous minor version approximately 2 months after each point release. The major version / stable branch will still be supported for the same 5 year term. This will allow the FreeBSD project to move forward more quickly, while still providing the same level of long term support.

- The UEFI boot loader is much improved, and now supports booting root-on-ZFS, and the beastie menu
- The beastie menu itself has been updated with support for ZFS Boot Environments
- The CAM Target Layer (CTL) now supports High Availability, allowing the construction of much more advanced storage systems
- The 64-bit Linux emulation layer was backported
- Reroot support was added, allowing the system to boot off of a minimal image, such as an mfsroot, and then reload all of userland from a different root file system (such as iSCSI, NFS, etc.)
- The version of xz(1) has been updated to support multi-threaded compression
- sesutil(8) has been introduced, making it easier to manage large storage nodes
- Various ZFS updates
- As usual, a huge number of driver updates are also included

How to use OpenBSD with Libreboot: detailed instructions (https://lists.nongnu.org/archive/html/libreboot/2016-04/msg00010.html)

This tutorial covers installing OpenBSD on a Thinkpad X200 using Libreboot, a replacement for the traditional BIOS/firmware that comes from the manufacturer. "Since 5.9, OpenBSD supports EFI boot mode, which means that it also has had to support framebuffer out of the box, so lack of proprietary VGA BIOS blob is no longer a problem and you can boot it with unmodified Libreboot binary release 20150518." "In order to install OpenBSD on such a machine you will need some additional preparations, since regular install59.fs won't work because bsd.rd doesn't have a framebuffer console." A few extra steps are required to get it going, but they are outlined in the post. This may be very interesting to those who prefer not to depend on binary blobs.

Linking the FreeBSD base system with lld -- status update (http://lists.llvm.org/pipermail/llvm-dev/2016-March/096449.html)

The FreeBSD Foundation's Ed Maste provides an update on the LLVM mailing list about the progress of replacing the GNU linker with lld in the FreeBSD base system. "I'm pleased to report that I can now build a runnable FreeBSD system using lld as the linker (for buildworld), with a few workarounds and work-in-progress patches. I have not yet extensively tested the result but it is possible to login to the resulting system, and basic sanity tests I've tried are successful. Note that the kernel is still linked with ld.bfd."

Outstanding issues:

- Symbol version support (PR 23231). FreeBSD uses symbol versioning for backwards compatibility
- Linker script expression support (PR 26731). The FreeBSD kernel linker scripts contain expressions not currently supported by lld
- Library search paths. GNU ld automatically searches /lib, and lld does not
- The -N flag makes the text and data sections RW and does not page-align data. It is used by boot loader components
- The -dc flag assigns space to common symbols when producing relocatable output (-r). It is used by the /rescue build, which is a single binary assembled from a collection of individual tools (sh, ls, fsck, ...)
- -Y adds a path to the default library search path. It is used by the lib32 build, which provides i386 builds of the system libraries for compatibility with i386 applications

With the ongoing work, it might be possible for FreeBSD 11 to use lld by default, although it might be best to wait to throw that particular switch.

Your favorite billion-user company using BSD just flipped on encryption for all their users -- and it took 15 engineers to do it (http://www.wired.com/2016/04/forget-apple-vs-fbi-whatsapp-just-switched-encryption-billion-people/)

With the help of Moxie Marlinspike's Open Whisper Systems, WhatsApp has integrated the 'Signal' encryption system for all messages, calls, pictures, and videos sent between individuals or groups. It uses public key cryptography, very similar to GPG, but with automated public key servers. It also includes a system of QR codes to verify the identity of individuals in person, so you can be sure the person you are talking to is actually the person you met with. WhatsApp runs their billion-user network, using FreeBSD, with only about 50 engineers. Only 15 of those engineers were needed for the project that has now deployed complete end-to-end encryption across the entire network. The Wired article is very detailed and well worth the read.

Interview - George Neville-Neil - gnn@freebsd.org (mailto:gnn@freebsd.org) / @gvnn3 (https://twitter.com/gvnn3)
Teaching BSD with Tracing

News Roundup

Faces of FreeBSD 2016: Scott Long (https://www.freebsdfoundation.org/blog/faces-of-freebsd-2016-scott-long/)

It's been awhile since we've had a new entry in the "Faces of FreeBSD" series, but due to popular demand it's back! This installment features developer Scott Long, who currently works at Netflix, and previously at Yahoo and Adaptec. Scott got a very early start with BSD, first discovering 386BSD 0.1 on an FTP server at Berkeley, back in 1992. From there on it's been a journey, following along with FreeBSD since version 1.0 in 1993. So what stuff can we blame Scott for? In his own words: "I've been a source committer since 2000. I got my start by taking over maintainership of the Adaptec 'aac' RAID driver. From 2002-2006 I was the Release Engineer and was responsible for the 5.x and 6.x releases. Though the early 5.x releases were not great, they were necessary stepping stones to the success of FreeBSD 6.x and beyond. I'm exceptionally proud of my role in helping FreeBSD move forward during that time. I authored and maintained the 'mfi' and 'mps' storage drivers, the 'udf' filesystem driver, and several smaller sound and USB drivers. I've maintained, or at least touched, most of the storage device drivers in the system to some extent, and I implemented medium-grained locking on the CAM storage stack. Recently I've been working on overall system scalability and performance."

ASCII Flow (http://asciiflow.com/)

A website that lets you draw and share ASCII diagrams. Great for network layout maps, rack diagrams, protocol analysis, etc. Use it in your presentations and slides. Sample (https://drive.google.com/open?id=0BynxTTJrNUOKeWxCVm1ERExrNkU)

System Under Test: FreeBSD (http://lowlevelbits.org/system-under-test-freebsd/)

Part of a series looking at testing across a number of projects, this outlines the testing framework of FreeBSD and provides a mini-tutorial on how to run the tests. There are some other tests that are not covered, but this is due to a lack of documentation on the fact that the tests exist, and how to run them. There is much ongoing work in this area.

Worst April Fools Joke EVER! (http://www.rhyous.com/2016/04/01/microsoft-announces-it-is-acquiring-freebsd-for-300-million/)

While a bad April Fool's joke, it also shows some common misconceptions:

- The FreeBSD Foundation does not own the source repository; it is only the caretaker of the trademark, and other things that require a single legal entity
- OpenBSD and NetBSD are not 'sub-brands' of FreeBSD
- Bash was not ported to Windows; rather, Windows gained a system similar to FreeBSD's linux_compat
- It would be nice to have ZFS on Windows

Beastie Bits

- Credit where credit's due... (https://forums.freebsd.org/threads/55642/)
- M:Tier's OpenBSD packages and binpatches updated for 5.9 (https://stable.mtier.org/)
- NYC BUG Meeting (2016-04-06) - Debugging with LLVM, John Wolfe (http://www.nycbug.org/index.cgi)
- Need to create extremely high traffic loads? kq_sendrecv is worth checking out (http://lists.dragonflybsd.org/pipermail/commits/2016-March/459651.html)
- If you're in the Maryland region, CharmBUG has a meetup next week (http://www.meetup.com/CharmBUG/events/230048300/)
- How to get a desktop on DragonFly (https://www.dragonflybsd.org/docs/how_to_get_to_the_desktop/)
- Linux vs BSD Development Models (https://twitter.com/q5sys/status/717509675630084096)

Feedback/Questions

- Paulo - ZFS Setup (http://pastebin.com/raw/GrM0jKZK)
- Jonathan - Installation (http://pastebin.com/raw/13KCkhMU)
- Andrew - Career / School (http://pastebin.com/wsx90L2m)
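The in-person QR-code verification mentioned in the WhatsApp story boils down to comparing public-key fingerprints out of band. A minimal illustration of the idea in Python follows; note that the keys and the fingerprint format here are made up for the sketch, and this is not the actual Signal safety-number algorithm:

```python
import hashlib

def fingerprint(public_key: bytes) -> str:
    """Derive a short, human-comparable fingerprint from a public key."""
    digest = hashlib.sha256(public_key).hexdigest()
    # Group the first 32 hex characters into 4-character blocks so two
    # people can read them aloud or scan them as a QR code.
    return " ".join(digest[i:i + 4] for i in range(0, 32, 4))

# Hypothetical public keys, as handed out by the automated key server
key_server_gave_alice = bytes.fromhex("a1b2c3d4" * 8)
key_bob_actually_has = bytes.fromhex("a1b2c3d4" * 8)

# In-person verification: both devices display the fingerprint and the
# users check that the values match, defeating a man-in-the-middle.
print(fingerprint(key_server_gave_alice) == fingerprint(key_bob_actually_has))
```

If a man-in-the-middle substituted a key, the two fingerprints would differ and the in-person check would fail, which is the whole point of verifying out of band.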

BSD Now
134: Marking up the Ports tree

BSD Now

Play Episode Listen Later Mar 24, 2016 125:28


This week on the show, Allan and I have gotten a bit more sleep since AsiaBSDCon, which is excellent since there is a LOT of news to cover. That plus our interview with Ports SecTeam member Mark Felder. So keep it tuned right here!

This episode was brought to you by

Headlines

FreeNAS 9.10 Released (http://lists.freenas.org/pipermail/freenas-announce/2016-March/000028.html)

- OS: The base OS version for FreeNAS 9.10 is now FreeBSD 10.3-RC3, bringing in a huge number of OS-related bug fixes, performance improvements and new features.
- Directory Services: You can now connect to large AD domains with cache disabled.
- Reporting: Adds the ability to send collectd data to a remote graphite server.
- Hardware Support: Added support for the Intel I219-V & I219-LM Gigabit Ethernet chipset; added support for the Intel Skylake architecture; improved support for USB devices (like network adapters); USB 3.0 devices are now supported.
- Filesharing: Samba (SMB filesharing) updated from version 4.1 to 4.3.4; added a GUI feature to allow nfsv3-like ownership when using nfsv4; various bug fixes related to FreeBSD 10.
- Ports: FreeBSD ports updated to follow the FreeBSD 2016Q1 branch.
- Jails: FreeBSD jails now default to a FreeBSD 10.3-RC2 based template. Old jails, or systems on which jails have been installed, will still default to the previous FreeBSD 9.3 based template. Only those machines using jails for the first time (or deleting and recreating their jails dataset) will use the new template.
- bhyve: In the upcoming 10 release, the CLI will offer full support for managing virtual machines and containers. Until then, the iohyve command is bundled as a stop-gap solution to provide basic VM management support.

Ubuntu BSD's first Beta Release (https://sourceforge.net/projects/ubuntubsd/)

Under the category of "Where did this come from?", we have a first beta release of Ubuntu BSD. Specifically, it is Ubuntu, respun to use the FreeBSD kernel and ZFS natively. From looking at the minimal information up on SourceForge, we gather that it has a nice text-based installer, which supports ZFS configuration and iSCSI volume creation setups. Aside from that, it includes the XFCE desktop out of the box, but claims to be suitable for both desktops and servers alike right now. We will keep an eye on this; if anybody listening has already tested it out, maybe drop us a line with your thoughts on how this mash-up works out.

FreeBSD - a lesson in poor defaults (http://vez.mrsk.me/freebsd-defaults.txt)

Former BSD Now producer, and now OpenBSD developer, TJ, writes a post detailing the defaults he changes in a fresh FreeBSD installation. Maybe some of these should be the defaults, while others are definitely a personal preference, or are not as security related as they seem. A few of these are valid criticisms, but some of the defaults are chosen for a reason.

Specifically, the OpenSSH changes. So, you're a user, you install FreeBSD 10.0, and it comes with OpenSSH version X, which has some specific defaults. As guaranteed by the FreeBSD Project, you will have a nice smooth upgrade path to any version in the 10.x branch. Just because OpenSSH has released version Y doesn't mean that the upgrade can suddenly remove support for DSA keys, or re-add support for AES-CBC (which is not really weak, and which can be hardware accelerated, unlike most of the replacements). "FreeBSD is the team trying to increase the risk" is incorrect; they are trying to reduce the impact on the end user. Specifically, a user upgrading from 10.x to 10.3 should not end up locked out of their SSH server, or otherwise confronted by unexpected errors or slowdowns because of upstream changes.

I will note again, (and again), that the NONE cipher can NOT allow a user to "shoot themselves in the foot"; encryption is still used during the login phase, it is just disabled for the file transfer phase. The NONE cipher will refuse to work for an interactive session. While the post states that the NONE cipher doesn't improve performance that much, it in fact does. In my own testing: chacha20-poly1305 1.3 gbps, aes128-gcm (the fastest real cipher) 5.0 gbps, NONE cipher 6.3 gbps. That means that the NONE cipher is an hour faster to transfer 10 TB over the LAN.

- The article suggests just removing sendmail, with no replacement. Not sure how they expect users to deliver mail, or the daily/weekly reports.
- Ports can be compiled as a regular user. Only the install phase requires root.
- For ntpd, it is not clear that there is an acceptable replacement yet, but I will note that it is off by default.
- In the sysctl section, I am not sure I see how enabling tcp blackhole actually increases security at all.
- I am not sure that linking to every security advisory in OpenSSL since 2001 is actually useful.
- Encrypted swap is an option in bsdinstall now, but I am not sure it is really that important.
- FreeBSD now uses the Fortuna PRNG, upgraded to replace the older Yarrow, not vanilla RC4.

"The resistance from the security team to phase out legacy options makes me wonder if they should be called a compatibility team instead." I do not think this is the choice of the security team; it is the ABI guarantee that the project makes. The stable/10 branch will always have the same ABI, and a program or driver compiled against it will work with any version on that branch. The security team doesn't really have a choice in the matter.
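A quick back-of-the-envelope check of the NONE-cipher claim above (an hour faster to move 10 TB), using the quoted throughput numbers and assuming sustained line rates with decimal units (10 TB = 80,000 gigabits):

```python
# Transfer time for 10 TB at the quoted SSH cipher throughputs
data_gbit = 10 * 8 * 1000          # 10 TB expressed in gigabits

def hours(rate_gbps: float) -> float:
    """Transfer time in hours at a sustained rate in gigabits per second."""
    return data_gbit / rate_gbps / 3600

aes_gcm = hours(5.0)   # aes128-gcm, the fastest real cipher in the quoted test
none = hours(6.3)      # NONE cipher

print(round(aes_gcm - none, 2))    # difference in hours, roughly 0.92
```

So the gap between 5.0 gbps and 6.3 gbps works out to a little over 55 minutes for a 10 TB transfer, which matches the "an hour faster" claim.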
Switching the version of OpenSSL used in FreeBSD 9.x would likely break a large number of applications the user has installed Something may need to be done differently, since it doesn't look like any version of OpenSSL, (or OpenSSH), will be supported for 5 years ever again *** ZFS Raidz Performance, Capacity and Integrity (https://calomel.org/zfs_raid_speed_capacity.html) An updated version of an article comparing the performance of various ZFS vdev configurations The settings users in the test may not reflect your workload If you are benchmarking ZFS, consider using multiple files across different datasets, and not making all of the writes synchronous Also, it is advisable to run more than 3 runs of each test Comparing the numbers from the 12 and 24 disk tests, it is surprising to see that the 12 mirror sets did not outperform the other configurations. In the 12 drive tests, the 6 mirror sets had about the same read performance as the other configurations, it is not clear why the performance with more disks is worse, or why it is no longer in line with the other configurations More investigation of this would be required There are obviously so other bottlenecks, as 5x SSDs in RAID-Z1 performed the same as 17x SSDs in RAID-Z1 Interesting results none the less *** iXSystems FreeNAS Mini Review (http://www.nasanda.com/2016/03/ixsystems-freenas-mini-nas-device-reviewed/) Interview - Mark Felder - feld@freebsd.org (mailto:feld@freebsd.org) / @feldpos (https://twitter.com/feldpos) Ports, Ports and more Ports DigitalOcean Digital Ocean's guide to setting up an OpenVPN server (https://www.digitalocean.com/community/tutorials/how-to-configure-and-connect-to-a-private-openvpn-server-on-freebsd-10-1) News Roundup AsiaBSDCon OpenBSD Papers (http://undeadly.org/cgi?action=article&sid=20160316153158&mode=flat&count=0) + Undeadly.org has compiled a handy list of the various OpenBSD talks / papers that were offered a few weeks ago at AsiaBSDCon 2016. 
Antoine Jacoutot (ajacoutot@) - OpenBSD rc.d(8) (slides | paper) Henning Brauer (henning@) - Running an ISP on OpenBSD (slides) Mike Belopuhov (mikeb@) - Implementation of Xen PVHVM drivers in OpenBSD (slides | paper) Mike Belopuhov (mikeb@) - OpenBSD project status update (slides) Mike Larkin (mlarkin@) - OpenBSD vmm Update (slides) Reyk Floeter (reyk@) - OpenBSD vmd Update (slides) Each talk provides slides, and some the papers as well. Also included is the update to ‘vmm' discussed at bhyveCon, which will be of interest to virtualization enthusiasts. *** Bitcoin Devs could learn a lot from BSD (http://bitcoinist.net/bitcoin-devs-could-learn-a-lot-from-bsd/) An interesting article this week, comparing two projects that at first glance may not be entirely related, namely BitCoin and BSD. The article first details some of the woes currently plaguing the BitCoin development community, such as toxic community feedback to changes and stakeholders with vested financial interests being unable to work towards a common development purpose. This leads into the crux or the article, about what BitCoin devs could learn from BSD: First and foremost, the way code is developed needs change to stop the current negative trend in Bitcoin. The FreeBSD project has a rigid internal hierarchy of people with write access to their codebase, which the various Bitcoin implementations also have, but BSD does this in a way that is very open to fresh eyes on their code, allowing parallel problem solving without the petty infighting we see in Bitcoin. Anyone can propose a commit publicly to the code, make it publicly available, and democratically decide which change ends up in the codebase. FreeBSD has a tiny number of core developers compared to the size of their codebase, but at any point, they have a huge community advancing their project without hard forks popping up at every small disagreement. 
Brian Armstrong commented recently on this flaw with Bitcoin development, particularly with the Core Devs: “Being high IQ is not enough for a team to succeed. You need to make reasonable tradeoffs, collaborate, be welcoming, communicate, and be easy to work with. Any team that doesn't have this will be unable to attract top talent and will struggle long term. In my opinion, perhaps the biggest risk in Bitcoin right now is, ironically, one of the things which has helped it the most in the past: the Bitcoin Core developers.” A good summary of the culture that could be adopted is summed up as follows: The other thing Bitcoin devs could learn from is the BSD community's adoption of the Unix Design philosophy. Primarily “Worse is Better,” The rule of Diversity, and Do One Thing and Do It Well. “Worse is Better” emphasizes using extant functional solutions rather than making more complex ones, even if they would be more robust. The Rule of Diversity stresses flexibility of the program being developed, allowing for modification and different implementations without breaking. Do one Thing and Do it well is a mantra of the BSD and Unix Communities that stresses modularity and progress over “perfect” solutions. Each of these elements help to make BSD a wildly successful open source project with a healthy development community and lots of inter-cooperation between the different BSD systems. While this is the opposite of what we see with Bitcoin at present, the situation is salvageable provided changes like this are made, especially by Core Developers. All in all, a well written and interesting take on the FreeBSD/BSD project. We hope the BitCoin devs can take something useful from it down the road. *** FreeBSD cross-compiling with gcc and poudriere (http://ben.eficium.net/2016/03/freebsd-cross-compiling-with-gcc.html) Cross-Compiling, always a challenge, has gotten easier using poudriere and qemu in recent years. 
However this blog post details some of the particular issues still being face when trying to compile some certain ports for ARM (I.E. rPi) that don't play nicely with FreeBSD's default CLANG compiler. The writer (Ben Slack) takes us through some of the work-arounds he uses to build some troublesome ports, namely lsof and libatomic_ops. Note this is not just an issue with cross compile, the above mentioned ports also don't build with clang on the Pi directly. After doing the initial poudriere/qemu cross-compile setup, he then shows us the minor tweaks to adjust which compiler builds specific ports, and how he triggers the builds using poudriere. With the actual Makefile adjustment being so minor, one wonders if this shouldn't just be committed upstream, with some if (ARM) - USE_GCC=yes type conditional. *** Nvidia releases new Beta graphics driver for FreeBSD (https://devtalk.nvidia.com/default/topic/925607/unix-graphics-announcements-and-news/linux-solaris-and-freebsd-driver-364-12-beta-/) Added support for the following GPUs: GeForce 920MX & GeForce 930MX Added support for the Vulkan API version 1.0. Fixed a bug that could cause incorrect frame rate reporting on Quadro Sync configurations with multiple GPUs. Added a new RandR property, CscMatrix, which specifies a 3x4 color-space conversion matrix. Improved handling of the X gamma ramp on GF119 and newer GPUs. On these GPUs, the RandR gamma ramp is always 1024 entries and now applies to the cursor and VDPAU or workstation overlays in addition to the X root window. 
Fixed several bugs and added several other EGL extensions *** Beastie Bits New TN BUG started (http://knoxbug.org/) DragonFlyBSD network/TCP performance gets a bump (http://gitweb.dragonflybsd.org/dragonfly.git/commitdiff/4a43469a10cef8c17553c342aab9d73611ea7bc8?utm_source=anzwix) FreeBSD Foundation introduces a new website and logo (https://www.freebsdfoundation.org/blog/introducing-a-new-look-for-the-foundation/) Our producer made these based on the new logo: http://q5sys.sh/2016/03/a-new-freebsd-foundation-logo-means-its-time-for-some-new-wallpapers/ http://q5sys.sh/2016/03/pc-bsd-and-lumina-desktop-wallpapers/ https://github.com/pcbsd/lumina/commit/60314f46247b7ad6e877af503b3814b0be170da8 IPv6 errata for 5.7/5.8, pledge errata for 5.9 (http://undeadly.org/cgi?action=article&sid=20160316190937&mode=flat) Sponsoring “PAM Mastery” (http://blather.michaelwlucas.com/archives/2577) A visualization of FreeBSD commits on GitHub for 2015 (https://rocketgraph.com/s/v89jBkKN4e-) The VAX platform is no more (http://undeadly.org/cgi?action=article&sid=20160309192510) Feedback/Questions Hunter - Utils for Blind (http://slexy.org/view/s20KPYDOsq) Chris - ZFS Quotas (http://slexy.org/view/s2EHdI3z3L) Anonymous - Tun, Tap and Me! (http://slexy.org/view/s21Nx1VSiU) Andrew - Navigating the BSDs (http://slexy.org/view/s2ZKK2DZTL) Brent - Wifi on BSD (http://slexy.org/view/s20duO29mN) ***

BSD Now
112: Tracing the source

BSD Now

Play Episode Listen Later Oct 21, 2015 58:53


This week Allan is away at a ZFS conference. This episode was brought to you by Headlines pfSense 2.3 alpha snapshots available (https://blog.pfsense.org/?p=1854) pfSense 2.3 Features and Changes (https://doc.pfsense.org/index.php/2.3_New_Features_and_Changes) The entire front end has been re-written. Upgrade of the base OS to FreeBSD 10-STABLE. The PPTP server component has been removed; PBIs have been replaced with pkg. PHP upgraded to 5.6. The web interface has been converted to Bootstrap *** BSDMag October 2015 out (http://bsdmag.org/download/bsd-09-2015/) A Look at the New PC-BSD 10.2 - Kris Moore Basis Of The Lumina Desktop Environment 18 - Ken Moore A Secure Webserver on FreeBSD with Hiawatha - David Carlier Defeating CryptoLocker Attacks with ZFS - Michael Dexter Emerging Technology Has Increasingly Been a Force for Both Good and Evil - Rob Somerville Interviews with: Dru Lavigne, Luca Ferrari, Oleksandr Rybalko *** OPNsense 15.7.14 Released (https://opnsense.org/opnsense-15-7-14-released/) Another update to OPNsense has landed! Some of the notable takeaways this time: it isn't a security update. Major rework of the firewall rules sections, including the rules, schedules, virtual IP, NAT and aliases pages. Latest BIND and Squid packages. Improved configuration management, including fixes to importing an old config file. New location for configuration history / backups. 
*** OpenBSD in Toyota Highlander (http://marc.info/?l=openbsd-misc&m=144327954931983&w=2) Images (http://imgur.com/a/SMVdp) While looking through the ‘Software Information' screen of a Toyota Highlander, Chad Dougherty of the ACM found a bunch of OpenBSD copyright notices, at least one of which I recognize as OpenCrypto because of the comment about “transforms”. It is likely that the vehicle is running QNX, which contains various bits of BSD. QNX: Third Party License Terms List version 2.17 (http://support7.qnx.com/download/download/25111/TPLTL.v2.17.Jul23-13.pdf) Some highlights: Robert N. M. Watson (FreeBSD) TrustedBSD Project (FreeBSD) NetBSD Foundation NASA Ames Research Center (NetBSD) Damien Miller (OpenBSD) Theo de Raadt (OpenBSD) Sony Computer Science Laboratories Inc. Bob Beck (OpenBSD) Christos Zoulas (NetBSD) Markus Friedl (OpenBSD) Henning Brauer (OpenBSD) Network Associates Technology, Inc. (FreeBSD) 100s of others. OpenSSH seems to be included. It also seems to contain tcpdump for some reason. Interview - Adam Leventhal - adam.leventhal@delphix.com (mailto:adam.leventhal@delphix.com) / @ahl (https://twitter.com/ahl) ZFS and DTrace Beastie-Bits isboot, an iSCSI boot driver for FreeBSD 9 and 10 (https://lists.freebsd.org/pipermail/freebsd-current/2015-September/057572.html) tame() is now called pledge() (http://marc.info/?l=openbsd-tech&m=144469071208559&w=2) Interview with NetBSD developer Leonardo Taccari (http://beastie.pl/deweloperzy-netbsd-7-0-leonardo-taccari/) FuguIta releases a LiveCD based on OpenBSD 5.8 (http://fuguita.org/index.php?FuguIta) DTrace toolkit gets an update and is imported into NetBSD (http://mail-index.netbsd.org/source-changes/2015/09/30/msg069173.html) An older article about how to do failover / load-balancing in pfSense (http://www.tecmint.com/how-to-setup-failover-and-load-balancing-in-pfsense/) Feedback/Questions Michael writes in (http://slexy.org/view/s217HyOZ9U) Possniffer writes in (http://slexy.org/view/s2YODjppwX) Erno writes in 
(http://slexy.org/view/s21xltQ6jd) ***

BSD Now
64: Rump Kernels Revisited

BSD Now

Play Episode Listen Later Nov 19, 2014 113:32


This time on the show, we'll be talking with Justin Cormack about NetBSD rump kernels. We'll learn how to run them on other operating systems, what's planned for the future and a lot more. As always, answers to viewer-submitted questions and all the news for the week, on BSD Now - the place to B.. SD. This episode was brought to you by Headlines EuroBSDCon 2014 talks and tutorials (http://2014.eurobsdcon.org/talks-and-schedule/) The 2014 EuroBSDCon videos have been online for over a month, but unannounced - keep in mind these links may be temporary (but we'll mention their new location in a future show and fix the show notes if that's the case) Arun Thomas, BSD ARM Kernel Internals (https://va.ludost.net/files/eurobsdcon/2014/Rodopi/03.Saturday/01.BSD-ARM%20Kernel%20Internals%20-%20Arun%20Thomas.mp4) Ted Unangst, Developing Software in a Hostile Environment (https://va.ludost.net/files/eurobsdcon/2014/Rodopi/03.Saturday/02.Developing%20Software%20in%20a%20Hostile%20Environment%20-%20Ted%20Unangst.mp4) Martin Pieuchot, Taming OpenBSD Network Stack Dragons (https://va.ludost.net/files/eurobsdcon/2014/Rodopi/03.Saturday/03.Taming%20OpenBSD%20Network%20Stack%20Dragons%20-%20Martin%20Pieuchot.mp4) Henning Brauer, OpenBGPD turns 10 years (https://va.ludost.net/files/eurobsdcon/2014/Rodopi/03.Saturday/04.OpenBGPD%20turns%2010%20years%20-%20%20Henning%20Brauer.mp4) Claudio Jeker, vscsi and iscsid iSCSI initiator the OpenBSD way (https://va.ludost.net/files/eurobsdcon/2014/Rodopi/03.Saturday/05.vscsi(4)%20and%20iscsid%20-%20iSCSI%20initiator%20the%20OpenBSD%20way%20-%20Claudio%20Jeker.mp4) Paul Irofti, Making OpenBSD Useful on the Octeon Network Gear (https://va.ludost.net/files/eurobsdcon/2014/Rodopi/03.Saturday/06.Making%20OpenBSD%20Useful%20on%20the%20Octeon%20Network%20Gear%20-%20Paul%20Irofti.mp4) Baptiste Daroussin, Cross Building the FreeBSD ports tree 
(https://va.ludost.net/files/eurobsdcon/2014/Rodopi/04.Sunday/01.Cross%20Building%20the%20FreeBSD%20ports%20tree%20-%20Baptiste%20Daroussin.mp4) Boris Astardzhiev, Smartcom's control plane software, a customized version of FreeBSD (https://va.ludost.net/files/eurobsdcon/2014/Rodopi/04.Sunday/02.Smartcom%e2%80%99s%20control%20plane%20software,%20a%20customized%20version%20of%20FreeBSD%20-%20Boris%20Astardzhiev.mp4) Michał Dubiel, OpenStack and OpenContrail for FreeBSD platform (https://va.ludost.net/files/eurobsdcon/2014/Rodopi/04.Sunday/03.OpenStack%20and%20OpenContrail%20for%20FreeBSD%20platform%20-%20Micha%c5%82%20Dubiel.mp4) Martin Husemann & Joerg Sonnenberger, Tool-chaining the Hydra, the ongoing quest for modern toolchains in NetBSD (https://va.ludost.net/files/eurobsdcon/2014/Rodopi/04.Sunday/04.(Tool-)chaining%20the%20Hydra%20The%20ongoing%20quest%20for%20modern%20toolchains%20in%20NetBSD%20-%20Martin%20Huseman%20&%20Joerg%20Sonnenberger.mp4) Taylor R Campbell, The entropic principle: /dev/u?random and NetBSD (https://va.ludost.net/files/eurobsdcon/2014/Rodopi/04.Sunday/05.The%20entropic%20principle:%20dev-u%3frandom%20and%20NetBSD%20-%20Taylor%20R%20Campbell.mp4) Dag-Erling Smørgrav, Securing sensitive & restricted data (https://va.ludost.net/files/eurobsdcon/2014/Rodopi/04.Sunday/06.Securing%20sensitive%20&%20restricted%20data%20-%20Dag-Erling%20Sm%c3%b8rgrav.mp4) Peter Hansteen, Building The Network You Need (https://va.ludost.net/files/eurobsdcon/2014/Pirin/01.Thursday/01.Building%20The%20Network%20You%20Need%20With%20PF%20-%20Peter%20Hansteen.mp4) With PF (https://va.ludost.net/files/eurobsdcon/2014/Pirin/01.Thursday/02.Building%20The%20Network%20You%20Need%20With%20PF%20-%20Peter%20Hansteen.mp4) Stefan Sperling, Subversion for FreeBSD developers (https://va.ludost.net/files/eurobsdcon/2014/Pirin/01.Thursday/03.Subversion%20for%20FreeBSD%20developers%20-%20Stefan%20Sperling.mp4) Peter Hansteen, Transition to 
(https://va.ludost.net/files/eurobsdcon/2014/Pirin/02.Friday/01.Transition%20to%20OpenBSD%205.6%20-%20Peter%20Hansteen.mp4) OpenBSD 5.6 (https://va.ludost.net/files/eurobsdcon/2014/Pirin/02.Friday/02.Transition%20to%20OpenBSD%205.6%20-%20Peter%20Hansteen.mp4) Ingo Schwarze, Let's make manuals (https://va.ludost.net/files/eurobsdcon/2014/Pirin/02.Friday/03.Let%e2%80%99s%20make%20manuals%20more%20useful%20-%20Ingo%20Schwarze.mp4) more useful (https://va.ludost.net/files/eurobsdcon/2014/Pirin/02.Friday/04.Let%e2%80%99s%20make%20manuals%20more%20useful%20-%20Ingo%20Schwarze.mp4) Francois Tigeot, Improving DragonFly's performance with PostgreSQL (https://va.ludost.net/files/eurobsdcon/2014/Pirin/03.Saturday/01.Improving%20DragonFly%e2%80%99s%20performance%20with%20PostgreSQL%20-%20Francois%20Tigeot.mp4) Justin Cormack, Running Applications on the NetBSD Rump Kernel (https://va.ludost.net/files/eurobsdcon/2014/Pirin/03.Saturday/02.Running%20Applications%20on%20the%20NetBSD%20Rump%20Kernel%20-%20Justin%20Cormack.mp4) Pierre Pronchery, EdgeBSD, a year later (https://va.ludost.net/files/eurobsdcon/2014/Pirin/03.Saturday/04.EdgeBSD,%20a%20year%20later%20-%20%20Pierre%20Pronchery.mp4) Peter Hessler, Using routing domains or tables in a production network (https://va.ludost.net/files/eurobsdcon/2014/Pirin/03.Saturday/05.Using%20routing%20domains%20or%20tables%20in%20a%20production%20network%20-%20%20Peter%20Hessler.mp4) Sean Bruno, QEMU user mode on FreeBSD (https://va.ludost.net/files/eurobsdcon/2014/Pirin/03.Saturday/06.QEMU%20user%20mode%20on%20FreeBSD%20-%20%20Sean%20Bruno.mp4) Kristaps Dzonsons, Bugs Ex Ante (https://va.ludost.net/files/eurobsdcon/2014/Pirin/04.Sunday/01.Bugs%20Ex%20Ante%20-%20Kristaps%20Dzonsons.mp4) Yann Sionneau, Porting NetBSD to the LatticeMico32 open source CPU (https://va.ludost.net/files/eurobsdcon/2014/Pirin/04.Sunday/02.Porting%20NetBSD%20to%20the%20LatticeMico32%20open%20source%20CPU%20-%20Yann%20Sionneau.mp4) Alexander Nasonov, JIT Code 
Generator for NetBSD (https://va.ludost.net/files/eurobsdcon/2014/Pirin/04.Sunday/03.JIT%20Code%20Generator%20for%20NetBSD%20-%20Alexander%20Nasonov.mp4) Masao Uebayashi, Porting Valgrind to NetBSD and OpenBSD (https://va.ludost.net/files/eurobsdcon/2014/Pirin/04.Sunday/04.Porting%20Valgrind%20to%20NetBSD%20and%20OpenBSD%20-%20Masao%20Uebayashi.mp4) Marc Espie, parallel make, working with legacy code (https://va.ludost.net/files/eurobsdcon/2014/Pirin/04.Sunday/05.parallel%20make:%20working%20with%20legacy%20code%20-%20Marc%20Espie.mp4) Francois Tigeot, Porting the drm-kms graphic drivers to DragonFly (https://va.ludost.net/files/eurobsdcon/2014/Pirin/04.Sunday/06.Porting%20the%20drm-kms%20graphic%20drivers%20to%20DragonFly%20-%20Francois%20Tigeot.mp4) The following talks (from the Vitosha track room) are all currently missing: Jordan Hubbard, FreeBSD, Looking forward to another 10 years (but we have another recording) Theo de Raadt, Randomness, how arc4random has grown since 1998 (but we have another recording) Kris Moore, Snapshots, Replication, and Boot-Environments Kirk McKusick, An Introduction to the Implementation of ZFS John-Mark Gurney, Optimizing GELI Performance Emmanuel Dreyfus, FUSE and beyond, bridging filesystems Lourival Vieira Neto, NPF scripting with Lua Andy Tanenbaum, A Reimplementation of NetBSD Based on a Microkernel Stefano Garzarella, Software segmentation offloading for FreeBSD Ted Unangst, LibreSSL Shawn Webb, Introducing ASLR In FreeBSD Ed Maste, The LLDB Debugger in FreeBSD Philip Guenther, Secure lazy binding *** OpenBSD adopts SipHash (https://www.marc.info/?l=openbsd-tech&m=141614801713457&w=2) Even more DJB crypto somehow finds its way into OpenBSD's base system This time it's SipHash (https://131002.net/siphash/), a family of pseudorandom functions that's resistant to hash bucket flooding attacks while still providing good performance After an initial import 
(http://cvsweb.openbsd.org/cgi-bin/cvsweb/src/sys/crypto/siphash.c?rev=1.1&content-type=text/x-cvsweb-markup) and some clever early usage (https://www.marc.info/?l=openbsd-cvs&m=141604896822253&w=2), a few developers agreed that it would be better to use it in a lot more places. It will now be used in the filesystem, and the plan is to utilize it to protect all kernel hash functions. Some other places (http://www.bsdnow.tv/episodes/2013_12_18-cryptocrystalline) where Bernstein's work can be found in OpenBSD include the ChaCha20-Poly1305 authenticated stream cipher and Curve25519 KEX used in SSH, ChaCha20 used in the RNG, and Ed25519 keys used in signify (http://www.bsdnow.tv/episodes/2014_02_05-time_signatures) and SSH *** FreeBSD 10.1-RELEASE (https://www.freebsd.org/releases/10.1R/announce.html) FreeBSD's release engineering team (http://www.bsdnow.tv/episodes/2013-09-11_engineering_powder_kegs) likes to troll us by uploading new versions just a few hours after we finish recording an episode. The first maintenance update for the 10.x branch is out, improving upon a lot of things found in 10.0-RELEASE. The vt driver was merged from -CURRENT and can now be enabled with a loader.conf switch (and can even be used on a PlayStation 3). Bhyve has gotten quite a lot of fixes and improvements since its initial debut in 10.0, including boot support for ZFS. Lots of new ARM hardware is supported now, including SMP support for most of it. A new kernel selection menu was added to the loader, so you can switch between newer and older kernels at boot time. 10.1 is the first release to support UEFI booting on amd64, which also has serial console support now. Lots of third-party software (OpenSSH, OpenSSL, Unbound..) 
and drivers have gotten updates to newer versions. It's a worthy update from 10.0, or a good time to try the 10.x branch if you were avoiding the first .0 release, so grab an ISO (http://ftp.freebsd.org/pub/FreeBSD/ISO-IMAGES-amd64/10.1/) or upgrade (https://www.freebsd.org/cgi/man.cgi?query=freebsd-update) today. Check the detailed release notes (https://www.freebsd.org/releases/10.1R/relnotes.html) for more information on all the changes. Also take a look at some of the known problems (https://www.freebsd.org/releases/10.1R/errata.html#open-issues) to see if (https://forums.freebsd.org/threads/segmentation-fault-while-upgrading-from-10-0-release-to-10-1-release.48977/) you'll (https://lists.freebsd.org/pipermail/freebsd-stable/2014-October/080599.html) be (https://forums.freebsd.org/threads/10-0-10-1-diocaddrule-operation-not-supported-by-device.49016/) affected (https://www.reddit.com/r/freebsd/comments/2mmzzy/101release_restart_problems_anyone/) by any of them. PC-BSD was also updated accordingly (http://wiki.pcbsd.org/index.php/What%27s_New/10.1) with some of its own unique features and changes *** arc4random - Randomization for All Occasions (https://www.youtube.com/watch?v=aWmLWx8ut20) Theo de Raadt gave an updated version of his EuroBSDCon presentation at Hackfest 2014 in Quebec. The presentation is mainly about OpenBSD's arc4random function, and outlines the overall poor state of randomization in the 90s and how it has evolved in OpenBSD over time. It begins with some interesting history on OpenBSD and how it became a security-focused OS - in 1996, their syslogd got broken into and "suddenly we became interested in security". The talk also touches on how low-level changes can shake up the software ecosystem and third-party packages that everyone uses. There's some funny history on the name of the function (being called arc4random despite not using RC4 anymore) and an overall status update on various platforms' usage of it. Very detailed and informative 
presentation, and the slides can be found here (http://www.openbsd.org/papers/hackfest2014-arc4random/index.html). A great quote from the beginning: "We consider ourselves a community of (probably rather strange) people who work on software specifically for the purpose of trying to make it better. We take a 'whole-systems' approach: trying to change everything in the ecosystem that's under our control, trying to see if we can make it better. We gain a lot of strength by being able to throw backwards compatibility out the window. So that means that we're able to do research and the minute that we decide that something isn't right, we'll design an alternative for it and push it in. And if it ends up breaking everybody's machines from the previous stage to the next stage, that's fine because we'll end up in a happier place." *** Interview - Justin Cormack - justin@netbsd.org (mailto:justin@netbsd.org) / @justincormack (https://twitter.com/justincormack) NetBSD on Xen, rump kernels, various topics News Roundup The FreeBSD foundation's biggest donation (http://freebsdfoundation.blogspot.com/2014/11/freebsd-foundation-announces-generous.html) The FreeBSD Foundation has a new blog post about the largest donation they've ever gotten: from the CEO of WhatsApp comes a whopping one million dollars in a single donation. It also has some comments from the donor about why they use BSD and why it's important to give back. Be sure to donate to the foundation of whatever BSD you use when you can - every little bit helps, especially for OpenBSD (http://www.openbsd.org/donations.html), NetBSD (https://www.netbsd.org/donations/) and DragonFly (http://www.dragonflybsd.org/donations/) who don't have huge companies supporting them regularly like FreeBSD does *** OpenZFS Dev Summit 2014 videos (http://open-zfs.org/wiki/OpenZFS_Developer_Summit) Videos from the recent OpenZFS developer summit are being uploaded, with speakers from the different represented platforms and companies: Matt Ahrens 
(http://www.bsdnow.tv/episodes/2014_05_14-bsdcanned_goods), opening keynote (https://www.youtube.com/watch?v=XnTzbisLYzg) Raphael Carvalho, Platform Overview: ZFS on OSv (https://www.youtube.com/watch?v=TJLOBLSRoHE) Brian Behlendorf, Platform Overview: ZFS on Linux (https://www.youtube.com/watch?v=_MVOpMNV7LY) Prakash Surya, Platform Overview: illumos (https://www.youtube.com/watch?v=UtlGt3ag0o0) Xin Li, Platform Overview: FreeBSD (https://www.youtube.com/watch?v=xO0x5_3A1X4) All platforms, Group Q&A Session (https://www.youtube.com/watch?v=t4UlT0RmSCc) Dave Pacheco, Manta (https://www.youtube.com/watch?v=BEoCMpdB8WU) Saso Kiselkov, Compression (https://www.youtube.com/watch?v=TZF92taa_us) George Wilson (http://www.bsdnow.tv/episodes/2013_12_04-zettabytes_for_days), Performance (https://www.youtube.com/watch?v=deJc0EMKrM4) Tim Feldman, Host-Aware SMR (https://www.youtube.com/watch?v=b1yqjV8qemU) Pavel Zakharov, Fast File Cloning (https://www.youtube.com/watch?v=-4c4gsLi1LI) The audio is pretty poor (https://twitter.com/OpenZFS/status/534005125853888512) on all of them, unfortunately *** BSDTalk 248 (http://bsdtalk.blogspot.com/2014/11/bsdtalk248-dragonflybsd-with-matthew.html) Our friend Will Backman is still busy getting BSD interviews as well. This time he sits down with Matthew Dillon, the lead developer of DragonFly BSD. We've never had Dillon on the show, so you'll definitely want to give this one a listen. They mainly discuss all the big changes coming in DragonFly's upcoming 4.0 release *** MeetBSD 2014 videos (https://www.meetbsd.com/) The presentations from this year's MeetBSD conference are starting to appear online as well: Kirk McKusick (http://www.bsdnow.tv/episodes/2013-10-02_stacks_of_cache), A Narrative History of BSD (https://www.youtube.com/watch?v=DEEr6dT-4uQ) Jordan Hubbard (http://www.bsdnow.tv/episodes/2013_11_27-bridging_the_gap), FreeBSD: The Next 10 Years (https://www.youtube.com/watch?v=Mri66Uz6-8Y) Brendan Gregg, Performance Analysis 
(https://www.youtube.com/watch?v=uvKMptfXtdo) The slides can be found here (https://www.meetbsd.com/agenda/) *** Feedback/Questions Dominik writes in (http://slexy.org/view/s20PXjp55N) Steven writes in (http://slexy.org/view/s2LwEYT3bA) Florian writes in (http://slexy.org/view/s2ubK8vQVt) Richard writes in (http://slexy.org/view/s216Eq8nFG) Kevin writes in (http://slexy.org/view/s21D2ugDUy) *** Mailing List Gold Contributing without code (https://www.marc.info/?t=141600819500004&r=1&w=2) Compression isn't a CRIME (https://lists.mindrot.org/pipermail/openssh-unix-dev/2014-November/033176.html) Securing web browsers (https://www.marc.info/?t=141616714600001&r=1&w=2) ***
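A footnote on the SipHash story above: the point of a keyed hash for kernel hash tables is that the bucket an item lands in depends on a secret per-table key, so an attacker can't precompute a flood of inputs that all collide. A minimal sketch of that idea in Python, using BLAKE2b's keyed mode as a stand-in (the standard library doesn't expose SipHash directly; all names here are illustrative, not from OpenBSD's implementation):

```python
import hashlib
import secrets

def bucket_index(key: bytes, item: bytes, nbuckets: int) -> int:
    """Map an item to a bucket with a keyed 64-bit hash.

    BLAKE2b's keyed mode stands in for SipHash here; the property that
    matters is that the output is unpredictable without the key.
    """
    digest = hashlib.blake2b(item, digest_size=8, key=key).digest()
    return int.from_bytes(digest, "little") % nbuckets

# Each table draws its own secret key at creation time, so collisions
# crafted against one table's key are useless against another.
key_a = secrets.token_bytes(16)
key_b = secrets.token_bytes(16)

items = [b"flow-%d" % i for i in range(1000)]
buckets_a = [bucket_index(key_a, it, 256) for it in items]
buckets_b = [bucket_index(key_b, it, 256) for it in items]

# Same items, different keys: the bucket layout changes completely.
print("layouts identical:", buckets_a == buckets_b)
```

The kernel obviously can't afford a full cryptographic hash on every lookup; SipHash's appeal, as the story notes, is delivering this keyed property at speeds close to non-cryptographic hash functions.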

BSD Now
45: ZFS War Stories

BSD Now

Play Episode Listen Later Jul 9, 2014 46:28


This week Allan is at BSDCam in the UK, so we'll be back with a regular episode next week. For now though, here's an interview with Josh Paetzel about some crazy experiences he's had with ZFS. This episode was brought to you by Interview - Josh Paetzel - josh@ixsystems.com (mailto:josh@ixsystems.com) / @bsdunix4ever (https://twitter.com/bsdunix4ever) Crazy ZFS stories, network protocols, server hardware

BSD Now
21: Tendresse for Ten

BSD Now

Play Episode Listen Later Jan 22, 2014 107:05


This time on the show, we've got some great news for OpenBSD, as well as the scoop on FreeBSD 10.0-RELEASE - yes, it's finally here! We're gonna talk to Colin Percival about running FreeBSD 10 on EC2 and lots of other interesting stuff. After that, we'll be showing you how to do some bandwidth monitoring and network performance testing in a combo tutorial. We've got a round of your questions and the latest news, on BSD Now - the place to B.. SD. This episode was brought to you by Headlines FreeBSD 10.0-RELEASE is out (https://www.freebsd.org/releases/10.0R/announce.html) The long-awaited, giant release of FreeBSD is now official and ready to be downloaded (http://ftp.freebsd.org/pub/FreeBSD/ISO-IMAGES-amd64/10.0/). One of the biggest releases in FreeBSD history, with tons of new updates. Some features include: LDNS/Unbound replacing BIND, Clang by default (no GCC anymore), native Raspberry Pi support and other ARM improvements, bhyve, Hyper-V support, AMD KMS, VirtIO, Xen PVHVM in GENERIC, lots of driver updates, ZFS on root in the installer, SMP patches to pf that drastically improve performance, Netmap support, pkgng by default, wireless stack improvements, a new iSCSI stack, FUSE in the base system... the list goes on and on (https://www.freebsd.org/releases/10.0R/relnotes.html) Start up your freebsd-update or do a source-based upgrade *** OpenSSH 6.5 CFT (https://lists.mindrot.org/pipermail/openssh-unix-dev/2014-January/031987.html) Our buddy Damien Miller (http://www.bsdnow.tv/episodes/2013_12_18-cryptocrystalline) announced a Call For Testing for OpenSSH 6.5. It's a huge release, focused on new features rather than bugfixes (but it includes those too): new ciphers, new key formats, new config options; see the mailing list for all the details. It should be in OpenBSD 5.5 in May - look forward to it, but also help test on other platforms! 
*** DIY NAS story, FreeNAS 9.2.1-BETA (http://blog.brianmoses.net/2014/01/diy-nas-2014-edition.html) Another new blog post about FreeNAS! Instead of updating the older tutorials, the author started fresh and wrote a new one for 2014: "I did briefly consider suggesting nas4free for the EconoNAS blog, since it's essentially a fork off the FreeNAS tree but may run better on slower hardware, but ultimately I couldn't recommend anything other than FreeNAS" It's a really long article with lots of nice details about his setup, why you might want a NAS, etc. Speaking of FreeNAS, they released 9.2.1-BETA (http://www.freenas.org/whats-new/2014/01/freenas-9-2-1-beta-now-ready-for-download.html) with lots of bugfixes *** OpenBSD needed funding for electricity.. and they got it (https://news.ycombinator.com/item?id=7069889) Briefly mentioned at the end of last week's show, but it has blown up over the internet since. OpenBSD made the headlines of major tech news sites: Slashdot, ZDNet, The Register, Hacker News, Reddit, Twitter.. 
thousands of comments. They needed about $20,000 to cover electric costs for the server rack in Theo's basement (http://www.openbsd.org/images/rack2009.jpg). Lots of positive reaction from the community helping out so far, and it appears they have reached their goal (http://www.openbsdfoundation.org/campaign2104.html) and got $100,000 in donations. From Bob Beck: "we have in one week gone from being in a dire situation to having a commitment of approximately $100,000 in donations to the foundation" This is a shining example of the BSD community coming together, and even the Linux people realizing how critical BSD is to the world at large *** Interview - Colin Percival - cperciva@freebsd.org (mailto:cperciva@freebsd.org) / @cperciva (https://twitter.com/cperciva) FreeBSD on Amazon EC2 (http://www.daemonology.net/freebsd-on-ec2/), backups with Tarsnap (https://www.tarsnap.com/), 10.0-RELEASE, various topics Tutorial Bandwidth monitoring and testing (http://www.bsdnow.tv/tutorials/vnstat-iperf) News Roundup pfSense talk at Tokyo FreeBSD Benkyoukai (https://blog.pfsense.org/?p=1176) Isaac Levy will be presenting "pfSense Practical Experiences: from home routers, to High-Availability Datacenter Deployments". He's also going to be looking for help to translate the pfSense documentation into Japanese. The event is on February 17, 2014, if you're in the Tokyo area *** m0n0wall 1.8.1 released (http://m0n0.ch/wall/downloads.php) For those who don't know, m0n0wall is an older BSD-based firewall OS that's mostly focused on embedded applications. pfSense was forked from it in 2004, and has a lot more active development now. They switched to FreeBSD 8.4 for this new version. The full list of updates is in the changelog. This version requires at least 128MB RAM and a disk/CF size of 32MB or more, oh no! 
*** Ansible and PF, plus NTP (http://blather.michaelwlucas.com/archives/1933) Another blog post from our buddy Michael Lucas (http://www.bsdnow.tv/episodes/2013_11_06-year_of_the_bsd_desktop) There've been some NTP amplification attacks recently (https://www.freebsd.org/security/advisories/FreeBSD-SA-14:02.ntpd.asc) in the news. The post describes how he configured ntpd on a lot of servers without a lot of work, leveraging pf and ansible for the configuration. OpenNTPD is, not surprisingly, unaffected - use it *** ruBSD videos online (http://undeadly.org/cgi?action=article&sid=20140115054839) Just a quick followup from a few weeks ago: Theo and Henning's talks from ruBSD are now available for download. There's also a nice interview with Theo *** PCBSD weekly digest (http://blog.pcbsd.org/2014/01/pc-bsd-weekly-feature-digest-5/) 10.0-RC4 images are available. The Wine PBI is now available for 10. 9.2 systems will now be able to upgrade to version 10 and keep their PBI library *** Feedback/Questions Sha'ul writes in (http://slexy.org/view/s2WQXwMASZ) Kjell-Aleksander writes in (http://slexy.org/view/s2H0FURAtZ) Mike writes in (http://slexy.org/view/s21eKKPgqh) Charlie writes in (and gets a reply) (http://slexy.org/view/s21UMLnV0G) Kevin writes in (http://slexy.org/view/s2SuazcfoR) ***
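A footnote on the ntpd hardening story in the notes above: Michael's post pairs pf with ansible to lock down ntpd fleet-wide. A hypothetical pf.conf fragment (our sketch, not taken from his post; the interface and network values are placeholders) showing the general shape of such a lockdown:

```
# Sketch only -- not from the article. $lan_if / $lan_net are placeholders.
lan_if  = "em1"
lan_net = "192.168.1.0/24"

# Our own ntpd may query upstream servers; state handles the replies.
pass out on egress proto udp to port 123 keep state

# Serve time to the local network only.
pass in on $lan_if proto udp from $lan_net to port 123 keep state

# Drop unsolicited NTP from the outside (kills monlist amplification probes).
block in quick on egress proto udp to port 123
```

Note the "egress" interface group is an OpenBSD pf convenience; on other systems you'd name the external interface explicitly. And as the notes say, OpenNTPD sidesteps the issue entirely since it never implemented the abused monlist query.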

RunAs Radio
Next Generation Storage with Stephen Foskett

RunAs Radio

Play Episode Listen Later Oct 2, 2013 40:57


Richard chats with Stephen Foskett about where storage is at these days. The conversation spans far and wide, talking about how Microsoft's latest products (like Exchange 2013) are rather SAN-hostile, why we're all happy to get away from Fibre Channel, our indifference toward iSCSI and the impact of NFS and SMB3 on file systems. Stephen also talks about just how fast fast is these days - whether it's SSDs, PCIe-based storage or USB3 thumb drives! It's all about the IOPS! Make sure you check out Tech Field Day!

BSD Now
3: MX with TTX

BSD Now

Play Episode Listen Later Sep 18, 2013 61:16


We follow up last week's poudriere tutorial with a segment about using pkgng, talk with the developers of OpenSMTPD about running a mail server OpenBSD-style, answer YOUR questions and, of course, discuss all the latest news. All that and more on BSD Now! The place to B... SD. Headlines pfSense 2.1-RELEASE is out (http://blog.pfsense.org/?p=712) Now based on FreeBSD 8.3. Lots of IPv6 features added. Security updates, bug fixes, driver updates. PBI package support. Way too many updates to list; see the full list (https://doc.pfsense.org/index.php/2.1_New_Features_and_Changes) *** New kernel-based iSCSI stack comes to FreeBSD (https://lists.freebsd.org/pipermail/freebsd-current/2013-September/044237.html) Brief explanation of iSCSI. This work replaces the older userland iSCSI target daemon and improves the in-kernel iSCSI initiator. The target layer consists of ctld(8), a userspace daemon responsible for handling configuration, listening for incoming connections, etc., then handing off connections to the kernel after the iSCSI Login phase, and an iSCSI frontend to the CAM Target Layer, which handles the Full Feature phase. 
The work is being sponsored by the FreeBSD Foundation. Commit here (http://freshbsd.org/commit/freebsd/r255570) *** MTier creates openup utility for OpenBSD (http://www.mtier.org/index.php/solutions/apps/openup/) MTier provides a number of things for the OpenBSD community, for example regularly updated (for security) stable packages from their custom repo. openup is a utility to easily check for security updates in both base and packages. It uses the regular pkg tools, nothing custom-made. It can be run from cron, but only emails the admin instead of automatically updating *** OpenSSH in FreeBSD -CURRENT supports DNSSEC (https://lists.freebsd.org/pipermail/freebsd-security/2013-September/007180.html) OpenSSH in base is now compiled with DNSSEC support. In this case the default setting for ‘VerifyHostKeyDNS' is yes, so OpenSSH will silently trust DNSSEC-signed SSHFP records. It is the secteam's opinion that this is better than teaching users to blindly hit “yes” each time they encounter a new key *** Interview - Gilles Chehade & Eric Faurot - gilles@poolp.org (mailto:gilles@poolp.org) / @poolpOrg (https://twitter.com/poolpOrg) & eric@openbsd.org (mailto:eric@openbsd.org) / @opensmtpd (https://twitter.com/opensmtpd) OpenSMTPD Tutorial Binary packages with pkgng (http://www.bsdnow.tv/tutorials/pkgng) News Roundup New progress with Newcons (http://raybsd.blogspot.com/2013/08/newcons-beginning.html) Newcons is a replacement console driver for FreeBSD. It supports Unicode, better graphics modes and bigger fonts. Progress is being made, but it's not finished yet *** relayd gets PFS support (http://freshbsd.org/commit/openbsd/7e7bd0a7f61ea0005b5c2f763364ff8dfce03fe2) relayd is a load balancer for OpenBSD which covers protocol layers 3, 4, and 7. It is currently being ported to FreeBSD. 
There is a WIP port (https://www.freshports.org/net/relayd/). It works by negotiating ECDHE (ephemeral elliptic-curve Diffie-Hellman) between the remote site and relayd to enable TLS/SSL Perfect Forward Secrecy, even when the client does not support it *** OpenZFS Launches (http://open-zfs.org/wiki/Main_Page) Slides from LinuxCon (http://www.slideshare.net/MatthewAhrens/open-zfs-linuxcon) Will feature ‘Office Hours' (Ask an Expert). The goal is to reduce the differences between the various open-source implementations of ZFS, both user-facing and in pure lines of code *** FreeBSD 10-CURRENT becomes 10.0-ALPHA (http://freshbsd.org/commit/freebsd/r255489) Glen Barber tagged the -CURRENT branch as 10.0-ALPHA in preparation for 10.0-RELEASE, with ALPHA2 as of 9/16. Everyone was rushing to get their big commits in before 10-STABLE, which will be branched soon. 10 is gonna be HUGE (https://wiki.freebsd.org/WhatsNew/FreeBSD10) *** September issue of BSD Mag (http://bsdmag.org/magazine/1848-day-to-day-bsd-administration) BSD Mag is a monthly online magazine about the BSDs. This month's issue has some content written by Kris. Topics include MidnightBSD live CDs, server maintenance, turning a Mac Mini into a wireless access point with OpenBSD, server monitoring, FreeBSD programming, PEFS encryption and a brief introduction to ZFS *** The FreeBSD IRC channel is official For many years, the FreeBSD freenode channel has been “unofficial” with a double-hash prefix. Finally it has freenode's blessing and looks like a normal channel! 
The old one will forward to the new one, so your IRC clients don't need updating *** OpenSSH 6.3 released (https://lists.mindrot.org/pipermail/openssh-unix-dev/2013-September/031638.html) After a big delay, Damien Miller announced the release of 6.3 Mostly a bugfix release, with a few new features Of note, SFTP now supports resuming failed downloads via -a *** Feedback/Questions [James writes in](http://slexy.org/view/s2wBbbSWGz] [Elias writes in](http://slexy.org/view/s2LMDF3PYx] [Gabor writes in](http://slexy.org/view/s2aCodo65X] Possibly the coolest feedback we've gotten thus far: Baptiste Daroussin, leader of the FreeBSD ports management team and author of poudriere and pkgng, has put up the BSD Now poudriere tutorial on the official documentation! ***
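The 'VerifyHostKeyDNS' behavior mentioned in the OpenSSH/DNSSEC item above comes down to a single client option; on systems where it is not the default, a minimal ssh_config fragment (per ssh_config(5)) would be:

```
# ~/.ssh/config: trust DNSSEC-validated SSHFP records for host keys
Host *
    VerifyHostKeyDNS yes
```

With this set, the client checks SSHFP records in DNS before prompting about an unknown host key.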

Accidental Tech Podcast
22: Full Brichter

Accidental Tech Podcast

Play Episode Listen Later Jul 18, 2013 92:52


Marco's new-new app, Bugshot, and some of its design decisions. Cutting features from 1.0 and trying to keep Bugshot from taking too much time. Bugshot gets the John Siracusa treatment. Exploring NAS options and initial impressions of the Synology DS1813+. Economics of FreeNAS or Mac Mini alternatives. iSCSI on Macs: the free $89 globalSAN initiator and the $195 ATTO initiator, which comes recommended by storage expert Dave Nanian. NAS backup options, since Backblaze doesn't do network drives: CrashPlan (with widespread upload-speed issues), or Arq (with potentially expensive Amazon Glacier or S3 costs). Backing up Mac filesystem metadata, Backup Bouncer, and current scores of online backup apps. Data hoarding and falling into John's backup vortex. The Apple Keynotes podcast feed. Sponsored by: Mind Blitz: An action-puzzle twist on the classic memory matching game. Transporter: Private cloud storage. Use coupon code atp for 10% off.

Intel CitC
Unified Networking with NetApp - Intel® CitC episode 19

Intel CitC

Play Episode Listen Later May 21, 2012 7:35


NetApp’s Mike McNamara talks about the company’s new Ethernet Advantage program and two new reference architectures, focused on unified networking for 10 GbE with NFS and iSCSI.

IT-cast.de – Das Videoportal für die Praxis in der IT » Podcast Feed

MPIO is a very useful feature for configuring multiple redundant paths to an iSCSI SAN storage array. In this videocast I show how to configure this feature in combination with iSCSI on Windows Server 2008 R2. Enjoy the videocast.

RunAs Radio
Stephen Foskett Talks Storage!

RunAs Radio

Play Episode Listen Later Jul 13, 2011 35:04


Richard and Greg talk to Stephen Foskett about storage technologies. Stephen gives a history of storage technologies from RAID arrays to SANs, including Microsoft's Storage Server. The conversation ranges over a huge number of storage technologies, including the new generation of low cost iSCSI targets, lamenting the death of Microsoft Home Server and even diving into obscure concepts like (get this) Fibre Channel over Token Ring! The show ends exploring the prospects of storage in the cloud and the possibilities going forward for data storage.

Online VMware Training
Iomega StorCenter px6-300d Network Storage - Adding iSCSI Drives

Online VMware Training

Play Episode Listen Later Jun 10, 2011 7:05


NetApp TV Studios
WhiteWater West Virtualizes Microsoft Exchange

NetApp TV Studios

Play Episode Listen Later Apr 14, 2010 6:12


Dan Morris, Senior Systems Engineer at WhiteWater West, shares his experience with NetApp and Microsoft Hyper-V. Topics include virtualizing Exchange to dramatically improve email reliability, using deduplication to reclaim space, and simplifying backup and recovery with NetApp.

RunAs Radio
Robert Smith Diagnoses Our Storage Performance Problems!

RunAs Radio

Play Episode Listen Later Apr 29, 2009 36:41


Richard and Greg talk to Robert Smith from Microsoft Premier Field Engineering about understanding storage performance. The discussion digs into SAN, iSCSI, direct-attached storage, also looking at tools like PerfMon and the Windows Performance Toolkit. Check out WPT at http://msdn.microsoft.com/en-us/library/cc305187.aspx and the disk performance issues talked about in the show at http://support.microsoft.com/?kbid=929491.
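As a rough companion to the PerfMon counters discussed in the show, here is a hedged micro-benchmark sketch of sequential write throughput. The function name is my own, and page-cache and filesystem effects make such numbers optimistic, so treat it as an illustration rather than a measurement tool:

```python
import os
import tempfile
import time

def sequential_write_mbps(total_mb=64, block_kb=64):
    """Write total_mb of zeros in block_kb chunks and return MB/s.

    fsync is called so the OS page cache does not absorb the whole
    write; results still vary widely with hardware and filesystem.
    """
    block = b"\0" * (block_kb * 1024)
    blocks = (total_mb * 1024) // block_kb
    fd, path = tempfile.mkstemp()
    try:
        start = time.perf_counter()
        with os.fdopen(fd, "wb") as f:
            for _ in range(blocks):
                f.write(block)
            f.flush()
            os.fsync(f.fileno())
        elapsed = time.perf_counter() - start
        return total_mb / elapsed
    finally:
        os.remove(path)

if __name__ == "__main__":
    print(f"sequential write: {sequential_write_mbps():.1f} MB/s")
```

Running the same loop against direct-attached disk and an iSCSI LUN gives a first feel for the gap the episode's tools quantify properly.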

RunAs Radio
Alan Sugano Digs Into Storage Technologies!

RunAs Radio

Play Episode Listen Later Mar 18, 2009 44:38


Richard and Greg talk to Alan Sugano of ADS Consulting about everything storage. The conversation ranges over the different types of RAID, direct attached storage, iSCSI, Fibre Channel, Fibre Channel Over Ethernet, SANs... you name it!

Technical Council Podcast
Technical Council Podcast FAST 09 Paper - CA-NFS

Technical Council Podcast

Play Episode Listen Later Mar 12, 2009 14:05


From the FAST '09 best-paper authors: an interview with Alexandros Batsakis, Randal Burns, Arkady Kanevsky, James Lentini, and Thomas Talpey on "CA-NFS: A Congestion-Aware Network File System".

Technical Council Podcast
Bob Snively on FCoE

Technical Council Podcast

Play Episode Listen Later Mar 11, 2009 20:19


Bob Snively is interviewed by TC member Steve Wilson on Fibre Channel over Ethernet Technology and Standards

IT-cast.de – Das Videoportal für die Praxis in der IT » Podcast Feed

Da ich schon das ein oder andere Mal einen Openfiler installiert und konfiguriert habe, und die Konfiguration nicht so ganz einfach durchschaubar ist, habe ich ein Video gemacht, das die Einrichtung eines Openfiler zur iSCSI-Nutzung zeigt. Anmerkungen, Lob oder Kritik sind gerne gesehen.

Technical Council Podcast
Interview with Rich Ramos on Object-based Storage Devices

Technical Council Podcast

Play Episode Listen Later Aug 11, 2008 11:52


Rich Ramos, an active member of the SNIA OSD Technical Work Group talks about the OSD standards and latest developments.

Technical Council Podcast
Interview with Jim Williams on Data Integrity

Technical Council Podcast

Play Episode Listen Later Jun 9, 2008 14:33


Jim Williams, SNIA TC member from Oracle, talks about Data Integrity and the work being done by the new Data Integrity TWG.

Technical Council Podcast
Interview with Don Deel on Management Frameworks

Technical Council Podcast

Play Episode Listen Later Apr 2, 2008 18:04


Don Deel, one of the original contributors to the SMI-S standard, talks about the new work going on in standardizing Management Frameworks.

RunAs Radio
Tom Clark Connects Us With iSCSI!

RunAs Radio

Play Episode Listen Later Sep 12, 2007 34:22


Tom Clark talks to Richard and Greg about iSCSI and how it's bringing SAN solutions to the medium-scale enterprise. iSCSI uses TCP and Ethernet to provide SAN features and connectivity at a substantially lower cost than Fibre Channel.
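The low-cost pitch rests on the layering described here: iSCSI wraps ordinary SCSI commands in PDUs carried over plain TCP (port 3260 by IANA assignment), so commodity Ethernet replaces Fibre Channel hardware. A small sketch of that framing, packing and unpacking the 48-byte Basic Header Segment that starts every iSCSI PDU (field offsets per RFC 7143; the helper names are my own, and most flag fields are left zero for brevity):

```python
import struct

SCSI_COMMAND_OPCODE = 0x01  # iSCSI SCSI Command PDU opcode (RFC 7143)
ISCSI_DEFAULT_PORT = 3260   # IANA-assigned iSCSI target port

def build_bhs(opcode, data_segment_length, initiator_task_tag, lun=0):
    """Pack a 48-byte iSCSI Basic Header Segment (simplified sketch).

    Only the fields needed for illustration are filled in; the flag
    byte and opcode-specific fields are left zero.
    """
    bhs = bytearray(48)
    bhs[0] = opcode & 0x3F                             # opcode: low 6 bits of byte 0
    bhs[5:8] = data_segment_length.to_bytes(3, "big")  # 24-bit DataSegmentLength
    bhs[8:16] = struct.pack(">Q", lun)                 # 8-byte LUN field
    bhs[16:20] = struct.pack(">I", initiator_task_tag)
    return bytes(bhs)

def parse_bhs(bhs):
    """Unpack the same fields back out of a 48-byte BHS."""
    return {
        "opcode": bhs[0] & 0x3F,
        "data_segment_length": int.from_bytes(bhs[5:8], "big"),
        "lun": struct.unpack(">Q", bhs[8:16])[0],
        "initiator_task_tag": struct.unpack(">I", bhs[16:20])[0],
    }

if __name__ == "__main__":
    header = build_bhs(SCSI_COMMAND_OPCODE, 512, initiator_task_tag=7)
    print(parse_bhs(header))
```

In a real initiator this header would be followed by the SCSI command descriptor block and sent down an ordinary TCP socket, which is exactly why no special fabric is needed.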

Technical Council Podcast
Interview with Paul Strong of eBay

Technical Council Podcast

Play Episode Listen Later Sep 6, 2007 11:22


Paul Strong from eBay talks about his company's use of storage and their own storage management development.

Technical Council Podcast
Interview with Mike Walker on SMI

Technical Council Podcast

Play Episode Listen Later Sep 6, 2007 17:23


Mike Walker, Chair of the SNIA Storage Management Initiative Technical Steering Group, talks about new developments and future work of the Storage Management Initiative and SMI-S.

Technical Council Podcast
Interview with Tom Clark on Storage

Technical Council Podcast

Play Episode Listen Later Sep 6, 2007 22:38


Tom Clark, SNIA's newest Board of Directors member, talks about new developments in the storage industry, including FCoE.

Technical Council Podcast
Interview with Dave Thiel on SNIA Software

Technical Council Podcast

Play Episode Listen Later Sep 6, 2007 13:58


Dave Thiel, Chair of the SNIA Technical Council, talks about new developments in SNIA Technical Work Groups and SNIA's ability to develop software.

Datacenter of the Future
Improving Storage in a Virtualized Server Environment

Datacenter of the Future

Play Episode Listen Later Jun 8, 2007 7:53


Virtualization and IT consolidation are hot topics in the datacenter. But many IT professionals don’t realize that virtualization requires shared storage. So you are faced with two options: Fibre Channel or iSCSI. Listen to Travis Vigil, Senior Product Manager leading Dell’s iSCSI strategy, and Joe Pollock, Storage Marketing Manager, as they discuss how iSCSI provides an easier, more logical connection between machines and actually improves virtual machine mobility.

Datacenter of the Future
Storage Energy Economy: The Impact of Storage on Energy Economy in the Datacenter

Datacenter of the Future

Play Episode Listen Later May 25, 2007 5:54


Energy economy has exploded as an issue in the IT infrastructure. Energy-efficient products and virtualization/consolidation are certainly an important starting point, but what about the storage behind it all? It turns out that bringing energy efficiency into storage is one of the most important components to an overall strategy, and the one that hangs up many projects. Find out how to address this, and how enabling technologies like iSCSI are making it easier. Listen to Joe Pollock and Travis Vigil from the Storage Products Group at Dell.

Datacenter of the Future
Using iSCSI to Improve Deployment of Storage Area Networks

Datacenter of the Future

Play Episode Listen Later Dec 7, 2006 18:03


This podcast covers iSCSI, an exciting new technology that provides a simpler and more cost-effective way to deploy storage across a network running IP. Now storage can be located anywhere in the world. Listen to Travis Vigil, Senior Product Manager leading Dell’s iSCSI strategy, and Joe Pollock, Storage Marketing Manager, as they discuss the future of storage.

Black Hat Briefings, Las Vegas 2006 [Video] Presentations from the security conference

"A fundamental of many SAN solutions is to use metadata to provide shared access to a SAN. This is true in iSCSI or FibreChannel and across a wide variety of products. Metadata can offer a way around the built-in security features provided that attackers have FibreChannel connectivity. SAN architecture represents a symbol of choosing speed over security. Metadata, the vehicle that provides speed, is a backdoor into the system built around it. In this session we will cover using Metadata to DoS or gain unauthorized access to an Xsan over the FibreChannel network."

Black Hat Briefings, Las Vegas 2005 [Audio] Presentations from the security conference

Himanshu Dwivedi's presentation will discuss the severe security issues that exist in the default implementations of iSCSI storage networks/products. The presentation will cover iSCSI storage as it pertains to the basic principles of security, including enumeration, authentication, authorization, and availability. The presentation will contain a short overview of iSCSI for security architects and basic security principles for storage administrators. The presentation will continue into a deep discussion of iSCSI attacks that are capable of compromising large volumes of data from iSCSI storage products/networks. The iSCSI attacks section will also show how simple attacks can make the storage network unavailable, creating a devastating problem for networks, servers, and applications. The presenter will also follow up each discussion of iSCSI attacks with a demonstration of large data compromise. iSCSI attacks will show how a large volume of data can be compromised or simply made unavailable for long periods of time without a single root or administrator password. The presentation will conclude with existing solutions from responsible vendors that can protect iSCSI storage networks/products. Each iSCSI attack/defense described by the presenter will contain deep discussions and visual demonstrations, which will allow the audience to fully understand the security issues with iSCSI as well as the standard defenses. Himanshu Dwivedi is a founding partner of iSEC Partners, LLC, a strategic security organization. Himanshu has 11 years of experience in security and information technology. Before forming iSEC, Himanshu was the Technical Director for @stake's bay area practice, the leading provider of digital security services. His professional experience includes application programming, infrastructure security, and secure product design, and is highlighted by deep research and testing on storage security for the past 5 years. 
Himanshu has focused his security experience towards storage security, specializing in SAN and NAS security. His research includes iSCSI and Fibre Channel (FC) Storage Area Networks as well as IP Network Attached Storage. Himanshu has given numerous presentations and workshops regarding the security in SAN and NAS networks, including conferences such as BlackHat 2004, BlackHat 2003, Storage Networking World, Storage World Conference, TechTarget, the Fibre Channel Conference, SAN-West, SAN-East, SNIA Security Summit, Syscan 2004, and Bellua 2005. Himanshu currently has a patent pending on a storage design architecture that he co-developed with other @stake professionals. The patent is for a storage security design that can be implemented on enterprise storage products deployed in Fibre Channel storage networks. Additionally, Himanshu has published three books, including "The Complete Storage Reference" - Chapter 25 Security Considerations (McGraw-Hill/Osborne), "Implementing SSH" (Wiley Publishing), and "Securing Storage" (Addison Wesley Publishing), which is due out in the fall of 2005. Furthermore, Himanshu has also published two white papers. The first white paper Himanshu wrote is titled "Securing Intellectual Property", which provides insight and recommendations on how to protect an organization's network from the inside out. Additionally, Himanshu has written a second white paper titled Storage Security, which provides the basic best practices and recommendations in order to secure a SAN or a NAS storage network.
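The enumeration step described above usually begins with nothing more exotic than finding hosts that answer on the default iSCSI target port. A toy sketch of that first probe (the function name is my own; a real assessment would follow up by testing target discovery, CHAP authentication, and ACLs with proper tools):

```python
import socket

ISCSI_PORT = 3260  # default iSCSI target port

def iscsi_port_open(host, port=ISCSI_PORT, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds.

    This is merely a reachability probe: an open port 3260 suggests
    an iSCSI target is listening, which is where enumeration starts.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

An unauthenticated target found this way is exactly the kind of default configuration the talk warns can expose large volumes of data.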

Black Hat Briefings, Las Vegas 2006 [Audio] Presentations from the security conference

"A fundamental of many SAN solutions is to use metadata to provide shared access to a SAN. This is true in iSCSI or FibreChannel and across a wide variety of products. Metadata can offer a way around the built-in security features provided that attackers have FibreChannel connectivity. SAN architecture represents a symbol of choosing speed over security. Metadata, the vehicle that provides speed, is a backdoor into the system built around it. In this session we will cover using Metadata to DoS or gain unauthorized access to an Xsan over the FibreChannel network."

Black Hat Briefings, Las Vegas 2005 [Video] Presentations from the security conference

Himanshu Dwivedi's presentation will discuss the severe security issues that exist in the default implementations of iSCSI storage networks/products. The presentation will cover iSCSI storage as it pertains to the basic principles of security, including enumeration, authentication, authorization, and availability. The presentation will contain a short overview of iSCSI for security architects and basic security principles for storage administrators. The presentation will continue into a deep discussion of iSCSI attacks that are capable of compromising large volumes of data from iSCSI storage products/networks. The iSCSI attacks section will also show how simple attacks can make the storage network unavailable, creating a devastating problem for networks, servers, and applications. The presenter will also follow up each discussion of iSCSI attacks with a demonstration of large data compromise. iSCSI attacks will show how a large volume of data can be compromised or simply made unavailable for long periods of time without a single root or administrator password. The presentation will conclude with existing solutions from responsible vendors that can protect iSCSI storage networks/products. Each iSCSI attack/defense described by the presenter will contain deep discussions and visual demonstrations, which will allow the audience to fully understand the security issues with iSCSI as well as the standard defenses. Himanshu Dwivedi is a founding partner of iSEC Partners, LLC, a strategic security organization. Himanshu has 11 years of experience in security and information technology. Before forming iSEC, Himanshu was the Technical Director for @stake's bay area practice, the leading provider of digital security services. His professional experience includes application programming, infrastructure security, and secure product design, and is highlighted by deep research and testing on storage security for the past 5 years. 
Himanshu has focused his security experience towards storage security, specializing in SAN and NAS security. His research includes iSCSI and Fibre Channel (FC) Storage Area Networks as well as IP Network Attached Storage. Himanshu has given numerous presentations and workshops regarding the security in SAN and NAS networks, including conferences such as BlackHat 2004, BlackHat 2003, Storage Networking World, Storage World Conference, TechTarget, the Fibre Channel Conference, SAN-West, SAN-East, SNIA Security Summit, Syscan 2004, and Bellua 2005. Himanshu currently has a patent pending on a storage design architecture that he co-developed with other @stake professionals. The patent is for a storage security design that can be implemented on enterprise storage products deployed in Fibre Channel storage networks. Additionally, Himanshu has published three books, including "The Complete Storage Reference" - Chapter 25 Security Considerations (McGraw-Hill/Osborne), "Implementing SSH" (Wiley Publishing), and "Securing Storage" (Addison Wesley Publishing), which is due out in the fall of 2005. Furthermore, Himanshu has also published two white papers. The first white paper Himanshu wrote is titled "Securing Intellectual Property", which provides insight and recommendations on how to protect an organization's network from the inside out. Additionally, Himanshu has written a second white paper titled Storage Security, which provides the basic best practices and recommendations in order to secure a SAN or a NAS storage network.

NetApp TV Studios
IP SAN (iSCSI) Discussion with Rich Clifton

NetApp TV Studios

Play Episode Listen Later Jan 2, 2006 3:31


Join Rich Clifton, NetApp VP, Enterprise Data Center and Applications, as he discusses IP SAN and how iSCSI fits into that IP-based technology.

Dan Bricklin's Log Podcast
Lunch with Don Bulens CEO EqualLogic

Dan Bricklin's Log Podcast

Play Episode Listen Later Dec 22, 2005


Don, who was CEO of Trellix, is now CEO of EqualLogic. I run into people all the time who ask how he's doing and what's up with him, so I devoted some of our lunch time to recording a podcast. You will find this of interest if you are interested in Don himself, in EqualLogic and the type of products it makes (storage area network devices), or the VAR channel. He explains what SANs and iSCSI in particular are all about (block storage over IP networks), why he joined that company, and what the VAR channel is like and how it compares to the Lotus Notes days when he helped develop the Notes VAR channel. Recorded: 2005-12-22 Length: 20:02, Size: 9.2MB

Le Comptoir Sécu - Podcasts
[SECHebdo] 02 Avril 2019

Le Comptoir Sécu - Podcasts

Play Episode Listen Later Apr 2, 2019


We just recorded a new SECHebdo live on YouTube. As usual, if you missed the recording, you can find it on our YouTube channel (video above) or in audio podcast format. On this episode's agenda: (to be added)