POPULARITY
Paul Mikesell is the founder and CEO of Carbon Robotics. Paul has deep experience founding and building successful technology startups. Paul co-founded Isilon Systems, a distributed storage company, in 2001. Isilon went public in 2006 and was acquired by EMC for $2.5 billion in 2010. In 2006, Paul co-founded Clustrix, a distributed database startup that was acquired by MariaDB in 2018. Prior to Carbon, Paul served as Director of Infrastructure Engineering at Uber, where he grew the team and opened the company's engineering office in Seattle, later focusing on deep learning and computer vision. — This episode is presented by MyLand. Learn more HERE. — Links Carbon Robotics - https://carbonrobotics.com Paul on Linkedin - https://www.linkedin.com/in/paul-mikesell-4b63a9/ Join the Co-op - https://themodernacre.supercast.com Subscribe to the Newsletter - https://themodernacre.substack.com
Sujal Patel is Founder and CEO of Nautilus Biotechnology, a life sciences company creating a large-scale, single-molecule platform for quantifying and unlocking the complexity of the human proteome, with the goal of enabling fundamental advancements across human health and medicine. With nearly 20 years of executive leadership experience driving innovation for high-growth companies, Sujal is a successful entrepreneur, investor, and philanthropist who founded and served as CEO of Isilon Systems, an enterprise data storage company. After completing one of the most successful initial public offerings of 2006, Isilon was acquired by EMC in 2010 for $2.6 billion. See omnystudio.com/listener for privacy information.
Sujal Patel, co-founder and CEO of Nautilus Biotechnology, discusses their innovative work in proteomics and its impact on drug development. Sujal shares his transition from tech to biotech, the formation of Nautilus with Parag Mallick, and their revolutionary approach using multi-affinity probes. The conversation highlights the importance of proteomics in drug discovery, the broad applications of their technology, and the significance of product-market fit and fiscal discipline in building a sustainable business. Biography: Sujal Patel is the co-founder of Nautilus Biotechnology, a life sciences company working to create a platform technology for quantifying and unlocking the complexity of the proteome. Nautilus' mission is to democratize access to the proteome and, in doing so, enable fundamental advancements across human health and medicine. Sujal founded Isilon Systems in 2001, a storage company built for the future of unstructured, file-based data. In 2006, Isilon completed one of the most successful initial public offerings of the year. EMC (since acquired by Dell) acquired Isilon in December 2010 for $2.6 Billion, the largest acquisition in EMC's history. Sujal served as the president of EMC's Isilon Storage Division from the acquisition until November 2012, driving significant revenue growth, market expansion, and organizational scale. Prior to EMC and Isilon, Sujal served in various engineering roles at RealNetworks, Inc., in part as the chief architect behind the company's second-generation core media delivery system. Sujal holds nineteen patents in the areas of storage, networking, and media delivery and five patents for innovations related to the development of Nautilus Biotechnology's technology. He is a well-known speaker on entrepreneurship and has received a variety of industry awards. Currently, Sujal serves on the board of directors at Qumulo and Rainier Scholars and helps direct the philanthropic efforts of his family's foundation. He graduated from the University of Maryland College Park in 1996 with a degree in computer science.
Headstorm: https://headstorm.com/ AGPILOT: https://headstorm.com/agpilot/ Carbon Robotics: https://carbonrobotics.com/ Paul is the founder and CEO of Carbon Robotics. What Carbon Robotics is doing is novel and interesting in and of itself, and we're going to talk a lot about that in today's episode. But it's important to note that Paul has a really impressive history of building technology companies outside of agriculture. Before starting Carbon Robotics, he co-founded Isilon Systems, a distributed storage company, in 2001. Isilon went public in 2006 and was acquired by EMC for $2.5 billion in 2010. In 2006, Paul co-founded Clustrix, a distributed database startup that was acquired by MariaDB in 2018. Immediately before Carbon, Paul served as Director of Infrastructure Engineering at Uber, where he grew the team and opened the company's engineering office in Seattle, later focusing on deep learning and computer vision. So in today's episode we're going to talk a lot about laser weeding, building a field robotics company, Paul's views on artificial intelligence and where he sees applications for the tech in agriculture, and the challenges and opportunities ahead for Carbon Robotics and agtech in general. I'll drop you into the conversation where Paul is explaining his desire to jump from tech to agtech, and how that transition has been for him.
In this episode I get to talk with Daniel Post about data classification and data governance. Dan is a Senior Sales Engineer for Varonis. He has been in the industry for a while and has knowledge that we break down into 'bite-sized' chunks to make it easier for your staff to consume. Talking Points: Where does a company first start their Data Classification and Governance journey? What are some of the challenges that a company can expect when it comes to data classification? What are you seeing in the field right now that makes it hard for companies in their data governance program? Now that data lives in the 'Hybrid' world, how does data governance work when you have data on network drives like Isilon and cloud drives like Microsoft or Box? Does it integrate with a CMDB/ticketing system like ServiceNow or Service Desk, so your GRC team can take 'action' on it? Podcast Sponsor: The sponsor for this episode is Varonis. Varonis is a cybersecurity solutions company that is very mature in the Data Classification and Governance space. They are based out of good ole' New York City! Proceeds from this sponsorship will be going to the Autism Support of Kent County, Michigan. Pam and her team help parents find support ideas and solutions for their children with Autism. More information here - https://www.autismsupportofkentcounty.org/
Behind the Clouds – Dell Technologies Storage and Data Protection let your multi-cloud strategy take advantage of your on-premises data.
Building With People For People: The Unfiltered Build Podcast
What traits do you look for when hiring engineers for your team? What does fundraising look like in a startup? In today's episode we talk about what it takes to build great teams, fundraising in the startup world, and much more. Our guest, Randy Sears, brings more than 20 years of experience designing and implementing scalable distributed systems in domains as diverse as education and scientific computing, and has held technical leadership roles for more than a decade. Prior to his current role as Head of Engineering at DemandStar, he worked at many seed and pre-IPO startups, participating in IPOs, acquisitions, and fundraising with Zulily, Loudeye, IBM, Isilon (now part of EMC), and Stackline. Our guest studied computer science and music at Boston University, and loves sailing and scuba diving. In building teams, Randy favors empowerment and collegiality combined with an eagerness to learn. Connect with Randy: LinkedIn Show notes: The top skill Randy looks for when hiring is the ability to work well with others Pay attention when interviewing not to introduce affinity bias - where you identify with the candidate because they are like you Your interview process can and should adapt based on the size of your team and company Randy has learned that take-home tests are generally not preferred as an interview method Randy strives to build collegiality amongst his team Check out Roblok Best advice he has received is: "犬も歩けば棒に当たる" which translates roughly to "when a dog walks he hits a stick." If you keep moving, mentally or physically, you will find interesting, often useful, things. So keep exploring. DemandStar is hiring so make sure to check out their careers page for the most recent listings (and much more to come) "Authenticity and imperfections can communicate a lot" -Randy Sears Building something cool or solving interesting problems? Want to be on this show? Send me an email at jointhepodcast@unfilteredbuild.com Podcast produced by Unfiltered Build - dream.design.develop.
Bill Richter is the President & CEO of the American data storage company Qumulo. They have had an incredible start and growth trajectory because he knows what he is doing. As the former President of the Isilon storage division of EMC, Bill helped the business grow to over $1.5B in revenue. He's learned fortune favors the brave and just how brave he needs to be. • Tech wars and costs are mainly related to getting the best talent involved. • Any new idea seems dumb to the world. • To be an entrepreneur you have to look through the fog to see possibilities. TIME-STAMPED SHOW NOTES: [3:40] Risk tolerant capital market. [8:04] Find a way to reach people. [20:25] Satellite images.
Welcome to this episode of the AgEmerge podcast. We get to visit with some amazing folks here and in addition to that we get to introduce you to technologies that are helping growers meet the challenges of building soil health. Today is no exception as we welcome Paul Mikesell, Founder and CEO of Carbon Robotics, where they have developed a system and technology for weed control that uses computer vision and very high-powered lasers as the action for weed eradication. Listen in as Paul talks about the opportunities this technology creates! Paul Mikesell is the founder and CEO of Carbon Robotics. Paul has deep experience founding and building successful technology startups. Paul co-founded Isilon Systems, a distributed storage company, in 2001. Isilon went public in 2006 and was acquired by EMC for 2.5 billion dollars in 2010. In 2006, Paul co-founded Clustrix, a distributed database startup that was acquired by MariaDB in 2018. Prior to Carbon, Paul served as Director of Infrastructure Engineering at Uber, where he grew the team and opened the company's engineering office in Seattle, later focusing on deep learning and computer vision. Paul holds a bachelor's degree in computer science from the University of Washington. Carbon Robotics Website: https://carbonrobotics.com/ Linkedin: https://www.linkedin.com/company/carbonrobotics/ YouTube: https://www.youtube.com/c/CarbonRoboticsLaserWeeding Twitter: https://twitter.com/carbon_robotics Instagram: https://www.instagram.com/carbon_robotics/ Got questions you want answered? Send them our way and we'll do our best to research and find answers. Know someone you think would be great on the AgEmerge stage or podcast? Send your questions or suggestions to kim@asn.farm; we'd love to hear from you.
The major shift in the Linux landscape this week that was hardly noticed, and our thoughts on COSMIC from System76. Plus Google adds its weight behind Rust in the Linux Kernel, and the new security features landing in WSL2.
Panelists Justin Dorfman | Richard Littauer Guest Melissa Logan Show Notes Hello and welcome to Sustain! Our special guest today is Melissa Logan, Founder of Constantia.io, a marketing consultancy that focuses on open source and enterprise tech companies. She pioneered the role of open source marketer that helped fuel the rise of open source software development. She also launched the Sexism Field Guide to help people identify and confront all forms of sexism. We will learn why Melissa created Constantia, her work at The Linux Foundation, Apache Cassandra, and Isilon. Also, Melissa talks about having the right personality to do marketing in a community and why she thinks about the community like a prism. Download this episode now to find out more! [00:00:48] Melissa tells us all about Constantia and why she created it. [00:02:30] Since Melissa has worked mainly with large OSPO’s, Richard wonders if she has had any experience working with smaller organizations or smaller repositories on GitHub type stuff. She also talks about what she did at the Linux Foundation and the projects they started, one specifically called OpenDaylight. [00:06:38] When Melissa talks about open source there are two key ways that she describes it. [00:07:43] We learn about Melissa working with the Apache Cassandra Community. Justin wonders if there was a company that did support contracts for Cassandra funding this or if this was a grassroots type of deal. [00:11:03] We learn what Melissa did at Isilon. [00:13:00] Richard wonders how Melissa gets marketing copy in front of people because mailing lists are important to getting into people’s inboxes. [00:16:23] Richard asks Melissa if she has any insight on how to market somebody who runs a small react library and she gives some great advice. [00:18:47] Melissa tells us how to pitch marketing to open source foundations as something they need to do because the return is so small. Richard wonders if she’s ever had to deal with people who are closed sourced and try to convince them to go open. [00:26:55] Since the pandemic has changed a lot of things around marketing, Richard wonders what Melissa’s had to change with how she markets stuff to get in front of people’s eyes over the past six months. [00:29:35] Melissa brings up the topic of disaggregated marketing and when you think about doing marketing in a community one of the most important things you need is the right personality. She also explains how she thinks of the community as a kind of prism. [00:34:43] If you’re interested in seeing the awesome content that Melissa has put out, she tells us where we can find it online. Spotlight [00:35:22] Justin’s spotlight is FingerprintJS. [00:36:00] Richard’s spotlight is a website (https://alex.github.io/nyt-2020-election-scraper/battleground-state-changes.html#) with election data that allows you to see what’s happening every minute in all of the battleground states. [00:36:41] Melissa’s spotlight is Scribus.net. Quotes [00:08:01] “At Linux Foundation it was different because it was part of kind of the governance of the project.” [00:11:03] “You were at Isilon. I remember reading about it way back in the day and it was acquired by EMC. What did you do there because that just really interests me?” [00:17:15] “When you think about doing marketing in a community, there are a lot of people who work at different companies, they have different cultures, they have different reasons for participating. 
Maybe they’re not aware that you actually want to have a marketing effort.” [00:17:32] “So I think what’s really important is to build some kind of architecture of participation for people in your community.” [00:19:18] “What are those quote unquote KPI’s in an open source project? What do we look at? I think things like lines of code, stars, those are all, I think you should just set those aside. That really doesn’t tell you about the health of an open source project.” [00:20:01] “So we really look at share of voice as one of the key metrics in an open source project and how we evaluate how things are doing.” [00:21:35] “One of the key ways that we knew we were gaining traction was when we found out that AT&T had adopted OpenDaylight, and we found out because they had said something on a user list because of course they found some bug or issue with it, so of course that’s when they reach out and talk to us.” [00:27:00] “So during the pandemic we’ve all been trying to figure out how not to overload people who are overloaded by so much content and information because everyone is doing everything digital all the time.” [00:30:38] Then how do you level the playing field for projects that maybe don’t have a charismatic leader? And the way you can do that is to find someone who plays in this marketing role who does go and seek out all these other types of contributions and tries to shine a light on things that are happening, not just with individuals, but in all parts of your community.” [00:31:40] “I remember in the early 2000’s, you had people in the embedded Linux community who were looking at ways to improve power consumption in satellites that were going into space so that was really important. You had needed a small footprint for everything. When they figured that out, they put it back upstream and that was then adopted by people in the supercomputing community.” [00:34:26] “I think of marketing kind of like you’re a backstage manager for a play and you’re trying to make everything run really smoothly for all the other people on the stage and really shine a light on them literally and figuratively.” Links Melissa Logan Twitter (https://twitter.com/melissa_b2b?lang=en) Melissa Logan Linkedin (https://www.linkedin.com/in/mklogan) Constantia (https://constantia.io/) All Things Open 2020 Online Event (https://2020.allthingsopen.org/) Open Daylight Project (ODL) (https://www.opendaylight.org/) The Linux Foundation (https://www.linuxfoundation.org/) Dell EMC Isilon (https://www.delltechnologies.com/en-us/storage/isilon/isilon-a2000-archive-nas-storage.htm?gacd=9650523-1033-5761040-266691960-0&dgc=st&&gclid=EAIaIQobChMI59rQu7327AIVg-iGCh0mAASuEAAYASABEgKNdvD_BwE&gclsrc=aw.ds) Apache Cassandra (https://cassandra.apache.org/) Apache Cassandra Twitter (https://twitter.com/cassandra) The Sexism Field Guide by Melissa Logan (https://sexismfieldguide.com/) FingerprintJS (https://fingerprintjs.com/) Election data results website (https://alex.github.io/nyt-2020-election-scraper/battleground-state-changes.html) Scribus (https://www.scribus.net/) Credits Produced by Richard Littauer (https://www.burntfen.com/) Edited by Paul M. Bahr at Peachtree Sound (https://www.peachtreesound.com/) Show notes by DeAnn Bahr at Peachtree Sound (https://www.peachtreesound.com/) Special Guest: Melissa Logan.
Today we are going to talk storage with Molly Presley from Qumulo. In 2012, Isilon veterans Neal Fachan, Peter Godman, and Aaron Passey set out to create a new vision for storage. After extensive market research and development, they released the file-based storage platform Qumulo in 2014. Most recently, they finished a substantial funding round with a valuation over a billion dollars. They have built a software-defined storage product designed to run in the data center or in the cloud, focusing on performance and simplicity and looking to rewrite the storage playbook. And I can tell you that they are popping up in deals all over the place. Today we have with us Molly Presley, who runs global product marketing for Qumulo, to tell us the story of Qumulo and where they are headed. Welcome, Molly
Sujal Patel is the cofounder and CEO of Nautilus Biotechnology which offers a high-throughput, low-cost platform for analyzing and quantifying the human proteome. The company has raised over $100 million from top tier investors such as Andreessen Horowitz, Madrona Venture Group, AME Cloud Ventures, Vulcan Capital, Perceptive Advisors, Bezos Expeditions, Bolt Ventures, and Defy Ventures to name a few. Prior to this, he cofounded Isilon which he sold to EMC for $2.6 billion.
In Part II, Mark Lloyd, Executive Vice President and Founder of Inspirata, continues explaining Inspirata’s offerings that change the patient’s care pathway. Mark shares Inspirata’s origins, beginning with his passion for digital pathology and positively impacting the cancer patient’s journey. Mark describes Inspirata’s successful implementation at The Ohio State University and highlights Isilon as a key part. It resulted in the first primary digital diagnosis in the United States and the largest repository of whole slide images. Mark then explains Inspirata’s partnerships with Dell, Leica, Huron, Fujifilm and others. Mark concludes with where to find more information and final thoughts.
In this episode, Randy Goins, UDS Regional Territory Manager, and Tony Metz, UDS Advisory Systems Engineer, discuss the use of Isilon with flash and blended tiers as an approach to managing healthcare imaging and other unstructured data. Tony explains the use cases for flash in a blended Isilon approach and how to get the right blend and tiering policies to address performance, capacity, and cost requirements. Randy then compares the cost of blended Isilon local storage with a cloud approach and shares several customer success stories with the blended approach.
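Neither the episode nor OneFS documentation is being quoted here; purely as an illustration of what an age-based tiering rule expresses, the hypothetical sketch below moves files that have not been modified for 90 days from a fast tier to an archive tier. On an actual Isilon cluster, SmartPools file pool policies perform this kind of placement transparently within a single namespace, so a script like this would not be needed; the paths and threshold are made up.

```sh
#!/bin/sh
# Illustration only: an age-based "tiering policy" written out as a script.
# On Isilon/OneFS, SmartPools policies move files between node pools
# transparently; the tier paths and the 90-day threshold are hypothetical.
FAST_TIER="/mnt/fast-tier"
ARCHIVE_TIER="/mnt/archive-tier"
AGE_DAYS=90

# Move files untouched for AGE_DAYS days, preserving the directory layout.
find "$FAST_TIER" -type f -mtime +"$AGE_DAYS" | while IFS= read -r f; do
    rel="${f#$FAST_TIER/}"
    mkdir -p "$ARCHIVE_TIER/$(dirname "$rel")"
    mv "$f" "$ARCHIVE_TIER/$rel"
done
```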
This week, Chris and Martin discuss the announcements from Dell Technologies World 2019, held in April/May 2019 at the Sands Convention Centre in Las Vegas. Chris attended as a guest of Dell Technologies, who covered flights and accommodation. Dell Technologies made a range of announcements, including improved capabilities on the Unity XT and Isilon storage […] The post #99 – Dell Technologies World 2019 in Review appeared first on Storage Unpacked Podcast.
The strange birth and long life of Unix, FreeBSD jail with a single public IP, EuroBSDcon 2018 talks and schedule, OpenBSD on G4 iBook, PAM template user, ZFS file server, and reflections on one year of OpenBSD use. Picking the contest winner Vincent Bostjan Andrew Klaus-Hendrik Will Toby Johnny David manfrom Niclas Gary Eddy Bruce Lizz Jim Random number generator ##Headlines ###The Strange Birth and Long Life of Unix They say that when one door closes on you, another opens. People generally offer this bit of wisdom just to lend some solace after a misfortune. But sometimes it’s actually true. It certainly was for Ken Thompson and the late Dennis Ritchie, two of the greats of 20th-century information technology, when they created the Unix operating system, now considered one of the most inspiring and influential pieces of software ever written. A door had slammed shut for Thompson and Ritchie in March of 1969, when their employer, the American Telephone & Telegraph Co., withdrew from a collaborative project with the Massachusetts Institute of Technology and General Electric to create an interactive time-sharing system called Multics, which stood for “Multiplexed Information and Computing Service.” Time-sharing, a technique that lets multiple people use a single computer simultaneously, had been invented only a decade earlier. Multics was to combine time-sharing with other technological advances of the era, allowing users to phone a computer from remote terminals and then read e-mail, edit documents, run calculations, and so forth. It was to be a great leap forward from the way computers were mostly being used, with people tediously preparing and submitting batch jobs on punch cards to be run one by one. Over five years, AT&T invested millions in the Multics project, purchasing a GE-645 mainframe computer and dedicating to the effort many of the top researchers at the company’s renowned Bell Telephone Laboratories—including Thompson and Ritchie, Joseph F. Ossanna, Stuart Feldman, M. Douglas McIlroy, and the late Robert Morris. But the new system was too ambitious, and it fell troublingly behind schedule. In the end, AT&T’s corporate leaders decided to pull the plug. After AT&T’s departure from the Multics project, managers at Bell Labs, in Murray Hill, N.J., became reluctant to allow any further work on computer operating systems, leaving some researchers there very frustrated. Although Multics hadn’t met many of its objectives, it had, as Ritchie later recalled, provided them with a “convenient interactive computing service, a good environment in which to do programming, [and] a system around which a fellowship could form.” Suddenly, it was gone. With heavy hearts, the researchers returned to using their old batch system. At such an inauspicious moment, with management dead set against the idea, it surely would have seemed foolhardy to continue designing computer operating systems. But that’s exactly what Thompson, Ritchie, and many of their Bell Labs colleagues did. Now, some 40 years later, we should be thankful that these programmers ignored their bosses and continued their labor of love, which gave the world Unix, one of the greatest computer operating systems of all time. The rogue project began in earnest when Thompson, Ritchie, and a third Bell Labs colleague, Rudd Canaday, began to sketch out on paper the design for a file system. Thompson then wrote the basics of a new operating system for the lab’s GE-645 mainframe. 
But with the Multics project ended, so too was the need for the GE-645. Thompson realized that any further programming he did on it was likely to go nowhere, so he dropped the effort. Thompson had passed some of his time after the demise of Multics writing a computer game called Space Travel, which simulated all the major bodies in the solar system along with a spaceship that could fly around them. Written for the GE-645, Space Travel was clunky to play—and expensive: roughly US $75 a game for the CPU time. Hunting around, Thompson came across a dusty PDP-7, a minicomputer built by Digital Equipment Corp. that some of his Bell Labs colleagues had purchased earlier for a circuit-analysis project. Thompson rewrote Space Travel to run on it. And with that little programming exercise, a second door cracked ajar. It was to swing wide open during the summer of 1969 when Thompson’s wife, Bonnie, spent a month visiting his parents to show off their newborn son. Thompson took advantage of his temporary bachelor existence to write a good chunk of what would become the Unix operating system for the discarded PDP‑7. The name Unix stems from a joke one of Thompson’s colleagues made: Because the new operating system supported only one user (Thompson), he saw it as an emasculated version of Multics and dubbed it “Un-multiplexed Information and Computing Service,” or Unics. The name later morphed into Unix. Initially, Thompson used the GE-645 to compose and compile the software, which he then downloaded to the PDP‑7. But he soon weaned himself from the mainframe, and by the end of 1969 he was able to write operating-system code on the PDP-7 itself. That was a step in the right direction. But Thompson and the others helping him knew that the PDP‑7, which was already obsolete, would not be able to sustain their skunkworks for long. They also knew that the lab’s management wasn’t about to allow any more research on operating systems. So Thompson and Ritchie got creative. They formulated a proposal to their bosses to buy one of DEC’s newer minicomputers, a PDP-11, but couched the request in especially palatable terms. They said they were aiming to create tools for editing and formatting text, what you might call a word-processing system today. The fact that they would also have to write an operating system for the new machine to support the editor and text formatter was almost a footnote. Management took the bait, and an order for a PDP-11 was placed in May 1970. The machine itself arrived soon after, although the disk drives for it took more than six months to appear. During the interim, Thompson, Ritchie, and others continued to develop Unix on the PDP-7. After the PDP-11’s disks were installed, the researchers moved their increasingly complex operating system over to the new machine. Next they brought over the roff text formatter written by Ossanna and derived from the runoff program, which had been used in an earlier time-sharing system. Unix was put to its first real-world test within Bell Labs when three typists from AT&T’s patents department began using it to write, edit, and format patent applications. It was a hit. The patent department adopted the system wholeheartedly, which gave the researchers enough credibility to convince management to purchase another machine—a newer and more powerful PDP-11 model—allowing their stealth work on Unix to continue. During its earliest days, Unix evolved constantly, so the idea of issuing named versions or releases seemed inappropriate. 
But the researchers did issue new editions of the programmer’s manual periodically, and the early Unix systems were named after each such edition. The first edition of the manual was completed in November 1971. So what did the first edition of Unix offer that made it so great? For one thing, the system provided a hierarchical file system, which allowed something we all now take for granted: Files could be placed in directories—or equivalently, folders—that in turn could be put within other directories. Each file could contain no more than 64 kilobytes, and its name could be no more than six characters long. These restrictions seem awkwardly limiting now, but at the time they appeared perfectly adequate. Although Unix was ostensibly created for word processing, the only editor available in 1971 was the line-oriented ed. Today, ed is still the only editor guaranteed to be present on all Unix systems. Apart from the text-processing and general system applications, the first edition of Unix included games such as blackjack, chess, and tic-tac-toe. For the system administrator, there were tools to dump and restore disk images to magnetic tape, to read and write paper tapes, and to create, check, mount, and unmount removable disk packs. Most important, the system offered an interactive environment that by this time allowed time-sharing, so several people could use a single machine at once. Various programming languages were available to them, including BASIC, Fortran, the scripting of Unix commands, assembly language, and B. The last of these, a descendant of a BCPL (Basic Combined Programming Language), ultimately evolved into the immensely popular C language, which Ritchie created while also working on Unix. The first edition of Unix let programmers call 34 different low-level routines built into the operating system. It’s a testament to the system’s enduring nature that nearly all of these system calls are still available—and still heavily used—on modern Unix and Linux systems four decades on. For its time, first-edition Unix provided a remarkably powerful environment for software development. Yet it contained just 4200 lines of code at its heart and occupied a measly 16 KB of main memory when it ran. Unix’s great influence can be traced in part to its elegant design, simplicity, portability, and serendipitous timing. But perhaps even more important was the devoted user community that soon grew up around it. And that came about only by an accident of its unique history. The story goes like this: For years Unix remained nothing more than a Bell Labs research project, but by 1973 its authors felt the system was mature enough for them to present a paper on its design and implementation at a symposium of the Association for Computing Machinery. That paper was published in 1974 in the Communications of the ACM. Its appearance brought a flurry of requests for copies of the software. This put AT&T in a bind. In 1956, AT&T had agreed to a U.S government consent decree that prevented the company from selling products not directly related to telephones and telecommunications, in return for its legal monopoly status in running the country’s long-distance phone service. So Unix could not be sold as a product. Instead, AT&T released the Unix source code under license to anyone who asked, charging only a nominal fee. The critical wrinkle here was that the consent decree prevented AT&T from supporting Unix. 
Indeed, for many years Bell Labs researchers proudly displayed their Unix policy at conferences with a slide that read, “No advertising, no support, no bug fixes, payment in advance.” With no other channels of support available to them, early Unix adopters banded together for mutual assistance, forming a loose network of user groups all over the world. They had the source code, which helped. And they didn’t view Unix as a standard software product, because nobody seemed to be looking after it. So these early Unix users themselves set about fixing bugs, writing new tools, and generally improving the system as they saw fit. The Usenix user group acted as a clearinghouse for the exchange of Unix software in the United States. People could send in magnetic tapes with new software or fixes to the system and get back tapes with the software and fixes that Usenix had received from others. In Australia, the University of New South Wales and the University of Sydney produced a more robust version of Unix, the Australian Unix Share Accounting Method, which could cope with larger numbers of concurrent users and offered better performance. By the mid-1970s, the environment of sharing that had sprung up around Unix resembled the open-source movement so prevalent today. Users far and wide were enthusiastically enhancing the system, and many of their improvements were being fed back to Bell Labs for incorporation in future releases. But as Unix became more popular, AT&T’s lawyers began looking harder at what various licensees were doing with their systems. One person who caught their eye was John Lions, a computer scientist then teaching at the University of New South Wales, in Australia. In 1977, he published what was probably the most famous computing book of the time, A Commentary on the Unix Operating System, which contained an annotated listing of the central source code for Unix. Unix’s licensing conditions allowed for the exchange of source code, and initially, Lions’s book was sold to licensees. But by 1979, AT&T’s lawyers had clamped down on the book’s distribution and use in academic classes. The antiauthoritarian Unix community reacted as you might expect, and samizdat copies of the book spread like wildfire. Many of us have nearly unreadable nth-generation photocopies of the original book. End runs around AT&T’s lawyers indeed became the norm—even at Bell Labs. For example, between the release of the sixth edition of Unix in 1975 and the seventh edition in 1979, Thompson collected dozens of important bug fixes to the system, coming both from within and outside of Bell Labs. He wanted these to filter out to the existing Unix user base, but the company’s lawyers felt that this would constitute a form of support and balked at their release. Nevertheless, those bug fixes soon became widely distributed through unofficial channels. For instance, Lou Katz, the founding president of Usenix, received a phone call one day telling him that if he went down to a certain spot on Mountain Avenue (where Bell Labs was located) at 2 p.m., he would find something of interest. Sure enough, Katz found a magnetic tape with the bug fixes, which were rapidly in the hands of countless users. By the end of the 1970s, Unix, which had started a decade earlier as a reaction against the loss of a comfortable programming environment, was growing like a weed throughout academia and the IT industry. Unix would flower in the early 1980s before reaching the height of its popularity in the early 1990s. 
For many reasons, Unix has since given way to other commercial and noncommercial systems. But its legacy, that of an elegant, well-designed, comfortable environment for software development, lives on. In recognition of their accomplishment, Thompson and Ritchie were given the Japan Prize earlier this year, adding to a collection of honors that includes the United States’ National Medal of Technology and Innovation and the Association of Computing Machinery’s Turing Award. Many other, often very personal, tributes to Ritchie and his enormous influence on computing were widely shared after his death this past October. Unix is indeed one of the most influential operating systems ever invented. Its direct descendants now number in the hundreds. On one side of the family tree are various versions of Unix proper, which began to be commercialized in the 1980s after the Bell System monopoly was broken up, freeing AT&T from the stipulations of the 1956 consent decree. On the other side are various Unix-like operating systems derived from the version of Unix developed at the University of California, Berkeley, including the one Apple uses today on its computers, OS X. I say “Unix-like” because the developers of the Berkeley Software Distribution (BSD) Unix on which these systems were based worked hard to remove all the original AT&T code so that their software and its descendants would be freely distributable. The effectiveness of those efforts were, however, called into question when the AT&T subsidiary Unix System Laboratories filed suit against Berkeley Software Design and the Regents of the University of California in 1992 over intellectual property rights to this software. The university in turn filed a counterclaim against AT&T for breaches to the license it provided AT&T for the use of code developed at Berkeley. The ensuing legal quagmire slowed the development of free Unix-like clones, including 386BSD, which was designed for the Intel 386 chip, the CPU then found in many IBM PCs. Had this operating system been available at the time, Linus Torvalds says he probably wouldn’t have created Linux, an open-source Unix-like operating system he developed from scratch for PCs in the early 1990s. Linux has carried the Unix baton forward into the 21st century, powering a wide range of digital gadgets including wireless routers, televisions, desktop PCs, and Android smartphones. It even runs some supercomputers. Although AT&T quickly settled its legal disputes with Berkeley Software Design and the University of California, legal wrangling over intellectual property claims to various parts of Unix and Linux have continued over the years, often involving byzantine corporate relations. By 2004, no fewer than five major lawsuits had been filed. Just this past August, a software company called the TSG Group (formerly known as the SCO Group), lost a bid in court to claim ownership of Unix copyrights that Novell had acquired when it purchased the Unix System Laboratories from AT&T in 1993. As a programmer and Unix historian, I can’t help but find all this legal sparring a bit sad. From the very start, the authors and users of Unix worked as best they could to build and share, even if that meant defying authority. That outpouring of selflessness stands in sharp contrast to the greed that has driven subsequent legal battles over the ownership of Unix. The world of computer hardware and software moves forward startlingly fast. For IT professionals, the rapid pace of change is typically a wonderful thing. 
But it makes us susceptible to the loss of our own history, including important lessons from the past. To address this issue in a small way, in 1995 I started a mailing list of old-time Unix aficionados. That effort morphed into the Unix Heritage Society. Our goal is not only to save the history of Unix but also to collect and curate these old systems and, where possible, bring them back to life. With help from many talented members of this society, I was able to restore much of the old Unix software to working order, including Ritchie's first C compiler from 1972 and the first Unix system to be written in C, dating from 1973. One holy grail that eluded us for a long time was the first edition of Unix in any form, electronic or otherwise. Then, in 2006, Al Kossow from the Computer History Museum, in Mountain View, Calif., unearthed a printed study of Unix dated 1972, which not only covered the internal workings of Unix but also included a complete assembly listing of the kernel, the main component of this operating system. This was an amazing find—like discovering an old Ford Model T collecting dust in a corner of a barn. But we didn't just want to admire the chrome work from afar. We wanted to see the thing run again. In 2008, Tim Newsham, an independent programmer in Hawaii, and I assembled a team of like-minded Unix enthusiasts and set out to bring this ancient system back from the dead. The work was technically arduous and often frustrating, but in the end, we had a copy of the first edition of Unix running on an emulated PDP-11/20. We sent out messages announcing our success to all those we thought would be interested. Thompson, always succinct, simply replied, "Amazing." Indeed, his brainchild was amazing, and I've been happy to do what I can to make it, and the story behind it, better known. Digital Ocean http://do.co/bsdnow ###FreeBSD jails with a single public IP address Jails in FreeBSD provide a simple yet flexible way to set up a proper server layout. In most setups the actual server only acts as the host system for the jails, while the applications themselves run within those independent containers. Traditionally every jail has its own IP so the user can address the individual services. But if you're still using IPv4 this might get you in trouble, as most hosters don't offer more than one single public IP address per server. Create the internal network In this case NAT ("Network Address Translation") is a good way to expose services in different jails using the same IP address. First, let's create an internal network ("NAT network") at 192.168.0.0/24. You could generally use any private IPv4 address space as specified in RFC 1918. Here's an overview: https://en.wikipedia.org/wiki/Private_network. Using pf, FreeBSD's firewall, we will map requests on different ports of the same public IP address to our individual jails as well as provide network access to the jails themselves. First let's check which network devices are available. In my case there's em0, which provides connectivity to the internet, and lo0, the local loopback device:

```
	options=209b [...]
	inet 172.31.1.100 netmask 0xffffff00 broadcast 172.31.1.255
	nd6 options=23
	media: Ethernet autoselect (1000baseT )
	status: active
lo0: flags=8049 metric 0 mtu 16384
	options=600003
	inet6 ::1 prefixlen 128
	inet6 fe80::1%lo0 prefixlen 64 scopeid 0x2
	inet 127.0.0.1 netmask 0xff000000
	nd6 options=21
```

> For our internal network, we create a cloned loopback device called lo1.
Therefore we need to customize the /etc/rc.conf file, adding the following two lines:

```
cloned_interfaces="lo1"
ipv4_addrs_lo1="192.168.0.1-9/29"
```

> This defines a /29 network, offering IP addresses for a maximum of 6 jails:

```
ipcalc 192.168.0.1/29
Address:   192.168.0.1          11000000.10101000.00000000.00000 001
Netmask:   255.255.255.248 = 29 11111111.11111111.11111111.11111 000
Wildcard:  0.0.0.7              00000000.00000000.00000000.00000 111
=>
Network:   192.168.0.0/29       11000000.10101000.00000000.00000 000
HostMin:   192.168.0.1          11000000.10101000.00000000.00000 001
HostMax:   192.168.0.6          11000000.10101000.00000000.00000 110
Broadcast: 192.168.0.7          11000000.10101000.00000000.00000 111
Hosts/Net: 6                    Class C, Private Internet
```

> Then we need to restart the network. Please be aware of currently active SSH sessions as they might be dropped during restart. It's a good moment to ensure you have KVM access to that server ;-) service netif restart > After reconnecting, our newly created loopback device is active:

```
lo1: flags=8049 metric 0 mtu 16384
	options=600003
	inet 192.168.0.1 netmask 0xfffffff8
	inet 192.168.0.2 netmask 0xffffffff
	inet 192.168.0.3 netmask 0xffffffff
	inet 192.168.0.4 netmask 0xffffffff
	inet 192.168.0.5 netmask 0xffffffff
	inet 192.168.0.6 netmask 0xffffffff
	inet 192.168.0.7 netmask 0xffffffff
	inet 192.168.0.8 netmask 0xffffffff
	inet 192.168.0.9 netmask 0xffffffff
	nd6 options=29
```

Setting up pf > pf is part of the FreeBSD base system, so we only have to configure and enable it. By this moment you should already have a clue of which services you want to expose. If this is not the case, just fix that file later on. In my example configuration, I have a jail running a webserver and another jail running a mailserver:

```
# Public IP address
IP_PUB="1.2.3.4"

# Packet normalization
scrub in all

# Allow outbound connections from within the jails
nat on em0 from lo1:network to any -> (em0)

# webserver jail at 192.168.0.2
rdr on em0 proto tcp from any to $IP_PUB port 443 -> 192.168.0.2
# just an example in case you want to redirect to another port within your jail
rdr on em0 proto tcp from any to $IP_PUB port 80 -> 192.168.0.2 port 8080

# mailserver jail at 192.168.0.3
rdr on em0 proto tcp from any to $IP_PUB port 25 -> 192.168.0.3
rdr on em0 proto tcp from any to $IP_PUB port 587 -> 192.168.0.3
rdr on em0 proto tcp from any to $IP_PUB port 143 -> 192.168.0.3
rdr on em0 proto tcp from any to $IP_PUB port 993 -> 192.168.0.3
```

> Now just enable pf like this (which is the equivalent of adding pf_enable=YES to /etc/rc.conf): sysrc pf_enable="YES" > and start it: service pf start Install ezjail > Ezjail is a collection of scripts by erdgeist that allows you to easily manage your jails. pkg install ezjail > As an alternative, you could install ezjail from the ports tree. Now we need to set up the basejail which contains the shared base system for our jails. In fact, every jail that you create will use that basejail to symlink directories related to the base system like /bin and /sbin.
This can be accomplished by running ezjail-admin install > In the next step, we’ll copy the /etc/resolv.conf file from our host to the newjail, which is the template for newly created jails (the parts that are not provided by basejail), to ensure that domain resolution will work properly within our jails later on: cp /etc/resolv.conf /usr/jails/newjail/etc/ > Last but not least, we enable ezjail and start it: sysrc ezjail_enable="YES" service ezjail start Create a jail > Creating a jail is as easy as it could probably be: ezjail-admin create webserver 192.168.0.2 ezjail-admin start webserver > Now you can access your jail using: ezjail-admin console webserver > Each jail contains a vanilla FreeBSD installation. Deploy services > Now you can spin up as many jails as you want to set up your services like web, mail or file shares. You should take care not to enable sshd within your jails, because that would cause problems with the service’s IP bindings. But this is not a problem, just SSH to the host and enter your jail using ezjail-admin console. EuroBSDcon 2018 Talks & Schedule (https://2018.eurobsdcon.org/talks-schedule/) News Roundup OpenBSD on an iBook G4 (https://bobstechsite.com/openbsd-on-an-ibook-g4/) > I've mentioned on social media and on the BTS podcast a few times that I wanted to try installing OpenBSD onto an old "snow white" iBook G4 I acquired last summer to see if I could make it a useful machine again in the year 2018. This particular eBay purchase came with a 14" 1024x768 TFT screen, 1.07GHz PowerPC G4 processor, 1.5GB RAM, 100GB of HDD space and an ATI Radeon 9200 graphics card with 32 MB of SDRAM. The optical drive, ethernet port, battery & USB slots are also fully-functional. The only thing that doesn't work is the CMOS battery, but that's not unexpected for a device that was originally released in 2004. Initial experiments > This iBook originally arrived at my door running Apple Mac OSX Leopard and came with the original install disk, the iLife & iWork suites for 2008, various instruction manuals, a working power cable and a spare keyboard. As you'll see in the pictures I took for this post the characters on the buttons have started to wear away from 14 years of intensive use, but the replacement needs a very good clean before I decide to swap it in! > After spending some time exploring the last version of OSX to support the IBM PowerPC processor architecture I tried to see if the hardware was capable of modern computing with Linux. Something I knew ahead of trying this was that the WiFi adapter was unlikely to work because it's a highly proprietary component designed by Apple to work specifically with OSX and nothing else, but I figured I could probably use a wireless USB dongle later to get around this limitation. > Unfortunately I found that no recent versions of mainstream Linux distributions would boot off this machine. Debian has dropped support 32-bit PowerPC architectures and the PowerPC variants of Ubuntu 16.04 LTS (vanilla, MATE and Lubuntu) wouldn't even boot the installer! The only distribution I could reliably install on the hardware was Lubuntu 14.04 LTS. > Unfortunately I'm not the biggest fan of the LXDE desktop for regular work and a lot of ported applications were old and broken because it clearly wasn't being maintained by people that use the hardware anymore. Ubuntu 14.04 is also approaching the end of its support life in early 2019, so this limited solution also has a limited shelf-life. 
Over to BSD > I discussed this problem with a few people on Mastodon and it was pointed out to me that OSX is built on the Darwin kernel, which happens to be a variant of BSD. NetBSD and OpenBSD fans in particular convinced me that their communities still saw the value of supporting these old pieces of kit and that I should give BSD a try. > So yesterday evening I finally downloaded the "macppc" version of OpenBSD 6.3 with no idea what to expect. I hoped for the best but feared the worst because my last experience with this operating system was trying out PC-BSD in 2008 and discovering with disappointment that it didn't support any of the hardware on my Toshiba laptop. > When I initially booted OpenBSD I was a little surprised to find the login screen provided no visual feedback when I typed in my password, but I can understand the security reasons for doing that. The initial desktop environment that was loaded was very basic. All I could see was a console output window, a terminal and a desktop switcher in the X11 environment the system had loaded. > After a little Googling I found this blog post had some fantastic instructions to follow for the post-installation steps: https://sohcahtoa.org.uk/openbsd.html. I did have to adjust them slightly though because my iBook only has 1.5GB RAM and not every package that page suggests is available on macppc by default. You can see a full list here: https://ftp.openbsd.org/pub/OpenBSD/6.3/packages/powerpc/. Final thoughts > I was really impressed with the performance of OpenBSD's "macppc" port. It boots much faster than OSX Leopard on the same hardware and unlike Lubuntu 14.04 it doesn't randomly hang for no reason or crash if you launch something demanding like the GIMP. > I was pleased to see that the command line tools I'm used to using on Linux have been ported across too. OpenBSD also had no issues with me performing basic desktop tasks on XFCE like browsing the web with NetSurf, playing audio files with VLC and editing images with the GIMP. Limited gaming is also theoretically possible if you're willing to build them (or an emulator) from source with SDL support. > If I wanted to use this system for heavy duty work then I'd probably be inclined to run key applications like LibreOffice on a Raspberry Pi and then connect my iBook G4 to those using VNC or an SSH connection with X11 forwarding. BSD is UNIX after all, so using my ancient laptop as a dumb terminal should work reasonably well. > In summary I was impressed with OpenBSD and its ability to breathe new life into this old Apple Mac. I'm genuinely excited about the idea of trying BSD with other devices on my network such as an old Asus Eee PC 900 netbook and at least one of the many Raspberry Pi devices I use. Whether I go the whole hog and replace Fedora on my main production laptop though remains to be seen! The template user with PAM and login(1) (http://oshogbo.vexillium.org/blog/48) > When you build a new service (or an appliance) you need your users to be able to configure it from the command line. To accomplish this you can create system accounts for all registered users in your service and assign them a special login shell which provides such limited functionality. This can be painful if you have a dynamic user database. > Another challenge is authentication via remote services such as RADIUS. How can we implement services when we authenticate through it and log into it as a different user? 
Furthermore, imagine a scenario in which RADIUS decides which account we have the right to access by sending an additional attribute. > To address these two problems we can use a "template" user. Any of the PAM modules can set the value of the PAM_USER item. The value of this item will be used to determine which account we want to log in to. Only the "template" user must exist in the local password database, but the credential check can be omitted by the module. > This functionality exists in the login(1) used by FreeBSD, HardenedBSD, DragonFlyBSD and illumos. The functionality doesn't exist in the login(1) used in NetBSD, and OpenBSD doesn't support PAM modules at all. It is also noteworthy that such functionality used to exist in OpenSSH, but it was removed and declared a security vulnerability (CVE-2015-6563). I can see how some people may have seen it that way; that's why I recommend reading this article from an OpenPAM author and a FreeBSD security officer at the time. > Knowing the background, let's take a look at an example.

```c
PAM_EXTERN int
pam_sm_authenticate(pam_handle_t *pamh, int flags __unused,
    int argc __unused, const char *argv[] __unused)
{
	const char *user, *password;
	int err;

	/* Ask PAM for the user name and the secret typed at the prompt. */
	err = pam_get_user(pamh, &user, NULL);
	if (err != PAM_SUCCESS)
		return (err);
	err = pam_get_authtok(pamh, PAM_AUTHTOK, &password, NULL);
	if (err == PAM_CONV_ERR)
		return (err);
	if (err != PAM_SUCCESS)
		return (PAM_AUTH_ERR);

	/* Check the credentials against the external user database. */
	err = authenticate(user, password);
	if (err != PAM_SUCCESS)
		return (err);

	/* Switch the session to the local "template" account. */
	return (pam_set_item(pamh, PAM_USER, "template"));
}
```

In the listing above we have an example of a PAM module. The pam_get_user(3) function provides a username. The pam_get_authtok(3) function shows us a secret given by the user. Both functions allow us to give an optional prompt which should be shown to the user. The authenticate function is our crafted function which authenticates the user. In our first scenario we wanted to keep all users in an external database. If authentication is successful we then switch to a template user which has a shell set up for a script allowing us to configure the machine. In our second scenario the authenticate function authenticates the user in RADIUS. Another step is to add our PAM module to the /etc/pam.d/system or to the /etc/pam.d/login configuration:

```
auth sufficient pam_template.so nowarn allow_local
```

Unfortunately the description of all these options goes beyond this article - if you would like to know more about them you can find them in the PAM manual. The last thing we need to do is to add our template user to the system, which you can do with the adduser(8) command or by simply modifying the /etc/master.passwd file and using the pwd_mkdb(8) program:

```
$ tail -n 1 /etc/master.passwd
template:*:1000:1000::0:0:User &:/:/usr/local/bin/template_sh
$ sudo pwd_mkdb /etc/master.passwd
```

As you can see, the template user can be locked and we still can use it in our PAM module (the * character after login). I would like to thank Dag-Erling Smørgrav for pointing this functionality out to me when I was looking for it some time ago. iXsystems iXsystems @ VMWorld ###ZFS file server What is the need? At work, we run a compute cluster that uses an Isilon cluster as primary NAS storage. Excluding snapshots, we have about 200TB of research data, some of it in compressed formats and some not. We needed an offsite backup file server that would constantly mirror our primary NAS and serve as a quick recovery source in case of a data loss in the primary NAS.
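The write-up doesn't say how the mirroring itself is performed. A minimal sketch of one common approach, assuming an NFS export on the Isilon side and rsync plus ZFS snapshots on the backup side (the host, export, mountpoint, and dataset names are all hypothetical):

```sh
#!/bin/sh
# Pull-style mirror sketch: mount the Isilon NFS export read-only, rsync it
# into a local ZFS dataset, then take a dated snapshot for point-in-time
# history. All names here are hypothetical; the write-up does not specify
# the sync mechanism.
set -e

SRC_EXPORT="primary-nas:/ifs/research"   # hypothetical Isilon NFS export
MNT="/mnt/primary"                       # temporary read-only mount point
DST_DATASET="backup/research"            # hypothetical ZFS dataset
DST_PATH="/backup/research"

mount -t nfs -o ro "$SRC_EXPORT" "$MNT"
rsync -aH --delete --numeric-ids "$MNT"/ "$DST_PATH"/
umount "$MNT"

# One snapshot per run; old snapshots can then be pruned on a best-effort basis.
zfs snapshot "${DST_DATASET}@$(date +%Y-%m-%d)"
```

Run from cron, something like this would give the "constantly mirror" behaviour described above, with ZFS snapshots supplying the long retention window; the filesystem choice itself is discussed next.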
iXsystems - iXsystems @ VMWorld ###ZFS file server What is the need? At work, we run a compute cluster that uses an Isilon cluster as primary NAS storage. Excluding snapshots, we have about 200TB of research data, some of it in compressed formats and some not. We needed an offsite backup file server that would constantly mirror our primary NAS and serve as a quick recovery source in case of data loss in the primary NAS. This offsite file server would be passive - it will never face the wrath of the primary cluster workload. In addition to the role of a passive backup server, this solution would take on some passive report generation workloads as an ideal way of offloading some work from the primary NAS. The passive work is read-only. The backup server would keep snapshots on a best-effort basis dating back 10 years; the data on this backup server would also be archived to tape periodically. A simple guidance of priorities: Data integrity > Cost of solution > Storage capacity > Performance. Why not enterprise NAS? NetApp FAS or EMC Isilon or the like? We decided that enterprise grade NAS like NetApp FAS or EMC Isilon are prohibitively expensive and overkill for our needs. An open source and cheaper alternative to an enterprise grade filesystem with the level of durability we expect turned out to be ZFS. We were already spoiled by the snapshots provided by NetApp's clever copy-on-write filesystem (WAFL). ZFS providing snapshots in an almost identical way was a big influence on the choice. This is also why we did not consider just a CentOS box with the default XFS filesystem. FreeBSD vs Debian for ZFS This is a backup server, a long-term solution. Stability and reliability are key requirements. ZFS on Linux may be popular at this time, but there is a lot of churn around its development, which means there is a higher probability of bugs like this occurring. We're not looking for cutting edge features here. Perhaps Linux will be considered in the future. FreeBSD + ZFS We already utilize FreeBSD and OpenBSD for infrastructure services and we have nothing but praise for the stability that the BSDs have provided us. We'd gladly use FreeBSD and OpenBSD wherever possible. Okay, ZFS, but why not FreeNAS? IMHO, FreeNAS provides an integrated GUI management tool on top of FreeBSD for a novice user to set up and configure FreeBSD, ZFS, jails and many other features. But this user-facing abstraction adds an extra layer of complexity to maintain that is just not worth it in simpler use cases like ours. For someone who appreciates the command-line interface and understands FreeBSD well enough to administer it, plain FreeBSD + ZFS is simpler and more robust than FreeNAS. Specifications: Lenovo SR630 rack server; 2 x Intel Xeon Silver 4110 CPUs; 768 GB of DDR4 ECC 2666 MHz RAM; 4-port SAS card configured in passthrough mode (JBOD); Intel network card with 10 Gb SFP+ ports; 128 GB M.2 SSD for use as the boot drive; 2 x HGST 4U60 JBODs; 120 (2 x 60) x 10TB SAS disks. ###Reflection on one-year usage of OpenBSD I have used OpenBSD for more than one year, and it is time to give a summary of the experience: (1) What do I get from OpenBSD? a) A good UNIX tutorial. When I am curious about some UNIX command's implementation, I will refer to the OpenBSD source code, and I actually gain something every time. E.g., I refreshed my socket programming skills from nc, and learned how to process files efficiently from cat. b) A better test bed. Although my work focuses on developing programs on Linux, I will try to compile and run applications on OpenBSD if possible. One reason is that OpenBSD usually gives more helpful warnings, e.g. hints like this: ...... warning: sprintf() is often misused, please use snprintf() ...... Or you can refer to this post which I wrote before.
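As a small illustration of the warning quoted above (my own example, not from the post): an unbounded sprintf() call can write past the end of its destination buffer, while snprintf() caps the write at the buffer size.

```
#include <stdio.h>

int
main(void)
{
    char buf[16];
    const char *name = "a-user-supplied-string-longer-than-the-buffer";

    /* Risky: sprintf(buf, "hello %s", name) would happily overflow buf. */

    /* Safe: at most sizeof(buf) - 1 characters plus the NUL are written. */
    snprintf(buf, sizeof(buf), "hello %s", name);
    puts(buf);
    return (0);
}
```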
The other reason is that a program which runs well on Linux may sometimes crash on OpenBSD, so OpenBSD can help you find hidden bugs. c) Some handy tools. E.g. I find tcpbench useful, so I ported it to Linux for my own usage (the project is here). (2) What do I give back to OpenBSD? a) Patches. Although most of them are trivial modifications, they are still my contributions. b) Blog posts sharing my experience of using OpenBSD. c) Programs developed for OpenBSD/BSD: lscpu and free. d) Programs ported to OpenBSD: e.g., I find google/benchmark a nifty tool, but it lacked OpenBSD support, so I submitted a PR and it was accepted. You can use google/benchmark on OpenBSD now. Generally speaking, the time invested in OpenBSD is rewarding. If you are still hesitating, why not give it a shot? ##Beastie Bits BSD Users Stockholm Meetup BSDCan 2018 Playlist OPNsense 18.7 released Testing TrueOS (FreeBSD derivative) on real hardware ThinkPad T410 Kernel Hacker Wanted! Replace a pair of 8-bit writes to VGA memory with a single 16-bit write Reduce taskq and context-switch cost of zio pipe Proposed FreeBSD Memory Management change, expected to improve ZFS ARC interactions Tarsnap ##Feedback/Questions Anian_Z - Question Robert - Pool question Lain - Congratulations Thomas - L2arc Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv
Technology changes; it's a fact of life, and sometimes making a multi-year commitment can be a difficult decision. The Dell EMC Future-Proof Storage Loyalty Program gives you additional peace of mind with guaranteed satisfaction and investment protection for those future technology changes. The program covers the Dell EMC Storage Portfolio including: VMAX All-Flash, XtremIO X2, SC Series, Dell EMC Unity, Data Domain, Integrated Data Protection Appliance (IDPA), Isilon and the Elastic Cloud Storage (ECS) appliance. Dell EMC Storage and Data Protection offers unbeatable value with a modern, efficient and feature-rich product portfolio at no additional cost to you with the purchase of a support agreement. Brian Henderson (@BHendu), Storage Portfolio Marketing Director, gives us the details on the 3-Year Satisfaction Guarantee, Hardware Investment Protection and Predictable Support Pricing, along with the 4:1 All-Flash Storage Efficiency Guarantee, Never-Worry Migration, All-inclusive Software and Built-In Virtustream Storage Cloud. www.dellemc.com/futureproof Get Dell EMC The Source app in the Apple App Store or Google Play, and Subscribe to the podcast: iTunes, Stitcher Radio or Google Play. Dell EMC The Source Podcast is hosted by Sam Marraccini (@SamMarraccini)
Season 2 kicks off with James McCormick joining us to talk Scale-Out NAS, Unstructured Data, Isilon, and ECS.
We cover an interview about Unix Architecture Evolution, another vBSDcon trip report, how to teach an old Unix about backspace, new NUMA support coming to FreeBSD, and stack pointer checking in OpenBSD. This episode was brought to you by Headlines Unix Architecture Evolution from the 1970 PDP-7 to the 2017 FreeBSD (https://fosdem.org/2018/interviews/diomidis-spinellis/) Q: Could you briefly introduce yourself? I'm a professor of software engineering, a programmer at heart, and a technology author. Currently I'm also the editor in chief of the IEEE Software magazine. I recently published the book Effective Debugging, where I detail 66 ways to debug software and systems. Q: What will your talk be about, exactly? I will describe how the architecture of the Unix operating system evolved over the past half century, starting from an unnamed system written in PDP-7 assembly language and ending with a modern FreeBSD system. My talk is based, first, on a GitHub repository where I tried to record the system's history from 1970 until today and, second, on the evolution of documented facilities (user commands, system calls, library functions) across revisions. I will thus present the early system's defining architectural features (layering, system calls, devices as files, an interpreter, and process management) and the important ones that followed in subsequent releases: the tree directory structure, user contributed code, I/O redirection, the shell as a user program, groups, pipes, scripting, and little languages. Q: Why this topic? Unix stands out as a major engineering breakthrough due to its exemplary design, its numerous technical contributions, its impact, its development model, and its widespread use. Furthermore, the design of the Unix programming environment has been characterized as one offering unusual simplicity, power, and elegance. Consequently, there are many lessons that we can learn by studying the evolution of the Unix architecture, which we can apply to the design of new systems. I often see modern systems that suffer from a bloat of architectural features and a lack of clear form on which functionality can be built. I believe that many of the modern Unix architecture's defining features are excellent examples of what we should strive toward as system architects. Q: What do you hope to accomplish by giving this talk? What do you expect? I'd like FOSDEM attendees to leave the talk with their minds full of architectural features of timeless quality. I want them to realize that architectural elegance isn't derived by piling up design patterns and does not need to be expensive in terms of resources. Rather, beautiful architecture can be achieved on an extremely modest scale. Furthermore, I want attendees to appreciate the importance of adopting flexible conventions rather than rigid enforcement mechanisms. Finally, I want to demonstrate through examples that the open source culture was part of Unix from its earliest days. Q: What are the most significant milestones in the development of Unix? The architectural development of Unix follows a path of continuous evolution, albeit at a slowing pace, so I don't see particular architectural milestones there. I would however define as significant milestones two key changes in the way Unix was developed. The first occurred in the late 1970s when significant activity shifted from a closely-knit team of researchers at AT&T Bell Labs to the Computer Science Research Group at the University of California at Berkeley.
This opened the system to academic contributions and growth through competitive research funding. The second took place in the late 1980s and the 1990s when Berkeley open-sourced the code it had developed (by that time a large percentage of the system) and enthusiasts built on it to create complete open source operating system distributions: 386BSD, and then FreeBSD, NetBSD, OpenBSD, and others. Q: In which areas has the development of Unix stalled? The data I will show demonstrate that there were in the past some long periods where the number of C library functions and system calls remained mostly stable. Nowadays there is significant growth in the number of all documented facilities with the exception of file formats. I'm looking forward to a discussion regarding the meaning of these growth patterns in the Q&A session after the talk. Q: What are the core features that still link the 1970 PDP-7 system to the latest FreeBSD 11.1 release, almost half a century apart? Over the past half-century the Unix system has grown by four orders of magnitude from a few thousand lines of code to many millions. Nevertheless, looking at a 1970s architecture diagram and a current one reveals that the initial architectural blocks are still with us today. Furthermore, most system calls, user programs, and C library functions of that era have survived until today with essentially similar functionality. I've even found in modern FreeBSD some lines of code that have survived unchanged for 40 years. Q: Can we still add innovative changes to operating systems like FreeBSD without breaking the 'Unix philosophy'? Will there be a moment where FreeBSD isn't recognizable anymore as a descendant of the 1970 PDP-7 system? There's a saying that "form liberates". So having available a time-tested form for developing operating system functionality allows you to innovate in areas that matter rather than reinventing the wheel. Such concepts include having commands act as a filter, providing manual pages with a consistent structure, supplying build information in the form of a Makefile, installing files in a well-defined directory hierarchy, implementing filesystems with a standardized object-oriented interface, and packaging reusable functions as a library. Within this framework there's ample space for both incremental additions (think of jq, the JSON query command) and radical innovations (consider the Solaris-derived ZFS and dtrace functionality). For this reason I think that BSD and Linux systems will always be recognizable as direct or intellectual descendants of the 1970s Research Unix editions. Q: Have you enjoyed previous FOSDEM editions? Immensely! As an academic I need to attend many scientific conferences and meetings in order to present research results and interact with colleagues. This means too much time spent traveling and away from home, and a limited number of conferences I'm in the end able to attend. Nevertheless, attending FOSDEM is an easy decision due to the world-changing nature of its theme, the breadth of the topics presented, the participants' enthusiasm and energy, as well as the exemplary, very efficient conference organization. Another vBSDCon trip report we just found (https://www.weaponizedawesome.com/blog/?cat=53) We just got tipped about another trip report from vBSDCon, this time from one of the first-time speakers: W.
Dean Freeman Recently I had the honor of co-presenting on the internals of FreeBSD's Kernel RNG with John-Mark Gurney at the 3rd biennial vBSDCon, held in Reston, VA and hosted by Verisign. I've been in and out of the FreeBSD community for about 20 years. As I've mentioned on here before, my first Unix encounter was FreeBSD 2.2.8 when I was in the 7th or 8th grade. However, for all that time I've never managed to get out to any of the cons. I've been to one or two BUG meetings and I've met some folks from IRC before, but nothing like this. A BSD conference is a very different experience than anything else out there. You have to try it; it is the only way to truly understand it. I'd also not had to do a stand-up presentation really since college before this. So, my first BSD con and my first time presenting rolled into one made for an interesting experience. See, he didn't say terrifying. It went very well. You should totally submit a talk for the next conference, even if it is your first. That said, it was an amazing and invigorating experience. I got to meet a few big names in the FreeBSD community, discuss projects, ideas for FreeBSD, etc. I did seem to spend an unusual amount of time talking about FIPS and Common Criteria with folks, but to me that's a good sign and indicative that there is interest in working to close gaps between FreeBSD and the current requirements so that we can start getting FreeBSD and more BSD-based products into the government and start whittling away the domination of Linux (especially since Oracle has cut Solaris, SPARC and the ZFS storage appliance business units). There is nothing that can match the high bandwidth interchange of ideas in person. The internet has made all kinds of communication possible, and we use it all the time, but every once in a while, getting together in person is hugely valuable. Dean then went on to list some of the talks he found most valuable, including DTrace, Capsicum, bhyve, *BSD security tools, and Paul Vixie's talk about gets(). I think the talk that really had the biggest impact on me, however, was Kyle Kneisl's talk on BSD community dynamics. One of the key questions he asked was whether the things that drew us to the BSD community in the first place would be able to happen today. Obviously, I'm not a 12 or 13 year old kid anymore, but it really got me thinking. That, combined with getting face time with people I'd previously only known as screen names, has recently drawn me back into participating in IRC and rejoining mailing lists (wdf on freenode, be on the lookout!). Then Dean covered some thoughts on his own talk: JMG's and my talk seems to have been well received, with people paying lots of attention. I don't know what a typical number of questions is for one of these things, but on day one there weren't that many questions. We got about 5 during our question time and spent most of the rest of the day fielding questions from interested attendees. Getting a "great talk!" from GNN after coming down from the stage was probably one of the major highlights for me. I remember my first solo talk, and GNN asking the right question in the middle to get me to explain a part of it I had missed. It was very helpful. I think key to the interest in our presentation was that JMG did a good job framing a very complicated topic's importance in terms everyone could understand. It also helped that we got to drop some serious truth bombs.
Final Thoughts: I met a lot of folks in person for the first time, and met some people I'd never known online before. It was a great community and I'm glad I got a chance to expand my network. Verisign were excellent hosts and they took good care of both speakers (covering airfare, rooms, etc.) and also conference attendees at large. The dinners that they hosted were quite good as well. I'm definitely interested in attending vBSDCon again, and now that I've had a taste of meeting IRL with the community on a scale of more than a handful, I have every intention of finally making it to BSDCan next year (I'd said it in 2017, but then moved to Texas for a new job and it wasn't going to be practical). This year for sure, though! Teaching an Almost 40-year Old UNIX about Backspace (https://virtuallyfun.com/2018/01/17/teaching_an_almost_40-year_old_unix_about_backspace/) Introduction I have been messing with the UNIX® operating system, Seventh Edition (commonly known as UNIX V7 or just V7) for a while now. V7 dates from 1979, so it's about 40 years old at this point. The last post was on V7/x86, but since I've run into various issues with it, I moved on to a proper installation of V7 on SIMH. The Internet has some really good resources on installing V7 in SIMH. Thus, I set out on my own journey of installing and using V7 a while ago, but that was remarkably uneventful. One convenience that I have been dearly missing since the switch from V7/x86 is a functioning backspace key. There seem to be multiple different definitions of backspace: BS, as in ASCII character 8 (010, 0x08, also represented as ^H), and DEL, as in ASCII character 127 (0177, 0x7F, also represented as ^?). V7 does not accept either for input by default. Instead, # is used as the erase character and @ is used as the kill character. These defaults have been there since UNIX V1. In fact, they have been "there" since Multics, where they got chosen seemingly arbitrarily. The erase character erases the character before it. The kill character kills (deletes) the whole line. For example, "ba##gooo#d" would be interpreted as "good" and "bad line@good line" would be interpreted as "good line". There is some debate on whether BS or DEL is the correct character for terminals to send when the user presses the backspace key. However, most programs have settled on DEL today. tmux forces DEL, even if the terminal emulator sends BS, so simply changing my terminal to send BS was not an option. The change from the defaults outlined here to today's modern-day defaults occurred between 4.1BSD and 4.2BSD. enf on Hacker News has written a nice overview of the various conventions.
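For contrast with V7's # and @ defaults, here is a minimal sketch (my own illustration, not from the article) of how a modern BSD or Linux system exposes the erase and kill characters through termios, which is where today's DEL-as-backspace convention actually lives:

```
#include <stdio.h>
#include <termios.h>
#include <unistd.h>

int
main(void)
{
    struct termios t;

    /* Read the current settings of the controlling terminal. */
    if (tcgetattr(STDIN_FILENO, &t) == -1) {
        perror("tcgetattr");
        return (1);
    }
    printf("erase = 0%03o, kill = 0%03o\n", t.c_cc[VERASE], t.c_cc[VKILL]);

    /* Emulate the V7 defaults described above: '#' erases, '@' kills. */
    t.c_cc[VERASE] = '#';
    t.c_cc[VKILL] = '@';
    if (tcsetattr(STDIN_FILENO, TCSANOW, &t) == -1) {
        perror("tcsetattr");
        return (1);
    }
    return (0);
}
```

This is roughly what running stty erase '#' kill '@' does from the shell, and it is the knob V7 predates; the linked article is about teaching V7's tty driver to honor DEL in the first place.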
Getting the Diff For future generations as well as myself when I inevitably majorly break this installation of V7, I wanted to make a diff. However, my V7 is installed in SIMH. Not being a very intelligent man, I hadn't kept backup copies of the files I'd changed. Getting data out of this emulated machine is an exercise in frustration. In the end, I printed everything on screen using cat(1) and copied that out. Then I performed a manual diff against the original source code tree because tabs got converted to spaces in the process. Then I applied the changes to clean copies that did have the tabs. And finally, I actually invoked diff(1). Closing Thoughts Figuring all this out took me a few days. Penetrating how the system is put together was surprisingly hard at first, but then the difficulty curve eased up. It was an interesting exercise in some kind of "reverse engineering" and I definitely learned something about tty handling. I was, however, not pleased with using ed(1), even if I do know the basics. vi(1) is a blessing that I did not appreciate enough until recently. Had I also been unable to access recursive grep(1) on my host and scroll through the code, I would've probably given up. Writing UNIX under those kinds of editing conditions is an amazing feat. I have nothing but the greatest respect for software developers of those days. News Roundup New NUMA support coming to FreeBSD CURRENT (https://lists.freebsd.org/pipermail/freebsd-current/2018-January/068145.html) Hello folks, I am working on merging improved NUMA support with policy implemented by cpuset(2) over the next week. This work has been supported by Dell/EMC's Isilon product division and Netflix. You can see some discussion of these changes here: https://reviews.freebsd.org/D13403 https://reviews.freebsd.org/D13289 https://reviews.freebsd.org/D13545 The work has been done in user/jeff/numa if you want to look at svn history or experiment with the branch. It has been tested by Peter Holm on i386 and amd64 and it has been verified to work on arm at various points. We are working towards compatibility with libnuma and linux mbind. These commits will bring in improved support for NUMA in the kernel. There are new domain specific allocation functions available to the kernel for UMA, malloc, kmem, and vm_page*. bus_dmamem consumers will automatically be placed in the correct domain, bringing automatic improvements to some device performance. cpuset will be able to constrain processes, groups of processes, jails, etc. to subsets of the system memory domains, just as it can with sets of cpus. It can set default policy for any of the above. Threads can use cpusets to set policy that specifies a subset of their visible domains. Available policies are first-touch (local in linux terms), round-robin (similar to linux interleave), and preferred. For now, the default is round-robin. You can achieve a fixed domain policy by using round-robin with a bitmask of a single domain. As the scheduler and VM become more sophisticated we may switch the default to first-touch as linux does. Currently these features are enabled with VM_NUMA_ALLOC and MAXMEMDOM. It will eventually be NUMA/MAXMEMDOM to match SMP/MAXCPU. The current NUMA syscalls and VM_NUMA_ALLOC code were 'experimental' and will be deprecated. numactl will continue to be supported although cpuset should be preferred going forward as it supports the full feature set of the new API. Thank you for your patience as I deal with the inevitable fallout of such sweeping changes. If you do have bugs, please file them in bugzilla, or reach out to me directly. I don't always have time to catch up on all of my mailing list mail and regretfully things slip through the cracks when they are not addressed directly to me. Thanks, Jeff Stack pointer checking – OpenBSD (https://marc.info/?l=openbsd-tech&m=151572838911297&w=2) Stefan (stefan@) and I have been working for a few months on this diff, with help from a few others. At every trap and system call, it checks if the stack-pointer is on a page that is marked MAP_STACK. execve() is changed to create such mappings for the process stack. Also, libpthread is taught the new MAP_STACK flag to use with mmap(). There is no corresponding system call which can set MAP_STACK on an existing page, you can only set the flag by mapping new memory into place.
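To make that constraint concrete, here is a minimal sketch (my own, not from the OpenBSD post; flag availability and exact semantics differ between OpenBSD, FreeBSD and Linux) of how a program that wants a custom thread stack has to obtain it from a fresh mmap() with MAP_STACK set:

```
#include <sys/mman.h>
#include <pthread.h>
#include <stdio.h>

#define STACK_SIZE (512 * 1024)

static void *
worker(void *arg)
{
    (void)arg;
    puts("running on an explicitly MAP_STACK'ed stack");
    return (NULL);
}

int
main(void)
{
    pthread_t tid;
    pthread_attr_t attr;
    void *stack;

    /* The stack must come from a new mapping created with MAP_STACK;
     * there is no call to flip the flag on memory mapped earlier. */
    stack = mmap(NULL, STACK_SIZE, PROT_READ | PROT_WRITE,
        MAP_PRIVATE | MAP_ANON | MAP_STACK, -1, 0);
    if (stack == MAP_FAILED) {
        perror("mmap");
        return (1);
    }

    pthread_attr_init(&attr);
    pthread_attr_setstack(&attr, stack, STACK_SIZE);
    pthread_create(&tid, &attr, worker, NULL);
    pthread_join(tid, NULL);
    return (0);
}
```

If the page under the stack pointer was not mapped this way, the new kernel check described above will kill the process at the next system call or trap; the flag can only be introduced by creating a new mapping.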
That is a piece of the security model. The purpose of this change is to thwart stack pivots, which apparently have gained some popularity in JIT ROP attacks. It makes it difficult to place the ROP stack in regular data memory, and then perform a system call from it. Workarounds are cumbersome, increasing the need for far more gadgetry. But also the trap case -- if any memory experiences a demand page fault, the same check will occur and potentially also kill the process. We have experimented a little with performing this check during device interrupts, but there are some locking concerns and performance may then become a concern. It'll be best to gain experience from handling the synchronous trap cases first. chrome and other applications I use run fine! I'm asking for some feedback to discover what ports this breaks; we'd like to know. Those would be ports which try to (unconventionally) create their stacks in malloc()'d memory or inside another data structure. Most of them are probably easily fixed ... Qt 5.9 on FreeBSD (https://euroquis.nl/bobulate/?p=1768) Tobias and Raphael have spent the past month or so hammering on the Qt 5.9 branch, which has (finally!) landed in the official FreeBSD ports tree. This brings FreeBSD back up-to-date with current Qt releases and, more importantly, up-to-date with the Qt release KDE software is increasingly expecting. With Qt 5.9, the Elisa music player works, for instance (where it has run-time errors with Qt 5.7, even if it compiles). The KDE-FreeBSD CI system has had Qt 5.9 for some time already, but that was hand-compiled and jimmied into the system, rather than being a "proper" ports build. The new Qt version uses a new build system, which is one of the things that really slowed us down from a packaging perspective. Some modules have been reshuffled in the process. Some applications depending on Qt internal-private headers have been fixed along the way. The Telegram desktop client continues to be a pain in the butt that way. Following on from Qt 5.9 there has been some work in getting ready for Clang 6 support; in general the KDE and Qt stack is clean and modern C++, so it's more infrastructural tweaks than fixing code. Outside of our silo, I still see lots of wonky C++ code being fixed and plenty of confusion between pointers and integers and strings and chars and .. ugh. Speaking of ugh, I'm still planning to clean up Qt4 on ARM aarch64 for FreeBSD; this boils down to stealing suitable qatomic implementations from Arch Linux. For regular users of Qt applications on FreeBSD, there should be few to no changes required outside the regular upgrade cycle. For KDE Plasma users, note that development of the ports has changed branches; as we get closer to actually landing modern KDE bits, things have been renamed and reshuffled and mulled over so often that the old plasma5 branch wasn't really right anymore. The kde5-import branch is where it's at nowadays, and the instructions are the same: the x11/kde5 metaport will give you all the KDE Frameworks 5, KDE Plasma Desktop and modern KDE Applications you need. Adding IPv6 to an Nginx website on FreeBSD / FreshPorts (https://dan.langille.org/2018/01/13/adding-ipv6-to-an-nginx-website-on-freebsd-freshports/) FreshPorts recently moved to an IPv6-capable server but until today, that capability has not been utilized. There were a number of things I had to configure, but this will not necessarily be an exhaustive list for you to follow. Some steps might be missing, and it might not apply to your situation.
All of this took about 3 hours. We are using: FreeBSD 11.1, Bind 9.9.11, nginx 1.12.2. Fallout: I expect some monitoring fallout from this change. I suspect some of my monitoring assumes IPv4, and now that IPv6 is available, I need to monitor both IP addresses. ZFS on TrueOS: Why We Love OpenZFS (https://www.trueos.org/blog/zfs-trueos-love-openzfs/) TrueOS was the first desktop operating system to fully implement the OpenZFS (Zettabyte File System or ZFS for short) enterprise file system in a stable production environment. To fully understand why we love ZFS, we will look back to the early days of TrueOS (formerly PC-BSD). The development team had been using the UFS file system in TrueOS because of its solid track record with FreeBSD-based computer systems and its ability to check file consistency with the built-in check utility fsck. However, as computing demands increased, problems began to surface. Slow fsck file verification on large file systems, slow replication speeds, and inconsistency in data integrity while using UFS logging / journaling began to hinder users. It quickly became apparent that TrueOS users would need a file system that scales with evolving enterprise storage needs, offers the best data protection, and works just as well on a hobbyist system or desktop computer. Kris Moore, the founder of the TrueOS project, first heard about OpenZFS in 2007 from chatter on the FreeBSD mailing lists. In 2008, the TrueOS development team was thrilled to learn that the FreeBSD Project had ported ZFS. At the time, ZFS was still unproven as a graphical desktop solution, but Kris saw a perfect opportunity to offer ZFS as a cutting-edge file system option in the TrueOS installer, allowing the TrueOS project to act as an indicator of how OpenZFS would fare in real-world production use. The team was blown away by the reception and quality of OpenZFS on FreeBSD-based systems. By its nature, ZFS is a copy-on-write (CoW) file system that won't move a block of data until it both writes the data and verifies its integrity. This is very different from most other file systems in use today. ZFS is able to assure that data stays consistent between writes by automatically comparing write checksums, which mitigates bit rot. ZFS also comes with native RaidZ functionality that allows for enterprise data management and redundancy without the need for expensive traditional RAID cards. ZFS snapshots allow for system configuration backups in a split-second. You read that right. TrueOS can back up or restore snapshots in less than a second using the ZFS file system. Given these advantages, the TrueOS team decided to use ZFS as its exclusive file system starting in 2013, and we haven't looked back since. ZFS offers TrueOS users the stable workstation experience they want, while simultaneously scaling to meet the increasing demands of the enterprise storage market. TrueOS users are frequently commenting on how easy it is to use ZFS snapshots with our built-in snapshot utility. This allows users the freedom to experiment with their system knowing they can restore it in seconds if anything goes wrong. If you haven't had a chance to try ZFS with TrueOS, browse to our download page and make sure to grab a copy of TrueOS. You'll be blown away by the ease of use, data protection functionality, and incredible flexibility of RaidZ.
Beastie Bits Source Code Podcast Interview with Michael W Lucas (https://blather.michaelwlucas.com/archives/3099) Operating System of the Year 2017: NetBSD Third place (https://w3techs.com/blog/entry/web_technologies_of_the_year_2017) OPNsense 18.1-RC1 released (https://opnsense.org/opnsense-18-1-rc1-released/) Personal OpenBSD Wiki Notes (https://balu-wiki.readthedocs.io/en/latest/security/openbsd.html) BSD section can use some contribution (https://guide.freecodecamp.org/bsd-os/) The Third Research Edition Unix Programmer's Manual (now available in PDF) (https://github.com/dspinellis/unix-v3man) Feedback/Questions Alex - my first freebsd bug (http://dpaste.com/3DSV7BC#wrap) John - Suggested Speakers (http://dpaste.com/2QFR4MT#wrap) Todd - Two questions (http://dpaste.com/2FQ450Q#wrap) Matthew - CentOS to FreeBSD (http://dpaste.com/3KA29E0#wrap) Brian - openbsd 6.2 and enlightenment .17 (http://dpaste.com/24DYF1J#wrap) ***
Before racing at the Buckeye Indoor Electronic Cart Track in Columbus, Ohio and after our Dell EMC customer event, I was able to catch up with Scott Phillips. Scott is an Advisory Technology Consultant for the Dell EMC Isilon product division. We talked unstructured file data, access protocols, flash, ease of deployment and management, and seamless expansion. For more details, visit www.dellemc.com/isilon Don’t miss “Dell EMC The Source” app in the App Store. Be sure to subscribe to Dell EMC The Source Podcast on iTunes, Stitcher Radio or Google Play and visit the official blog at thesourceblog.emc.com EMC: The Source Podcast is hosted by Sam Marraccini (@SamMarraccini)
Dell EMC World 2017 included a complete refresh of all our core storage products. I was able to catch up with Jeff Boudreau (@JeffBoudreau3), President of Dell EMC Storage. We talked VMAX 950 All-Flash, XtremIO X2, Dell EMC Unity All-Flash, SC 5020, Isilon, and ECS. High end, mid-range and unstructured storage, EVERYTHING was refreshed and upgraded at Dell EMC World 2017. Get the inside scoop directly from the source, this week on Dell EMC The Source. Don’t miss “Dell EMC The Source” app in the App Store. Be sure to subscribe to Dell EMC The Source Podcast on iTunes, Stitcher Radio or Google Play and visit the official blog at thesourceblog.emc.com EMC: The Source Podcast is hosted by Sam Marraccini (@SamMarraccini)
This week on the show, we've got all sorts of goodies to discuss, starting with vmm, vkernels, Raspberry Pi and much more! Some iX folks are visiting from out of town. This episode was brought to you by Headlines vmm enabled (http://undeadly.org/cgi?action=article&sid=20161012092516&mode=flat&count=15) VMM, the OpenBSD hypervisor, has been imported into -current. It has similar hardware requirements to bhyve: an Intel Nehalem or newer CPU with the hardware virtualization features enabled in the BIOS. AMD support has not been started yet. OpenBSD is the only supported guest. It would be interesting to hear from viewers that have tried it, and hear how it does, and what still needs more work *** vkernels go COW (http://lists.dragonflybsd.org/pipermail/commits/2016-October/624675.html) The DragonflyBSD feature, vkernels, has gained new Copy-On-Write functionality. Disk images can now be mounted RO or RW, but changes will not be written back to the image file. This allows multiple vkernels to share the same disk image. “Note that when the vkernel operates on an image in this mode, modifications will eat up system memory and swap, so the user should be cognizant of the use-case. Still, the flexibility of being able to mount the image R+W should not be underestimated.” This is another feature we'd love to hear from viewers that have tried it out. *** Basic support for the RPI3 has landed in FreeBSD-CURRENT (https://wiki.freebsd.org/arm64/rpi3) The long-awaited bits to allow FreeBSD to boot on the Raspberry Pi 3 have landed. There is still a bit of work to be done, some of it mentioned in Oleksandr's blog post: Raspberry Pi support in HEAD (https://kernelnomicon.org/?p=690) “Raspberry Pi 3 limited support was committed to HEAD. Most of drivers should work with upstream dtb, RNG requires attention because callout mode seems to be broken and there is no IRQ in upstream device tree file. SMP is work in progress. There are some compatibility issue with VCHIQ driver due to some assumptions that are true only for ARM platform. “ This is exciting work. No HDMI support (yet), so if you plan on trying this out make sure you have your USB->Serial adapter cables ready to go. Full instructions to get started with your RPI 3 can be found on the FreeBSD Wiki (https://wiki.freebsd.org/arm64/rpi3) Relatively soon, I imagine there will be a RaspBSD build for the RPI3 to make it easier to get started. Eventually there will be official FreeBSD images as well *** OpenBSD switches softraid crypto from PKCS5 PBKDF2 to bcrypt PBKDF. (https://github.com/openbsd/src/commit/2ba69c71e92471fe05f305bfa35aeac543ebec1f) After the discussion a few weeks ago when a user wrote a tool to brute force their forgotten OpenBSD Full Disk Encryption password (from a password list of possible variations of their password), it was discovered that OpenBSD defaulted to using just 8192 iterations of PKCSv5 for the key derivation function with a SHA1-HMAC. The number of iterations can be manually controlled by the user when creating the softraid volume. By comparison, FreeBSD's GELI full disk encryption used a benchmark to pick a number of iterations that would take more than 2 seconds to complete, generally resulting in a number of iterations over 1 million on most modern hardware.
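To make the benchmarking idea concrete, here is a rough sketch (my own illustration, using LibreSSL/OpenSSL's PKCS5_PBKDF2_HMAC(); the 2-second target, fixed salt and doubling strategy are assumptions, not GELI's or softraid's actual code) of how an installer might pick an iteration count by timing the KDF on the machine at hand:

```
#include <openssl/evp.h>
#include <stdio.h>
#include <string.h>
#include <time.h>

/* Return the wall-clock seconds needed for one derivation at `iter` rounds. */
static double
kdf_seconds(int iter)
{
    unsigned char key[64];
    const unsigned char salt[16] = "fixed-demo-salt";
    const char *pass = "correct horse battery staple";
    struct timespec a, b;

    clock_gettime(CLOCK_MONOTONIC, &a);
    PKCS5_PBKDF2_HMAC(pass, strlen(pass), salt, sizeof(salt), iter,
        EVP_sha512(), sizeof(key), key);
    clock_gettime(CLOCK_MONOTONIC, &b);
    return ((b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9);
}

int
main(void)
{
    int iter = 8192;    /* start at the old OpenBSD default */

    /* Keep doubling until a single derivation costs roughly 2 seconds. */
    while (kdf_seconds(iter) < 2.0)
        iter *= 2;
    printf("chosen iteration count: %d\n", iter);
    return (0);
}
```

Link with -lcrypto. GELI and softraid implement their own KDF code, so this is only meant to show the shape of the "time it, then scale the rounds" loop.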
The algorithm is based on a SHA512-HMAC. However, inefficiency in the implementation of PKCSv5 in GELI resulted in the implementation being 50% slower than some other implementations, meaning the effective security was only about 1 second per attempt, rather than the intended 2 seconds. The improved PKCSv5 implementation is out for review currently. This commit to OpenBSD changes the default key derivation function to be based on bcrypt and a SHA512-HMAC instead. OpenBSD also now uses a benchmark to pick a number of iterations that will take approximately 1 second per attempt. “One weakness of PBKDF2 is that while its number of iterations can be adjusted to make it take an arbitrarily large amount of computing time, it can be implemented with a small circuit and very little RAM, which makes brute-force attacks using application-specific integrated circuits or graphics processing units relatively cheap. The bcrypt key derivation function requires a larger amount of RAM (but still not tunable separately, i. e. fixed for a given amount of CPU time) and is slightly stronger against such attacks, while the more modern scrypt key derivation function can use arbitrarily large amounts of memory and is therefore more resistant to ASIC and GPU attacks.” The upgrade to bcrypt, which has proven to be quite resistant to cracking by GPUs, is a significant enhancement to OpenBSD's encrypted softraid feature *** Interview - Josh Paetzel - email@email (mailto:email@email) / @bsdunix4ever (https://twitter.com/bsdunix4ever) MeetBSD ZFS Panel FreeNAS - graceful network reload Pxeboot *** News Roundup EC2's most dangerous feature (http://www.daemonology.net/blog/2016-10-09-EC2s-most-dangerous-feature.html) Colin Percival, FreeBSD's unofficial EC2 maintainer, has published a blog post about “EC2's most dangerous feature” “As a FreeBSD developer — and someone who writes in C — I believe strongly in the idea of "tools, not policy". If you want to shoot yourself in the foot, I'll help you deliver the bullet to your foot as efficiently and reliably as possible. UNIX has always been built around the idea that systems administrators are better equipped to figure out what they want than the developers of the OS, and it's almost impossible to prevent foot-shooting without also limiting useful functionality. The most powerful tools are inevitably dangerous, and often the best solution is to simply ensure that they come with sufficient warning labels attached; but occasionally I see tools which not only lack important warning labels, but are also designed in a way which makes them far more dangerous than necessary. Such a case is IAM Roles for Amazon EC2.” “A review for readers unfamiliar with this feature: Amazon IAM (Identity and Access Management) is a service which allows for the creation of access credentials which are limited in scope; for example, you can have keys which can read objects from Amazon S3 but cannot write any objects. IAM Roles for EC2 are a mechanism for automatically creating such credentials and distributing them to EC2 instances; you specify a policy and launch an EC2 instance with that Role attached, and magic happens making time-limited credentials available via the EC2 instance metadata. This simplifies the task of creating and distributing credentials and is very convenient; I use it in my FreeBSD AMI Builder AMI, for example.
Despite being convenient, there are two rather scary problems with this feature which severely limit the situations where I'd recommend using it.” “The first problem is one of configuration: The language used to specify IAM Policies is not sufficient to allow for EC2 instances to be properly limited in their powers. For example, suppose you want to allow EC2 instances to create, attach, detach, and delete Elastic Block Store volumes automatically — useful if you want to have filesystems automatically scaling up and down depending on the amount of data which they contain. The obvious way to do this would be to "tag" the volumes belonging to an EC2 instance and provide a Role which can only act on volumes tagged to the instance where the Role was provided; while the second part of this (limiting actions to tagged volumes) seems to be possible, there is no way to require specific API call parameters on all permitted CreateVolume calls, as would be necessary to require that a tag is applied to any new volumes being created by the instance.” “As problematic as the configuration is, a far larger problem with IAM Roles for Amazon EC2 is access control — or, to be more precise, the lack thereof. As I mentioned earlier, IAM Role credentials are exposed to EC2 instances via the EC2 instance metadata system: In other words, they're available from http://169.254.169.254/. (I presume that the "EC2ws" HTTP server which responds is running in another Xen domain on the same physical hardware, but that implementation detail is unimportant.) This makes the credentials easy for programs to obtain... unfortunately, too easy for programs to obtain. UNIX is designed as a multi-user operating system, with multiple users and groups and permission flags and often even more sophisticated ACLs — but there are very few systems which control the ability to make outgoing HTTP requests. We write software which relies on privilege separation to reduce the likelihood that a bug will result in a full system compromise; but if a process which is running as user nobody and chrooted into /var/empty is still able to fetch AWS keys which can read every one of the objects you have stored in S3, do you really have any meaningful privilege separation? To borrow a phrase from Ted Unangst, the way that IAM Roles expose credentials to EC2 instances makes them a very effective exploit mitigation mitigation technique.” “To make it worse, exposing credentials — and other metadata, for that matter — via HTTP is completely unnecessary. EC2 runs on Xen, which already has a perfectly good key-value data store for conveying metadata between the host and guest instances. It would be absolutely trivial for Amazon to place EC2 metadata, including IAM credentials, into XenStore; and almost as trivial for EC2 instances to expose XenStore as a filesystem to which standard UNIX permissions could be applied, providing IAM Role credentials with the full range of access control functionality which UNIX affords to files stored on disk.
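To underline how low the bar for "programs to obtain" really is, here is a minimal sketch (my own, for illustration only; the metadata path shown is the commonly used IAM credentials endpoint and should be treated as an assumption) showing that any process that can open a TCP socket can ask the metadata service for keys:

```
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int
main(void)
{
    /* No privileges needed: just an HTTP GET to the link-local address. */
    const char *req =
        "GET /latest/meta-data/iam/security-credentials/ HTTP/1.0\r\n"
        "Host: 169.254.169.254\r\n\r\n";
    struct sockaddr_in sin;
    char buf[4096];
    ssize_t n;
    int s;

    s = socket(AF_INET, SOCK_STREAM, 0);
    memset(&sin, 0, sizeof(sin));
    sin.sin_family = AF_INET;
    sin.sin_port = htons(80);
    inet_pton(AF_INET, "169.254.169.254", &sin.sin_addr);

    if (connect(s, (struct sockaddr *)&sin, sizeof(sin)) == -1) {
        perror("connect");
        return (1);
    }
    write(s, req, strlen(req));
    /* The response names the role; a second GET on that name returns the keys. */
    while ((n = read(s, buf, sizeof(buf) - 1)) > 0) {
        buf[n] = '\0';
        fputs(buf, stdout);
    }
    close(s);
    return (0);
}
```

Nothing in that snippet requires root, a particular user, or anything beyond the ability to make an outgoing HTTP request, which is exactly the gap Colin is pointing at.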
Of course, there is a lot of code out there which relies on fetching EC2 instance metadata over HTTP, and trivial or not it would still take time to write code for pushing EC2 metadata into XenStore and exposing it via a filesystem inside instances; so even if someone at AWS reads this blog post and immediately says "hey, we should fix this", I'm sure we'll be stuck with the problems in IAM Roles for years to come.” “So consider this a warning label: IAM Roles for EC2 may seem like a gun which you can use to efficiently and reliably shoot yourself in the foot; but in fact it's more like a gun which is difficult to aim and might be fired by someone on the other side of the room snapping his fingers. Handle with care!” *** Open-source storage that doesn't suck? Our man tries to break TrueNAS (http://www.theregister.co.uk/2016/10/18/truenas_review/) The storage reviewer over at TheRegister got their hands on a TrueNAS and gave it a try “Data storage is difficult, and ZFS-based storage doubly so. There's a lot of money to be made if you can do storage right, so it's uncommon to see a storage company with an open-source model deliver storage that doesn't suck.” “To become TrueNAS, FreeNAS's code is feature-frozen and tested rigorously. Bleeding-edge development continues with FreeNAS, and FreeNAS comes with far fewer guarantees than does TrueNAS.” “iXsystems provided a Z20 hybrid storage array. The Z20 is a dual-controller, SAS-based, high-availability, hybrid storage array. The testing unit came with a 2x 10GbE NIC per controller and retails around US$24k. The unit shipped with 10x 300GB 10k RPM magnetic hard drives, an 8GB ZIL SSD and a 200GB L2ARC SSD. 50GiB of RAM was dedicated to the ARC by the system's autotune feature.” The review tests the performance of the TrueNAS, which they found acceptable for spinning rust, but they also tested the HA features While the look of the UI didn't impress them, the functionality and built in help did “The UI contains truly excellent mouseover tooltips that provide detailed information and rationale for almost every setting. An experienced sysadmin will be able to navigate the TrueNAS UI with ease. An experienced storage admin who knows what all the terms mean won't have to refer to a wiki or the more traditional help manual, but the same can't be said for the uninitiated.” “After a lot of testing, I'd trust my data to the TrueNAS. I am convinced that it will ensure the availability of my data to within any reasonable test, and do so as a high availability solution. That's more than I can say for a lot of storage out there.” “iXsystems produce a storage array that is decent enough to entice away some existing users of the likes of EMC, NetApp, Dell or HP. Honestly, that's not something I thought possible going into this review. It's a nice surprise.” *** OpenBSD now officially on GitHub (https://github.com/openbsd) Got a couple of new OpenBSD items to bring to your attention today. First up, for those who didn't know, OpenBSD development has (always?) taken place in CVS, similar to NetBSD and previously FreeBSD. However today, Git fans can rejoice, since there is now an “official” read-only github mirror of their sources for public consumption. Since this is read-only, I will assume (unless told otherwise) that pull-requests and whatnot aren't taken. But this will come in handy for the “git-enabled” among us who need an easier way to checkout OpenBSD sources. There is also not yet a guarantee about the stability of the exporter. 
If you base a fork on the github branch, and something goes wrong with the exporter, the data may be reexported with different hashes, making it difficult to rebase your fork. How to install LibertyBSD or OpenBSD on a libreboot system (https://libreboot.org/docs/bsd/openbsd.html) For the second part of our OpenBSD stories, we have a pretty detailed document posted over at LibreBoot.org with details on how to boot-strap OpenBSD (Or LibertyBSD) using their open-source bios replacement. We've covered blog posts and other tidbits about this process in the past, but this seems to be the definitive version (so far) to reference. Some of the niceties include instructions on getting the USB image formatted not just on OpenBSD, but also FreeBSD, Linux and NetBSD. Instructions on how to boot without full-disk-encryption are provided, with a mention that so far Libreboot + Grub does not support FDE (yet). I would imagine somebody will need to port over the openBSD FDE crypto support to GRUB, as was done with GELI at some point. Lastly some instructions on how to configure grub, and troubleshoot if something goes wrong will help round-out this story. Give it a whirl, let us know if you run into issues. Editorial Aside - Personally I find the libreboot stuff fascinating. It really is one of the last areas that we don't have full control of our systems with open-source. With the growth of EFI, it seems we rely on a closed-source binary / mini-OS of sorts just to boot our Open Source solutions, which needs to be addressed. Hats off to the LibreBoot folks for taking on this important challenge. *** FreeNAS 9.10 – LAGG & VLAN Overview (https://www.youtube.com/watch?v=wqSH_uQSArQ) A video tutorial on FreeNAS's official YouTube Channel Covers the advanced networking features, Link Aggregation and VLANs Covers what the features do, and in the case of LAGG, how each of the modes work and when you might want to use it *** Beastie Bits Remote BSD Developer Position is up for grabs (https://www.cybercoders.com/bsd-developer-remote-job-305206) Isilon is hiring for a FreeBSD Security position (https://twitter.com/jeamland/status/785965716717441024) Google has ported the Networked real-time multi-player BSD game (https://github.com/google/web-bsd-hunt) A bunch of OpenBSD Tips (http://www.vincentdelft.be) The last OpenBSD 6.0 Limited Edition CD has sold (http://www.ebay.com/itm/-/332000602939) Dan spots George Neville-Neil on TV at the Airport (https://twitter.com/DLangille/status/788477000876892162) gnn on CNN (https://www.youtube.com/watch?v=h7zlxgtBA6o) SoloBSD releases v 6.0 built upon OpenBSD (http://solobsd.blogspot.com/2016/10/release-solobsd-60-openbsd-edition.html) Upcoming KnoxBug looks at PacBSD - Oct 25th (http://knoxbug.org/content/2016-10-25) Feedback/Questions Morgan - Ports and Packages (http://pastebin.com/Kr9ykKTu) Mat - ZFS Memory (http://pastebin.com/EwpTpp6D) Thomas - FreeBSD Path Length (http://pastebin.com/HYMPtfjz) Cy - OpenBSD and NetHogs (http://pastebin.com/vGxZHMWE) Lars - Editors (http://pastebin.com/5FMz116T) ***
The rapid growth of unstructured data represents a significant challenge for many enterprises across industries. As the volume and sources of data have expanded dramatically, traditional techniques to store and analyze information have proved to be too expensive and too slow to handle the massive data volumes modern enterprises produce and manage. Dell EMC Isilon is designed to directly address that challenge, taking your data lake strategy to the next level with new scale-out offerings providing breakthrough efficiency, scale and agility from edge to core to cloud. Earlier this week at Dell EMC World, Isilon became the latest member of the Dell EMC All-Flash Portfolio. This week on Dell EMC The Source, Ed Beauvais (@EdBeauvais), Director of Isilon Product Marketing, stops by to talk Isilon and the impact of All-Flash for density (68 PB) and performance (25 Million IOPS). How does that translate to your workloads? Find out this week on Dell EMC The Source. For more visit www.emc.com/isilon Don’t miss “Dell EMC The Source” app in the App Store. Be sure to subscribe to Dell EMC The Source Podcast on iTunes, Stitcher Radio or Google Play and visit the official blog at thesourceblog.emc.com EMC: The Source Podcast is hosted by Sam Marraccini (@SamMarraccini)
On Monday at EMC World, I had the opportunity to sit down with Ed Beauvais, EMC’s Director of Isilon Product Marketing. Ed was gracious enough to agree to be interviewed for the MiniCast. We talked about new features in Isilon … Continue reading →
This week on BSDNow, we get to hear all of Allan's post-EuroBSDCon wrap-up and a great interview with Benno Rice from Isilon. We got to discuss some of the pain of doing major forklift upgrades, and why your business should track -CURRENT. This episode was brought to you by Headlines EuroBSDCon Videos EuroBSDCon has started posting videos of the talks online already. The videos posted online are archives of the live stream, so some of the videos contain multiple talks. Due to a technical complication, some videos only have one channel of audio. EuroBSDCon Talk Schedule (https://2015.eurobsdcon.org/talks-and-schedule/talk-schedule/) Red Room Videos (https://www.youtube.com/channel/UCBPvcqZrNuKZuP1LQhlCp-A) Yellow Room Videos (https://www.youtube.com/channel/UCJk8Kls9LT-Txu-Jhv7csfw) Blue Room Videos (https://www.youtube.com/channel/UC-3DOxIOI5oHXE1H57g3FzQ) Photos of the conference courtesy of Ollivier Robert (https://assets.keltia.net/photos/EuroBSDCon-2015/) *** A series of OpenSMTPd patches fix multiple vulnerabilities (http://undeadly.org/cgi?action=article&sid=20151005200020) Qualys recently published an audit of the OpenSMTPd source code (https://www.qualys.com/2015/10/02/opensmtpd-audit-report.txt) The fixes for these vulnerabilities were released as 5.7.2. After its release, two additional vulnerabilities (http://www.openwall.com/lists/oss-security/2015/10/04/2) were found. One was in the portable version, in newer code that was added after the audit started. All users are strongly encouraged to upgrade to 5.7.3. OpenBSD users should apply the latest errata or upgrade to the newest snapshot *** FreeBSD updates in -CURRENT (https://svnweb.freebsd.org/base?view=revision&revision=288917) Looks like Xen header support has been bumped in FreeBSD from 4.2 -> 4.6. It also enables support for ARM. Update to Clang / LLVM to 3.7.0 (https://lists.freebsd.org/pipermail/freebsd-current/2015-October/057691.html) http://llvm.org/releases/3.7.0/docs/ReleaseNotes.html ZFS gets FRU (field replaceable unit) tracking (https://svnweb.freebsd.org/base?view=revision&revision=287745) OpenCL makes its way into the ports tree (https://svnweb.freebsd.org/ports?view=revision&revision=397198) bhyve has grown UEFI support, plus a CSM module. bhyve can now boot Windows (https://lists.freebsd.org/pipermail/freebsd-virtualization/2015-October/003832.html) Currently there is still only a serial console, so the post includes an unattended install .xml file and instructions on how to repack the ISO.
Once Windows is installed, you can RDP into the machine. bhyve can also now run IllumOS (https://lists.freebsd.org/pipermail/freebsd-virtualization/2015-October/003833.html) *** OpenBSD Initial Support for Broadwell Graphics (http://marc.info/?l=openbsd-cvs&m=144304997800589&w=2) OpenBSD joins DragonFly now with initial support for Broadwell GPUs landing in their development branch. This brings OpenBSD up to Linux 3.14.52 DRM, and Mark Kettenis mentions that it isn't perfect yet, and may cause some issues with older hardware, although no major regressions yet *** OpenBSD Slides for TAME (http://www.openbsd.org/papers/tame-fsec2015/) and libTLS APIs (http://www.openbsd.org/papers/libtls-fsec-2015/) The first set of slides are from a talk Theo de Raadt gave in Croatia; they describe the history of and impetus for tame. Theo specifically avoids comparisons to other sandboxing techniques like capsicum and seccomp, because he is not impartial. tame() itself is only about 1200 lines of code. Sandboxing the file(1) command with systrace takes 300 lines of code; with tame, 4 lines. Theo makes the point that "optional security" is irrelevant. If a mitigation feature has a knob to turn it off, some program will break and advise users to turn the feature off. Eventually, no one uses the feature, and it dies. This has led to OpenBSD's policy: "Once working, these features cannot be disabled. Application bugs must be fixed." The second talk is by Bob Beck, about LibreSSL. When LibreSSL was forked from OpenSSL 1.0.1g, it contained 388,000 lines of C code. 30 days into LibreSSL, they had deleted 90,000 lines of C. OpenSSL 1.0.2d has 432,000 lines of C (728k total), and OpenSSL Current has 411,000 lines of C (over 1 million total). LibreSSL today contains 297,000 lines of C (511k total). None of the high risk CVEs against OpenSSL (there have been 5) have affected LibreSSL. It turns out removing old code and unneeded features is good for security. The talk focuses on libtls, an alternative to the OpenSSL API, designed to be easier to use and less error prone. In the libtls API, if -1 is returned, it is always an error; in OpenSSL, it might not be an error, and additional code is needed to check errno. In OpenBSD, ftp, nc, ntpd, httpd, spamd and syslog have been converted to the new API. The OpenBSD Foundation is looking for donations in order to sponsor 2-3 developers to spend 6 months dedicated to LibreSSL.
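As a taste of that simpler error model, here is a minimal libtls client sketch (my own, based on the public libtls interface; the host, port and lack of retry handling are illustrative only):

```
#include <tls.h>
#include <stdio.h>
#include <string.h>

int
main(void)
{
    struct tls_config *cfg;
    struct tls *ctx;
    const char *req = "GET / HTTP/1.0\r\nHost: www.openbsd.org\r\n\r\n";
    char buf[1024];
    ssize_t n;

    if (tls_init() == -1)
        return (1);
    if ((cfg = tls_config_new()) == NULL || (ctx = tls_client()) == NULL)
        return (1);

    /* Every call below reports failure as -1; tls_error() has the details. */
    if (tls_configure(ctx, cfg) == -1 ||
        tls_connect(ctx, "www.openbsd.org", "443") == -1 ||
        tls_write(ctx, req, strlen(req)) == -1) {
        fprintf(stderr, "tls: %s\n", tls_error(ctx));
        return (1);
    }
    while ((n = tls_read(ctx, buf, sizeof(buf))) > 0)
        fwrite(buf, 1, (size_t)n, stdout);

    tls_close(ctx);
    tls_free(ctx);
    tls_config_free(cfg);
    return (0);
}
```

Compile with -ltls; note that there is no errno inspection and no OpenSSL-style error queue to drain.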
*** Interview - Benno Rice - benno@FreeBSD.org (mailto:benno@FreeBSD.org) / @jeamland (https://twitter.com/jeamland) Isilon and building products on top of FreeBSD News Roundup ReLaunchd (https://github.com/mheily/relaunchd/blob/master/doc/rationale.txt) This past week we got a heads up about another init/launchd replacement, this time "Relaunchd". The goals of this project appear to be keeping launchd functionality while being portable enough to run on FreeBSD, Linux, etc. It also has aspirations of being "container-aware", with support for jailed services, a la Docker, as well as cluster awareness. Written in Ruby :(, it also maintains that it wishes to NOT take over PID 1 or replace the initial system boot scripts, but extend / leverage them in new ways. *** Static Intrusion Detection in NetBSD (https://mail-index.netbsd.org/source-changes/2015/09/24/msg069028.html) Alistair Crooks has committed a new "sid" utility to NetBSD, which allows intrusion detection by comparing the file-system contents to a database of known good values. The utility can compare the entire root file system of a modest NetBSD machine in about 15 seconds. The following parameters of each file can be checked: atime, block count, ctime, file type, flags, group, inode, link target, mtime, number of links, permissions, size, user, crc32c checksum, sha256 checksum, sha512 checksum. A JSON report is issued at the end for any detected variances. *** LibreSSL 2.3.0 in PC-BSD If you're running PC-BSD 10.2-EDGE or October's -CURRENT image, LibreSSL 2.3.0 is now a thing. Thanks to the hard work of Bernard Spil and others, we have merged in the latest LibreSSL, which actually removes SSL support in favor of TLS. Quite a number of bugs have been fixed, as well as patches brought over from OpenBSD to fix numerous ports. Allan has started a patchset that sets the OpenSSL in base to "private" (http://allanjude.com/bsd/privatessl_2015-10-07.patch) This hides the library so that applications and ports cannot find it, so only tools in the base system, like fetch, will be able to use it. This makes OpenSSL no longer part of the base system ABI, meaning the version can be upgraded without breaking the stable ABI promise. This feature may be important in the future as OpenSSL versions now have EoL dates that may be sooner than the EoL of the FreeBSD stable branches. *** PC-BSD and boot-environments without GRUB (http://lists.pcbsd.org/pipermail/testing/2015-October/010173.html) In this month's -CURRENT image of PC-BSD, we began the process of moving back from the GRUB boot-loader in favor of FreeBSD's own. A couple of patches have been included which enable boot-environment support via the 4th menus (thanks Allan) and support for booting ZFS on root via UEFI. "beadm" has also been updated to seamlessly support both boot-loaders. No full-disk encryption support yet (hopefully soon), but GRUB is still available in the installer for those who need it. *** Import of IWM wireless to DragonFly (http://gitweb.dragonflybsd.org/dragonfly.git/commitdiff/24a8d46a22f9106b0c1466c41ba73460d7d22262) Matthew Dillon has recently imported the newer if_iwm driver from FreeBSD -> DragonFly. Across the internet, users with newer Intel chipsets rejoiced! Coupled with the latest Broadwell DRM improvements, DragonFly sounds very ready for the latest laptop chipsets. Also, it looks like progress is being made on i386 removal (http://gitweb.dragonflybsd.org/dragonfly.git/commitdiff/cf37dc2040cea9f384bd7d3dcaf24014f441b8a6) *** Feedback/Questions Dan writes in about PCBSD (http://slexy.org/view/s27ZeOiM4t) Matt writes in about ZFS (http://slexy.org/view/s219J3ebx5) Anonymous writes in about problems booting (http://slexy.org/view/s21uuMAmZb) ***
Intelligent Storage: Swift Project Erasure Codes: a container model for grouping objects within a code.
Intelligent Storage: Intel and VMware are working together to provide foundational technologies for intelligent, software-defined storage solutions. These solutions utilize Intel Xeon processors, 10 Gigabit Intel Ethernet Converged Network Adapters, Intel Solid-State Drive (SSD) Data Center Family and VMware Virtual SAN. Together they provide a scalable, efficient and flexible infrastructure.
Software Defined Infrastructure: Intel General Manager Bev Crair and EMC VP Boaz Palgi discuss how Intel and EMC ViPR cloud storage solutions help manage and extract value from data through software-defined data center networks and storage.
Intelligent Storage: Irshad Raihan explains how HP solutions for big data, built on the Intel Xeon processor family, deliver value quickly and cost-effectively to companies analyzing both structured and unstructured data.
Software Defined Infrastructure: Replacing older servers in your datacenter with new, upgraded ones can go a long way to getting you the performance you need as your business grows. When selecting replacement hardware for your legacy systems, you can go with a basic configuration, upgrade some components as part of the new purchase, or upgrade […]
Software Defined Infrastructure: Hardware and virtual application delivery controllers (ADCs) from F5 Networks help ensure users can connect to applications from anywhere in the world, regardless of device, cloud, or enterprise architecture.
Software Defined Infrastructure: F5 Networks application delivery controllers (ADCs) are helping today’s most advanced and efficient data centers meet these demands. Based on Intel Xeon processors and Intel QuickAssist Technology, these purpose-built appliances reside at the front of the data center, intelligently managing network traffic to help ensure applications are fast, secure, and available. An […]
Software Defined Infrastructure: IDF 2014: Maxta software-defined storage solutions. New architectures: the Grantley platform launch and the Intel Xeon processor E5 v3 boost performance to support storage platforms.
Software Defined Infrastructure: Server upgrades to platforms featuring Windows Server 2012, Intel Xeon processor E5-2697 v2, Intel SSD DC S3700 series, Intel Ethernet 10GbE, and more help accelerate IT modernization to maximize business efficiency and minimize operating expenses.
Intelligent Storage: Comprehensive Business Systems, a VAR, uses Western Digital Sentinel S-Series servers based on the Intel Xeon processor E3 v2 family for virtual boot and rapid recovery benefits.
Salesforce shares how they get their data moving with HP 3PAR StoreServ storage—based on Intel technology—to prevent performance bottlenecks. These new solutions increase performance and scalability, and reduce costs.
Intelligent Storage: As a provider of cloud services, IDC Frontier Inc. became interested in highly scalable object storage as a way to increase the capacity and reduce the cost of the storage used by its services, and implemented a scalable block storage system using the Ceph open-source distributed storage software.
Intelligent Storage: New, intelligent features in OpenStack Swift called Storage Policies help meet today’s storage demands.
Software Defined Infrastructure: This infographic shows how 10GbE Intel Ethernet Converged Network Adapters and FCoE support unified networking to solve the cost and performance problems of separate storage and networking infrastructure.
Software Defined Infrastructure: The era of separate networks is coming to an end. As more and more data centers look to adopt 10GbE unified networking solutions, you can be poised to offer expert advice and speak directly to the efficiency and performance gains, along with savings to the bottom line, which data centers will get […]
Software Defined Infrastructure: This infographic shows that one server product can’t solve the problems of legacy hardware. That’s why modern data centers need 10 gigabit Ethernet—with the Intel Xeon processor E5-2600 v2 product family, Windows Server* 2012, the Intel Solid-State Drive Data Center S3700 Series, and the Intel Ethernet 10 Gigabit Converged Network Adapter X520—to […]
Software Defined Infrastructure: In order to be cloud ready, virtualization solutions need to span across server, storage, and networking. For compute and I/O intensive applications and architectures that require high bandwidth and predictable latency, businesses typically discover that it is not enough to just deploy flexible, virtual server applications—the underlying I/O must also be flexible, […]
Intelligent Storage: This Solution Reference Architecture showcases an end-to-end solution available today to address a specific storage challenge, in this case Tier 2 workloads, using VMware Virtual SAN.
Software Defined Infrastructure: This testing was done by Principled Technologies to show Good, Better, Best results using Intel Xeon processors, Intel Ethernet, and Intel SSDs, with Microsoft Windows Server 2012 R2.