American internet pioneer
The DNS resolution path by which the world's internet content consumers locate the world's internet content producers has been under continuous attack since the earliest days of Internet commercialization and privatization. Much work has recently been invested, and is currently being invested, to protect this vital source of Personally Identifiable Information -- but by whom, and why, and how? Let's discuss. About the speaker: Paul Vixie serves AWS Security as Deputy CISO, VP & Distinguished Engineer after a 29-year career as the founder and CEO of five startup companies covering the fields of DNS, anti-spam, Internet exchange, Internet carriage and hosting, and Internet security. Vixie earned his Ph.D. in Computer Science from Keio University in 2011 and was inducted into the Internet Hall of Fame in 2014. He has authored or co-authored several Internet RFC documents and open source software projects including Cron and BIND. https://en.wikipedia.org/wiki/Paul_Vixie
In this episode, we're bringing you a fireside chat with Paul Vixie, AWS Deputy CISO, Vice President, and Distinguished Engineer, and Clarke Rodgers, Director of Enterprise Strategy. As an early influencer in the evolution of the Internet, Paul knows a lot about the Internet's inner workings, including its vulnerabilities. Join us as our experts discuss how AWS is working to address these security flaws and make the Internet a safer place to communicate.
This week we're talking about DNS with Paul Vixie — Paul is well known for his contributions to DNS and agrees with Adam on having a “love/hate relationship with DNS.” We discuss the limitations of current DNS technologies, the need for revisions to support future internet scale, and the challenges in doing that. Paul shares insights on the future of the internet and how he'd reinvent DNS if given the opportunity. We even discuss the cultural idiom “It's always DNS,” and the shift to using DNS resolvers like OpenDNS, Google's 8.8.8.8, and Cloudflare's 1.1.1.1. Buckle up, this is a good one.
In this episode, we're joined by Paul Vixie, an Internet Hall of Fame inductee with over 30 years of experience in the tech industry. Paul, a high school dropout turned Ph.D. holder from Keio University in Japan, shares his journey from making the internet more accessible to controlling its impact and making it harder to misuse. As a founder and CEO of five different technology companies, Paul emphasizes the responsibility of technologists to protect everyday users. He discusses his evolution as a 'generalist with deep perspective' and shares his motivation, driven by his frustration with how the internet has made people unsafe. Join us as we delve into Paul's journey and his insights on how to navigate the world of internet technology successfully.
"I am no longer part of the group of people who can dictate what the next big thing will look like. There are people who are better than me in all of that. However, where I can still make a contribution is as a generalist with deep perspective, rather than as an expert in numerous different technologies."
Paul's Links: Twitter | LinkedIn | Wikipedia | Internet Hall of Fame | Mastodon
Thanks for being an imposter - a part of the Imposter Syndrome Network (ISN)! We'd love it if you connected with us at the links below:
The ISN LinkedIn group (community): https://www.linkedin.com/groups/14098596/
The ISN on Twitter: https://twitter.com/ImposterNetwork
Zoë on Twitter: https://twitter.com/RoseSecOps
Chris on Twitter: https://twitter.com/ChrisGrundemann
Make it a great day.
Amazon Web Services would not be what it is today without open source. "I think it starts with sustainability," said David Nalley, head of open source and marketing at AWS, in an interview at the Open Source Summit in Dublin for The New Stack Makers. "And this really goes back to the origin of Amazon Web Services. AWS would not be what it is today without open source." Long-term support for open source is one of three pillars of the organization's open source strategy. AWS builds and innovates on top of open source and will maintain that approach for its innovation, customers, and the larger digital economy. "And that means that there's a long history of us benefiting from open source and investing in open source," Nalley said. "But ultimately, we're here for the long haul. We're going to continue making investments. We're going to increase our investments in open source." Customers' interest in open source is the second pillar of the AWS open source strategy. "We feel like we have to make investments on behalf of our customers," Nalley said. "But the reality is our customers are choosing open source to run their workloads on." The third pillar focuses on advocating for open source in the larger digital economy. Notable is how much AWS's presence in the market played a part in Paul Vixie's decision to join the company. Vixie, an Internet pioneer, is now vice president of security and an AWS distinguished engineer; he was also interviewed for The New Stack Makers podcast at the Open Source Summit. Nalley himself is a recognizable figure in the community: he is the president of the Apache Software Foundation, one of the world's most essential open source foundations. The importance of the three-pillar strategy shows in many of the projects that AWS supports. AWS recently donated $10 million to the Open Source Security Foundation (OpenSSF), part of the Linux Foundation. AWS is a significant supporter of the Rust Foundation, which supports the Rust programming language and ecosystem and puts a particular focus on the maintainers that govern the project. Last month, Meta unveiled the PyTorch Foundation, which the Linux Foundation will manage. AWS is on the governing board.
Paul Vixie grew up in San Francisco. He dropped out of high school in 1980. He worked on the first Internet gateways at DEC and, from there, started the Internet Software Consortium (ISC), establishing Internet protocols, particularly the Domain Name System (DNS). Today, Vixie is one of the few dozen in the technology world with the title "distinguished engineer," working at Amazon Web Services as vice president of security, where he believes he can make the Internet a safer place. As safe as before the Internet emerged. "I am worried about how much less safe we all are in the Internet era than we were before," Vixie said in an interview at the Open Source Summit in Dublin earlier this month for The New Stack Makers podcast. "And everything is connected, and very little is understood. And so, my mission for the last 20 years has been to restore human safety to pre-internet levels. And doing that at scale is quite the challenge. It'll take me a lifetime." So why join AWS? He spent decades establishing the ISC. He started a company called Farsight, which came out of ISC. He sold Farsight in November of last year, when conversations began with AWS. Vixie thought about his mission to restore human safety to pre-internet levels when AWS asked a question that changed the conversation and led him to his new role. "They asked me what is now, in retrospect, an obvious question: 'AWS hosts probably the largest share of the digital economy that you're trying to protect. Don't you think you can complete your mission by working to help secure AWS?'" Vixie said. "The answer is yes. In fact, I feel like I'm going to get more traction now that I can focus on strategy and technology and not also operate a company on the side. And so it was a very good win for me, and I hope for them." Interviewing Vixie is such an honor. It's people like Paul who made so much possible for anyone who uses the Internet. Just think of that for a minute: anyone who uses the Internet has people like Paul to thank. Thanks, Paul -- you are a hero to many. Here's to your next run at AWS.
Links Referenced:
Okta's CEO: https://www.bloomberg.com/news/articles/2022-04-04/okta-ceo-says-breach-is-big-deal-aims-to-restore-trust
taken a job as a Distinguished Engineer VP at AWS: https://www.linkedin.com/feed/update/urn:li:activity:6914280317675614208/
Ubiquiti has sued Brian Krebs for defamation: https://www.theregister.com/2022/03/30/ubiquiti_brian_krebs/
“Best practices: Securing your Amazon Location Service resources”: https://aws.amazon.com/blogs/security/best-practices-securing-your-amazon-location-service-resources/
Access Undenied: https://github.com/ermetic/access-undenied-aws
aws-keys-sectool: https://github.com/toshke/aws-keys-sectool

Transcript

Corey: This is the AWS Morning Brief: Security Edition. AWS is fond of saying security is job zero. That means it's nobody in particular's job, which means it falls to the rest of us. Just the news you need to know, none of the fluff.

Corey: Today's episode is brought to you in part by our friends at MinIO, the high-performance Kubernetes-native object store that's built for the multi-cloud, creating a consistent data storage layer for your public cloud instances, your private cloud instances, and even your edge instances, depending upon what the heck you're defining those as, which depends probably on where you work. Getting that unified is one of the greatest challenges facing developers and architects today. It requires S3 compatibility, enterprise-grade security and resiliency, the speed to run any workload, and the footprint to run anywhere, and that's exactly what MinIO offers. With superb read speeds in excess of 360 gigs and a 100 megabyte binary that doesn't eat all the data you've got on the system, it's exactly what you've been looking for. Check it out today at min.io/download, and see for yourself. That's min.io/download, and be sure to tell them that I sent you.

Corey: A somehow quiet week as we all grapple with the recent string of security failures from, well, take your pick really.

A bit late, but better than never: Okta's CEO admits the LAPSUS$ hack has damaged trust in the company. The video interview is surprisingly good in parts, but he ruins the, “Third-party this, third-party that, no—it was our responsibility, and our failure” statement by then saying that they no longer do business with Sitel, the third party who was responsible for part of this breach. Crisis comms is really something to figure out in advance of a crisis, so you don't get in your own way.

Paul Vixie, creator of a few odds and ends such as DNS, has taken a job as a Distinguished Engineer VP at AWS, and I look forward to misusing more of his work as databases. He's apparently in the security org, which is why I'm talking about it today and not Monday.

And of course, as I've been ranting about in yesterday's newsletter and on Twitter, Ubiquiti has sued Brian Krebs for defamation. Frankly, they come off as far, far worse for this than they did at the start. My position has shifted from one of sympathy to, “Well, time to figure out who sells a 10Gbps switch that isn't them.”

Corey: This episode is sponsored in part by LaunchDarkly. Take a look at what it takes to get your code into production. I'm going to just guess that it's awful because it's always awful. No one loves their deployment process. What if launching new features didn't require you to do a full-on code and possibly infrastructure deploy? What if you could test on a small subset of users and then roll it back immediately if results aren't what you expect? LaunchDarkly does exactly this.
To learn more, visit launchdarkly.com and tell them Corey sent you, and watch for the wince.

AWS had an interesting post: “Best practices: Securing your Amazon Location Service resources”. AWS makes a good point here. It hadn't occurred to me that you'd need to treat location data particularly specially, but of course you do. The entire premise of the internet falls apart if it suddenly gets easier to punch someone in the face for something they said on Twitter.

And two tools of note this week for you. Access Undenied parses AWS AccessDenied CloudTrail events, explains the reasons for them, and offers actionable fixes. And aws-keys-sectool does something obvious in hindsight: making sure that any long-lived credentials on your machine are access-restricted to your own IP address. Check it out.

And that's what happened last week in AWS security. Continue to make good choices, because it seems very few others are these days.

Corey: Thank you for listening to the AWS Morning Brief: Security Edition with the latest in AWS security that actually matters. Please follow AWS Morning Brief on Apple Podcasts, Spotify, Overcast—or wherever the hell it is you find the dulcet tones of my voice—and be sure to sign up for the Last Week in AWS newsletter at lastweekinaws.com.

Announcer: This has been a HumblePod production. Stay humble.
Why does DNS matter for privacy, security, freedom, and the internet? We are thrilled to bring on John Todd from Quad9 to give us the inside scoop on DNS and how important it is for all of us.
Quad9: https://quad9.net/
Dr Paul Vixie: https://www.farsightsecurity.com/about-farsight-security/team/vixie/
Quad9 Blog: https://quad9.net/news/blog
Quad9 Donation Page: https://www.paypal.com/donate?token=KCPcRNowmH1fHS0O0kIM2OdnaiKH3wOaqZnBVC3eGBgrK3K0M7W9dt1So5QE3Ersq_ZRDDY9c9I04yZS
00:00 Introduction
00:21 History of Quad9
06:57 What is DNS?
09:48 What is a "recursive" DNS?
13:05 Why does DNS matter?
19:52 DNS & enforcing content policies
20:59 Why should listeners change their default DNS?
23:28 What makes Quad9 different and why should listeners consider Quad9?
24:59 What is Quad9's business/funding model?
29:03 DoH vs DoT
36:35 John Todd's 7 DNS/tech predictions for 2022
37:39 Prediction 2: How can DNS help mitigate or prevent cyberattacks?
42:37 DNS stability vs security
44:11 Prediction 6: What does the future of privacy regulation hold?
47:00 Additional thoughts & comments
48:38 Quad9 vs the Courts (and how you can help!)
Today's episode on spam is read by the illustrious Joel Rennich. Spam is irrelevant, inappropriate, and unsolicited messaging, usually sent to a large number of recipients through electronic means. And while we probably think of spam as something new today, it's worth noting that the first documented piece of spam was sent in 1864 - through the telegraph. With the advent of new technologies like the fax machine and telephone, unsolicited messages and calls were quick to show up. Ray Tomlinson is widely accepted as the inventor of email, developing the first mail application in 1971 for the ARPANET. It took longer than one might expect for email to get abused, likely because its users were mostly researchers and people from the military-industrial research community. Then in 1978, Gary Thuerk at Digital Equipment Corporation decided to send out a message about the new VAX computer being released by Digital. At the time, there were 2,600 email accounts on ARPANET and his message found its way to 400 of them. That's a little over 15% of the Internet at the time. Can you imagine sending a message to 15% of the Internet today? That would be nearly 600 million people. But it worked. Supposedly he closed $12 million in deals despite rampant complaints back to the Defense Department. But it was too late; the damage was done. He proved that unsolicited junk mail would be a way to sell products. Others caught on. Like Dave Rhodes, who popularized MAKE MONEY FAST chains in 1988. Maybe not a real name, but pyramid schemes probably go back to the pyramids, so we might as well have them on the Internets. By 1993 unsolicited email was enough of an issue that we started calling it spam. That came from the Monty Python sketch where Vikings in a cafe sing about Spam, which is on everything on the menu. That Spam was a reference to the canned meat made of pork, sugar, water, salt, potato starch, and sodium nitrate that was originally developed by Jay Hormel in 1937 and, due to how cheap and easy it was, found itself part of a cultural shift in America. Spam came out of Austin, Minnesota. Jay's dad George incorporated Hormel in 1901 to process hogs and beef, and developed canned lunchmeat that evolved into what we think of as Spam today. It was spiced ham, thus Spam. During World War II, Spam would find its way to GIs fighting the war, and Spam found its way to England and the countries the war was being fought in. It was durable and could sit on a shelf for months. From there it ended up in school lunches, and after fishing sanctions on Japanese-Americans in Hawaii restricted the foods they could haul in, Spam found its way there too, and some countries grew to rely on it due to displaced residents following the war. And yet, it remains a point of scorn in some cases. As the Monty Python sketch suggests, Spam was ubiquitous, unavoidable, and repetitive. Same with spam in our email. We rely on email. We need it. Email was the first real killer app for the Internet. We communicate through it constantly, despite the gelatinous meat we sometimes get when we hear the chime that our email client got a new message and expect we're about to land that big deal. It's just unavoidable. That's why a repetitive poster on a list had his messages called spam, and the use just grew from there. Spam isn't exclusive to email. Laurence Canter and Martha Siegel sent the first commercial Usenet spam, the “Green Card” posting, just after the NSF allowed commercial activities on the Internet.
It was a simple Perl script to sell people on the idea of paying a fee to have them enroll people into the green card lottery. They made over $100,000 and even went so far as to publish a book on guerrilla marketing on the Internet. Canter got disbarred for illegal advertising in 1997. Over the years new ways have come about to try and combat spam. RBLs, or DNS-based blackhole lists used to mark hosts as spam senders so that their connections to port 25 could be blocked, emerged in 1996 from the Mail Abuse Prevention System, or MAPS. Developed by Dave Rand and Paul Vixie, the list of IP addresses helped for a bit. That is, until spammers realized they could just send from a different IP. Vixie also mentioned the idea of matching a sender claim to the mail server a message came from as a means of limiting spam, a concept that would later come up again and evolve into the Sender Policy Framework, or SPF for short. That's around the same time Steve Linford founded Spamhaus to block anyone that knowingly spams or provides services to spammers. If you have a cable modem and try to set up an email server on it, you've probably had to first get them to unblock your address from their Don't Route list. The next year Mark Jeftovic created a tool called filter.plx to help filter out spam, and that project got picked up by Justin Mason, who uploaded his new filter to SourceForge in 2001. A filter he called SpamAssassin. Because ninjas are cooler than pirates. Paul Graham, the co-creator of Y Combinator (and author of a LISP-like programming language), wrote a paper he called “A Plan for Spam” in 2002. He proposed using a Bayesian filter, as antivirus software vendors had, to combat spam. That would be embraced and is one of the more common methods still used to block spam. In the paper he goes into detail about how the scoring of various words would work, and how the resulting probabilities, compared against the rest of one's email, would determine whether a message got flagged as spam. That Bayesian filter would be added to SpamAssassin and others the next year. Dana Valerie Reese came up with the idea of matching sender claims independently, and she and Vixie sparked a conversation and the creation of the Anti-Spam Research Group in the IETF. The European Parliament released the Directive on Privacy and Electronic Communications, criminalizing spam in the EU. Australia and Canada followed suit. 2003 also saw the first laws in the US regarding spam. The CAN-SPAM Act was signed by President George W. Bush in 2003 and allowed the FTC to regulate unsolicited commercial emails. Here we got the double opt-in to receive commercial messages, and it didn't take long before the new law was used to prosecute spammers, with Nicholas Tombros getting the dubious honor of being the first spammer convicted. What was his spam selling? Porn. He got a $10,000 fine and six months of house arrest. Fighting spam with laws turned international. Christopher Pierson was charged with malicious communication after he sent hoax emails. And even though spammers were getting fined and put in jail all the time, the amount of spam continued to increase. We had pattern filters, Bayesian filters, and even the threat of legal action. But the IETF Anti-Spam Research Group specifications were merged by Meng Weng Wong, and by 2006 W. Schlitt had joined the paper to form a new Internet standard called the Sender Policy Framework, which lives on in RFC 7208.
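To make the SPF mechanism concrete before the episode explains it further below, here is a minimal, hypothetical example of how a domain publishes its policy; the domain and address range are reserved documentation values, not anything from the episode:

    ; A hypothetical SPF policy, published as an ordinary DNS TXT record.
    ; "v=spf1" begins the policy, "mx" authorizes the domain's own mail
    ; servers, "ip4:192.0.2.0/24" authorizes one specific address range,
    ; and "-all" tells receivers to reject mail from any other server.
    example.com.  3600  IN  TXT  "v=spf1 mx ip4:192.0.2.0/24 -all"

A receiving mail server looks up this record for the domain in the message's sender claim and checks the connecting server's IP address against it - exactly the matching idea Vixie and Reese described.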
There are a lot of moving parts, but at the heart of it, Simple Mail Transfer Protocol, or SMTP, allows sending mail from any connection over port 25 (or other ports if it's SSL-enabled) and allows a message to pass with very little information required - although the sender, or sending claim, is a requirement. A common troubleshooting technique used to be simply telnetting into port 25 and sending a message from an address to a mailbox on a mail server (sketched after this segment). Theoretically one could take the MX record (the DNS record that lists the mail server that mail bound for a domain should be delivered to) and force all outgoing mail to match it. However, due to so much spam, some companies have dedicated outbound mail servers that are different from their MX record, and they block direct outgoing mail like what people might send if they're using personal mail at work. In order not to disrupt a lot of valid use cases for mail, SPF had administrators create TXT records in DNS that listed which servers could send mail on their behalf. Now a filter could check the header for the SMTP server of a given message and know when it didn't match a server that was allowed to send mail. And so a large chunk of spam was blocked. Yet people still get spam for a variety of reasons. One is that new servers go up all the time just to send junk mail. Another is that email accounts get compromised and used to send mail. Another is that mail servers get compromised. We have filters and even Bayesian and more advanced forms of machine learning. Heck, sometimes we even sign up for a list by giving our email out when buying something from a reputable site or retail vendor. Spam accounts for over 90% of the total email traffic on the Internet. This is despite blacklists, SPF, and filters. And despite the laws and threats, spam continues. And it pays well. We mentioned Canter & Siegel. Shane Atkinson was sending 100 million emails per day in 2003. That doesn't happen for free. Nathan Blecharczyk, a co-founder of Airbnb, paid his way through Harvard on the back of spam. Some spam sells legitimate products in illegitimate ways, as we saw with the early IoT standard X10. Some is used to spread hate and disinformation, going back to Serdar Argic, known for denying the Armenian genocide through newsgroups in 1994. Long before Infowars existed. Peter Francis-Macrae sent spam to solicit buying domains he didn't own. He was convicted after resorting to blackmail and threats. Jody Michael Smith sold replica watches and served almost a year in prison after he got caught. Some spam is sent to get hosts loaded with malware so they could be controlled, as happened with Peter Levashov, the Russian czar of the Kelihos botnet. Oleg Nikolaenko was arrested by the FBI in 2010 for spamming to get hosts in his Mega-D botnet. The Russians are good at this; they even registered the Russian Business Network as a website in 2006 to promote running an ISP for phishing, spam, and the Storm botnet. Maybe Flyman is connected to the Russian oligarchs and so continues to be allowed to operate under the radar. They remain one of the more prolific spammers. Much is sent by a small number of spammers. Khan C. Smith sent a quarter of the spam in the world until he got caught in 2001 and fined $25 million. Again, spam isn't limited to just email. It showed up on Usenet in the early days. And AOL sued Chris “Rizler” Smith for over $5M for his spam on their network. Adam Guerbuez was fined over $800 million for spamming Facebook.
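Returning to the telnet troubleshooting technique mentioned near the top of this segment: in practice such a hand-typed session looks roughly like this. The hosts and mailboxes are hypothetical, the replies follow the standard SMTP reply codes though the exact wording varies by mail server, and the # comments are annotations rather than part of the protocol:

    $ telnet mail.example.com 25
    220 mail.example.com ESMTP            # server greeting
    HELO client.example.org               # identify ourselves
    250 mail.example.com
    MAIL FROM:<alice@example.org>         # the sender claim that SPF checks
    250 2.1.0 Ok
    RCPT TO:<bob@example.com>             # the destination mailbox
    250 2.1.5 Ok
    DATA
    354 End data with <CR><LF>.<CR><LF>
    Subject: test

    Hello from a hand-typed SMTP session.
    .
    250 2.0.0 Ok: queued
    QUIT
    221 2.0.0 Bye

Nothing in this exchange proves the sender claim is true, which is exactly the gap that RBLs, SPF, and content filters were invented to close.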
And LinkedIn allows people to send me unsolicited messages if they pay extra, probably why Microsoft paid $26 billion for the social network. Spam has been with us since the telegraph; it isn't going anywhere. But we can't allow it to run unchecked. The legitimate organizations that use unsolicited messages to drive business help obfuscate the illegitimate acts where people are looking to steal identities or worse. Gary Thuerk opened a Pandora's box that would have been opened even if he hadn't done so. The rise of the commercial Internet, and the co-opting of the emerging cyberspace as a place where privacy (and so anonymity) trumps verification, hit a global audience of people who are not equal. Inequality breeds crime. And so we continually have to rethink the answers to the question of sovereignty versus the common good. Think about that next time an IRS agent with a thick foreign accent calls asking for your social security number - and remember (if you're old enough) that we used to show our social security cards to grocery store clerks when we wrote checks. Can you imagine?!?!
In this Voices From Infosec edition of Breaking Badness, we talk with DNS pioneer Paul Vixie
Recorded Future - Inside Threat Intelligence for Cyber Security
Our guest this week is a true internet pioneer. Paul Vixie describes himself as a “long time defender of the internet.” He’s an author or co-author of several RFC documents and open source software systems including BIND and Cron, a serial entrepreneur now CEO and co-founder of his fifth startup company, Farsight Security, and an inductee into the Internet Hall of Fame. He joins us with insights on how we are suffering the ramifications of early internet design choices, what that means for global networking going forward, and, specifically, why he believes it’s best not to rely on outsourcing your DNS.
Insikt Weekly will soon be "Off the Record". Levi opines on California's CCPA legislation, and Dr. Paul Vixie stops by to talk DNS, privacy, and the open road.
Dr. Paul Vixie, CEO of Farsight Security and Internet Hall of Famer, joins Ashwin at RSA’s Broadcast Alley to talk about ethical data aggregation. He explains the cybercrime fighting mission behind SIE Europe, the nonprofit he co-founded with Christoph Fischer and Peter Kruse, saying, “So far, everyone’s giving us high marks for both transparency and […]
Is DoH a silver bullet? Is DoH evil? Why not DoT? It's also time for true stories: Paul Vixie tells us the story of catching Conficker back in the good old plain-DNS years.
Guests: Paul Vixie, the DNS Guy; Alexander Lyamin, Founder and CEO of Qrator Labs
What about: DNS over HTTPS
Download the podcast file. Add the RSS feed to your podcast player. The podcast is available on iTunes. You can download all episodes of the podcast from Yandex Disk.
Url podcast: https://dts.podtrac.com/redirect.mp3/https://fs.linkmeup.ru/podcasts/telecom/linkmeup-V084(2020-02).mp3
Linux couldn’t duplicate OpenBSD, FreeBSD Q4 status report, OPNsense 19.7.9 released, archives retain and pass on knowledge, HardenedBSD Tor Onion Service v3 Nodes, and more. Headlines OpenBSD has to be a BSD Unix and you couldn't duplicate it with Linux (https://utcc.utoronto.ca/~cks/space/blog/unix/OpenBSDMustBeABSD?showcomments) OpenBSD has a well deserved reputation for putting security and a clean system (for code, documentation, and so on) first, and everything else second. OpenBSD is of course based on BSD (it's right there in the name) and descends from NetBSD (you can read the history here). But one of the questions you could ask about it is whether it had to be that way, and in particular if you could build something like OpenBSD on top of Linux. I believe that the answer is no. Linux and the *BSDs have a significantly different model of what they are. BSDs have a 'base system' that provides an integrated and fully operational core Unix, covering the kernel, C library and compiler, and the normal Unix user level programs, all maintained and distributed by the particular BSD. Linux is not a single unit this way; instead, all of the component parts are maintained separately and assembled in various ways by various Linux distributions. Both approaches have their advantages, but one big one for the BSD approach is that it enables global changes. Making global changes is an important part of what makes OpenBSD's approach to improving security, code maintenance, and so on work. Because it directly maintains everything as a unit, OpenBSD is in a position to introduce new C library or kernel APIs (or change them) and then immediately update all sorts of things in user level programs to use the new API. This takes a certain amount of work, of course, but it's possible to do it at all. And because OpenBSD can do this sort of ambitious global change, it does. This goes further than just the ability to make global changes, because in theory you can patch in global changes on top of a bunch of separate upstream projects. Because OpenBSD is in control of its entire base system, it's not forced to try to reconcile different development priorities or integrate clashing changes. OpenBSD can decide (and has) that only certain sorts of changes will be accepted into its system at all, no matter what people want. If there are features or entire programs that don't fit into what OpenBSD will accept, they just lose out. FreeBSD Quarterly Status Report 2019Q4 (https://lists.freebsd.org/pipermail/freebsd-announce/2020-January/001923.html) Here is the last quarterly status report for 2019. As you might remember from the last report, we changed our timeline: now we collect reports in the last month of each quarter and we edit and publish the full document the next month. Thus, we cover here the period October 2019 - December 2019. If you thought that the FreeBSD community was less active in the Christmas quarter, you will be glad to be proven wrong: a quick glance at the summary will be sufficient to see that much work has been done in the last months. Have a nice read! News Roundup OPNsense 19.7.9 released (https://opnsense.org/opnsense-19-7-9-released/) As 20.1 nears we will be making adjustments to the scope of the release with an announcement following shortly. For now, this update brings you a GeoIP database configuration page for aliases, which is now required due to upstream database policy changes, and a number of prominent third-party software updates we are happy to see included.
Archives are important to retain and pass on knowledge (https://dan.langille.org/2020/01/07/archives-are-important-to-retain-and-pass-on-knowledge/) Archives are important. When they are public and available for searching, they retain and pass on knowledge. They save vast amounts of time. HardenedBSD Tor Onion Service v3 Nodes (https://hardenedbsd.org/article/shawn-webb/2020-01-30/hardenedbsd-tor-onion-service-v3-nodes) I've been working today on deploying Tor Onion Service v3 nodes across our build infrastructure. I'm happy to announce that the public portion of this is now completed. Below you will find various onion service hostnames and their match to our infrastructure. hardenedbsd.org: lkiw4tmbudbr43hbyhm636sarn73vuow77czzohdbqdpjuq3vdzvenyd.onion ci-01.nyi.hardenedbsd.org: qspcqclhifj3tcpojsbwoxgwanlo2wakti2ia4wozxjcldkxmw2yj3yd.onion ci-03.md.hardenedbsd.org: eqvnohly4tjrkpwatdhgptftabpesofirnhz5kq7jzn4zd6ernpvnpqd.onion ci-04.md.hardenedbsd.org: rfqabq2w65nhdkukeqwf27r7h5xfh53h3uns6n74feeyl7s5fbjxczqd.onion git-01.md.hardenedbsd.org: dacxzjk3kq5mmepbdd3ai2ifynlzxsnpl2cnkfhridqfywihrfftapid.onion Beastie Bits The Missing Semester of Your CS Education (MIT Course) (https://missing.csail.mit.edu/) An old Unix Ad (https://i.redd.it/503390rf7md41.png) OpenBSD syscall call-from verification (https://marc.info/?l=openbsd-tech&m=157488907117170&w=2) OpenBSD/arm64 on Pinebook (https://twitter.com/bluerise/status/1220963106563579909) Reminder: First Southern Ontario BSD user group meeting, February 11th (this coming Tuesday!) 18:30 at Boston Pizza on Upper James st, Hamilton. (http://studybsd.com/) NYCBUG: March meeting will feature Dr. Paul Vixie and his new talk “Operating Systems as Dumb Pipes” (https://www.nycbug.org/) 8th Meetup of the Stockholm BUG: March 3 at 18:00 (https://www.meetup.com/de-DE/BSD-Users-Stockholm/events/267873938/) Polish BSD User Group meets on Feb 11, 2020 at 18:15 (https://bsd-pl.org/en) Feedback/Questions Sean - ZFS and Creation Dates (http://dpaste.com/3W5WBV0#wrap) Christopher - Help on ZFS Disaster Recovery (http://dpaste.com/3SE43PW) Mike - Encrypted ZFS Send (http://dpaste.com/00J5JZG#wrap) Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv (mailto:feedback@bsdnow.tv)
vBSDcon 2019 recap, Unix at 50, OpenBSD on fan-less Tuxedo InfinityBook, humungus - an hg server, how to configure a network dump in FreeBSD, and more. Headlines vBSDcon Recap Allan and Benedict attended vBSDcon 2019, which ended last week. It was held again at the Hyatt Regency Reston and the main conference was organized by Dan Langille of BSDCan fame. The two-day conference was preceded by a one-day FreeBSD hackathon, where FreeBSD developers had the chance to work on patches and PRs. In the evening, a reception was held to welcome attendees and give them a chance to chat and get to know each other over food and drinks. The first day of the conference was opened with a keynote by Paul Vixie about DNS over HTTPS (DoH). He explained how we got to the current state and what challenges (technical and social) this entails. If you missed this talk and are dying to see it, it will also be presented at EuroBSDCon next week. John Baldwin followed by giving an overview of the work on “In-Kernel TLS Framing and Encryption for FreeBSD” abstract (https://www.vbsdcon.com/schedule/2019-09-06.html#talk:132615) and the recent commit we covered in episode 313. Meanwhile, Brian Callahan was giving a separate session in another room about “Learning to (Open)BSD through its porting system: an attendee-driven educational session” where people had the chance to learn how to create ports for the BSDs. David Fullard’s talk about “Transitioning from FreeNAS to FreeBSD” was his first talk at a BSD conference and described how he built his own home NAS setup trying to replicate FreeNAS’ functionality on FreeBSD, and why he transitioned from using an appliance to using vanilla FreeBSD. Shawn Webb followed with his overview talk about the “State of the Hardened Union”. Benedict’s talk about “Replacing an Oracle Server with FreeBSD, OpenZFS, and PostgreSQL” was well received, as people are interested in how we liberated ourselves from the clutches of Oracle without compromising functionality. Entertaining and educational at the same time, Michael W. Lucas’ talk about “Twenty Years in Jail: FreeBSD Jails, Then and Now” closed the first day. Lucas also had a table in the hallway with his various tech and non-tech books for sale. People formed small groups and went into town for dinner. Some returned later that night to do some work in the hacker lounge or talk amongst fellow BSD enthusiasts. Colin Percival was the keynote speaker for the second day and had an in-depth look at “23 years of software side channel attacks”. Allan reprised his “ELI5: ZFS Caching” talk, explaining how the ZFS adaptive replacement cache (ARC) works and how it can be tuned for various workloads. “By the numbers: ZFS Performance Results from Six Operating Systems and Their Derivatives” by Michael Dexter followed, with his approach to benchmarking OpenZFS on various platforms. Conor Beh was also a new speaker at vBSDcon. His talk was about “FreeBSD at Work: Building Network and Storage Infrastructure with pfSense and FreeNAS”. Two OpenBSD talks closed the talk session: Kurt Mosiejczuk with “Care and Feeding of OpenBSD Porters” and Aaron Poffenberger with “Road Warrior Disaster Recovery: Secure, Synchronized, and Backed-up”. A dinner and reception was enjoyed by the attendees and gave more time to discuss the talks given and other things until late at night. We want to thank the vBSDcon organizers, and especially Dan Langille, for running such a great conference.
We are grateful to Verisign as the main sponsor and The FreeBSD Foundation for sponsoring the tote bags. Thanks to all the speakers and attendees! humungus - an hg server (https://humungus.tedunangst.com/r/humungus) Features: view changes, files, changesets, etc.; some syntax highlighting; read-only; serves multiple repositories; allows cloning via the obvious URL; supports go get; serves files for downloads; online documentation via mandoc; terminal-based admin interface. News Roundup OpenBSD on fan-less Tuxedo InfinityBook 14″ v2 (https://hazardous.org/archive/blog/openbsd/2019/09/02/OpenBSD-on-Infinitybook14) The InfinityBook 14″ v2 is a fanless 14″ notebook. It is an excellent choice for running OpenBSD - but order it with the supported wireless card (see below). I’ve set it up in a dual-boot configuration so that I can switch between Linux and OpenBSD - mainly to spot differences in the drivers. TUXEDO allows a variety of configurations through their webshop. The dual-boot setup with grub2 and EFI boot will be covered in a separate blogpost. My tests were done with OpenBSD-current, which is as of writing flagged as 6.6-beta. See the article for a breakdown of CPU, wireless, video, webcam, audio, ACPI, battery, touchpad, and MicroSD card reader support. Unix at 50: How the OS that powered smartphones started from failure (https://arstechnica.com/gadgets/2019/08/unix-at-50-it-starts-with-a-mainframe-a-gator-and-three-dedicated-researchers/) Maybe its pervasiveness has long obscured its origins. But Unix, the operating system that in one derivative or another powers nearly all smartphones sold worldwide, was born 50 years ago from the failure of an ambitious project that involved titans like Bell Labs, GE, and MIT. Largely the brainchild of a few programmers at Bell Labs, the unlikely story of Unix begins with a meeting on the top floor of an otherwise unremarkable annex at the sprawling Bell Labs complex in Murray Hill, New Jersey. It was a bright, cold Monday, the last day of March 1969, and the computer sciences department was hosting distinguished guests: Bill Baker, a Bell Labs vice president, and Ed David, the director of research. Baker was about to pull the plug on Multics (a condensed form of MULTiplexed Information and Computing Service), a software project that the computer sciences department had been working on for four years. Multics was two years overdue, way over budget, and functional only in the loosest possible understanding of the term. Trying to put the best spin possible on what was clearly an abject failure, Baker gave a speech in which he claimed that Bell Labs had accomplished everything it was trying to accomplish in Multics and that they no longer needed to work on the project. As Berk Tague, a staffer present at the meeting, later told Princeton University, “Like Vietnam, he declared victory and got out of Multics.” Within the department, this announcement was hardly unexpected. The programmers were acutely aware of the various issues with both the scope of the project and the computer they had been asked to build it for. Still, it was something to work on, and as long as Bell Labs was working on Multics, they would also have a $7 million mainframe computer to play around with in their spare time. Dennis Ritchie, one of the programmers working on Multics, later said they all felt some stake in the success of the project, even though they knew the odds of that success were exceedingly remote.
Cancellation of Multics meant the end of the only project that the programmers in the computer science department had to work on—and it also meant the loss of the only computer in the computer science department. After the GE 645 mainframe was taken apart and hauled off, the computer science department’s resources were reduced to little more than office supplies and a few terminals. Some of Allan’s favourite excerpts: In the early '60s, Bill Ninke, a researcher in acoustics, had demonstrated a rudimentary graphical user interface with a DEC PDP-7 minicomputer. Acoustics still had that computer, but they weren’t using it and had stuck it somewhere out of the way up on the sixth floor. And so Thompson, an indefatigable explorer of the labs’ nooks and crannies, finally found that PDP-7 shortly after David and Baker cancelled Multics. With the rest of the team’s help, Thompson bundled up the various pieces of the PDP-7—a machine about the size of a refrigerator, not counting the terminal—moved it into a closet assigned to the acoustics department, and got it up and running. One way or another, they convinced acoustics to provide space for the computer and also to pay for the not infrequent repairs to it out of that department’s budget. McIlroy’s programmers suddenly had a computer, kind of. So during the summer of 1969, Thompson, Ritchie, and Canaday hashed out the basics of a file manager that would run on the PDP-7. This was no simple task. Batch computing—running programs one after the other—rarely required that a computer be able to permanently store information, and many mainframes did not have any permanent storage device (whether a tape or a hard disk) attached to them. But the time-sharing environment that these programmers had fallen in love with required attached storage. And with multiple users connected to the same computer at the same time, the file manager had to be written well enough to keep one user’s files from being written over another user’s. When a file was read, the output from that file had to be sent to the user that was opening it. It was a challenge that McIlroy’s team was willing to accept. They had seen the future of computing and wanted to explore it. They knew that Multics was a dead-end, but they had discovered the possibilities opened up by shared development, shared access, and real-time computing. Twenty years later, Ritchie characterized it for Princeton as such: “What we wanted to preserve was not just a good environment in which to do programming, but a system around which a fellowship could form.” Eventually when they had the file management system more or less fleshed out conceptually, it came time to actually write the code. The trio—all of whom had terrible handwriting—decided to use the Labs’ dictating service. One of them called up a lab extension and dictated the entire code base into a tape recorder. And thus, some unidentified clerical worker or workers soon had the unenviable task of trying to convert that into a typewritten document. Of course, it was done imperfectly. Among various errors, “inode” came back as “eye node,” but the output was still viewed as a decided improvement over their assorted scribbles. In August 1969, Thompson’s wife and son went on a three-week vacation to see her family out in Berkeley, and Thompson decided to spend that time writing an assembler, a file editor, and a kernel to manage the PDP-7 processor. This would turn the group’s file manager into a full-fledged operating system.
He generously allocated himself one week for each task. Thompson finished his tasks more or less on schedule. And by September, the computer science department at Bell Labs had an operating system running on a PDP-7—and it wasn’t Multics. By the summer of 1970, the team had attached a tape drive to the PDP-7, and their blossoming OS also had a growing selection of tools for programmers (several of which persist down to this day). But despite the successes, Thompson, Canaday, and Ritchie were still being rebuffed by labs management in their efforts to get a brand-new computer. It wasn’t until late 1971 that the computer science department got a truly modern computer. The Unix team had developed several tools designed to automatically format text files for printing over the past year or so. They had done so to simplify the production of documentation for their pet project, but their tools had escaped and were being used by several researchers elsewhere on the top floor. At the same time, the legal department was prepared to spend a fortune on a mainframe program called “AstroText.” Catching wind of this, the Unix crew realized that they could, with only a little effort, upgrade the tools they had written for their own use into something that the legal department could use to prepare patent applications. The computer science department pitched lab management on the purchase of a DEC PDP-11 for document production purposes, and Max Mathews offered to pay for the machine out of the acoustics department budget. Finally, management gave in and purchased a computer for the Unix team to play with. Eventually, word leaked out about this operating system, and businesses and institutions with PDP-11s began contacting Bell Labs about their new operating system. The Labs made it available for free—requesting only the cost of postage and media from anyone who wanted a copy. The rest has quite literally made tech history. See the link for the rest of the article. How to configure a network dump in FreeBSD? (https://www.oshogbo.vexillium.org/blog/68/) A network dump might be very useful for collecting kernel crash dumps from embedded machines and machines with a larger amount of RAM than available swap partition size. Besides net dumps, we can also try to compress the core dump. However, often there may still not be enough swap to hold the whole core dump. In such situations, using a network dump is a convenient and reliable way of collecting a kernel dump. So, first, let’s talk a little bit about history. The first implementation of network dumps was written around 2000 for FreeBSD 4.x as a kernel module. The code was reworked in 2010 with the intention of being part of FreeBSD 9.0. However, the code never landed in FreeBSD. Finally, in 2018, with commit r333283 by Mark Johnston, the netdump client code landed in FreeBSD. Subsequently, many other commits added support for different drivers (for example r333289). The first official release of FreeBSD that supports netdump is FreeBSD 12.0. Now, let’s get back to the main topic: how to configure a network dump? Two machines are needed. One machine will collect the core dump; let’s call it the server. We will use the second one to send us the core dump - the client.
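A minimal sketch of the client-side setup, assuming a FreeBSD 12 system: the addresses and the vtnet0 interface below are hypothetical, the server is a machine running the netdumpd daemon (available in ports as sysutils/netdumpd), and the dumpon(8) man page on your release is the authority for the exact flags:

    # Hypothetical netdump client configuration on FreeBSD 12.
    # -s: the netdump server (runs netdumpd and stores the dump)
    # -c: this client's own IP address
    # -g: the gateway to route through (omit on a shared segment)
    # vtnet0: a NIC whose driver supports netdump
    dumpon -s 192.0.2.10 -c 192.0.2.20 -g 192.0.2.1 vtnet0

After a panic, the kernel then streams the crash dump over the network to the server instead of writing it to a local swap device.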
See the link for the rest of the article. Beastie Bits Sudo Mastery 2nd edition is now out (https://mwl.io/archives/4530) Empirical Notes on the Interaction Between Continuous Kernel Fuzzing and Development (http://users.utu.fi/kakrind/publications/19/vulnfuzz_camera.pdf) soso (https://github.com/ozkl/soso) GregKH - OpenBSD was right (https://youtu.be/gUqcMs0svNU?t=254) Game of Trees (https://gameoftrees.org/faq.html) Feedback/Questions BostJan - Another Question (http://dpaste.com/1ZPCCQY#wrap) Tom - PF (http://dpaste.com/3ZSCB8N#wrap) JohnnyK - Changing VT without keys (http://dpaste.com/3QZQ7Q5#wrap) Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv (mailto:feedback@bsdnow.tv)
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because by understanding the past, we're able to be prepared for the innovations of the future! Today's episode is on the history of the Domain Name System, or DNS for short. You know when you go to www.google.com. Imagine if you had to go to 172.217.4.196, the IP address, instead. DNS is the service that resolves that name to that IP address. Let's start this story back in 1966. The Beatles released Yellow Submarine. The Rolling Stones were all over the radio with Paint It Black. Indira Gandhi was elected the Prime Minister of India. US planes were bombing Hanoi smack dab in the middle of the Vietnam War. The US and USSR agreed not to fill space with nukes. The Beach Boys had just released Good Vibrations. I certainly feel the good vibrations when I think that quietly, when no one was watching, the US created ARPANET, or the Advanced Research Projects Agency Network. ARPANET would evolve into the Internet as we know it today. As with many great innovations in technology, it took awhile to catch on. Late into the 1980s there were just over 300 computers on the Internet, most doing research. Sure, there were 256 to the 4th power - over four billion - addresses just waiting to be used, but the idea of keeping the address of all 300 computers you wanted to talk to seemed cumbersome, and it was slow to take hold. To get an address in the 70s you needed to contact Jon Postel at USC to get put on what was called the Assigned Numbers List. You could call or mail them. Stanford Research Institute (now called SRI) had a file they hosted called hosts.txt. This file mapped the name of one of these hosts on the network to an IP address, making a table of computer names and the IP addresses they matched with - a table of hosts. Many computers still maintain this file. Elizabeth Feinler maintained this directory of systems. She would go on to lead and operate the Network Information Center, or NIC for short, for ARPANET and see the evolution to the Defense Data Network, or DDN for short, and later the Internet. She wrote what was then called the Resource Handbook. By 1982, Ken Harrenstien and Vic White in Feinler's group at Stanford created a service called Whois, defined in RFC 812, which was an online directory. You can still use the whois command on Windows, Mac, and Linux computers today. But by 1982 it was clear that the host table was getting slower and harder to maintain as more systems were coming online. That meant more people to do the maintenance. So Postel at USC started reviewing proposals for maintaining this thing, a task he handed off to Paul Mockapetris. That's when Mockapetris did something that he wasn't asked to do and created DNS. Mockapetris had been working on some ideas for filesystems at the time and jumped at the chance to apply those ideas to something different. So Jon Postel and Zaw-Sing Su helped him complete his thoughts, which were published by the Internet Engineering Task Force, or IETF, in RFC 882 for the concepts and facilities, and RFC 883 for the implementation and specification, in November 1983. You can google those and read them today, and most of it is still used. Here, he introduced the concept that a NAME of a TYPE points to an address, or RDATA, and lives for a specified amount of time, or TTL, short for Time To Live. He also mapped IP addresses to names in the specifications, creating PTR records. All names had a TLD, or Top Level Domain, of ARPA.
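To picture that model, here is a hypothetical zone-file fragment in modern notation; the names and addresses are reserved documentation values, not anything from the episode. Each record says that a NAME, for a given TYPE, resolves to some RDATA and may be cached for TTL seconds, and the PTR record is the reverse mapping Mockapetris specified:

    ; NAME                     TTL   CLASS TYPE  RDATA
    www.example.com.           3600  IN    A     192.0.2.80
    mail.example.com.          3600  IN    A     192.0.2.25
    ; PTR: maps an address back to a name (reverse DNS)
    80.2.0.192.in-addr.arpa.   3600  IN    PTR   www.example.com.

That small vocabulary - a name, a type, a time to live, and the data - is essentially unchanged from the 1983 RFCs.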
Designing a protocol isn't the same thing as implementing a protocol. In 1984, four students from the University of California, Berkeley - Douglas Terry, Mark Painter, David Riggle, and Songnian Zhou - wrote the first version of BIND, short for Berkeley Internet Name Domain, for BSD 4.3, using funds from a DARPA grant. In 1988 Paul Vixie from Digital Equipment Corporation then gave it a little update and maintained it until he founded the Internet Systems Consortium to take it over. BIND is still the primary distribution of DNS, although there are other distributions now. For example, Microsoft added DNS in 1995 with the release of NT 3.51. But back to the 80s real quick. In 1985 came the introduction of the .mil, .gov, .edu, .org, and .com TLDs. Remember Jon Postel from USC? He and Joyce K. Reynolds started an organization called IANA to assign numbers for use on the Internet. DNS servers are hierarchical, and so there's a set of root DNS servers, with a root zone controlled by the US Dept of Commerce. 10 of the 13 original servers were operated in the US and 3 outside, each assigned a letter of A through M. You can still ping a.root-servers.net. These host the root zone database from IANA and handle the hierarchy of the TLDs they're authoritative for, with additional servers hosted for .gov, .com, etc. There are now over 1,000 TLDs! And remember how USC was handling the addressing (which became IANA) and Stanford was handling the names? Well, Feinler's group turned over naming to Network Solutions in 1991, and they handled it until 1998, when Postel died and ICANN was formed. ICANN, or the Internet Corporation for Assigned Names and Numbers, merged the responsibilities under one umbrella. Each region of the world is allowed to manage its own IP addresses, and so ARIN was formed in 1997 to manage the distribution of IP addresses in America. The collaboration between Feinler and Postel fostered the innovations that would follow. They also didn't try to take everything on. Postel instigated TCP/IP and DNS. Postel co-wrote many of the RFCs that define the Internet and DNS to this day. And Feinler showed great leadership in administering how much of that was implemented. One can only aspire to find such a collaboration in life, and to do so with results like the Internet - worth tens of trillions of dollars, but more importantly, something that has reshaped the world, disrupted practically every industry, and touched the lives of nearly every human on earth. Thank you for joining us for this episode of the History Of Computing Podcast. We hope you had an easy time finding thehistoryofcomputing.libsyn.com thanks to the hard work of all those who came before us.
YouTube and Soft Porn? There is a YouTuber named Matt Watson who exposed what he called a wormhole into a soft-core pedophilia ring and posted it on YouTube. In less than a week he essentially caused some very big advertisers to cut off their advertising on YouTube. I'll tell you more about that today. You know, I've been warning you about these smart devices, and today there is a real-world example of the bad guys at work and the true invasion of our private lives. The Dark Web is in the news. We'll talk about what it is, what's available, how you access it, and what you can find. We've got Windows Defender. This is something I'm talking about in my new master course over the next couple of weeks, where I will teach how to secure your computers and secure them effectively. Do you put things off? We all do. We'll talk a little bit about that and procrastination. I will tell you about an extension for Chrome that just might help you out. Facebook may not be trying to protect the rest of us, but they are trying to protect their own. Yes, indeed, Facebook is using its apps to track people. And we'll tell you why and how. Flipping houses. Has it ever intrigued you? I know it has me. Now, Zillow is helping you get the information you need if you are interested in buying or selling properties. They have a couple of new tools that I will talk to you about. These and more tech tips, news, and updates: visit CraigPeterson.com --- Transcript: Below is a rush transcript of this segment; it might contain errors. Airing date: 02/23/2019 Flipping Houses The Hitech Way - Smart TVs Chrome sticks Hacked For-Profit - Pedophile Ring Operating On YouTube Craig Peterson: 0:00 Here we go again, Craig Peterson. Man, we just keep pushing to that thousand-week mark. This is so cool. I have to thank everybody, of course, at Clear Channel and iHeart, and even before it was either one of those, for putting up with me all these years, and you guys for listening, right? If it wasn't for listeners, there just wouldn't be a radio show and podcast. So, hey, thank you very much. Of course, you can listen to me live on the iHeart app. I'm not sure why their podcasting isn't picking me up, but yeah, we've never been able to resolve that. Maybe I should get somebody to work on that one as well, because there are a lot of people that really enjoy it, and if you want to listen to the podcast it's just http://CraigPeterson.com/iTunes and you'll find it right there. I'm also on a bunch of other ones. I'm on SoundCloud, I think, and I know I'm on TuneIn, and a few of these other distribution networks out there. So have a look for it, Tech Talk with Craig Peterson, and don't get confused with this other guy that's been trying to steal my name and my show out in the Seattle area. No, no, no, no, it's me. I have a kind of distinctive voice, and so from that, hopefully you'll be able to figure out which one is which. Because I am the original. Now, okay, moving on. Today, we've got, as usual, a whole ton of stuff. Our email should be back to usual for you guys. So, this morning, Saturday morning, you should have received the weekly show notes, and those show notes go through all of the articles that I think are worth covering in the news. So let's get going. We've got a YouTuber here who has essentially caused some big advertisers to cut off their advertising on YouTube. We'll tell you about that. You know, I've been warning you about these smart devices that are out there. We're going to tell you why.
Here's a real-world example of what a bad guy has been doing. And he says he's going to stop doing it now. The dark web: what is it? What can you access? How do you access it? What can you find? And we've got Windows Defender. This is something I'm talking about in the course, my master course, over the next couple of weeks, about how to secure your computers and secure them effectively. So we'll talk a little bit about that, and about procrastination. This is kind of cool. How many of us get on the computer, we open up Chrome, we start poking around on the internet, and we end up wasting a lot of time? Well, here's a Chrome extension that might help you out. Facebook may not be trying to protect the rest of us, but they are trying to protect their own. Yes, indeed, Facebook is using its apps to track people, and we'll tell you why and how. And we'll start out with this one, because I think this is really cool. You've probably heard of Zillow before. It's an app, it's a website. I've used it before. You know, you go into a nice neighborhood or area of the country and say, wow, I wonder what it costs to buy a house here. So you pop up Zillow, and Zillow comes up and says, hey, yeah, here's what it costs in this town, and here are all the houses that are for sale. It even now tells you about rentals and rental availability. And Zillow has been keeping a very close eye on the real estate market. It's pretty much right almost all the time. It even has a tool that will estimate what a house is worth, so it'll have its little Zillow estimate. It'll let you know what the taxes are, what was paid for that house originally; it kind of pulls it all together. Now, you've been watching on TV; there's, I think it's Home and Garden TV, HGTV. My wife loves this channel, and some of the stuff that's on it. But they have these house-flipping shows. There are these twin brothers that do one; there are these people in Waco, Texas, a husband and wife. I guess they're having marital issues now, I'm not sure. But this whole Magnolia thing and how popular it is: it's gotten so popular that even Target has picked up her chain of household decorations and everything, which is really kind of cool when you get right down to it and think about it. She has really built an empire along with her husband. I think it's mainly her, but you know, it's hard to tell; maybe it's him driving it, and she's just the voice for the most part. But she's definitely the decorator type. Well, these shows have become very popular. They are inexpensive to produce, at least for the most part. And the idea (I only mentioned two shows; there are a lot of them) is that the hosts go in, they find a house, maybe a distressed house, somebody's about to get foreclosed on or something else is happening. But they find the house and they try and buy it at a deal. And oftentimes they are on foreclosed homes, and they don't get a chance to inspect them beforehand. But they're still able to make a bunch of money, and that's how they're compensated. Of course, they get money from the show's production company, and there may be other monies involved between all of these different people. It's fairly expensive, obviously, to produce a show. But because the hosts can make so much money, it costs a lot less to produce these shows, and people love them, right? I like those shows also where they go to different parts of the world.
I saw one where they were in Panama, and they show that couple three different properties, and they have to decide which one they want to buy; you're their guest, guessing against your wife, trying to figure out which one it is, right? Aren't those things kind of cool? Obviously, I'm not the only one that likes these, just based on the numbers that are out there. And if you've seen them, you might have thought a little bit about maybe flipping a house. And there have been ads on this radio station that I'm on, and others, for seminars you can attend where they'll teach you everything you need to know about how to make money fast, and you do it in the real estate market, and you use other people's money. I took Bob Allen's course, read his book; he and Mark Victor Hansen got together, they put together this whole thing, and I bought it all and I went to some of the seminars. I've been to these before: multi-day seminars on buying houses and flipping them and things. I never did it, because it wasn't something I was comfortable doing. You know, what do I know? I certainly have done renovations in my own home before, and I mean done them, you know, built everything, built the cabinets from scratch, and the whole nine yards. So I know a little bit about it. But I never felt comfortable enough to go out and do it all on my own. How about you guys, right? I guess it's the same thing with computer security. Most of the people that signed up for this latest master course of mine were kind of wondering: what should they do? How should they do it? They didn't even know where to start. And that's kind of where I would be. And even after attending these seminars, these live events, I didn't feel comfortable enough to go out and start flipping houses, right? Well, this is interesting when we tie in Zillow, because you can use it a little in order to kind of figure out: is there a house I should flip? And bottom line, if you are going to be buying homes and flipping them, which means you're going to renovate them, you're going to turn them into a valuable property from whatever they are now and sell it for a higher price, hopefully higher than what it cost you to buy it and renovate it. Right? That's what flipping is, when we're talking about flipping a house. So hopefully you know the neighborhood. Everybody that I've read, and the seminars I've attended, say: get to know a neighborhood. It might be your neighborhood, it might be a close-by neighborhood, but know everything. Know all of the houses. What did they sell for? What are they really worth? How long have they been sitting on the market? That information can help you also get a really good deal on a house. Now, remember, I mentioned Zillow. Have you used Zillow before? Have you wondered what their model is, how they really make money? It is obvious that they're making money from the real estate agents that are advertising in there and the leads that they're getting from Zillow. But this is very interesting, because Zillow is now getting in the house-flipping business. Isn't that something? So you can now go on to Zillow. And this is cool; I even saw an ad, I have it up on my website. When you look at this article that came out of Bloomberg, they have a little ad, right out of print media, about flipping your house: you know, "we want to buy your home." So people have these problems all the time where they're trying to sell their home, but they can't buy a new home until they've sold the current home.
And once they've sold the current home, they're in real trouble, because they've got to get out of that home in 30 days or 60 days, whatever it is, which means that's all the time they have to find a new home, and that's just not going to work out very well for them. Been in that position before. So, should I do repairs on the house before I sell it? How much repair should I do? Should I just paint the walls white? I've got to replace this floor or that flooring. I've got to fix this roof leak that's been there for a long time, right? All of these things: you guys have dealt with this before, right, if you've ever sold a house? Well, the company that came up with this whole business thing, and came up with the programs and the website and now the apps, has something called Zestimates. Z-E-S-T, just like Zillow, Z-I-L-L-O-W. Check it out if you haven't checked out that app or website yet. But Zestimates now: it's kind of a Kelley Blue Book for American homes. And they've started an instant-offers business. The idea behind this is, if you think you want to sell your home, you go on to the Zillow site and you get a Zestimate. And the whole idea behind this is that Zillow has a pretty good idea of what your house might be worth, because they're using some machine learning. They have all of this data about homes that have been sold in your neighborhood, in your town, and what they're all worth. So if you sign up for this, it's part of this whole new breed of high-tech home flippers. They've been called iBuyers before. There are some Silicon Valley startups that are doing this, and real estate brokerages that have instant-offer operations as well. So Zillow has kind of pulled this all together: the information, Wall Street finances, Silicon Valley capital, and all of these algorithms they've developed. And they can make very detailed, fine, and really correct predictions about home prices. So these investors are buying homes on a massive scale. And because it's such a big scale, unlike the Magnolia people we talked about before, they only have to make just a small profit out of each flip. So that's putting the real estate investors that have been flipping homes the old-fashioned way in a bit of a bad spot. But it's interesting, because this Bloomberg article goes through a specific example in the Phoenix area. And what happened was, they listed their home, and they weren't sure what to do, what to fix, what not to fix. Normally, what would happen, right? You'd have a real estate agent come out; they look at the house and say, well, if you fix the bathroom, you're going to get your money out; if you do the kitchen, you're not going to get your money out; and you've got to fix the roof because it's got a second roofline, or whatever. All of those things, right? And then you do it, and you have your fingers crossed that you're going to be able to sell that house at some point in the future. Well, in this case, what happens is Zillow makes some really good estimates based on what's happening in your neighborhood. And they now have this great opportunity, because they might make a good deal on it. So they hire someone to come out at that point and look at your house. They have an appraiser who comes out and has a quick look around, makes sure there's nothing, like, terrible with your house which would lower its value on the market, and then, ta-da, you're done. It's just absolutely amazing. So, you know, agents in the past have really resented Zillow's market power.
And they've been complaining about this whole idea behind the Zestimate (Zillow has had these estimates for a long time), saying that they're creating unrealistic expectations. But now Zillow, with what you might call a "Zappraisal," has a very good business model and a very good idea of what your house is actually worth. So keep an eye out for that. If you're looking to buy a house right now, today, you can go to Zillow and find out what their Zestimate is, their Zillow estimate for the value of the house, just generically speaking, assuming there are no major flaws in the home. And you can now also have Zillow buy your house from you and flip it. Now, because they only need a very small margin, you might actually do better on one of these Zillow flips. And they do charge like six to, I think it goes as high as, ten percent, which is higher than your normal real estate agent. But you're also guaranteed a sale, and the sale is going to happen very quickly. So, you know, you have to take all of that stuff into consideration when you're looking at it. Well, we're going to come back here in just a second. I want to talk about this guy called TheHackerGiraffe, and why my warnings about smart TVs are coming true, what he's been doing, and why he's retiring. So, here we go. This guy calls himself, or called himself, I guess, TheHackerGiraffe. And he said he's retired from hacking smart TVs. This is really, really interesting, because, you know, I've warned about this before: we've had manufacturers who have installed cameras in your TV. And the idea is, the TV looks back at you. It's watching you while you're watching TV. And the idea for that type of spying on people is to try and figure out: are you paying attention? Are you paying attention during the ads? When do you look away? Who's in the room? There's facial recognition software some of these vendors are using, so they know, oh, it's this person, and they always watch this type of show; it's that person; oh, we've got a whole family in here. It recognizes there are, like, three, four, five faces, and how often they're watching TV. And of course, that's a true invasion of our privacy. And there was a great article this week about Paul Vixie, a guy who I've admired for years and loosely worked with on some of the DNS stuff. I've used his code forever, you know, kind of advanced, techie, geeky stuff on the internet. But he was all upset because he installed a machine that was calling home. And it was, I think, a Google Home device. And it wasn't using his DNS, and there's no way to change it. And, you know, these devices we're putting in our homes, this Internet of Things, frankly, can be dangerous. They can be spying on us, and worse: they can be used by the bad guys to find out when people are not home. That's the time to rob the house, right? It's breaking and entering versus some form of assault that might happen. So people should be concerned, very legitimately, I think, about all of this. And when you're talking about the smart TVs, there's even more. This guy, TheHackerGiraffe: he hacked not just smart TVs, but these Google Chromecast streaming devices and others, and his whole idea was to promote a YouTuber that he liked. Have you heard of this guy, this PewDiePie guy? You might not have. A very strange guy. Very, very strange guy. But he had a popular YouTube channel; he did some things that kind of forced him out of favor, and that was a problem, because he started losing followers.
I think, in fact, he got completely delisted off of YouTube, and he managed to get back on. Well, apparently this guy, TheHackerGiraffe (and I'm looking at this article from Graham Cluley; thanks for forwarding this along, it's kind of interesting), I don't know that he was paid... I think he was paid. Yeah, in fact he was: he used Patreon in order to get paid. But he was going out and hacking people's smart TVs and smart TV devices, and had it set up so that you'd turn on your TV, you'd be watching a program or whatever, and up would come an ad to watch this PewDiePie guy, for his channel. There he was: here's his YouTube channel, click on it, sort of thing. It's kind of nuts when you think about it. And he was doing it not just for PewDiePie; he was doing it for other people too. Now, I have long criticized the Huffington Post, because they used black-hat techniques in order to get followers. In other words, they cheated and lied. They did a lot of things that people have accused all kinds of people on the internet of doing, but, you know, the ends justify the means to these people. And in this case, with the Huffington Post, the ends justified the means: we've got to get our message out there, so we've got to get bigger and stronger. And in fact, that's exactly what they did. Well, by getting bigger and stronger, and nastier, they increased their base. They increased their base, and then the rules changed. You could practice these dark arts back in the day to try and fool people into following your website, but you couldn't do it anymore, because they started getting clamped down on. People didn't like it, and the FTC came after them, saying this is illegal, unfair, etc., and so they stopped doing it. And it's not just the Huffington Puffington Post; many of these different websites did it over the years. So now, this guy was paid to go on to your smart TV or Chromecast devices and hack them to get you to watch their internet channel. So a lot of people were asking, Chromecast owners, why their TV was suddenly showing a video promoting this YouTube superstar PewDiePie. And Chromecast owners complained to Google, who looked into this and started finding out: oh, wait a minute now, this all happened because of, again, incorrectly configured routers that had UPnP enabled. You might have heard of this before. It's a way for you to get out to the internet and then have the internet get back to you. But it's interesting. Just because it's misconfigured doesn't give you carte blanche access to it, right? I misconfigured my WiFi; why does that somehow mean to you that you have every right to kind of break in on me? That's the question. And this guy, TheHackerGiraffe, has decided he's going to retire. He posted his retirement notice online, and we've seen the PewDiePie fans doing all kinds of things. Just last month, they defaced a section of the Wall Street Journal website. And of course, this TheHackerGiraffe guy and PewDiePie both claim they had no involvement in the attack. But frankly, it's interesting to look at, right? Because we are seeing major changes in the internet and major changes in what we consider to be acceptable. So that brings us to Facebook and pedophiles. Believe it or not, I'm not saying that everybody is a pedophile on... did I say Facebook? I meant YouTube; on YouTube. But it's kind of interesting to look at this here. Obviously, not everybody is a pedophile on YouTube. And I'm also not saying that YouTube has pictures that are showing minors in various seriously compromising positions, if you will.
But this is kind of interesting, because a YouTuber by the name of Matt Watson just this week exposed what he called a wormhole into a soft-core pedophilia ring, and he posted it out on YouTube. He uploaded it on Sunday, so it hasn't even quite been a week yet. But his video claims to reveal a frankly not-so-underground YouTube community of adults who are encouraging very young children to upload videos of themselves in compromising situations. I don't want to go into all of this, because it just disgusts me, frankly, and I'm sure it would you too. Now, what's happening here, because of this coming out, is that major corporations have pulled all of their advertising from YouTube. Because remember, Facebook is supposed to be policing itself; in fact, it entered into an agreement with the Federal Trade Commission years ago about self-policing, and Facebook's been accused of not following through with the plea deal. YouTube is also trying to be self-policing. But remember, I said we've seen a lot of changes over the years: what is acceptable is not the same today as it was a few years ago. Obviously, this is not acceptable. So all kinds of people pulled funding. YouTube's trying to figure out how they can deal with this, how they can find the pictures. Because these are not naked pictures that are easy enough to find through a computer algorithm, one looking for a certain percentage of skin, etc., right, that you normally would use for typical pornography. So, in this case, it's very difficult for them. There's a hashtag campaign over on Twitter, and you might look it up, if you're a Twitter user, to find out more about this. The hashtag is YouTubeWakeUp, and within 24 hours these advertisers had caught on. It's crazy. We've got multiple ads from McDonald's, Disney, Grammarly, Chromebook, Purina, IKEA, Glad, GNC, Lysol, and many others that just said, forget about this. So Watson's calling this a small victory. It certainly is a victory. This battle is still ongoing. And frankly, the public's known for a long time about child exploitation on YouTube. But so far, YouTube has failed to adequately address the issue, according to Watson and some of these other people out there. Now, this video came out just days after multiple innocent YouTube channels were wrongly deleted after being flagged for tagging their videos with the abbreviation for child porn. And that happens all too often as well. Someone who doesn't like a YouTube channel, somebody that's a competitor in the financial space with a YouTube channel, will go and inappropriately flag content as being inappropriate: wrong, kiddie porn, etc., etc. So YouTube has a nightmare on their hands, right? It is like the boy who cried wolf. We just had that arrest just this week: an actor on TV who apparently came out and hired a couple of Nigerian guys, big bodybuilders, to supposedly beat him up and put a noose around his neck and dump bleach on him. The accusations are flying on that front. But people do this kind of stuff all the time. So how is YouTube supposed to be able to deal with this? Even advertisers are getting ads pulled down because of so-called inappropriate content, or content that doesn't meet the Facebook guidelines, or, you know, all across out there. It's difficult as an advertiser nowadays to figure this whole thing out.
So it's interesting: these videos that Watson exposed this week obviously slipped under the radar, but we'll see what happens at YouTube. If it simply enforced its rule that forbids uploads from users under the age of 13, that would have stopped all of these videos we're just talking about. So, a lot more at my website. Check it out: http://CraigPeterson.com. You can find everything we talked about today, plus more, including what Facebook's doing to people who threaten employees. Procrastination: this is great, a Chrome extension; you should look that up. You'll find it on my website. Why Windows Defender antivirus is the most deployed antivirus in big, big businesses out there. And what is the dark web? How do you access it? What will you find? It's not all necessarily bad; there certainly is enough of that, but it's not all bad out there. All of that at http://CraigPeterson.com, and make sure you get my newsletter, because that's going to keep you up to date on all the security stuff and get you the weekly show notes as well. Have a great day, and we'll see you next week. Take care. Bye-bye. --- Related articles: What Is The Dark Web? How To Access It And What You’ll Find Did Your Smart TV Start Showing You Shows You Didn’t Subscribe To? “TheHackerGiraffe” Says He’s Retired From Hacking Smart TVs To Promote PewDiePie YouTuber Claims Online Pedophile Ring Operates Freely On YouTube Why Windows Defender Antivirus Is The Most Deployed In The Enterprise Prevent Procrastination With This Chrome Extension Facebook Uses Its Apps To Track Users It Thinks Could Threaten Employees And Offices Zillow Wants To Flip Your House --- More stories and tech updates at: www.craigpeterson.com Don't miss an episode from Craig. Subscribe and give us a rating: www.craigpeterson.com/itunes Follow me on Twitter for the latest in tech at: www.twitter.com/craigpeterson For questions, call or text: 855-385-5553
Paul Vixie, CEO of Farsight Security, explains how global Internationalized Domain Names, or global IDNs, sparked the emergence of confusingly similar website addresses with nefarious goals -- and how to combat them.
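As a quick illustration of the homograph problem (a hedged sketch, not Farsight's actual method): the Python standard library is enough to show how a Unicode lookalike betrays itself once converted to the punycode form that actually goes on the wire, and a crude mixed-script check catches many confusables. The spoofed name below is hypothetical.

```python
# Sketch: spotting IDN lookalikes with the Python stdlib (IDNA 2003 codec).
# The spoofed label's first letter is CYRILLIC SMALL LETTER IE (U+0435),
# not the Latin "e" it resembles on screen.
import unicodedata

real = "example.com"
spoof = "еxample.com"

print(real.encode("idna"))    # b'example.com' -- pure ASCII passes through
print(spoof.encode("idna"))   # b'xn--...' -- the lookalike can't hide in punycode

def scripts(label: str) -> set:
    """Crude heuristic: which Unicode scripts appear in a label?"""
    return {unicodedata.name(ch).split()[0] for ch in label if ch.isalpha()}

for name in (real, spoof):
    label = name.split(".")[0]
    verdict = "suspicious" if len(scripts(label)) > 1 else "ok"
    print(name, scripts(label), verdict)
```

Real registries and browsers apply far more elaborate confusable tables (Unicode TR39 and per-TLD script policies), but the punycode form is the ground truth resolvers see.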
Arcan and OpenBSD, running OpenBSD 6.3 on RPI 3, why C is not a low-level language, HardenedBSD switching back to OpenSSL, how the Internet was almost broken, EuroBSDcon CfP is out, and the BSDCan 2018 schedule is available. Headlines Towards Secure System Graphics: Arcan and OpenBSD Let me preface this by saying that this is a (very) long and medium-rare technical article about the security considerations and minutiae of porting (most of) the Arcan ecosystem to work under OpenBSD. The main point of this article is not so much flirting with the OpenBSD crowd or adding further noise to software engineering topics, but to go through the special considerations that had to be taken, as notes to anyone else that decides to go down this overgrown and lonesome trail, or is curious about some less-than-obvious differences between how these things “work” on Linux vs. other parts of the world. A disclaimer is also that most of this has been discovered by experimentation and combining bits and pieces scattered in everything from Xorg code to man pages; there may be smarter ways to solve some of the problems mentioned – this is just the best I could find within the time allotted. I’d be happy to be corrected, in patch/pull request form that is 😉 Each section will start with a short rant-like explanation of how it works in Linux, and what the translation to OpenBSD involved or, in the cases that are still partly or fully missing, will require. The topics that will be covered this time are: Graphics Device Access, Hotplug, Input, Backlight, Xorg, Pledging, Missing. Installing OpenBSD 6.3 (snapshots) on Raspberry Pi 3 The Easy Way Installing OpenBSD on the Raspberry Pi 3 is very easy and well documented, which almost convinced me not to write about it, but still I felt it may help somebody new to the project (but again, I really recommend reading the documentation if you are interested and have the time). Note: I'm always running snapshots and recommend anybody do the same. But the snapshot links will change to the next version every 6 months, so I changed the links to the 6.3 version to keep the blog post valid over time. If you're familiar with the OpenBSD flavors, feel free to use the snapshot links instead. Requirements Due to a lack of drivers, OpenBSD cannot boot directly from the SD card yet, so we'll need a USB stick as the installation target alongside the SD card for U-Boot and the installer. Also, a serial console connection is required. I used a PL2303 USB-to-serial (TTL) adapter connected to my laptop via USB and connected to the Raspberry Pi via the TX, RX, and GND pins. iXsystems https://www.ixsystems.com/blog/truenas-m-series-veeam-pr-2018/ Why Didn’t Larrabee Fail? Every month or so, someone will ask me what happened to Larrabee and why it failed so badly. And I then try to explain to them that not only didn't it fail, it was a pretty huge success. And they are understandably very puzzled by this, because in the public consciousness Larrabee was like the Itanic and the SPU rolled into one, wasn't it? Well, not quite. So rather than explain it in person a whole bunch more times, I thought I should write it down. This is not a history, and I'm going to skip a TON of details for brevity. One day I'll write the whole story down, because it's a pretty decent escapade with lots of fun characters. But not today. Today you just get the very start and the very end.
When I say "Larrabee" I mean all of Knights, all of MIC, all of Xeon Phi, all of the "Isle" cards - they're all exactly the same chip and the same people and the same software effort. Marketing seemed to dream up a new codeword every week, but there was only ever three chips: Knights Ferry / Aubrey Isle / LRB1 - mostly a prototype, had some performance gotchas, but did work, and shipped to partners. Knights Corner / Xeon Phi / LRB2 - the thing we actually shipped in bulk. Knights Landing - the new version that is shipping any day now (mid 2016). That's it. There were some other codenames I've forgotten over the years, but they're all of one of the above chips. Behind all the marketing smoke and mirrors there were only three chips ever made (so far), and only four planned in total (we had a thing called LRB3 planned between KNC and KNL for a while). All of them are "Larrabee", whether they do graphics or not. When Larrabee was originally conceived back in about 2005, it was called "SMAC", and its original goals were, from most to least important: Make the most powerful flops-per-watt machine for real-world workloads using a huge array of simple cores, on systems and boards that could be built into bazillo-core supercomputers. Make it from x86 cores. That means memory coherency, store ordering, memory protection, real OSes, no ugly scratchpads, it runs legacy code, and so on. No funky DSPs or windowed register files or wacky programming models allowed. Do not build another Itanium or SPU! Make it soon. That means keeping it simple. Support the emerging GPGPU market with that same chip. Intel were absolutely not going to build a 150W PCIe card version of their embedded graphics chip (known as "Gen"), so we had to cover those programming models. As a bonus, run normal graphics well. Add as little graphics-specific hardware as you can get away with. That ordering is important - in terms of engineering and focus, Larrabee was never primarily a graphics card. If Intel had wanted a kick-ass graphics card, they already had a very good graphics team begging to be allowed to build a nice big fat hot discrete GPU - and the Gen architecture is such that they'd build a great one, too. But Intel management didn't want one, and still doesn't. But if we were going to build Larrabee anyway, they wanted us to cover that market as well. ... the design of Larrabee was of a CPU with a very wide SIMD unit, designed above all to be a real grown-up CPU - coherent caches, well-ordered memory rules, good memory protection, true multitasking, real threads, runs Linux/FreeBSD, etc. Larrabee, in the form of KNC, went on to become the fastest supercomputer in the world for a couple of years, and it's still making a ton of money for Intel in the HPC market that it was designed for, fighting very nicely against the GPUs and other custom architectures. Its successor, KNL, is just being released right now (mid 2016) and should do very nicely in that space too. Remember - KNC is literally the same chip as LRB2. It has texture samplers and a video out port sitting on the die. They don't test them or turn them on or expose them to software, but they're still there - it's still a graphics-capable part. But it's still actually running FreeBSD on that card, and under FreeBSD it's just running an x86 program called DirectXGfx (248 threads of it). News Roundup C Is Not a Low-level Language : Your computer is not a fast PDP-11. 
In the wake of the recent Meltdown and Spectre vulnerabilities, it's worth spending some time looking at root causes. Both of these vulnerabilities involved processors speculatively executing instructions past some kind of access check and allowing the attacker to observe the results via a side channel. The features that led to these vulnerabilities, along with several others, were added to let C programmers continue to believe they were programming in a low-level language, when this hasn't been the case for decades. Processor vendors are not alone in this. Those of us working on C/C++ compilers have also participated. What Is a Low-Level Language? Computer science pioneer Alan Perlis defined low-level languages this way: "A programming language is low level when its programs require attention to the irrelevant." While, yes, this definition applies to C, it does not capture what people desire in a low-level language. Various attributes cause people to regard a language as low-level. Think of programming languages as belonging on a continuum, with assembly at one end and the interface to the Starship Enterprise's computer at the other. Low-level languages are "close to the metal," whereas high-level languages are closer to how humans think. For a language to be "close to the metal," it must provide an abstract machine that maps easily to the abstractions exposed by the target platform. It's easy to argue that C was a low-level language for the PDP-11. They both described a model in which programs executed sequentially, in which memory was a flat space, and even the pre- and post-increment operators cleanly lined up with the PDP-11 addressing modes. Fast PDP-11 Emulators The root cause of the Spectre and Meltdown vulnerabilities was that processor architects were trying to build not just fast processors, but fast processors that expose the same abstract machine as a PDP-11. This is essential because it allows C programmers to continue in the belief that their language is close to the underlying hardware. C code provides a mostly serial abstract machine (until C11, an entirely serial machine if nonstandard vendor extensions were excluded). Creating a new thread is a library operation known to be expensive, so processors wishing to keep their execution units busy running C code rely on ILP (instruction-level parallelism). They inspect adjacent operations and issue independent ones in parallel. This adds a significant amount of complexity (and power consumption) to allow programmers to write mostly sequential code. In contrast, GPUs achieve very high performance without any of this logic, at the expense of requiring explicitly parallel programs. The quest for high ILP was the direct cause of Spectre and Meltdown. A modern Intel processor has up to 180 instructions in flight at a time (in stark contrast to a sequential C abstract machine, which expects each operation to complete before the next one begins). A typical heuristic for C code is that there is a branch, on average, every seven instructions. If you wish to keep such a pipeline full from a single thread, then you must guess the targets of the next 25 branches. This, again, adds complexity; it also means that an incorrect guess results in work being done and then discarded, which is not ideal for power consumption. This discarded work has visible side effects, which the Spectre and Meltdown attacks could exploit. On a modern high-end core, the register rename engine is one of the largest consumers of die area and power. 
To make matters worse, it cannot be turned off or power gated while any instructions are running, which makes it inconvenient in a dark silicon era when transistors are cheap but powered transistors are an expensive resource. This unit is conspicuously absent on GPUs, where parallelism again comes from multiple threads rather than trying to extract instruction-level parallelism from intrinsically scalar code. If instructions do not have dependencies that need to be reordered, then register renaming is not necessary. Consider another core part of the C abstract machine's memory model: flat memory. This hasn't been true for more than two decades. A modern processor often has three levels of cache in between registers and main memory, which attempt to hide latency. The cache is, as its name implies, hidden from the programmer and so is not visible to C. Efficient use of the cache is one of the most important ways of making code run quickly on a modern processor, yet this is completely hidden by the abstract machine, and programmers must rely on knowing implementation details of the cache (for example, two values that are 64-byte-aligned may end up in the same cache line) to write efficient code. Backup URL Hacker News Commentary HardenedBSD Switching Back to OpenSSL Over a year ago, HardenedBSD switched to LibreSSL as the default cryptographic library in base for 12-CURRENT. 11-STABLE followed suit later on. Bernard Spil has done an excellent job at keeping our users up-to-date with the latest security patches from LibreSSL. After recently updating 12-CURRENT to LibreSSL 2.7.2 from 2.6.4, it has become increasingly clear to us that performing major upgrades requires a team larger than a single person. Upgrading to 2.7.2 caused a lot of fallout in our ports tree. As of 28 Apr 2018, several ports we consider high priority are still broken. As it stands right now, it would take Bernard a significant amount of his spare personal time to fix these issues. Until we have a multi-person team dedicated to maintaining LibreSSL in base along with the patches required in ports, HardenedBSD will use OpenSSL going forward as the default cryptographic library in base. LibreSSL will co-exist with OpenSSL in the source tree, as it does now. However, MK_LIBRESSL will default to "no" instead of the current "yes". Bernard will continue maintaining LibreSSL in base along with addressing the various problematic ports entries. To provide our users with ample time to plan and perform updates, we will wait a period of two months prior to making the switch. The switch will occur on 01 Jul 2018 and will be performed simultaneously in 12-CURRENT and 11-STABLE. HardenedBSD will archive a copy of the LibreSSL-centric package repositories and binary updates for base for a period of six months after the switch (expiring the package repos on 01 Jan 2019). This essentially gives our users eight full months for an upgrade path. As part of the switch back to OpenSSL, the default NTP daemon in base will switch back from OpenNTPd to ISC NTP. Users who have openntpd_enable="YES" set in rc.conf will need to switch back to ntpd_enable="YES". Users who build base from source will want to fully clean their object directories. Any and all packages that link with libcrypto or libssl will need to be rebuilt or reinstalled. With the community's help, we look forward to the day when we can make the switch back to LibreSSL.
We at HardenedBSD believe that providing our users options to rid themselves of software monocultures can better increase security and manage risk. DigitalOcean http://do.co/bsdnow -- $100 credit for 60 days How Dan Kaminsky Almost Broke the Internet In the summer of 2008, security researcher Dan Kaminsky disclosed how he had found a huge flaw in the Internet that could let attackers redirect web traffic to alternate servers and disrupt normal operations. In this Hacker History video, Kaminsky describes the flaw and notes the issue remains unfixed. “We were really concerned about web pages and emails 'cause that’s what you get to compromise when you compromise DNS,” Kaminsky says. “You think you’re sending an email to IBM but it really goes to the bad guy.” As the phone book of the Internet, DNS translates easy-to-remember domain names into IP addresses so that users don’t have to remember strings of numbers to reach web applications and services. Authoritative nameservers publish the IP addresses of domain names. Recursive nameservers talk to authoritative servers to find addresses for those domain names and save the information into their caches to speed up the response time the next time they are asked about that site. While anyone can set up a nameserver and configure an authoritative zone for any site, if recursive nameservers don’t point to it to ask questions, no one will get those wrong answers. Kaminsky found a fundamental design flaw in DNS that made it possible to inject incorrect information into the nameserver's cache, or DNS cache poisoning. In this case, if an attacker crafted DNS queries looking for sibling names to existing domains, such as 1.example.com, 2.example.com, and 3.example.com, while claiming to be the official "www" server for example.com, the nameserver would save that server's IP address for “www” in its cache. “The server will go, ‘You are the official. Go right ahead. Tell me what it’s supposed to be,’” Kaminsky says in the video. Since the issue affected nearly every DNS server on the planet, it required a coordinated response to address it. Kaminsky informed Paul Vixie, creator of several DNS protocol extensions and applications, and Vixie called an emergency summit of major IT vendors at Microsoft’s headquarters to figure out what to do. The “fix” involved combining the 16-bit transaction identifier that DNS lookups used with UDP source ports to create 32-bit transaction identifiers. Instead of fixing the flaw so that it can’t be exploited, the resolution focused on making it take more than ten seconds, eliminating the instantaneous attack. “[It’s] not like we repaired DNS,” Kaminsky says. “We made the Internet less flammable.” DNSSEC (Domain Name System Security Extensions) is intended to secure DNS by adding a cryptographic layer to DNS information. The root zone of the internet was signed for DNSSEC in July 2010 and the .com Top Level Domain (TLD) was finally signed for DNSSEC in April 2011. Unfortunately, adoption has been slow, even ten years after Kaminsky first raised the alarm about DNS, as less than 15 percent of users pass their queries to DNSSEC validating resolvers. The Internet was never designed to be secure. The Internet was designed to move pictures of cats. No one expected the Internet to be used for commerce and critical communications. If people lose faith in DNS, then all the things that depend on it are at risk. “What are we going to do? Here is the answer. Some of us gotta go out fix it,” Kaminsky says.
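A back-of-the-envelope sketch of the mitigation described above (this shows the entropy math only, not a resolver implementation). Binding a UDP socket to port 0 lets the kernel pick a random ephemeral port, which is how the source-port half of the guessing space gets added:

```python
# The 2008 "fix": an off-path spoofer must now match TXID AND source port.
import secrets, socket

txid = secrets.randbits(16)                      # 16-bit DNS transaction ID
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.bind(("0.0.0.0", 0))                           # port 0: kernel picks an ephemeral port
_, sport = s.getsockname()
s.close()

print(f"TXID={txid}, source port={sport}")
print(f"guesses needed: ~{2**16:,} before, ~{2**16 * 2**16:,} after")
```

Roughly 65 thousand guesses became roughly 4 billion, which is why the attack went from instantaneous to "more than ten seconds" rather than impossible; only DNSSEC validation actually removes the spoofing opportunity.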
OpenIndiana Hipster 2018.04 is here We have released a new OpenIndiana Hipster snapshot, 2018.04. The noticeable changes: Userland software is rebuilt with GCC 6. KPTI was enabled to mitigate recent security issues in Intel CPUs. Support for the Gnome 2 desktop was removed. Linked images now support the zoneproxy service. Mate desktop applications are delivered as 64-bit-only. Upower support was integrated. IIIM was removed. More information can be found in the 2018.04 Release notes, and new media can be downloaded from http://dlc.openindiana.org. Beastie Bits EuroBSDCon - Call for Papers OpenSSH 7.7 pkgsrc-2018Q1 released BSDCan Schedule Michael Dexter's LFNW talk Tarsnap ad Feedback/Questions Bob - Help locating FreeBSD Help Alex - Convert directory to dataset Adam - FreeNAS Question Florian - Three Questions Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv iX Ad spot: iXsystems TrueNAS M-Series Blows Away Veeam Backup Certification Tests
We cover an interview about Unix Architecture Evolution, another vBSDcon trip report, how to teach an old Unix about backspace, new NUMA support coming to FreeBSD, and stack pointer checking in OpenBSD. This episode was brought to you by Headlines Unix Architecture Evolution from the 1970 PDP-7 to the 2017 FreeBSD (https://fosdem.org/2018/interviews/diomidis-spinellis/) Q: Could you briefly introduce yourself? I'm a professor of software engineering, a programmer at heart, and a technology author. Currently I'm also the editor in chief of the IEEE Software magazine. I recently published the book Effective Debugging, where I detail 66 ways to debug software and systems. Q: What will your talk be about, exactly? I will describe how the architecture of the Unix operating system evolved over the past half century, starting from an unnamed system written in PDP-7 assembly language and ending with a modern FreeBSD system. My talk is based, first, on a GitHub repository where I tried to record the system's history from 1970 until today and, second, on the evolution of documented facilities (user commands, system calls, library functions) across revisions. I will thus present the early system's defining architectural features (layering, system calls, devices as files, an interpreter, and process management) and the important ones that followed in subsequent releases: the tree directory structure, user contributed code, I/O redirection, the shell as a user program, groups, pipes, scripting, and little languages. Q: Why this topic? Unix stands out as a major engineering breakthrough due to its exemplary design, its numerous technical contributions, its impact, its development model, and its widespread use. Furthermore, the design of the Unix programming environment has been characterized as one offering unusual simplicity, power, and elegance. Consequently, there are many lessons that we can learn by studying the evolution of the Unix architecture, which we can apply to the design of new systems. I often see modern systems that suffer from a bloat of architectural features and a lack of clear form on which functionality can be built. I believe that many of the modern Unix architecture defining features are excellent examples of what we should strive toward as system architects. Q: What do you hope to accomplish by giving this talk? What do you expect? I'd like FOSDEM attendees to leave the talk with their mind full with architectural features of timeless quality. I want them to realize that architectural elegance isn't derived by piling design patterns and does not need to be expensive in terms of resources. Rather, beautiful architecture can be achieved on an extremely modest scale. Furthermore, I want attendees to appreciate the importance of adopting flexible conventions rather than rigid enforcement mechanisms. Finally, I want to demonstrate through examples that the open source culture was part of Unix from its earliest days. Q: What are the most significant milestones in the development of Unix? The architectural development of Unix follows a path of continuous evolution, albeit at a slowing pace, so I don't see here the most important milestones. I would however define as significant milestones two key changes in the way Unix was developed. The first occurred in the late 1970s when significant activity shifted from a closely-knit team of researchers at the AT&T Bell Labs to the Computer Science Research Group in the University of California at Berkeley. 
This opened the system to academic contributions and growth through competitive research funding. The second took place in the late 1980s and the 1990s when Berkeley open-sourced the code it had developed (by that time a large percentage of the system) and enthusiasts built on it to create complete open source operating system distributions: 386BSD, and then FreeBSD, NetBSD, OpenBSD, and others. Q: In which areas has the development of Unix stalled? The data I will show demonstrate that there were in the past some long periods where the number of C library functions and system calls remained mostly stable. Nowadays there is significant growth in the number of all documented facilities with the exception of file formats. I'm looking forward to a discussion regarding the meaning of these growth patterns in the Q&A session after the talk. Q: What are the core features that still link the 1970 PDP-7 system to the latest FreeBSD 11.1 release, almost half a century apart? Over the past half-century the Unix system has grown by four orders of magnitude from a few thousand lines of code to many millions. Nevertheless, looking at a 1970s architecture diagram and a current one reveals that the initial architectural blocks are still with us today. Furthermore, most system calls, user programs, and C library functions of that era have survived until today with essentially similar functionality. I've even found in modern FreeBSD some lines of code that have survived unchanged for 40 years. Q: Can we still add innovative changes to operating systems like FreeBSD without breaking the ‘Unix philosophy'? Will there be a moment where FreeBSD isn't recognizable anymore as a descendant of the 1970 PDP-7 system? There's a saying that “form liberates”. So having available a time-tested form for developing operating system functionality allows you to innovate in areas that matter rather than reinventing the wheel. Such concepts include having commands act as a filter, providing manual pages with a consistent structure, supplying build information in the form of a Makefile, installing files in a well-defined directory hierarchy, implementing filesystems with a standardized object-oriented interface, and packaging reusable functions as a library. Within this framework there's ample space for both incremental additions (think of jq, the JSON query command) and radical innovations (consider the Solaris-derived ZFS and dtrace functionality). For this reason I think that BSD and Linux systems will always be recognizable as direct or intellectual descendants of the 1970s Research Unix editions. Q: Have you enjoyed previous FOSDEM editions? Immensely! As an academic I need to attend many scientific conferences and meetings in order to present research results and interact with colleagues. This means too much time spent traveling and away from home, and a limited number of conferences I'm in the end able to attend. Nevertheless, attending FOSDEM is an easy decision due to the world-changing nature of its theme, the breadth of the topics presented, the participants' enthusiasm and energy, as well as the exemplary, very efficient conference organization. Another vBSDCon trip report we just found (https://www.weaponizedawesome.com/blog/?cat=53) We just got tipped about another trip report from vBSDCon, this time from one of the first-time speakers: W.
Dean Freeman Recently I had the honor of co-presenting on the internals of FreeBSD's Kernel RNG with John-Mark Gurney at the 3rd biennial vBSDCon, hosted in Reston, VA by Verisign. I've been in and out of the FreeBSD community for about 20 years. As I've mentioned on here before, my first Unix encounter was FreeBSD 2.2.8 when I was in the 7th or 8th grade. However, for all that time I've never managed to get out to any of the cons. I've been to one or two BUG meetings and I've met some folks from IRC before, but nothing like this. A BSD conference is a very different experience than anything else out there. You have to try it; it is the only way to truly understand it. I'd also not had to do a stand-up presentation really since college before this. So, my first BSD con and my first time presenting rolled into one made for an interesting experience. See, he didn't say terrifying. It went very well. You should totally submit a talk for the next conference, even if it is your first. That said, it was an amazing and invigorating experience. I got to meet a few big names in the FreeBSD community, discuss projects, ideas for FreeBSD, etc. I did seem to spend an unusual amount of time talking about FIPS and Common Criteria with folks, but to me that's a good sign, indicative that there is interest in working to close gaps between FreeBSD and the current requirements so that we can start getting FreeBSD and more BSD-based products into the government and start whittling away the domination of Linux (especially since Oracle has cut the Solaris, SPARC, and ZFS storage appliance business units). There is nothing that can match the high-bandwidth interchange of ideas in person. The internet has made all kinds of communication possible, and we use it all the time, but every once in a while, getting together in person is hugely valuable. Dean then went on to list some of the talks he found most valuable, including DTrace, Capsicum, bhyve, *BSD security tools, and Paul Vixie's talk about gets() I think the talk that really had the biggest impact on me, however, was Kyle Kneisl's talk on BSD community dynamics. One of the key points he asked was whether the things that drew us to the BSD community in the first place would be able to happen today. Obviously, I'm not a 12 or 13 year old kid anymore, but it really got me thinking. That, combined with getting face time with people I'd previously only known as screen names, has recently drawn me back into participating in IRC and rejoining mailing lists (wdf on freenode. Be on the lookout!) Then Dean covered some thoughts on his own talk: JMG and my talk seems to have been well received, with people paying lots of attention. I don't know what a typical number of questions is for one of these things, but on day one there weren't that many questions. We got about 5 during our question time and spent most of the rest of the day fielding questions from interested attendees. Getting a “great talk!” from GNN after coming down from the stage was probably one of the major highlights for me. I remember my first solo talk, and GNN asking the right question in the middle to get me to explain a part of it I had missed. It was very helpful. I think key to the interest in our presentation was that JMG did a good job framing a very complicated topic's importance in terms everyone could understand. It also helped that we got to drop some serious truth bombs.
Final Thoughts: I met a lot of folks in person for the first time, and met some people I'd never known online before. It was a great community and I'm glad I got a chance to expand my network. Verisign were excellent hosts and they took good care of both speakers (covering airfare, rooms, etc.) and also conference attendees at large. The dinners that they hosted were quite good as well. I'm definitely interested in attending vBSDCon again, and now that I've had a taste of meeting IRL with the community on a scale of more than a handful, I have every intention of finally making it to BSDCan next year (I'd said it in 2017, but then moved to Texas for a new job and it wasn't going to be practical). This year for sure, though! Teaching an Almost 40-year Old UNIX about Backspace (https://virtuallyfun.com/2018/01/17/teaching_an_almost_40-year_old_unix_about_backspace/) Introduction I have been messing with the UNIX® operating system, Seventh Edition (commonly known as UNIX V7 or just V7) for a while now. V7 dates from 1979, so it's about 40 years old at this point. The last post was on V7/x86, but since I've run into various issues with it, I moved on to a proper installation of V7 on SIMH. The Internet has some really good resources on installing V7 in SIMH. Thus, I set out on my own journey on installing and using V7 a while ago, but that was remarkably uneventful. One convenience that I have been dearly missing since the switch from V7/x86 is a functioning backspace key. There seem to be multiple different definitions of backspace: BS, as in ASCII character 8 (010, 0x08, also represented as ^H), and DEL, as in ASCII character 127 (0177, 0x7F, also represented as ^?). V7 does not accept either for input by default. Instead, # is used as the erase character and @ is used as the kill character. These defaults have been there since UNIX V1. In fact, they have been “there” since Multics, where they got chosen seemingly arbitrarily. The erase character erases the character before it. The kill character kills (deletes) the whole line. For example, “ba##gooo#d” would be interpreted as “good” and “bad line@good line” would be interpreted as “good line” (a toy model of this follows below). There is some debate on whether BS or DEL is the correct character for terminals to send when the user presses the backspace key. However, most programs have settled on DEL today. tmux forces DEL, even if the terminal emulator sends BS, so simply changing my terminal to send BS was not an option. The change from the defaults outlined here to today's modern-day defaults occurred between 4.1BSD and 4.2BSD. enf on Hacker News has written a nice overview of the various conventions. Getting the Diff For future generations as well as myself when I inevitably majorly break this installation of V7, I wanted to make a diff. However, my V7 is installed in SIMH. I am not a very intelligent man; I didn't keep backup copies of the files I'd changed. Getting data out of this emulated machine is an exercise in frustration. In the end, I printed everything on screen using cat(1) and copied that out. Then I performed a manual diff against the original source code tree because tabs got converted to spaces in the process. Then I applied the changes to clean copies that did have the tabs. And finally, I actually invoked diff(1). Closing Thoughts Figuring all this out took me a few days. Penetrating how the system is put together was surprisingly fairly hard at first, but then the difficulty curve eased up.
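The erase/kill convention described in that passage is easy to model. A toy sketch (plain Python, nothing V7-specific) that reproduces the article's two examples:

```python
# Toy model of V7's default line editing: '#' erases one character,
# '@' kills (discards) everything typed so far on the line.
def v7_line(raw: str, erase: str = "#", kill: str = "@") -> str:
    buf = []
    for ch in raw:
        if ch == erase:
            if buf:
                buf.pop()          # drop the character before the erase char
        elif ch == kill:
            buf.clear()            # throw away the whole line so far
        else:
            buf.append(ch)
    return "".join(buf)

assert v7_line("ba##gooo#d") == "good"
assert v7_line("bad line@good line") == "good line"
```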
It was an interesting exercise in some kind of “reverse engineering” and I definitely learned something about tty handling. I was, however, not pleased with using ed(1), even if I do know the basics. vi(1) is a blessing that I did not appreciate enough until recently. Had I also been unable to access recursive grep(1) on my host and scroll through the code, I would've probably given up. Writing UNIX under those kinds of editing conditions is an amazing feat. I have nothing but the greatest respect for software developers of those days. News Roundup New NUMA support coming to FreeBSD CURRENT (https://lists.freebsd.org/pipermail/freebsd-current/2018-January/068145.html) Hello folks, I am working on merging improved NUMA support with policy implemented by cpuset(2) over the next week. This work has been supported by Dell/EMC's Isilon product division and Netflix. You can see some discussion of these changes here: https://reviews.freebsd.org/D13403 https://reviews.freebsd.org/D13289 https://reviews.freebsd.org/D13545 The work has been done in user/jeff/numa if you want to look at svn history or experiment with the branch. It has been tested by Peter Holm on i386 and amd64 and it has been verified to work on arm at various points. We are working towards compatibility with libnuma and linux mbind. These commits will bring in improved support for NUMA in the kernel. There are new domain-specific allocation functions available to the kernel for UMA, malloc, kmem, and vm_page*. bus_dmamem consumers will automatically be placed in the correct domain, bringing automatic improvements to some device performance. cpuset will be able to constrain processes, groups of processes, jails, etc. to subsets of the system memory domains, just as it can with sets of cpus. It can set default policy for any of the above. Threads can use cpusets to set policy that specifies a subset of their visible domains. Available policies are first-touch (local in linux terms), round-robin (similar to linux interleave), and preferred. For now, the default is round-robin. You can achieve a fixed domain policy by using round-robin with a bitmask of a single domain. As the scheduler and VM become more sophisticated we may switch the default to first-touch as linux does. Currently these features are enabled with VM_NUMA_ALLOC and MAXMEMDOM. It will eventually be NUMA/MAXMEMDOM to match SMP/MAXCPU. The current NUMA syscalls and VM_NUMA_ALLOC code were 'experimental' and will be deprecated. numactl will continue to be supported although cpuset should be preferred going forward as it supports the full feature set of the new API. Thank you for your patience as I deal with the inevitable fallout of such sweeping changes. If you do have bugs, please file them in bugzilla, or reach out to me directly. I don't always have time to catch up on all of my mailing list mail and regretfully things slip through the cracks when they are not addressed directly to me. Thanks, Jeff Stack pointer checking – OpenBSD (https://marc.info/?l=openbsd-tech&m=151572838911297&w=2) Stefan (stefan@) and I have been working for a few months on this diff, with help from a few others. At every trap and system call, it checks if the stack pointer is on a page that is marked MAP_STACK. execve() is changed to create such mappings for the process stack. Also, libpthread is taught the new MAP_STACK flag to use with mmap(). There is no corresponding system call which can set MAP_STACK on an existing page; you can only set the flag by mapping new memory into place.
Stack pointer checking – OpenBSD (https://marc.info/?l=openbsd-tech&m=151572838911297&w=2)
Stefan (stefan@) and I have been working for a few months on this diff, with help from a few others. At every trap and system call, it checks if the stack pointer is on a page that is marked MAP_STACK. execve() is changed to create such mappings for the process stack. Also, libpthread is taught the new MAP_STACK flag to use with mmap(). There is no corresponding system call which can set MAP_STACK on an existing page; you can only set the flag by mapping new memory into place. That is a piece of the security model. The purpose of this change is to thwart stack pivots, which apparently have gained some popularity in JIT ROP attacks. It makes it difficult to place the ROP stack in regular data memory and then perform a system call from it. Workarounds are cumbersome, increasing the need for far more gadgetry. But there is also the trap case: if any memory experiences a demand page fault, the same check will occur and potentially also kill the process. We have experimented a little with performing this check during device interrupts, but there are some locking concerns, and performance may then become a concern. It'll be best to gain experience from handling the synchronous trap cases first. chrome and other applications I use run fine! I'm asking for feedback to discover what ports this breaks; we'd like to know. Those would be ports which try to (unconventionally) create their stacks in malloc()'d memory or inside another data structure. Most of them are probably easily fixed ...
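To illustrate the new requirement, here is a sketch of what a thread library (or an affected port) must now do on OpenBSD: any memory used as a stack has to come from mmap() with the new MAP_STACK flag, or the kernel's stack-pointer check will kill the process at the next system call or trap. This is illustrative, not OpenBSD's actual libpthread code.

#include <sys/mman.h>
#include <pthread.h>
#include <err.h>
#include <stdio.h>

#define STACK_SIZE (512 * 1024)

static void *worker(void *arg)
{
    printf("running on a MAP_STACK stack\n");
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_attr_t attr;
    void *stk;

    /* malloc()'d memory would no longer be acceptable as a stack. */
    stk = mmap(NULL, STACK_SIZE, PROT_READ | PROT_WRITE,
        MAP_PRIVATE | MAP_ANON | MAP_STACK, -1, 0);
    if (stk == MAP_FAILED)
        err(1, "mmap");

    pthread_attr_init(&attr);
    pthread_attr_setstack(&attr, stk, STACK_SIZE);
    if (pthread_create(&t, &attr, worker, NULL) != 0)
        errx(1, "pthread_create failed");
    pthread_join(t, NULL);
    return 0;
}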
Qt 5.9 on FreeBSD (https://euroquis.nl/bobulate/?p=1768)
Tobias and Raphael have spent the past month or so hammering on the Qt 5.9 branch, which has (finally!) landed in the official FreeBSD ports tree. This brings FreeBSD back up-to-date with current Qt releases and, more importantly, up-to-date with the Qt release that KDE software is increasingly expecting. With Qt 5.9, the Elisa music player works, for instance (where it has run-time errors with Qt 5.7, even if it compiles). The KDE-FreeBSD CI system has had Qt 5.9 for some time already, but that was hand-compiled and jimmied into the system, rather than being a “proper” ports build. The new Qt version uses a new build system, which is one of the things that really slowed us down from a packaging perspective. Some modules have been reshuffled in the process, and some applications depending on Qt internal-private headers have been fixed along the way. The Telegram desktop client continues to be a pain in the butt that way. Following on from Qt 5.9, there has been some work in getting ready for Clang 6 support; in general the KDE and Qt stack is clean and modern C++, so it's more infrastructural tweaks than fixing code. Outside of our silo, I still see lots of wonky C++ code being fixed and plenty of confusion between pointers and integers and strings and chars and .. ugh. Speaking of ugh, I'm still planning to clean up Qt4 on ARM aarch64 for FreeBSD; this boils down to stealing suitable qatomic implementations from Arch Linux. For regular users of Qt applications on FreeBSD, there should be few to no changes required outside the regular upgrade cycle. For KDE Plasma users, note that development of the ports has changed branches; as we get closer to actually landing modern KDE bits, things have been renamed and reshuffled and mulled over so often that the old plasma5 branch wasn't really right anymore. The kde5-import branch is where it's at nowadays, and the instructions are the same: the x11/kde5 metaport will give you all the KDE Frameworks 5, KDE Plasma Desktop, and modern KDE Applications you need.

Adding IPv6 to an Nginx website on FreeBSD / FreshPorts (https://dan.langille.org/2018/01/13/adding-ipv6-to-an-nginx-website-on-freebsd-freshports/)
FreshPorts recently moved to an IPv6-capable server, but until today that capability had not been utilized. There were a number of things I had to configure, but this will not necessarily be an exhaustive list for you to follow. Some steps might be missing, and it might not apply to your situation. All of this took about 3 hours. We are using: FreeBSD 11.1, Bind 9.9.11, nginx 1.12.2. Fallout: I expect some monitoring fallout from this change. I suspect some of my monitoring assumes IPv4, and now that IPv6 is available, I need to monitor both IP addresses.

ZFS on TrueOS: Why We Love OpenZFS (https://www.trueos.org/blog/zfs-trueos-love-openzfs/)
TrueOS was the first desktop operating system to fully implement the OpenZFS (Zettabyte File System, or ZFS for short) enterprise file system in a stable production environment. To fully understand why we love ZFS, we will look back to the early days of TrueOS (formerly PC-BSD). The development team had been using the UFS file system in TrueOS because of its solid track record with FreeBSD-based computer systems and its ability to check file consistency with the built-in check utility fsck. However, as computing demands increased, problems began to surface. Slow fsck file verification on large file systems, slow replication speeds, and inconsistency in data integrity while using UFS logging / journaling began to hinder users. It quickly became apparent that TrueOS users would need a file system that scales with evolving enterprise storage needs, offers the best data protection, and works just as well on a hobbyist system or desktop computer. Kris Moore, the founder of the TrueOS project, first heard about OpenZFS in 2007 from chatter on the FreeBSD mailing lists. In 2008, the TrueOS development team was thrilled to learn that the FreeBSD Project had ported ZFS. At the time, ZFS was still unproven as a graphical desktop solution, but Kris saw a perfect opportunity to offer ZFS as a cutting-edge file system option in the TrueOS installer, allowing the TrueOS project to act as an indicator of how OpenZFS would fare in real-world production use. The team was blown away by the reception and quality of OpenZFS on FreeBSD-based systems. By its nature, ZFS is a copy-on-write (CoW) file system that won't move a block of data until it both writes the data and verifies its integrity. This is very different from most other file systems in use today. ZFS is able to assure that data stays consistent between writes by automatically comparing write checksums, which mitigates bit rot. ZFS also comes with native RaidZ functionality that allows for enterprise data management and redundancy without the need for expensive traditional RAID cards. ZFS snapshots allow for system configuration backups in a split-second. You read that right: TrueOS can back up or restore snapshots in less than a second using the ZFS file system. Given these advantages, the TrueOS team decided to use ZFS as its exclusive file system starting in 2013, and we haven't looked back since. ZFS offers TrueOS users the stable workstation experience they want, while simultaneously scaling to meet the increasing demands of the enterprise storage market. TrueOS users frequently comment on how easy it is to use ZFS snapshots with our built-in snapshot utility. This allows users the freedom to experiment with their system, knowing they can restore it in seconds if anything goes wrong. If you haven't had a chance to try ZFS with TrueOS, browse to our download page and make sure to grab a copy of TrueOS. You'll be blown away by the ease of use, data protection functionality, and incredible flexibility of RaidZ.
Beastie Bits
Source Code Podcast Interview with Michael W Lucas (https://blather.michaelwlucas.com/archives/3099)
Operating System of the Year 2017: NetBSD Third place (https://w3techs.com/blog/entry/web_technologies_of_the_year_2017)
OPNsense 18.1-RC1 released (https://opnsense.org/opnsense-18-1-rc1-released/)
Personal OpenBSD Wiki Notes (https://balu-wiki.readthedocs.io/en/latest/security/openbsd.html)
The BSD section could use some contributions (https://guide.freecodecamp.org/bsd-os/)
The Third Research Edition Unix Programmer's Manual (now available in PDF) (https://github.com/dspinellis/unix-v3man)
Feedback/Questions
Alex - my first freebsd bug (http://dpaste.com/3DSV7BC#wrap)
John - Suggested Speakers (http://dpaste.com/2QFR4MT#wrap)
Todd - Two questions (http://dpaste.com/2FQ450Q#wrap)
Matthew - CentOS to FreeBSD (http://dpaste.com/3KA29E0#wrap)
Brian - openbsd 6.2 and enlightenment .17 (http://dpaste.com/24DYF1J#wrap)
***
Paul offers his perspective, having led some of the efforts that shaped how the modern internet works today. We talked about how such a complex, multi-party ecosystem will always create problems and issues we couldn't have imagined, and how we as a global community are still struggling to solve them.
Dr. Paul Vixie helped design the domain name system, is a member of the Internet Hall of Fame, and founded Farsight Security, where he's chairman and CEO. He tells Federal Drive with Tom Temin what it's like to be a pioneer in internet technology who has spent at least the past 20 years thinking about cybersecurity.
We recap vBSDcon, give you the story behind a pf EN, reminisce about Solaris memories, and show you how to configure different DEs on FreeBSD. This episode was brought to you by

Headlines

[vBSDCon]
vBSDCon was held September 7 - 9th. We recorded this only a few days after getting home from this great event. Things started on Wednesday night, as attendees of the Thursday developer summit arrived and broke into smallish groups for disorganized dinner and drinks. We then held an unofficial hacker lounge in a medium-sized seating area, working and talking until we all decided that the developer summit started awfully early tomorrow. The developer summit started with a light breakfast, and then we dove right in. Ed Maste started us off, and then Glen Barber gave a presentation about lessons learned from the 11.1-RELEASE cycle, comparing it to previous releases. 11.1 was released on time, and was one of the best releases so far. The slides are linked on the DevSummit wiki page (https://wiki.freebsd.org/DevSummit/20170907). The group then jumped into hackmd.io, a collaborative note-taking application, and listed various works in progress and upstreaming efforts. Then we listed wants and needs for the 12.0 release. After lunch we broke into pairs of working groups, with additional space for smaller meetings. The first pair were ZFS and Toolchain, followed by a break and then a discussion of IFLIB and network drivers in general. After another break, the last groups of the day, pkgbase and secure boot, met. Then it was time for the vBSDCon reception dinner. This standing dinner was a great way to meet new people, and for attendees to mingle and socialize. The official hacking lounge Thursday night was busy, and included some great storytelling, along with a bunch of work getting done. It was very encouraging to watch a struggling new developer getting help from a seasoned veteran. Watching the new developer's eyes light up as the new information filled in gaps and they understood so much more than just a few minutes before, and then seeing them race off to continue working, was inspirational, and reminded me why these conferences are so important. The hacker lounge shut down relatively early by BSD conference standards, but the conference proper started at 8:45 sharp the next morning, so it made sense. Friday saw a string of good presentations; I think my favourite was Jonathan Anderson's talk on Oblivious sandboxing. Jonathan is a very energetic speaker, and was able to keep everyone focused even during relatively complicated explanations. Friday night I went for dinner at ‘Big Bowl', a stir-fry bar, with a largish group of developers and users of both FreeBSD and OpenBSD. The discussions were interesting and varied, and the food was excellent. Benedict had dinner with JT and some other folks from iXsystems. Friday night the hacker lounge was so large we took over a bigger room (it had better WiFi too). Saturday featured more great talks. The talk I was most interested in was from Eric McCorkle, who did the EFI version of my GELIBoot work. I had reviewed some of the work, but it was interesting to hear the story of how it happened, and to see the parallels with my own story. My favourite speaker was Paul Vixie, who gave a very interesting talk about the gets() function in libc. gets() was declared unsafe before the FreeBSD project even started. The original import of the CSRG code into FreeBSD includes the compile-time and run-time warnings against using gets(). OpenBSD removed gets() in version 5.6, in 2014. Following Paul's presentation, various patches were raised to either make any use of gets() crash the program, or to remove gets() entirely, causing such programs to fail to link.
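The reason gets() has been considered unsafe for so long is easy to show: it has no way to know the size of its destination buffer, so sufficiently long input overruns it. A minimal illustration, with fgets() as the classic bounded replacement:

#include <stdio.h>
#include <string.h>

int main(void)
{
    char buf[16];

    /* gets(buf);   anything longer than 15 characters smashes the
     *              stack, because no bound can be supplied at all */

    if (fgets(buf, sizeof(buf), stdin) != NULL) {
        buf[strcspn(buf, "\n")] = '\0';   /* strip trailing newline */
        printf("read: %s\n", buf);
    }
    return 0;
}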
The last talk before the closing was Benedict's BSD Systems Management with Ansible (https://people.freebsd.org/~bcr/talks/vBSDcon2017_Ansible.pdf). Shortly after, Allan won a MacBook Pro by correctly guessing the number of components in a jar that was standing next to the registration desk (Benedict was way off, but had a good laugh about the unlikely future Apple user). Saturday night ended with the Conference Social, an excellent dinner with more great conversations. On Sunday morning, a number of us went to the Smithsonian Air and Space Museum site near the airport, and saw a Concorde, an SR-71, and the space shuttle Discovery, among many other exhibits. Check out the full photo album by JT (https://t.co/KRmSNzUSus), our producer. Thanks to all the sponsors for vBSDcon and all the organizers from Verisign, who made it such a great event.
***
The story behind FreeBSD-EN-17.08.pf (https://www.sigsegv.be//blog/freebsd/FreeBSD-EN-17.08.pf)
After our previous deep dive on a bug in episode 209, Kristof Provost, the maintainer of pf on FreeBSD (he is going to hate me for saying that), has written the story behind a recent ERRATA notice for FreeBSD. First things first: I have to point out that I think Allan misremembered things. The heroic debugging story is PR 219251, which I'll try to write about later. FreeBSD-EN-17:08.pf is an issue that affected some FreeBSD 11.x systems, where FreeBSD would panic at startup. There were no reports for CURRENT. There's very little to go on here, but we do know the cause of the panic ("integer divide fault"), and that the current process was "pf purge". The pf purge thread is part of the pf housekeeping infrastructure: a housekeeping kernel thread which cleans up things like old states and expired fragments. The lack of mention of pf functions in the backtrace is a hint unto itself. It suggests that the error is probably directly in pf_purge_thread(). It might also be in one of the static functions it calls, because compilers often just inline those so they don't generate stack frames. Remember that the problem is an "integer divide fault". How can integer divisions be a problem? Well, you can try to divide by zero. The most obvious suspect for this is this code: idx = pf_purge_expired_states(idx, pf_hashmask / (V_pf_default_rule.timeout[PFTM_INTERVAL] * 10)); However, this variable is both correctly initialised (in pf_attach_vnet()) and can only be modified through the DIOCSETTIMEOUT ioctl() call, and that one checks for zero. At that point I had no idea how this could happen, but because the problem did not affect CURRENT I looked at the commit history and found this commit from Luiz Otavio O Souza: "Do not run the pf purge thread while the VNET variables are not initialized, this can cause a divide by zero (if the VNET initialization takes too long to complete). Obtained from: pfSense. Sponsored by: Rubicon Communications, LLC (Netgate)". That sounds very familiar, and indeed, applying the patch fixed the problem. Luiz explained it well: it's possible to use V_pf_default_rule.timeout before it's initialised, which caused this panic. To me, this reaffirms the importance of writing good commit messages: because Luiz mentioned both the pf purge thread and the division by zero, I was easily able to find the relevant commit. If I hadn't found it, this fix would have taken a lot longer. Next week we'll look at the more interesting story, which I managed to nag Kristof into writing.
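A simplified, self-contained sketch of the failure mode and the fix follows. The names mirror those in the post, but this is illustrative C, not the actual FreeBSD source; in particular, the vnet_initialized flag is a hypothetical stand-in for the real VNET initialization state.

#include <stdio.h>
#include <stdbool.h>

enum { PFTM_INTERVAL = 0 };                 /* simplified timeout index */
struct pf_rule { unsigned int timeout[1]; };

static struct pf_rule V_pf_default_rule;    /* all zeros until VNET init */
static bool vnet_initialized = false;       /* hypothetical stand-in flag */
static unsigned int pf_hashmask = 32767;
static unsigned int idx;

static unsigned int pf_purge_expired_states(unsigned int i, unsigned int n)
{
    return (i + n) & pf_hashmask;   /* stand-in for the real housekeeping */
}

static void pf_purge_pass(void)
{
    /* The fix: skip the pass until the VNET variables are initialized;
     * otherwise the divisor below is 0 and the thread faults. */
    if (!vnet_initialized)
        return;
    idx = pf_purge_expired_states(idx,
        pf_hashmask / (V_pf_default_rule.timeout[PFTM_INTERVAL] * 10));
}

int main(void)
{
    pf_purge_pass();                               /* safely skipped */
    vnet_initialized = true;
    V_pf_default_rule.timeout[PFTM_INTERVAL] = 10; /* now initialized */
    pf_purge_pass();
    printf("idx = %u\n", idx);
    return 0;
}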
***
The sudden death and eternal life of Solaris (http://dtrace.org/blogs/bmc/2017/09/04/the-sudden-death-and-eternal-life-of-solaris/)
A blog post from Bryan Cantrill about the death of Solaris: As had been rumored for a while, Oracle effectively killed Solaris. When I first saw this, I had assumed that this was merely a deep cut, but in talking to Solaris engineers still at Oracle, it is clearly much more than that. It is a cut so deep as to be fatal: the core Solaris engineering organization lost on the order of 90% of its people, including essentially all management. Of note, among the engineers I have spoken with, I heard two things repeatedly: “this is the end” and (from those who managed to survive Friday) “I wish I had been laid off.” Gone is any of the optimism (however tepid) that I have heard over the years — and embarrassed apologies for Oracle's behavior have been replaced with dismay about the clumsiness, ineptitude and callousness with which this final cut was handled. In particular, that employees who had given their careers to the company were told of their termination via a pre-recorded call — “robo-RIF'd” in the words of one employee — is both despicable and cowardly. To their credit, the engineers affected saw themselves as Sun to the end: they stayed to solve hard, interesting problems and out of allegiance to one another — not out of any loyalty to the broader Oracle. Oracle didn't deserve them and now it doesn't have them — they have been liberated, if in a depraved act of corporate violence. Assuming that this is indeed the end of Solaris (and it certainly looks that way), it offers a time for reflection. Certainly, the demise of Solaris is at one level not surprising, but on the other hand, its very suddenness highlights the degree to which proprietary software can suffer by the vicissitudes of corporate capriciousness. Vulnerable to executive whims, shareholder demands, and a fickle public, organizations can simply change direction by fiat. And because — in the words of the late, great Roger Faulkner — “it is easier to destroy than to create,” these changes in direction can have lasting effect when they mean stopping (or even suspending!) work on a project. Indeed, any engineer in any domain with sufficient longevity will have one (or many!) stories of exciting projects being cancelled by foolhardy and myopic management. For software, though, these cancellations can be particularly gutting because (in the proprietary world, anyway) so many of the details of software are carefully hidden from the users of the product — and much of the innovation of a cancelled software project will likely die with the project, living only in the oral tradition of the engineers who knew it. Worse, in the long run — to paraphrase Keynes — proprietary software projects are all dead. However ubiquitous at their height, this lonely fate awaits all proprietary software. There is, of course, another way — and befitting its idiosyncratic life and death, Solaris shows us this path too: software can be open source. In stark contrast to proprietary software, open source does not — cannot, even — die. Yes, it can be disused or rusty or fusty, but as long as anyone is interested in it at all, it lives and breathes.
Even should the interest wane to nothing, open source software survives still: its life as machine may be suspended, but it becomes as literature, waiting to be discovered by a future generation. That is, while proprietary software can die in an instant, open source software perpetually endures by its nature — and thrives by the strength of its communities. Just as the existence of proprietary software can be surprisingly brittle, open source communities can be crazily robust: they can survive neglect, derision, dissent — even sabotage. In this regard, I speak from experience: from when Solaris was open sourced in 2005, the OpenSolaris community survived all of these things. By the time Oracle bought Sun five years later in 2010, the community had decided that it needed true independence — illumos was born. And, it turns out, illumos was born at exactly the right moment: shortly after illumos was announced, Oracle — in what remains to me a singularly loathsome and cowardly act — silently re-proprietarized Solaris on August 13, 2010. We in illumos were indisputably on our own, and while many outsiders gave us no chance of survival, we ourselves had reason for confidence: after all, open source communities are robust because they are often united not only by circumstance, but by values, and in our case, we as a community never lost our belief in ZFS, Zones, DTrace and myriad other technologies like MDB, FMA and Crossbow. Indeed, since 2010, illumos has thrived; illumos is not only the repository of record for technologies that have become cross-platform like OpenZFS, but we have also advanced our core technologies considerably, while still maintaining the highest standards of quality. Learning from some of the mistakes of OpenSolaris, we have a model that allows for downstream innovation, experimentation and differentiation. For example, Joyent's SmartOS has always been focused on our need for a cloud hypervisor (causing us to develop big features like hardware virtualization and Linux binary compatibility), and it is now at the heart of a massive buildout for Samsung (who acquired Joyent a little over a year ago). For us at Joyent, the Solaris/illumos/SmartOS saga has been formative in that we have seen both the ill effects of proprietary software and the amazing resilience of open source software — and it very much informed our decision to open source our entire stack in 2014. Judging merely by its tombstone, the life of Solaris can be viewed as tragic: born out of wedlock between Sun and AT&T and dying at the hands of a remorseless corporate sociopath a quarter century later. And even that may be overstating its longevity: Solaris may not have been truly born until it was made open source, and — certainly to me, anyway — it died the moment it was again made proprietary. But in that shorter life, Solaris achieved the singular: immortality for its revolutionary technologies. So while we can mourn the loss of the proprietary embodiment of Solaris (and we can certainly lament the coarse way in which its technologists were treated!), we can rejoice in the eternal life of its technologies — in illumos and beyond!

News Roundup

OpenBSD on the Lenovo Thinkpad X1 Carbon (5th Gen) (https://jcs.org/2017/09/01/thinkpad_x1c)
Joshua Stein writes about his experiences running OpenBSD on the 5th generation Lenovo Thinkpad X1 Carbon: ThinkPads have sort of a cult following among OpenBSD developers and users because the hardware is basic and well supported, and the keyboards are great to type on.
While I'm no stranger to ThinkPads, most of my OpenBSD laptops in recent years have been from various vendors with brand-new hardware components that OpenBSD does not yet support. As satisfying as it is to write new kernel drivers or extend existing ones to make that hardware work, it usually leaves me with a laptop that doesn't work very well for a period of months. After exhausting efforts trying to debug the I2C touchpad interrupts on the Huawei MateBook X (and other 100-Series Intel chipset laptops), I decided to take a break and use something with better OpenBSD support out of the box: the fifth generation Lenovo ThinkPad X1 Carbon.

Hardware
Like most ThinkPads, the X1 Carbon is available in a myriad of different internal configurations. I went with the non-vPro Core i7-7500U (it was the same price as the Core i5 that I normally opt for), 16GB of RAM, a 256GB NVMe SSD, and a WQHD display. This generation of X1 Carbon finally brings a thinner screen bezel, allowing the entire footprint of the laptop to be smaller, which is welcome on something with a 14" screen. The X1 now measures 12.7" wide, 8.5" deep, and 0.6" thick, and weighs just 2.6 pounds. While not available at initial launch, Lenovo is now offering a WQHD IPS screen option giving a resolution of 2560x1440. Perhaps more importantly, this display also has much better brightness than the FHD version, something ThinkPads have always struggled with. On the left side of the laptop are two USB-C ports, a USB-A port, a full-size HDMI port, and a port for the ethernet dongle which, despite some reviews stating otherwise, is not included with the laptop. On the right side is another USB-A port and a headphone jack, along with a fan exhaust grille. On the back is a tray for the micro-SIM card for the optional WWAN device, which also covers the Realtek microSD card reader. The tray requires a paperclip to eject, which makes it inconvenient to remove, so I think this microSD card slot is designed to house a card semi-permanently as a backup disk or something. On the bottom are the two speakers towards the front and an exhaust grille near the center. The four rubber feet are rather plastic feeling, which allows the laptop to slide around on a desk a bit too much for my liking. I wish they were a bit softer so they would stick better. Charging can be done via either of the two USB-C ports on the left, though I wish more vendors would do as Google did on the Chromebook Pixel and provide a port on both sides. This makes it much more convenient to charge when not at one's desk, rather than having to route a cable around to one specific side. The X1 Carbon includes a 65W USB-C PD charger with a fixed USB-C cable and removable country-specific power cable, which is not very convenient due to its large footprint. I am using an Apple 61W USB-C charger and an Anker cable which charge the X1 fine (unlike HP laptops which only work with HP USB-C chargers). Wireless connectivity is provided by a removable Intel 8265 802.11a/b/g/n/ac WiFi and Bluetooth 4.1 card. An Intel I219-V chip provides ethernet connectivity and requires an external dongle for the physical cable connection. The screen hinge is rather tight, making it difficult to open with one hand. The tradeoff is that the screen does not wobble in the least bit when typing. The fan is silent at idle, and there is no coil whine even under heavy load. During a make -j4 build, the fan noise is reasonable and medium-pitched, rather than a high-pitched whine like on some laptops.
The palm rest and keyboard area remain cool during high CPU utilization. The full-sized keyboard is backlit and offers two levels of adjustment. The keys have a soft surface and a somewhat clicky feel, providing very quiet typing except for certain keys like Enter, Backspace, and Escape. The keyboard has a reported key travel of 1.5mm, and there are dedicated Page Up and Page Down keys above the Left and Right arrow keys. Dedicated Home, End, Insert, and Delete keys are along the top row. The Fn key is placed to the left of Control, which some people hate (although Lenovo does provide a BIOS option to swap it), but it's in the same position on Apple keyboards so I'm used to it. However, since there are dedicated Page Up, Page Down, Home, and End keys, I don't really have a use for the Fn key anyway.

Firmware
The X1 Carbon has a very detailed BIOS/firmware menu which can be entered with the F1 key at boot. F12 can be used to temporarily select a different boot device. A neat feature of the Lenovo BIOS is that it supports showing a custom boot logo instead of the big red Lenovo logo. From Windows, download the latest BIOS Update Utility for the X1 Carbon (my model was 20HR). Run it and it'll extract everything to C:\drivers\flash\(some random string). Drop a logo.gif file in that directory and run winuptp.exe. If a logo file is present, it'll ask whether to use it and then write the new BIOS to its staging area, then reboot to actually flash it.

OpenBSD support
Secure Boot has to be disabled in the BIOS menu, and the "CSM Support" option must be enabled, even when "UEFI/Legacy Boot" is left on "UEFI Only". Otherwise the screen will just go black after the OpenBSD kernel loads into memory. Based on this component list, it seems like everything but the fingerprint sensor works fine on OpenBSD.
***
Configuring 5 different desktop environments on FreeBSD (https://www.linuxsecrets.com/en/entry/51-freebsd/2017/09/04/2942-configure-5-freebsd-x-environments)
This fairly quick tutorial over at LinuxSecrets.com is a great start if you are new to FreeBSD, especially if you are coming from Linux and miss your favourite desktop environment. It just goes to show how easy it is to build the desktop you want on modern FreeBSD. The tutorial covers GNOME, KDE, Xfce, Mate, and Cinnamon. The instructions for each boil down to some variation of the following. Install the desktop environment and a login manager if it is not included:
> sudo pkg install gnome3
Enable the login manager, and usually dbus and hald:
> sudo sysrc dbus_enable="YES" hald_enable="YES" gdm_enable="YES" gnome_enable="YES"
If using a generic login manager, add the DE startup command to your .xinitrc:
> echo "exec cinnamon" > ~/.xinitrc
And that is about it. The tutorial goes into more detail on other configuration you can do to get your desktop just the way you like it. To install Lumina:
> sudo pkg install lumina pcbsd-utils-qt5
This will install Lumina and the pcbsd utilities package, which includes pcdm, the login manager. In the near future we hear the login manager and some of the other utilities will be split into separate packages, making it easier to use them on vanilla FreeBSD.
> sudo sysrc pcdm_enable="YES" dbus_enable="YES" hald_enable="YES"
Reboot, and you should be greeted with the graphical login screen.
***
A return-oriented programming defense from OpenBSD (https://lwn.net/Articles/732201/)
We talked a bit about RETGUARD last week, presenting Theo's email announcing the new feature. Linux Weekly News has a nice breakdown of just how it works. Stack-smashing attacks have a long history; they featured, for example, as a core part of the Morris worm back in 1988. Restrictions on executing code on the stack have, to a great extent, put an end to such simple attacks, but that does not mean that stack-smashing attacks are no longer a threat. Return-oriented programming (ROP) has become a common technique for compromising systems via a stack-smashing vulnerability. There are various schemes out there for defeating ROP attacks, but a mechanism called "RETGUARD" that is being implemented in OpenBSD is notable for its relative simplicity. In a classic stack-smashing attack, the attack code would be written directly to the stack and executed there. Most modern systems do not allow execution of on-stack code, though, so this kind of attack will be ineffective. The stack does affect code execution, though, in that the call chain is stored there; when a function executes a "return" instruction, the address to return to is taken from the stack. An attacker who can overwrite the stack can, thus, force a function to "return" to an arbitrary location. That alone can be enough to carry out some types of attacks, but ROP adds another level of sophistication. A search through a body of binary code will turn up a great many short sequences of instructions ending in a return instruction. These sequences are termed "gadgets"; a large program contains enough gadgets to carry out almost any desired task — if they can be strung together into a chain. ROP works by locating these gadgets, then building a series of stack frames so that each gadget "returns" to the next. RETGUARD's answer is to have the compiler emit a small amount of code in each function's prologue and epilogue that mangles the saved return address with a secret per-function value, so a "return" through an address the function did not itself save lands somewhere useless. There is, of course, a significant limitation here: a ROP chain made up exclusively of polymorphic gadgets will still work, since those gadgets were not (intentionally) created by the compiler and do not contain the return-address-mangling code. De Raadt acknowledged this limitation, but said: "we believe once standard-RET is solved those concerns become easier to address separately in the future. In any case a substantial reduction of gadgets is powerful". Using the compiler to insert the hardening code greatly eases the task of applying RETGUARD to both the OpenBSD kernel and its user-space code. At least, that is true for code written in a high-level language. Any code written in assembly must be changed by hand, though, which is a fair amount of work. De Raadt and company have done that work; he reports that: "We are at the point where userland and base are fully working without regressions, and the remaining impacts are in a few larger ports which directly access the return address (for a variety of reasons)". It can be expected that, once these final issues are dealt with, OpenBSD will ship with this hardening enabled. The article wonders about applying the same to Linux, but notes it would be difficult because the Linux kernel cannot currently be compiled using LLVM. If any benchmarks have been run to determine the cost of using RETGUARD, they have not been publicly posted. The extra code will make the kernel a little bigger, and the extra overhead on every function is likely to add up in the end. But if this technique can make the kernel that much harder to exploit, it may well justify the extra execution overhead that it brings with it. All that's needed is somebody to actually do the work and try it out.
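The core idea of return-address mangling can be demonstrated at the C level with a stored code pointer: mix in a secret cookie when saving the pointer, and mix it back out before using it, so an overwrite that doesn't know the cookie yields a garbage target rather than a chosen gadget. This is only an analogy (similar in spirit to glibc's pointer mangling); the real RETGUARD instrumentation is emitted by the compiler around every function prologue and epilogue.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

static uintptr_t cookie;   /* per-process secret, chosen at startup */

static uintptr_t mangle(uintptr_t p)   { return p ^ cookie; }
static uintptr_t demangle(uintptr_t p) { return p ^ cookie; }

static void hello(void) { puts("legitimate target"); }

int main(void)
{
    /* Stand-in for a real entropy source such as arc4random(). */
    srand(42);
    cookie = (uintptr_t)rand() << 16 | (uintptr_t)rand();

    /* "Push": save the protected code address in mangled form. */
    uintptr_t saved = mangle((uintptr_t)&hello);

    /* An attacker overwriting 'saved' without knowing the cookie
     * gets a useless address after demangling. A legitimate "return": */
    void (*fn)(void) = (void (*)(void))demangle(saved);
    fn();
    return 0;
}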
Videos from BSDCan have started to appear! (https://www.youtube.com/playlist?list=PLeF8ZihVdpFfVEsCxNWGDmcATJfRZacHv)
Henning Brauer: tcp synfloods - BSDCan 2017 (https://www.youtube.com/watch?v=KuHepyI0_KY)
Benno Rice: The Trouble with FreeBSD - BSDCan 2017 (https://www.youtube.com/watch?v=1DM5SwoXWSU)
Li-Wen Hsu: Continuous Integration of The FreeBSD Project - BSDCan 2017 (https://www.youtube.com/watch?v=SCLfKWaUGa8)
Andrew Turner: GENERIC ARM - BSDCan 2017 (https://www.youtube.com/watch?v=gkYjvrFvPJ0)
Bjoern A. Zeeb: From the outside - BSDCan 2017 (https://www.youtube.com/watch?v=sYmW_H6FrWo)
Rodney W. Grimes: FreeBSD as a Service - BSDCan 2017 (https://www.youtube.com/watch?v=Zf9tDJhoVbA)
Reyk Floeter: The OpenBSD virtual machine daemon - BSDCan 2017 (https://www.youtube.com/watch?v=Os9L_sOiTH0)
Brian Kidney: The Realities of DTrace on FreeBSD - BSDCan 2017 (https://www.youtube.com/watch?v=NMUf6VGK2fI)
The rest will continue to trickle out, likely not until after EuroBSDCon.
***
Beastie Bits
Oracle has killed Sun (https://meshedinsights.com/2017/09/03/oracle-finally-killed-sun/)
Configure Thunderbird to send patch friendly (http://nanxiao.me/en/configure-thunderbird-to-send-patch-friendly/)
FreeBSD 10.4-BETA4 Available (https://www.freebsd.org/news/newsflash.html#event20170909:01)
iXsystems looking to hire kernel and ZFS developers (especially Sun/Oracle refugees) (https://www.facebook.com/ixsystems/posts/10155403417921508)
Speaking of job postings, UnitedBSD.com has a few job postings related to BSD (https://unitedbsd.com/)
Call for papers:
USENIX FAST '18 - February 12-15, 2018, Due: September 28, 2017 (https://www.freebsdfoundation.org/news-and-events/call-for-papers/usenix-fast-18-call-for-papers/)
Scale 16x - March 8-11, 2018, Due: October 31, 2017 (https://www.freebsdfoundation.org/news-and-events/call-for-papers/scale-16x-call-for-participation/)
FOSDEM '18 - February 3-4, 2018, Due: November 3, 2017 (https://www.freebsdfoundation.org/news-and-events/call-for-papers/fosdem-18-call-for-participation/)
Feedback/Questions
Jason asks about cheap router hardware (http://dpaste.com/340KRHG)
Prashant asks about latest kernels with freebsd-update (http://dpaste.com/2J7DQQ6)
Matt wants to know about VM Performance & CPU Steal Time (http://dpaste.com/1H5SZ81)
John has config questions regarding Dell Precision 7720, FreeBSD, NVMe, and ZFS (http://dpaste.com/0X770SY)
***
Paul Vixie is one of the maintainers of BIND, and we're honored to have him with us to talk about DNS RPZ (Response Policy Zone). RPZ is a mechanism for use by DNS recursive resolvers to allow customized handling of the resolution of collections of domain name information (zones).
Matthew Hodgson, Paul Vixie and Dave Taht discuss this with the regulars.
Slides here: https://defcon.org/images/defcon-22/dc-22-presentations/Vixie/DEFCON-22-Paul-Vixie-2014-07-15-botnets.pdf
White paper available for download here: https://defcon.org/images/defcon-22/dc-22-presentations/Vixie/DEFCON-22-Paul-Vixie-WP.pdf
Domain Name Problems and Solutions
Dr. Paul Vixie, CEO, Farsight Security
Spammers can't use dotted quads or any other literal IP address, since SpamAssassin won't let one through; it looks too much like spam. So spammers need cheap and plentiful — dare we say 'too cheap to meter'? — domain names. The DNS industry is only too happy to provide these domain names, cheaply and at massive scale. The end result is that 90% of all domain names are crap, with more on the way. DNS registrars and registries sometimes cooperate with law enforcement and commercial takedown efforts, since takedowns result in domains that die sooner, thus creating demand for more domains sooner. Spammers and other abusers of the Internet commons sometimes try to keep their domains alive a little longer by changing name server addresses, or changing name server names, many times per day. All of this action and counteraction leaves tracks, and around those tracks, security-minded network and server operators can build interesting defenses, including DNS RPZ, a firewall that works on DNS names, DNS responses, and DNS metadata; and NOD, a feed of Newly Observed Domains that can be used for brand enforcement, as well as an RPZ that can direct a DNS firewall to treat infant domain names unfairly. Dr. Paul Vixie, long-time maintainer of BIND and now CEO of Farsight Security, will explain and demonstrate.
Dr. Paul Vixie is the CEO of Farsight Security. He previously served as President, Chairman and Founder of Internet Systems Consortium (ISC), as President of MAPS, PAIX and MIBH, as CTO of Abovenet/MFN, and on the board of several for-profit and non-profit companies. He served on the ARIN Board of Trustees from 2005 to 2013, and as Chairman in 2008 and 2009. Vixie is a founding member of the ICANN Root Server System Advisory Committee (RSSAC) and the ICANN Security and Stability Advisory Committee (SSAC). Vixie has been contributing to Internet protocols and UNIX systems as a protocol designer and software architect since 1980. He is considered the primary author and technical architect of BIND 8, and he hired many of the people who wrote BIND 9 and the people now working on BIND 10. He has authored or co-authored a dozen or so RFCs, mostly on DNS and related topics, and co-authored Sendmail: Theory and Practice (Digital Press, 1994). He earned his Ph.D. from Keio University for work related to the Internet Domain Name System (DNS and DNSSEC).
Black Hat Briefings, Las Vegas 2005 [Video] Presentations from the security conference
Paul Vixie has been contributing to Internet protocols and UNIX systems as a protocol designer and software architect since 1980. Early in his career, he developed and introduced sends, proxynet, rtty, cron and other lesser-known tools. Today, Paul is considered the primary modern author and technical architect of BINDv8, the Berkeley Internet Name Domain Version 8, the open source reference implementation of the Domain Name System (DNS). He formed the Internet Software Consortium (ISC) in 1994, and now acts as Chairman of its Board of Directors. The ISC reflects Paul's commitment to developing and maintaining production-quality open source reference implementations of core Internet protocols. More recently, Paul co-founded MAPS LLC (Mail Abuse Prevention System), a California nonprofit company established in 1998 with the goal of hosting the RBL (Realtime Blackhole List) and stopping the Internet's email system from being abused by spammers. Vixie is currently the Chief Technology Officer of Metromedia Fiber Network Inc (MFNX.O). Along with Frederick Avolio, Paul co-wrote "Sendmail: Theory and Practice" (Digital Press, 1995). He has authored or co-authored several RFCs, including a Best Current Practice document on "Classless IN-ADDR.ARPA Delegation" (BCP 20). He is also responsible for overseeing the operation of F.root-servers.net, one of the thirteen Internet root domain name servers.
Black Hat Briefings, Las Vegas 2005 [Audio] Presentations from the security conference
Paul Vixie has been contributing to Internet protocols and UNIX systems as a protocol designer and software architect since 1980. Early in his career, he developed and introduced sends, proxynet, rtty, cron and other lesser-known tools. Today, Paul is considered the primary modern author and technical architect of BINDv8, the Berkeley Internet Name Domain Version 8, the open source reference implementation of the Domain Name System (DNS). He formed the Internet Software Consortium (ISC) in 1994, and now acts as Chairman of its Board of Directors. The ISC reflects Paul's commitment to developing and maintaining production-quality open source reference implementations of core Internet protocols. More recently, Paul co-founded MAPS LLC (Mail Abuse Prevention System), a California nonprofit company established in 1998 with the goal of hosting the RBL (Realtime Blackhole List) and stopping the Internet's email system from being abused by spammers. Vixie is currently the Chief Technology Officer of Metromedia Fiber Network Inc (MFNX.O). Along with Frederick Avolio, Paul co-wrote "Sendmail: Theory and Practice" (Digital Press, 1995). He has authored or co-authored several RFCs, including a Best Current Practice document on "Classless IN-ADDR.ARPA Delegation" (BCP 20). He is also responsible for overseeing the operation of F.root-servers.net, one of the thirteen Internet root domain name servers.