Here are my 100 interesting things to learn about cryptography:

For a 128-bit encryption key, there are 340 billion billion billion billion possible keys. [Calc: 2**128/(1e9**4)] For a 256-bit encryption key, there are 115,792 billion billion billion billion billion billion billion billion possible keys. [Calc: 2**256/(1e9**8)]

To crack a 128-bit key with brute force using a cracker running at 1 Teracrack/second will take — on average — 5 million million million years. Tera is 1,000 billion. [Calc: 2**128/1e12/2/60/60/24/365/(1e6**3)] For a 256-bit key, this is 1,835 million million million million million million million million million years.

For the brute-force cracking of a 35-bit symmetric key (such as AES), you only need to pay for the energy to boil a teaspoon of water. For a 50-bit key, you just need enough money to pay to boil the water for a shower. For a 90-bit symmetric key, you would need the energy to boil a sea, and for a 105-bit symmetric key, you would need the energy to boil an ocean. For a 128-bit key, there just isn't enough water on the planet to boil. With symmetric key encryption, anything below 72 bits is relatively inexpensive to crack with brute force.

One of the first symmetric key encryption methods was the LUCIFER cipher, created by Horst Feistel at IBM. It was further developed into the DES encryption method. Many, at the time of the adoption of DES, felt that its 56-bit key was too small to be secure and that the NSA had a role in limiting it.

With a block cipher, we only have to deal with a fixed size of blocks. DES and 3DES use a 64-bit (eight-byte) block size, and AES uses a 128-bit (16-byte) block size. With symmetric key methods, we either have block ciphers, such as DES, AES CBC and AES ECB, or stream ciphers, such as ChaCha20 and RC4. In order to enhance security, AES has a number of rounds where parts of the key are applied.
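These headline numbers can be sanity-checked in a few lines (a quick sketch, using the 1 Teracrack/second — 1e12 tests/second — rate from the text):

```python
# Sanity-check the key-space and brute-force figures quoted above.
keys_128 = 2**128
keys_256 = 2**256

print(f"128-bit key space: {keys_128:.3e}")   # ~3.4e38, i.e. 340 x (1e9)^4
print(f"256-bit key space: {keys_256:.3e}")   # ~1.16e77, i.e. 115,792 x (1e9)^8

# Brute force at 1 Teracrack/second (1e12 key tests per second);
# on average the key is found after searching half the key space.
seconds = keys_128 / 1e12 / 2
years = seconds / (60 * 60 * 24 * 365)
print(f"128-bit brute force: {years:.2e} years")   # ~5.4e18 years
```

That 5.4e18 figure is the "5 million million million years" quoted above (million cubed is 1e18).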
With 128-bit AES we have 10 rounds, and with 256-bit AES, 14 rounds. In AES, we use an S-box to scramble the bytes, and this is applied in each round. When decrypting, we use the inverse of the S-box used in the encryption process. A salt/nonce or Initialisation Vector (IV) is used with an encryption key in order to change the ciphertext for the same given input. Stream ciphers are generally much faster than block ciphers, and can generally be processed in parallel.

With the Diffie-Hellman method, Bob creates x and shares g^x (mod p), and Alice creates y and shares g^y (mod p). The shared key is g^{xy} (mod p).

Ralph Merkle — the boy genius — submitted a patent on 5 Sept 1979 which outlined the Merkle hash. This is used to create a block hash. Ralph Merkle's PhD supervisor was Martin Hellman (famous as the co-creator of the Diffie-Hellman method).

Adi Shamir defined a secret sharing method, in which points (x,y) on a mathematical equation are shared, and where a constant value in the equation is the secret. With Shamir Secret Shares (SSS), for a quadratic equation of y=x²+5x+6, the secret is 6. We can share three points at x=1, x=2 and x=3, which give y=12, y=20, and y=30, respectively. With the points (1,12), (2,20), and (3,30), we can recover the value of 6. With secret shares, for a highest polynomial power of n, we need n+1 points to come together to regenerate the secret. For example, y=2x+5 needs two points to come together, while y=x²+15x+4 needs three points. Adi Shamir broke the Merkle-Hellman knapsack method live at a rump session of a conference.

The first usable public key method was RSA — created by Rivest, Shamir and Adleman. It was first published in 1977, and is defined in the RSA patent entitled "Cryptographic Communications System and Method". In public key encryption, we use the public key to encrypt data and the private key to decrypt it.
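The quadratic sharing example can be checked directly. For y = x² + 5x + 6 the three shares are (1,12), (2,20) and (3,30) (since f(3) = 9 + 15 + 6 = 30), and Lagrange interpolation at x = 0 recovers the constant term. A minimal sketch (real SSS does this arithmetic modulo a prime):

```python
from fractions import Fraction
from math import prod

def f(x):
    # the quadratic from the text; the secret is the constant term, 6
    return x * x + 5 * x + 6

shares = [(x, f(x)) for x in (1, 2, 3)]   # (1, 12), (2, 20), (3, 30)

# Lagrange interpolation evaluated at x = 0 gives back the constant term.
secret = sum(
    Fraction(y) * prod(Fraction(-xj, xi - xj) for xj, _ in shares if xj != xi)
    for xi, y in shares
)
print(secret)   # 6
```

Any three points on the quadratic would do; two points alone cannot recover the secret, which is the threshold property.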
In digital signing, we use the private key to sign a hash and create a digital signature, and then the associated public key to verify the signature.

Len Adleman — the "A" in the RSA method — thought that the RSA paper would be one of the least significant papers he would ever publish. The RSA method came to Ron Rivest while he slept on a couch. Martin Gardner published information on the RSA method in his Scientific American article. Initially, there were 4,000 requests for the paper (which rose to 7,000), and it took until December 1977 for them to be posted.

The security of RSA is based on the multiplication of two random prime numbers (p and q) to give a public modulus (N). The difficulty of RSA is the difficulty of factorizing this modulus. Once factorized, it is easy to decrypt a ciphertext that has been encrypted using the related modulus.

In RSA, we have a public key of (e,N) and a private key of (d,N). e is the public exponent and d is the private exponent. The public exponent is normally set at 65,537. The binary value of 65,537 is 10000000000000001 — with only two bits set, this makes modular exponentiation efficient when producing ciphertext in RSA. In RSA, the ciphertext is computed from a message of M as C=M^e (mod N), and is decrypted with M=C^d (mod N). We compute the private exponent (d) from the inverse of the public exponent (e) modulus PHI, where PHI is (p-1)*(q-1). If we can determine p and q, we can compute PHI and thus d.

Anything below a 768-bit public modulus is relatively inexpensive to crack for RSA. To crack 2K RSA at the current time, we would need the energy to boil every ocean on the planet.

RSA requires padding for security. A popular method has been PKCS#1 v1.5 — but this is not provably secure and is susceptible to Bleichenbacher's attack. An improved method is Optimal Asymmetric Encryption Padding (OAEP), defined by Bellare and Rogaway and standardized in PKCS#1 v2.
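A toy walk-through of these RSA equations, with the classic small primes p = 61 and q = 53 (illustration only; real RSA needs large primes and padding such as OAEP):

```python
p, q = 61, 53
N = p * q                 # public modulus: 3233
PHI = (p - 1) * (q - 1)   # 3120
e = 17                    # public exponent (normally 65537)
d = pow(e, -1, PHI)       # private exponent: inverse of e mod PHI -> 2753

M = 42                    # message
C = pow(M, e, N)          # encrypt: C = M^e (mod N)
assert pow(C, d, N) == M  # decrypt: M = C^d (mod N)
print(d)                  # 2753
```

Note how recovering d needed PHI, which needed p and q — exactly why factorizing N breaks the system.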
The main entity contained in a digital certificate is the public key of a named entity. This is either an RSA or an Elliptic Curve key. A digital certificate is signed with the private key of a trusted entity — Trent. The public key of Trent is then used to prove the integrity and trust of the associated public key.

For an elliptic curve of y²=x³+ax+b (mod p), not every (x,y) point is possible. The total number of points is defined as the order (n). ECC (Elliptic Curve Cryptography) was invented by Neal Koblitz and Victor S. Miller in 1985, but elliptic curve cryptography did not take off until 2004. In ECC, the public key is a point on the elliptic curve.

Satoshi selected the secp256k1 curve for Bitcoin, which gives the equivalent of 128-bit security. The secp256k1 curve uses the mapping y²=x³+7 (mod p), and is known as a Short Weierstrass ("Vier-strass") curve. The prime number used with secp256k1 is 2²⁵⁶-2³²-2⁹-2⁸-2⁷-2⁶-2⁴-1. For secp256k1, we have a 256-bit private key and a 512-bit (x,y) point for the public key. An uncompressed secp256k1 public key has 512 bits, is an (x,y) point on the curve, and starts with "04". A compressed secp256k1 public key only stores the x-coordinate and whether the y-coordinate is odd or even: it starts with "02" if the y-coordinate is even; otherwise, it starts with "03".

Elliptic curve methods use two basic operations: point addition (P+Q) and point doubling (2.P). These can be combined to provide the scalar multiplication of a.G. In computing the public key in ECC of a.G, we use the Montgomery multiplication method, created by Peter Montgomery in 1985 in a paper entitled "Modular Multiplication without Trial Division."
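The compressed-key rule can be demonstrated by recovering y from x for the secp256k1 generator point (a sketch; because the secp256k1 prime satisfies p ≡ 3 (mod 4), a modular square root is a single exponentiation):

```python
# secp256k1 prime: 2^256 - 2^32 - 2^9 - 2^8 - 2^7 - 2^6 - 2^4 - 1
p = 2**256 - 2**32 - 2**9 - 2**8 - 2**7 - 2**6 - 2**4 - 1

# The standard secp256k1 generator point G = (Gx, Gy)
Gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
Gy = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8

def decompress(prefix, x):
    # y^2 = x^3 + 7 (mod p); since p % 4 == 3, a sqrt is a^((p+1)/4) mod p
    y = pow(x**3 + 7, (p + 1) // 4, p)
    # "02" means y is even, "03" means y is odd; flip the root if needed
    if (y % 2 == 0) != (prefix == 0x02):
        y = p - y
    return y

# Gy is even, so G compresses to "02" plus the x-coordinate
assert decompress(0x02, Gx) == Gy
```

This is why a compressed key only needs one extra byte rather than a whole second coordinate: the curve equation pins y down to one of two values, and the prefix picks between them.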
In 1999, Don Johnson and Alfred Menezes published a classic paper on "The Elliptic Curve Digital Signature Algorithm (ECDSA)". ECDSA is an elliptic curve conversion of the DSA (Digital Signature Algorithm) — created by David W. Kravitz in a patent assigned to the US government. ECDSA requires a random nonce value (k), which should never be reused or repeated. Digital signatures are defined in FIPS (Federal Information Processing Standard) 186-5.

NIST approved the Rijndael method (led by Joan Daemen and Vincent Rijmen) for the Advanced Encryption Standard (AES). Other contenders included Serpent (led by Ross Anderson), Twofish (led by Bruce Schneier), MARS (led by IBM), and RC6 (led by Ron Rivest). ChaCha20 is a stream cipher that is based on Salsa20 and developed by Daniel J. Bernstein.

MD5 has a 128-bit hash, SHA-1 has 160 bits, and SHA-256 has 256 bits. It is relatively easy to create a hash collision with MD5. Google showed that it was possible to create a signature collision for a document with SHA-1. It is highly unlikely to get a hash collision for SHA-256. In 2015, NIST defined SHA-3 as a standard, built on the Keccak hashing family — which uses a different method to SHA-2. The Keccak hash family uses a sponge function and was created by Guido Bertoni, Joan Daemen, Michaël Peeters, and Gilles Van Assche, and standardized by NIST in August 2015 as SHA-3.

Hash functions such as MD5, SHA-1 and SHA-256 have a fixed hash length, whereas an eXtendable-Output Function (XOF) produces a bit string that can be of any length. Examples are SHAKE128, SHAKE256, BLAKE2XB and BLAKE2XS. BLAKE3 is one of the fastest cryptographically secure hashing methods and was created by Jack O'Connor, Jean-Philippe Aumasson, Samuel Neves, and Zooko Wilcox-O'Hearn. Hashing methods can be slowed down with a number of rounds. These slower hashing methods include Bcrypt, PBKDF2 and scrypt.
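The fixed digest lengths — and the caller-chosen output length of an XOF — can be seen with Python's hashlib:

```python
import hashlib

msg = b"hello"

# Fixed-length hashes: MD5 (128 bits), SHA-1 (160 bits), SHA-256 (256 bits)
assert len(hashlib.md5(msg).digest()) * 8 == 128
assert len(hashlib.sha1(msg).digest()) * 8 == 160
assert len(hashlib.sha256(msg).digest()) * 8 == 256

# SHA-3 uses the Keccak sponge construction
print(hashlib.sha3_256(msg).hexdigest())

# SHAKE128 is an XOF: the caller picks the output length
print(hashlib.shake_128(msg).hexdigest(64))   # 64 bytes of output
```

The same `shake_128` object could just as easily emit 16 bytes or 1,000 — that is the defining property of an XOF.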
Argon2 uses methods to try to defeat GPU cracking, such as requiring a given amount of memory and defining the CPU utilization. To speed up the operation of the SHA-3 hash, the Keccak team reduced the number of rounds from 24 to 12 (with a security level of around 128 bits). The result is the KangarooTwelve hashing method.

The Integrated Encryption Scheme (IES) is a hybrid encryption scheme in which Alice gets Bob's public key and generates a symmetric encryption key based on it; Bob then uses his private key to recover the symmetric key. With ECIES, we use elliptic curve methods for the public key part.

A MAC (Message Authentication Code) uses a symmetric key to sign a hash, where Bob and Alice share the same secret key. The most popular method is HMAC (hash-based message authentication code). The AES block cipher can be converted into a stream cipher using modes such as GCM (Galois Counter Mode) and CCM (counter with CBC-MAC). A MAC is added to a symmetric key method in order to stop the ciphertext from being attacked by flipping bits. Plain CTR mode does not have a MAC, and is thus susceptible to this attack; GCM and CCM are more secure, as they each contain a MAC.

With symmetric key encryption, we must normally remove the encryption keys in the reverse order they were applied. Commutative encryption overcomes this by allowing the keys to be removed in any order.

It is estimated that Bitcoin miners consume 17.05 GW of electrical power, equating to around 149.46 TWh per year. A KDF (Key Derivation Function) is used to convert a passphrase or secret into an encryption key. The most popular methods are HKDF, PBKDF2 and Bcrypt. RSA, ECC and discrete log methods will all be cracked by quantum computers using Shor's algorithm. Lattice methods represent bit values as polynomial values, such as 1001 being x³+1 as a polynomial.
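A minimal HMAC sketch with Python's standard library, showing how a shared-key MAC detects a modified message:

```python
import hashlib
import hmac

key = b"shared-secret-key"          # known to both Bob and Alice
msg = b"pay 100 GBP to Bob"

tag = hmac.new(key, msg, hashlib.sha256).digest()

# Verification recomputes the tag, using a constant-time comparison
assert hmac.compare_digest(tag, hmac.new(key, msg, hashlib.sha256).digest())

# Any change to the message produces a different tag
tampered = b"pay 900 GBP to Eve"
assert not hmac.compare_digest(
    tag, hmac.new(key, tampered, hashlib.sha256).digest()
)
```

Without such a tag, an attacker who can flip ciphertext bits can change the decrypted plaintext undetected.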
Taher Elgamal — the sole inventor of the ElGamal encryption method — and Paul Kocher were the creators of SSL, and developed it for the Netscape browser. David Chaum is considered a founder of electronic payments and, in 1983, created ECASH, along with publishing a paper on "Blind signatures for untraceable payments". Satoshi Nakamoto worked with Hal Finney on the first versions of Bitcoin, which were created for a Microsoft Windows environment.

Blockchains can either be permissioned (requiring rights to access the blockchain) or permissionless (open to anyone to use). Bitcoin and Ethereum are the two most popular permissionless blockchains, and Hyperledger is the most popular permissioned ledger.

In 1992, Eric Hughes, Timothy May, and John Gilmore set up the cypherpunk movement, declaring: "We the Cypherpunks are dedicated to building anonymous systems. We are defending our privacy with cryptography, with anonymous mail forwarding systems, with digital signatures, and with electronic money."

In Bitcoin and Ethereum, a private key (x) is converted to a public key with x.G, where G is the base point on the secp256k1 curve. Ethereum was first conceived in 2013 by Vitalik Buterin, Gavin Wood, Charles Hoskinson, Anthony Di Iorio and Joseph Lubin. It introduced faster block times, a modified proof of work, and smart contracts.

NI-ZKPs (Non-Interactive Zero-Knowledge Proofs) involve a prover (Peggy), a verifier (Victor) and a witness (Wendy), and were first defined by Manuel Blum, Paul Feldman, and Silvio Micali in their paper entitled "Non-interactive zero-knowledge and its applications". Popular ZKP methods include ZK-SNARKs (Zero-Knowledge Succinct Non-Interactive Argument of Knowledge) and ZK-STARKs (Zero-Knowledge Scalable Transparent Argument of Knowledge). Bitcoin and Ethereum are pseudo-anonymised: the sender and recipient of a transaction, and its value, can be traced. Privacy coins enable anonymous transactions. These include Zcash and Monero.
In 1992, David Chaum and Torben Pryds Pedersen published "Wallet databases with observers," outlining a method of shielding the details of a monetary transaction. In 1979, Adi Shamir (the "S" in RSA) published a paper on "How to share a secret" in the Communications of the ACM. This supported the splitting of a secret into a number of shares (n), with a threshold value (t) defining the minimum number of shares that need to be brought back together to reveal the secret. These are known as Shamir Secret Shares (SSS). In 1991, Torben P. Pedersen published a paper entitled "Non-interactive and information-theoretic secure verifiable secret sharing" — now known as the Pedersen Commitment. This is where we produce our commitment and then later reveal the message that matches the commitment.

Distributed Key Generation (DKG) methods allow a private key to be shared by a number of trusted nodes. These nodes can then each produce a partial signature with their share of the key.

Not all blockchains use ECDSA. The IOTA blockchain uses the EdDSA signature, which uses Curve25519. This is a more lightweight signature version and has better support for signature aggregation. It uses Twisted Edwards curves. The core signing method used in EdDSA is based on the Schnorr signature scheme, created by Claus Schnorr in 1989. This was patented as a "Method for identifying subscribers and for generating and verifying electronic signatures in a data exchange system"; the patent ran out in 2008. Curve25519 uses the prime number 2²⁵⁵-19 and was created by Daniel J. Bernstein.

Peter Shor showed that elliptic curve methods can be broken with quantum computers. To overcome the cracking of the ECDSA signature by quantum computers, NIST is standardising a number of methods. At present, this focuses on CRYSTALS-Dilithium, which is a lattice cryptography method.
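A Pedersen commitment can be sketched in its discrete-log form, C = g^m · h^r (mod p). The parameters below are toy assumptions purely for illustration — in practice the group and the generators g, h come from a proper setup in which nobody knows log_g(h):

```python
import secrets

p = 2**255 - 19   # a convenient large prime (the Curve25519 prime)
g, h = 2, 3       # assumed toy generators

def commit(m, r):
    # C = g^m * h^r (mod p): hides m, binds the committer to it
    return (pow(g, m, p) * pow(h, r, p)) % p

m, r = 42, secrets.randbelow(p)
C = commit(m, r)

# Revealing (m, r) later lets anyone verify the commitment
assert C == commit(m, r)

# Pedersen commitments are additively homomorphic:
m2, r2 = 7, secrets.randbelow(p)
assert (C * commit(m2, r2)) % p == commit(m + m2, r + r2)
```

The homomorphic property in the last line is what makes these commitments useful in range proofs and confidential transactions.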
Bulletproofs were created in 2017 by Stanford's Applied Cryptography Group (ACG). They define a zero-knowledge proof in which a value can be checked to see whether it lies within a given range. The name "bulletproofs" was chosen because they are short, like a bullet, and have bulletproof security assumptions.

Homomorphic encryption methods allow for the processing of encrypted values using arithmetic operations. A public key is used to encrypt the data, which can then be processed using an arithmetic circuit on the encrypted data. The owner of the associated private key can then decrypt the result. Some traditional public key methods enable partial homomorphic encryption. RSA and ElGamal allow for multiplication and division, whilst Paillier allows for homomorphic addition and subtraction. Full homomorphic encryption (FHE) supports all of the arithmetic operations, and includes BFV (Brakerski/Fan-Vercauteren) for integer operations and HEAAN (Homomorphic Encryption for Arithmetic of Approximate Numbers) for floating point operations. Most full homomorphic encryption methods use lattice cryptography.

Some blockchain applications use Barreto-Lynn-Scott (BLS) curves, which are pairing-friendly. They can be used to implement bilinear groups — a triplet of groups (G1, G2 and GT) — so that we can implement a function e() such that e(g1^x,g2^y)=gT^{xy}. Pairing-based cryptography is used in ZKPs. The main BLS curves used are BLS12-381, BLS12-446, BLS12-455, BLS12-638 and BLS24-477. An accumulator can be used for zero-knowledge proof of knowledge, such as using a BLS curve to add and remove proofs of knowledge.

Metamask is one of the most widely used blockchain wallets and can integrate with many blockchains. Most wallets generate the seed from the operating system's random source; in the browser, this uses the Crypto.getRandomValues function, which is supported by most browsers.
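The partial homomorphic property of (textbook, unpadded) RSA can be seen with a toy key — multiplying two ciphertexts gives a ciphertext of the product of the plaintexts:

```python
# Toy RSA key (illustration only; unpadded RSA is not secure in practice)
p, q = 1009, 1013
N = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))

a, b = 12, 25
Ea = pow(a, e, N)   # Enc(a)
Eb = pow(b, e, N)   # Enc(b)

# Enc(a) * Enc(b) decrypts to a * b: RSA is multiplicatively homomorphic
assert pow(Ea * Eb, d, N) == (a * b) % N
print(a * b)   # 300
```

Paillier gives the analogous additive property, and FHE schemes such as BFV support both operations at once.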
With a Verifiable Delay Function (VDF), a prover (Peggy) can show that a given amount of sequential work has been done. The prover computes a result along with a proof value, and a verifier (Victor) can then check this proof quickly — verifying that the work has been done without having to redo it. A Physical Unclonable Function (PUF) is a one-way function which creates a unique signature pattern based on the inherent delays within the wires and transistors of a device. This can be used to link a device to an NFT.
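The "delay" part of a VDF can be sketched as t sequential modular squarings, y = x^(2^t) mod N. This is a toy sketch of the core operation only — real schemes (e.g. Wesolowski's) add a succinct proof so the verifier can check the result far faster than recomputing it:

```python
# t sequential squarings; they cannot be parallelised, which is what
# makes this a *delay* function.
N = 1009 * 1013   # toy modulus; real VDFs use an RSA-sized or class-group setup
x, t = 5, 1000

y = x
for _ in range(t):
    y = (y * y) % N

# The same value written in closed form: x^(2^t) mod N
assert y == pow(x, 2**t, N)
print(y)
```

With a toy modulus anyone can recompute this quickly; the scheme only becomes a meaningful delay when N is large and its factorisation (and hence the group order) is unknown.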
Hello folks, welcome to the Security Box, podcast 134. This podcast is going to talk about PBKDF2, a password-based key derivation function that is used in certain situations. We'll also have a moron of the week, maybe two, maybe more! We'll also dip our toes in the landscape and see what is on other folks' minds.

Morons of the podcast

This first one is quite dumb. In fact, when I saw the boost which shared it, I had to title this the way I did. You Stupid F**k … it's not going to look good for you now when you get picked up is my blog post on an article coming from a site called news24.com. Not only is this guy found guilty, you're going to read that he just chose not to even show up! How dumb can you be?

This second one comes from this blog post about TikTok's newest challenge. There is some strong language with this post, and from at least two people, it is well warranted. I try not to use strong language in my posts, but this one is definitely beyond repair. The short of it, for those who may be offended by strong language, is that scammers will stop at nothing to either get their wares out or to cause as much harm as possible. In this latest twist, we have someone going around claiming they can turn loved ones' ashes into a sculpture or even a painting. They claim it's free ... but it isn't. The article leads to this Kim Komando article talking about the TikTok scam. If it isn't mentioned as part of this segment, which it just may be, please feel free to weigh in on this one.

Our topic, PBKDF2

PBKDF2 is one of many password hashing methods used for passwords and the like. Here's the Wikipedia article on it, as we take from the first two sections for now. There are replacements for it, and they're covered within this article, but we'll let you look this up later.

Book Selection: Tracers in the Dark

I chose my next book. Tracers in the Dark by Andy Greenberg. During the podcast, we'll check in with folks to see where they are in this ... or other books that are on our list.
This is my blog post talking about Tracers in the Dark if you need it. I finally got my book review up for If It's Smart, It's Vulnerable by Mikko Hyppönen. Here's that blog post if you're interested to see what I have to say about it.

Supporting our podcast

If you'd like to support our efforts on what this podcast is doing, you can feel free to donate to the network, subscribe to the Security Box discussion list, or send us a note through contact information throughout the podcast. You can also find contact details on our blog page found here. Thanks so much for listening, reading and learning! We can't do this alone.
Steve Gibson shares some important notes found on the LastPass website. Gibson briefly explains the handling of client master passwords regarding PBKDF2. Great information on Security Now with Steve Gibson and Leo Laporte. For the full episode, go to: twit.tv/sn/906
Hosts: Leo Laporte and Steve Gibson
You can find more about TWiT and subscribe to our podcasts at https://podcasts.twit.tv/
Sponsor: acilearning.com
On Ask The Tech Guys, Leo Laporte and Mikah Sargent talk about other password managers you can use, like 1Password, and what you can do to protect your password manager even more. For more, check out Ask the Tech Guys: https://twit.tv/attg/1957
Hosts: Leo Laporte and Mikah Sargent
You can find more about TWiT and subscribe to our podcasts at https://podcasts.twit.tv/
Sponsor: acilearning.com
Watch the live stream: Watch on YouTube

About the show

Sponsored by us: Check out the courses over at Talk Python. And Brian's book too!

Special guest: Laís Carvalho

Michael #1: Django 4.0 released
Django is picking up speed: 4.0 Dec 2021 (+1), 3.0 Dec 2020 (+3), 2.0 Dec 2017 (+7), 1.0.1 May 2010.
Feature highlights:
The new RedisCache backend provides built-in support for caching with Redis.
To ease customization of Forms, Formsets, and ErrorList, they are now rendered using the template engine.
The Python standard library's zoneinfo is now the default timezone implementation in Django.
scrypt password hasher: the new scrypt password hasher is more secure and recommended over PBKDF2. However, it's not the default, as it requires OpenSSL 1.1+ and more memory.
Django 3.2 has reached the end of mainstream support. The final minor bug fix release, 3.2.10, was issued today. Django 3.2 is an LTS release and will receive security and data loss fixes until April 2024.
There are some backwards incompatible changes you'll want to be aware of when upgrading from Django 3.2 or earlier, and they've begun the deprecation process for some features. Django 4.0 supports Python 3.8, 3.9, and 3.10.

Brian #2: python-minifier
Suggested by Lance Reinsmith. My first thought was "we don't need a minifier for Python". The docs give one reason: "AWS Cloudformation templates may have AWS lambda function source code embedded in them, but only if the function is less than 4KiB. I wrote this package so I could write python normally and still embed the module in a template." Lance has another reason: "I needed it because the RAM on Adafruit boards using the common M0 chip is around 192KB to 256KB total--not all of which is available to your program. To get around this, you can either 1) compile your code to an .mpy file or 2) minify it. The second worked for me and allowed me to alter it without constantly re-compiling." Fair enough, what does it do?
All of these features are options you can turn off, and are documented well: combine import statements, remove pass statements, remove literal statements (docstrings), remove annotations, hoist literals, rename locals (with a preserved-locals list), rename globals (with a preserved-globals list), and convert positional-only arguments to normal arguments. It also looks like it replaces spaces with tabs. Begrudgingly, that makes sense in this context. You can try it at python-minifier.com

Laís #3: It's time to stop using Python 3.6
Python 3.6 is reaching the end of its life in 1 week and 1 day (Dec 23rd), i.e. no more releases after that. You should care because the Python dev team will no longer release security updates for 3.6. ⚠️ If you use Linux, you have a bit more time, BUT only security updates will be released and bug fixes will not. Also, Python 3rd party libraries and frameworks will drop support for 3.6 soon enough. See the log4j issue and Java. Brian might like this one: Grype - a vulnerability scanner for container images and filesystems.

Michael #4: How to Visualize the Formula 1 Championship in Python
Race Highlights | 2021 Abu Dhabi Grand Prix
Formula 1: Drive to Survive (Season 3) | Official Trailer
Wanting to get into Formula 1 data analysis? The Ergast API is a very good starting point. This tutorial will show you how to use data from the Ergast API to visualize the changes in the 2021 championship standings over the rounds. Introduces fastf1: a wrapper library for the F1 data and telemetry API with additional data processing capabilities.

Brian #5: nbdime: Jupyter Notebook Diff and Merge tools
Suggestion from Henrik Finsberg: "you recently covered 'jut' for viewing Jupyter notebooks from the terminal. Check out 'nbdime'." (That was episode 258.) So I did. And it looks cool. nbdime provides tools for diffing and merging of Jupyter Notebooks.
nbdiff: compare notebooks in a terminal-friendly way
nbmerge: three-way merge of notebooks with automatic conflict resolution
nbdiff-web: shows you a rich rendered diff of notebooks
nbmerge-web: gives you a web-based three-way merge tool for notebooks
nbshow: present a single notebook in a terminal-friendly way

Laís #6: Using AI to analyse and recommend software stacks for Python apps
Thanks Fridolin! Project Thoth: an open source cloud-based Python dependency resolver. It uses ML (reinforcement learning) to solve dependency issues, taking into consideration runtime envs, hardware and other inputs, using a Markov decision process. "A smarter pip" that, instead of using backtracking, precomputes the dependency information and stores it in a database that can be queried for future resolutions, using pre-specified criteria from the developer. In summary: Thoth's resolver uses automated bots that guarantee dependencies are locked down to specific versions, making builds and deployments reproducible; the aggregated knowledge (reinforcement learning from install logs) helps the bots to lock the dependencies to the best libraries, instead of the latest. They are in a beta phase but welcoming feedback and suggestions from the community.

Extras

Brian: Pragmatic Bookshelf 12 days of Christmas. Today, the pytest book is part of the deal. Nice timing, right?
Michael: My talk at FlaskCon is out. Firefox releases RLBox. We're all getting identity theft monitoring for 1 year for free :-/
Laís: Python Ireland's speakers' coaching session is on Jan 22nd. Learning git the visual way - cool for beginners, thorough explanations. Good read for Java devs who want to start with Python (by Real Python).

Joke: Janga Python (hellish) virtual envs
Watch the live stream: Watch on YouTube

About the show

Sponsored by Shortcut

Special guest: Morleh So-kargbo

Michael #1: Django 4.0 beta 1 released
Django 4.0 beta 1 is now available. Django 4.0 has an abundance of new features:
The new *expressions positional argument of UniqueConstraint() enables creating functional unique constraints on expressions and database functions.
The new scrypt password hasher is more secure and recommended over PBKDF2.
The new django.core.cache.backends.redis.RedisCache cache backend provides built-in support for caching with Redis.
To enhance customization of Forms, Formsets, and ErrorList, they are now rendered using the template engine.

Brian #2: py - The Python launcher
py has been bundled with Python for Windows only since Python 3.3, as py.exe. See Python Launcher for Windows. I've mostly ignored it since I use Python on Windows, macOS, and Linux and don't want to have different workflows on different systems. But now Brett Cannon has developed python-launcher, which brings py to macOS and various other Unix-y systems, or any OS which supports Rust. Now py is everywhere I need it to be, and I've switched my workflow to use it.
Usage:
py : run the latest Python version on your system
py -3 : run the latest Python 3 version
py -3.9 : run the latest 3.9 version
py -2.7 : even run 2.x versions
py --list : list all versions (with py-launcher, it also lists paths)
py --list-paths : py.exe only - list all versions with path
Why is this cool?
- I never have to care where Python is installed or where it is in my search path.
- I can always run any version of Python installed without setting up symbolic links.
- The same workflow works on Windows, macOS, and Linux.
Old workflow: make sure the latest Python is found first in the search path, then call python3 -m venv venv. For a specific version, make sure python3.8, for example, or python38 or something is in my Path. If not, create it somewhere. New workflow:
py -m venv venv : create a virtual environment with the latest Python installed. After activation, everything happens in the virtual env. Create a specific venv to test something on an older version: py -3.8 -m venv venv --prompt '3.8'. Or even just run a script with an old version: py -3.8 script_name.py. Of course, you can run it with the latest version also: py script_name.py. Note: if you use py within a virtual environment, the default version is the one from the virtual env, not the latest.

Morleh #3: Transformers As General-Purpose Architecture
The Attention Is All You Need paper first proposed Transformers in June 2017. The Hugging Face (
This week, the crew talks about passwords. Web applications store a great deal of sensitive information. But, there is something categorically different about storing passwords. Because—if compromised—a password from one application may grant a malicious actor access to another application. As such, it is essential that we store our customers' passwords using modern, one-way hashing algorithms that protect the underlying payload against increasingly powerful compute resources. And, that we have a way to evolve our password hashing strategies in order to stay a step ahead of potential attackers.

Of course, sometimes the best password hashing strategy is to not store a password at all. Using a "passwordless login" allows you to defer the responsibility of password storage off to another, trusted vendor.

Also, we've been doing this podcast for half-a-year! How awesome is that! Yay for us!

Triumphs & Failures

Adam's Failure - While Adam has been quite keen on testing code, he recently ran into a testing scenario that he found very challenging. And, he ended up taking half-a-day to refactor already working code just so that he could add the tests. In the long run, it wasn't a waste of time; but, it was a very humbling experience in the moment.

Ben's Triumph - After weeks of struggling to debug an authentication issue within a Sketch plug-in, Ben and his team finally figured out what was going wrong! As fate would often have it, Ben was the engineer that originally wrote the problematic code - so, that was unfortunate. But, at least they figured out how to fix the user experience!

Carol's Failure - Carol has been having trouble walking away from problems even when she feels stuck. So, instead of stepping back and clearing her head, she continues to beat it against the wall (often to no avail). She knows this is counterproductive; but, sometimes she gets lost in the details.

Tim's Triumph / Failure - Tim finds himself coasting this week.
Nothing has been all that note-worthy; either in triumph or in failure.Notes & LinksOWASP Password Cheat Sheet - industry standard best practices for storing passwords - covers Argon2, BCrypt, SCrypt, and PBKDF2.Have I Been Pwned - a service that tells you if your password has been exposed in a data breach.1Password - the world's most-loved password manager.Authy - a user-friendly two-factor authentication app.Shibboleth - an identity provider solution.OAuth - a standard for granting access to a website or application without having to provide it with your password.SAML - a standard for exchanging authentication between parties.Diceware - a method for generation secure, random passwords using playing dice.NIST Password Guidelines - Auth0 explains new passwords guidelines from NIST.Single Sign-On (SSO) - an authentication scheme in which one login grantes access to several, unrelated applications.Netlify Identity Management - a solution for user management in a Netlify app.Firebase Identity Management - a solution for user management in a Firebase app.XKCD: Password Strength - A web comic about how we make passwords hard for people but easy for computers.Follow the show! Our website is workingcode.dev and we're @WorkingCodePod on Twitter and Instagram. Or, leave us a message at (512) 253-2633 (that's 512-253-CODE). New episodes drop weekly on Wednesday.And, if you're feeling the love, support us on Patreon.
We reveal the shady password practices that are all too common at many utility providers, and hash out why salts are essential to proper password storage. Plus the benefits of passphrases, and what you can do to keep your local providers on the up and up.
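As a toy illustration of why salts matter (this is a hedged sketch using Python's standard library, not any provider's actual scheme): hashing the same password under two different random salts yields unrelated digests, so identical passwords never collapse to identical stored hashes, and precomputed rainbow tables become useless.

```python
import hashlib
import secrets

def hash_password(password: str, salt: bytes) -> bytes:
    # scrypt is one of the memory-hard KDFs recommended for password
    # storage; these parameters are illustrative, not a tuned setting.
    return hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)

password = "correct horse battery staple"
salt_a, salt_b = secrets.token_bytes(16), secrets.token_bytes(16)

# Same password, different salts -> completely different stored hashes,
# which also hides which users share a password.
assert hash_password(password, salt_a) != hash_password(password, salt_b)

# Verification re-derives the hash from the salt stored beside it.
stored_salt, stored_hash = salt_a, hash_password(password, salt_a)
assert hash_password(password, stored_salt) == stored_hash
```

The salt is not a secret; it only has to be unique per user so each password must be attacked individually.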
Second round of ZFS improvements in FreeBSD, Postgres finds that non-FreeBSD/non-Illumos systems are corrupting data, interview with Kevin Bowling, BSDCan list of talks, and cryptographic right answers. Headlines [Other big ZFS improvements you might have missed] 9075 Improve ZFS pool import/load process and corrupted pool recovery One of the first tasks during the pool load process is to parse a config provided from userland that describes what devices the pool is composed of. A vdev tree is generated from that config, and then all the vdevs are opened. The Meta Object Set (MOS) of the pool is accessed, and several metadata objects that are necessary to load the pool are read. The exact configuration of the pool is also stored inside the MOS. Since the configuration provided from userland is external and might not accurately describe the vdev tree of the pool at the txg that is being loaded, it cannot be relied upon to safely operate the pool. For that reason, the configuration in the MOS is read early on. In the past, the two configurations were compared together and if there was a mismatch then the load process was aborted and an error was returned. The latter was a good way to ensure a pool does not get corrupted, however it made the pool load process needlessly fragile in cases where the vdev configuration changed or the userland configuration was outdated. Since the MOS is stored in 3 copies, the configuration provided by userland doesn't have to be perfect in order to read its contents. Hence, a new approach has been adopted: The pool is first opened with the untrusted userland configuration just so that the real configuration can be read from the MOS. The trusted MOS configuration is then used to generate a new vdev tree and the pool is re-opened. When the pool is opened with an untrusted configuration, writes are disabled to avoid accidentally damaging it. 
During reads, some sanity checks are performed on block pointers to see if each DVA points to a known vdev; when the configuration is untrusted, instead of panicking the system if those checks fail we simply avoid issuing reads to the invalid DVAs. This new two-step pool load process now allows rewinding pools across vdev tree changes such as device replacement, addition, etc. Loading a pool from an external config file in a clustering environment also becomes much safer now since the pool will import even if the config is outdated and didn't, for instance, register a recent device addition. With this code in place, it became relatively easy to implement a long-sought-after feature: the ability to import a pool with missing top level (i.e. non-redundant) devices. Note that since this almost guarantees some loss of data, this feature is, for now, restricted to a read-only import. 7614 zfs device evacuation/removal This project allows top-level vdevs to be removed from the storage pool with “zpool remove”, reducing the total amount of storage in the pool. This operation copies all allocated regions of the device to be removed onto other devices, recording the mapping from old to new location. After the removal is complete, read and free operations to the removed (now “indirect”) vdev must be remapped and performed at the new location on disk. The indirect mapping table is kept in memory whenever the pool is loaded, so there is minimal performance overhead when doing operations on the indirect vdev. The size of the in-memory mapping table will be reduced when its entries become “obsolete” because they are no longer used by any block pointers in the pool. An entry becomes obsolete when all the blocks that use it are freed. An entry can also become obsolete when all the snapshots that reference it are deleted, and the block pointers that reference it have been “remapped” in all filesystems/zvols (and clones). 
Whenever an indirect block is written, all the block pointers in it will be “remapped” to their new (concrete) locations if possible. This process can be accelerated by using the “zfs remap” command to proactively rewrite all indirect blocks that reference indirect (removed) vdevs. Note that when a device is removed, we do not verify the checksum of the data that is copied. This makes the process much faster, but if it were used on redundant vdevs (i.e. mirror or raidz vdevs), it would be possible to copy the wrong data when we have the correct data on, e.g., the other side of the mirror. Therefore, mirror and raidz devices cannot be removed. You can use ‘zpool detach’ to downgrade a mirror to a single top-level device, so that you can then remove it. 7446 zpool create should support EFI system partition This one was not actually merged into FreeBSD, as it doesn’t apply currently, but I would like to switch the way FreeBSD deals with full disks to be closer to IllumOS to make automatic spare replacement a hands-off operation. Since we support whole-disk configuration for the boot pool, we also need whole-disk support with UEFI boot, and for this, zpool create should create an EFI system partition. I have borrowed the idea from Oracle Solaris, introducing a zpool create -B switch to provide a way to specify that a boot partition should be created. However, there is still a question: how big should the system partition be? For the time being, I have set the default size to 256MB (that's the minimum size for FAT32 with 4k blocks). To support a custom size, a set-on-creation "bootsize" property is created, so the custom size can be set as: zpool create -B -o bootsize=34MB rpool c0t0d0. After the pool is created, the "bootsize" property is read only. When the -B switch is not used, the bootsize defaults to 0 and is shown in zpool get output with no value. Older zfs/zpool implementations can ignore this property. 
**Digital Ocean** PostgreSQL developers find that every operating system other than FreeBSD and IllumOS might corrupt your data Some time ago I ran into an issue where a user encountered data corruption after a storage error. PostgreSQL played a part in that corruption by carrying on with checkpointing after what should've been a fatal error. TL;DR: Pg should PANIC on fsync() EIO return. Retrying fsync() is not OK, at least on Linux. When fsync() returns success it means "all writes since the last fsync have hit disk", but we assume it means "all writes since the last SUCCESSFUL fsync have hit disk". Pg wrote some blocks, which went to OS dirty buffers for writeback. Writeback failed due to an underlying storage error. The block I/O layer and XFS marked the writeback page as failed (AS_EIO), but had no way to tell the app about the failure. When Pg called fsync() on the FD during the next checkpoint, fsync() returned EIO because of the flagged page, to tell Pg that a previous async write failed. Pg treated the checkpoint as failed and didn't advance the redo start position in the control file. + All good so far. But then we retried the checkpoint, which retried the fsync(). The retry succeeded, because the prior fsync() *cleared the AS_EIO bad page flag*. The write never made it to disk, but we completed the checkpoint, and merrily carried on our way. Whoops, data loss. The clear-error-and-continue behaviour of fsync is not documented as far as I can tell. Nor is fsync() returning EIO, unless you have a very new Linux man-pages with the patch I wrote to add it. But from what I can see in the POSIX standard we are not given any guarantees about what happens on fsync() failure at all, so we're probably wrong to assume that retrying fsync() is safe. We already PANIC on fsync() failure for WAL segments. We just need to do the same for data forks at least for EIO. 
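The safe pattern the Postgres developers converged on (panic rather than retry) can be sketched like this; the function is a made-up stand-in for illustration, not Postgres code:

```python
import errno
import os
import sys

def checkpoint_fsync(fd: int) -> None:
    """fsync for checkpoint data: treat EIO as fatal, never retry.

    On Linux/OpenBSD/NetBSD, a failed fsync() can clear the kernel's
    per-file error flag, so a retry may report success even though the
    data never reached disk. The only safe reaction is to crash and
    recover from the write-ahead log.
    """
    try:
        os.fsync(fd)
    except OSError as e:
        if e.errno == errno.EIO:
            # Do NOT loop and call os.fsync(fd) again here: the second
            # call may return success while the dirty page was dropped.
            sys.exit("PANIC: fsync failed with EIO; recover from WAL")
        raise
```

On FreeBSD and Illumos the retry would actually be meaningful, since the page stays dirty, but code that must run everywhere has to assume the pessimistic semantics.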
This isn't as bad as it seems because AFAICS fsync only returns EIO in cases where we should be stopping the world anyway, and many FSes will do that for us. + Upon further looking, it turns out it is not just Linux brain damage: Apparently I was too optimistic. I had looked only at FreeBSD, which keeps the page around and dirties it so we can retry, but the other BSDs apparently don't (FreeBSD changed that in 1999). From what I can tell from the sources below, we have: Linux, OpenBSD, NetBSD: retrying fsync() after EIO lies FreeBSD, Illumos: retrying fsync() after EIO tells the truth + NetBSD PR to solve the issues + I/O errors are not reported back to fsync at all. + Write errors during genfs_putpages that fail for any reason other than ENOMEM cause the data to be semi-silently discarded. + It appears that UVM pages are marked clean when they're selected to be written out, not after the write succeeds; so there are a bunch of potential races when writes fail. + It appears that write errors for buffercache buffers are semi-silently discarded as well. Interview - Kevin Bowling: Senior Manager Engineering of LimeLight Networks - kbowling@llnw.com / @kevinbowling1 BR: How did you first get introduced to UNIX and BSD? AJ: What got you started contributing to an open source project? BR: What sorts of things have you worked on in the past? AJ: Tell us a bit about LimeLight and how they use FreeBSD. BR: What are the biggest advantages of FreeBSD for LimeLight? AJ: What could FreeBSD do better that would benefit LimeLight? BR: What has LimeLight given back to FreeBSD? AJ: What have you been working on more recently? BR: What do you find to be the most valuable part of open source? AJ: Where do you think the most improvement in open source is needed? BR: Tell us a bit about your computing history collection. What are your three favourite pieces? AJ: How do you keep motivated to work on Open Source? BR: What do you do for fun? AJ: Anything else you want to mention? 
News Roundup BSDCan 2018 Selected Talks The schedule for BSDCan is up. Lots of interesting content; we are looking forward to it. We hope to see lots of you there. Make sure you come introduce yourselves to us. Don’t be shy. Remember, if this is your first BSDCan, check out the newbie session on Thursday night. It’ll help you get to know a few people so you have someone you can ask for guidance. Also, check out the hallway track, the tables, and come to the hacker lounge. iXsystems Cryptographic Right Answers Crypto can be confusing. We all know we shouldn’t roll our own, but what should we use? Well, some developers have tried to answer that question over the years, keeping an updated list of “Right Answers”: 2009: Colin Percival of FreeBSD. 2015: Thomas H. Ptacek. 2018: Latacora, a consultancy that provides “Retained security teams for startups”, where Thomas Ptacek works. We’re less interested in empowering developers and a lot more pessimistic about the prospects of getting this stuff right. There are, in the literature and in the most sophisticated modern systems, “better” answers for many of these items. If you’re building for low-footprint embedded systems, you can use STROBE and a sound, modern, authenticated encryption stack entirely out of a single SHA-3-like sponge construction. You can use NOISE to build a secure transport protocol with its own AKE. Speaking of AKEs, there are, like, 30 different password AKEs you could choose from. But if you’re a developer and not a cryptography engineer, you shouldn’t do any of that. You should keep things simple and conventional and easy to analyze; “boring”, as the Google TLS people would say. Cryptographic Right Answers Encrypting Data Percival, 2009: AES-CTR with HMAC. Ptacek, 2015: (1) NaCl/libsodium’s default, (2) ChaCha20-Poly1305, or (3) AES-GCM. Latacora, 2018: KMS or XSalsa20+Poly1305 Symmetric key length Percival, 2009: Use 256-bit keys. Ptacek, 2015: Use 256-bit keys. 
Latacora, 2018: Go ahead and use 256-bit keys. Symmetric “Signatures” Percival, 2009: Use HMAC. Ptacek, 2015: Yep, use HMAC. Latacora, 2018: Still HMAC. Hashing algorithm Percival, 2009: Use SHA256 (SHA-2). Ptacek, 2015: Use SHA-2. Latacora, 2018: Still SHA-2. Random IDs Percival, 2009: Use 256-bit random numbers. Ptacek, 2015: Use 256-bit random numbers. Latacora, 2018: Use 256-bit random numbers. Password handling Percival, 2009: scrypt or PBKDF2. Ptacek, 2015: In order of preference, use scrypt, bcrypt, and then if nothing else is available PBKDF2. Latacora, 2018: In order of preference, use scrypt, argon2, bcrypt, and then if nothing else is available PBKDF2. Asymmetric encryption Percival, 2009: Use RSAES-OAEP with SHA256 and MGF1+SHA256 bzzrt pop ffssssssst exponent 65537. Ptacek, 2015: Use NaCl/libsodium (box / cryptobox). Latacora, 2018: Use NaCl/libsodium (box / cryptobox). Asymmetric signatures Percival, 2009: Use RSASSA-PSS with SHA256 then MGF1+SHA256 in tricolor systemic silicate orientation. Ptacek, 2015: Use NaCl, Ed25519, or RFC6979. Latacora, 2018: Use NaCl or Ed25519. Diffie-Hellman Percival, 2009: Operate over the 2048-bit Group #14 with a generator of 2. Ptacek, 2015: Probably still DH-2048, or NaCl. Latacora, 2018: Probably nothing. Or use Curve25519. Website security Percival, 2009: Use OpenSSL. Ptacek, 2015: Remains: OpenSSL, or BoringSSL if you can. Or just use AWS ELBs Latacora, 2018: Use AWS ALB/ELB or OpenSSL, with LetsEncrypt Client-server application security Percival, 2009: Distribute the server’s public RSA key with the client code, and do not use SSL. Ptacek, 2015: Use OpenSSL, or BoringSSL if you can. Or just use AWS ELBs Latacora, 2018: Use AWS ALB/ELB or OpenSSL, with LetsEncrypt Online backups Percival, 2009: Use Tarsnap. Ptacek, 2015: Use Tarsnap. Latacora, 2018: Store PMAC-SIV-encrypted arc files to S3 and save fingerprints of your backups to an ERC20-compatible blockchain. Just kidding. You should still use Tarsnap. 
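A few of the recurring answers above (HMAC for symmetric “signatures”, SHA-2 for hashing, 256-bit random IDs) map directly onto Python's standard library. This is only an illustration of those primitives with hypothetical inputs, not a complete design:

```python
import hashlib
import hmac
import secrets

# Symmetric "signature": HMAC-SHA256 over a message with a shared key.
key = secrets.token_bytes(32)
tag = hmac.new(key, b"launch the probe", hashlib.sha256).digest()

# Verify with a constant-time comparison (never plain ==), so the
# check doesn't leak how many tag bytes matched via timing.
expected = hmac.new(key, b"launch the probe", hashlib.sha256).digest()
assert hmac.compare_digest(tag, expected)

# Hashing: SHA-2 (here SHA-256).
digest = hashlib.sha256(b"some content").hexdigest()

# Random ID: 256 bits from a CSPRNG, not random.random().
random_id = secrets.token_hex(32)  # 32 bytes = 256 bits
assert len(random_id) == 64
```

The point of the "right answers" lists is exactly this: every line above is boring, built-in, and hard to misuse.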
Seriously though, use Tarsnap. Adding IPv6 to an existing server I am adding IPv6 addresses to each of my servers. This post assumes the server is up and running FreeBSD 11.1 and you already have an IPv6 address block. This does not cover the creation of an IPv6 tunnel, such as that provided by HE.net. This assumes native IPv6. In this post, I am using the IPv6 addresses from the IPv6 Address Prefix Reserved for Documentation (i.e. 2001:DB8::/32). You should use your own addresses. The IPv6 block I have been assigned is 2001:DB8:1001:8d00::/64. I added this to /etc/rc.conf: ipv6_activate_all_interfaces="YES" ipv6_defaultrouter="2001:DB8:1001:8d00::1" ifconfig_em1_ipv6="inet6 2001:DB8:1001:8d00:d389:119c:9b57:396b prefixlen 64 accept_rtadv" # ns1 The IPv6 address I have assigned to this host is completely random (within the given block). I found a random IPv6 address generator and used it to select d389:119c:9b57:396b as the address for this service within my address block. I don’t have the reference, but I did read that randomly selecting addresses within your block is a better approach. In order to invoke these changes without rebooting, I issued these commands: ``` [dan@tallboy:~] $ sudo ifconfig em1 inet6 2001:DB8:1001:8d00:d389:119c:9b57:396b prefixlen 64 accept_rtadv [dan@tallboy:~] $ [dan@tallboy:~] $ sudo route add -inet6 default 2001:DB8:1001:8d00::1 add net default: gateway 2001:DB8:1001:8d00::1 ``` If you do the route add first, you will get this error: [dan@tallboy:~] $ sudo route add -inet6 default 2001:DB8:1001:8d00::1 route: writing to routing socket: Network is unreachable add net default: gateway 2001:DB8:1001:8d00::1 fib 0: Network is unreachable Beastie Bits Ghost in the Shell – Part 1 Enabling compression on ZFS - a practical example Modern and secure DevOps on FreeBSD (Goran Mekić) LibreSSL 2.7.0 Released zrepl version 0.0.3 is out! 
[ZFS User Conference](http://zfs.datto.com/) Tarsnap Feedback/Questions Benjamin - BSD Personal Mailserver Warren - ZFS volume size limit (show #233) Lars - AFRINIC Brad - OpenZFS vs OracleZFS Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv
Password leaks have become an unfortunately common occurrence, with billions of records leaked in the past few years. In this work we develop an economic model to help predict how many user passwords a rational attacker will crack after such a breach. Our analysis indicates that currently deployed key stretching mechanisms such as PBKDF2 and BCRYPT provide insufficient protection for user passwords. In particular, our analysis shows that a rational attacker will crack 100% of passwords chosen from a Zipf's law distribution, and that Zipf's law accurately models the distribution of most user passwords. This dismal claim holds even if PBKDF2 is used with 100,000 hash iterations (10 times greater than NIST's minimum recommendation). On a positive note, our analysis demonstrates that memory hard functions (MHFs) such as SCRYPT or Argon2i can significantly reduce the damage of an offline attack. Based on our analysis we advocate that password hashing standards should be updated to require the use of memory hard functions for password hashing and disallow the use of non-memory hard functions such as BCRYPT or PBKDF2. About the speaker: Ben Harsha is a Computer Science Ph.D. student advised by Jeremiah Blocki. He currently works on password security and cryptographic hash functions. Before coming to Purdue in 2015 he also worked on distributed sensor networks at Argonne National Lab, as well as neural network optimization and computer science education methods at DePauw University. He has received a Masters from Purdue and a Bachelors from DePauw University.
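The economics can be sketched with a toy model (my simplification for illustration, not the paper's actual formulation): an attacker guessing passwords most-popular-first keeps going while the expected payoff of the next guess exceeds its hashing cost, so raising the per-guess cost cuts the attack off earlier.

```python
# Toy model: passwords follow a Zipf-like popularity distribution; a
# rational attacker tries them most-popular-first and stops when the
# expected payoff of the next guess drops below its cost.
# All numbers below are illustrative, not the paper's parameters.

def fraction_cracked(n_passwords: int, value_per_account: float,
                     cost_per_guess: float, s: float = 1.0) -> float:
    total = sum(1 / rank**s for rank in range(1, n_passwords + 1))
    cracked = 0.0
    for rank in range(1, n_passwords + 1):
        p = (1 / rank**s) / total  # probability of this password
        if p * value_per_account < cost_per_guess:
            break  # marginal guess no longer pays for itself
        cracked += p
    return cracked

cheap = fraction_cracked(10**6, value_per_account=4.0, cost_per_guess=1e-6)
pricey = fraction_cracked(10**6, value_per_account=4.0, cost_per_guess=1e-3)
# Raising the per-guess cost (e.g. via a memory-hard KDF) shrinks the
# fraction of accounts a rational attacker bothers to crack.
assert pricey < cheap
```

Iteration counts raise cost_per_guess only linearly, and attacker hardware keeps pushing it back down; memory-hard functions raise it much further per unit of defender time, which is the paper's argument for requiring them.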
This week on BSDNow, Allan and I are in Tokyo for AsiaBSDCon, but not to worry, we have a full episode lined up and ready to go. Hackathon reports This episode was brought to you by Headlines OpenBSD A2k17 hackathon reports a2k17 hackathon report: Patrick Wildt on the arm64 port (http://undeadly.org/cgi?action=article&sid=20170131101827) a2k17 hackathon report: Antoine Jacoutot on syspatch, rc.d improvements and more (http://undeadly.org/cgi?action=article&sid=20170203232049) a2k17 hackathon report: Martin Pieuchot on NET_LOCK and much more (http://undeadly.org/cgi?action=article&sid=20170127154356) a2k17 hackathon report: Kenneth Westerback on the hidden wonders of the build system, the network stack and more (http://undeadly.org/cgi?action=article&sid=20170127031836) a2k17 hackathon report: Bob Beck on LibreSSL progress and more (http://undeadly.org/cgi?action=article&sid=20170125225403) *** NetBSD is now reproducible (https://blog.netbsd.org/tnf/entry/netbsd_fully_reproducible_builds) Christos Zoulas posts to the NetBSD blog that he has completed his project to make fully reproducible NetBSD builds for amd64 and sparc64 I have been working on and off for almost a year trying to get reproducible builds (the same source tree always builds an identical cdrom) on NetBSD. I did not think at the time it would take as long or be so difficult, so I did not keep a log of all the changes I needed to make. I was also not the only one working on this. Other NetBSD developers have been making improvements for the past 6 years. I would like to acknowledge the NetBSD build system (aka build.sh) which is a fully portable cross-build system. This build system has given us a head-start in the reproducible builds work. I would also like to acknowledge the work done by the Debian folks who have provided a platform to run, test and analyze reproducible builds. 
Special mention to the diffoscope tool that gives an excellent overview of what's different between binary files, by finding out what they are (and if they are containers what they contain) and then running the appropriate formatter and diff program to show what's different for each file. Finally, other developers who have started, motivated and did a lot of work getting us here, like Joerg Sonnenberger and Thomas Klausner for their work on reproducible builds, and Todd Vierling and Luke Mewburn for their work on build.sh. Some of the stumbling blocks that were overcome: Timestamps Date/time/author embedded in source files Timezone sensitive code Directory order / build order Non-sanitized data stored in files Symbolic links / paths General tool inconsistencies: including gcc profiling; the fact that GPT partition tables are, by definition, globally unique each time they are created; and the iso9660 standard calling for a timestamp with a timezone. Toolchain Build information / tunables / environment. NetBSD now has a knob ‘MKREPRO'; if set to YES it sets a long list of variables to a consistent set of values. The post walks through how these problems were solved. Future Work: Vary more parameters and find more inconsistencies Verify that cross-building is reproducible Verify that unprivileged builds are reproducible Test on other platforms *** Features are faults redux (http://www.tedunangst.com/flak/post/features-are-faults-redux) From Ted Unangst Last week I gave a talk for the security class at Notre Dame based on features are faults but with some various commentary added. It was an exciting trip, with the opportunity to meet and talk with the computer vision group as well. Some other highlights include the Indiana skillet I had for breakfast, which came with pickles and was amazing, and explaining the many wonders of cvs to the Linux users group over lunch. After that came the talk, which went a little something like this. 
I got started with OpenBSD back about the same time I started college, although I had a slightly different perspective then. I was using OpenBSD because it included so many security features, therefore it must be the most secure system, right? For example, at some point I acquired a second computer. What's the first thing anybody does when they get a second computer? That's right, set up a kerberos domain. The idea that more is better was everywhere. This was also around the time that ipsec was getting its final touches, and everybody knew ipsec was going to be the most secure protocol ever because it had more options than any other secure transport. We'll revisit this in a bit. There's been a partial attitude adjustment since then, with more people recognizing that layering complexity doesn't result in more security. It's not an additive process. There's a whole talk there, about the perfect security that people can't or won't use. OpenBSD has definitely switched directions, including less code, not more. All the kerberos code was deleted a few years ago. Let's assume about one bug per 100 lines of code. That's probably on the low end. Now say your operating system has 100 million lines of code. If I've done the math correctly, that's literally a million bugs. So that's one reason to avoid adding features. But that's a solvable problem. If we pick the right language and the right compiler and the right tooling and with enough eyeballs and effort, we can fix all the bugs. We know how to build mostly correct software, we just don't care. As we add features to software, increasing its complexity, new unexpected behaviors start to emerge. What are the bounds? How many features can you add before craziness is inevitable? We can make some guesses. Less than a thousand for sure. Probably less than a hundred? Ten maybe? I'll argue the answer is quite possibly two. Interesting corollary is that it's impossible to have a program with exactly two features. 
Any program with two features has at least a third, but you don't know what it is My first example is a bug in the NetBSD ftp client. We had one feature, we added a second feature, and just like that we got a third misfeature (http://marc.info/?l=oss-security&m=141451507810253&w=2) Our story begins long ago. The origins of this bug are probably older than I am. In the dark times before the web, FTP sites used to be a pretty popular way of publishing files. You run an ftp client, connect to a remote site, and then you can browse the remote server somewhat like a local filesystem. List files, change directories, get files. Typically there would be a README file telling you what's what, but you don't need to download a copy to keep. Instead we can pipe the output to a program like more. Right there in the ftp client. No need to disconnect. Fast forward a few decades, and http is the new protocol of choice. http is a much less interactive protocol, but the ftp client has some handy features for batch downloads like progress bars, etc. So let's add http support to ftp. This works pretty well. Lots of code reused. http has one quirk however that ftp doesn't have. Redirects. The server can redirect the client to a different file. So now you're thinking, what happens if I download http://somefile and the server sends back 302 http://|reboot. ftp reconnects to the server, gets the 200, starts downloading and saves it to a file called |reboot. Except it doesn't. The function that saves files looks at the first character of the name and if it's a pipe, runs that command instead. And now you just rebooted your computer. Or worse. It's pretty obvious this is not the desired behavior, but where exactly did things go wrong? Arguably, all the pieces were working according to spec. In order to see this bug coming, you needed to know how the save function worked, you needed to know about redirects, and you needed to put all the implications together. 
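The dangerous pattern described above, a save routine that treats a leading "|" as "pipe through this command", looks roughly like this. This is a deliberately simplified sketch of the misfeature, not the actual NetBSD ftp source:

```python
import subprocess

def save_output(name: str, data: bytes) -> None:
    # The old ftp convention: "get README |more" pipes the file through
    # a pager instead of saving it. Harmless when the user types the
    # name at the prompt; disastrous when an HTTP redirect chooses it.
    if name.startswith("|"):
        subprocess.run(name[1:], input=data, shell=True)
    else:
        with open(name, "wb") as f:
            f.write(data)

# A redirect target of "|reboot" therefore executes `reboot`, with the
# attacker controlling both the command line and its stdin.
```

Each branch is correct in isolation; the vulnerability only exists in the combination of "filenames can be commands" and "servers can choose filenames", which is exactly the two-features-make-a-third point.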
The post then goes into a lot more detail about other issues. We just don't have time to cover it all today, but you should go read it, it is very enlightening What do we do about this? That's a tough question. It's much easier to poke fun at all the people who got things wrong. But we can try. My attitudes are shaped by experiences with the OpenBSD project, and I think we are doing a decent job of containing the complexity. Keep paring away at dependencies and reducing interactions. As a developer, saying “no” to all feature requests is actually very productive. It's so much faster than implementing the feature. Sometimes users complain, but I've often received later feedback from users that they'd come to appreciate the simplicity. There was a question about which of these vulnerabilities were found by researchers, as opposed to troublemakers. The answer was most, if not all of them, but it made me realize one additional point I hadn't mentioned. Unlike the prototypical buffer overflow vulnerability, exploiting features is very reliable. Exploiting something like shellshock or imagetragick requires no customized assembly and is independent of CPU, OS, version, stack alignment, malloc implementation, etc. Within about 24 hours of the initial release of shellshock, I had logs of people trying to exploit it. So unless you're on about a 12 hour patch cycle, you're going to have a bad time. reimplement zfsctl (.zfs) support (https://svnweb.freebsd.org/changeset/base/314048) avg@ (Andriy Gapon) has rewritten the .zfs support in FreeBSD The current code is written on top of GFS, a library with the generic support for writing filesystems, which was ported from Illumos. Because of significant differences between illumos VFS and FreeBSD VFS models, both the GFS and zfsctl code were heavily modified to work on FreeBSD. Nonetheless, they still contain quite a few ugly hacks and bugs. 
This is a reimplementation of the zfsctl code where the VFS-specific bits are written from scratch and only the code that interacts with the rest of ZFS is reused. Some ideas are picked from an independent work by Will (wca@). This work improves the overall quality of the ZFS port to FreeBSD. The code that provides support for ZFS .zfs/ directory functionality has been reimplemented. It is no longer possible to create a snapshot by mkdir under .zfs/snapshot/. That should be the only user-visible change. TIL: On IllumOS, you can create, rename, and destroy snapshots by manipulating the virtual directories in the .zfs/snapshots directory. If enough people would find this feature useful, maybe it could be implemented (rm and rename have never existed on FreeBSD). At the same time, it seems like rather a lot of work, when the ZFS command line tools work so well. Although wca@ pointed out on IRC, it can be useful to be able to create a snapshot over NFS, or SMB. Interview - Konrad Witaszczyk - def@freebsd.org (mailto:def@freebsd.org) Encrypted Kernel Crash Dumps *** News Roundup PBKDF2 Performance improvements on FreeBSD (https://svnweb.freebsd.org/changeset/base/313962) Joe Pixton did some research (https://jbp.io/2015/08/11/pbkdf2-performance-matters/) and found that, because of the way the spec is written, most PBKDF2 implementations are 2x slower than they need to be. Since PBKDF2 is used to derive a key used for encryption, this poses a problem. The attacker can derive a key twice as fast as you can. On FreeBSD, PBKDF2 was configured to derive a SHA512-HMAC key that would take approximately 2 seconds to calculate. That is 2 seconds on one core. So an attacker can calculate the same key in 1 second, and use many cores. Luckily, 1 second is still a long time for each brute force guess. On modern CPUs with the fast algorithm, you can do about 500,000 iterations of PBKDF2 per second (per core). Until a recent change, OpenBSD used only 8192 iterations. 
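The fast-vs-slow distinction is about the HMAC key schedule: PBKDF2's inner loop is U_i = HMAC(P, U_{i-1}), and slow implementations re-derive the HMAC key padding (the ipad/opad blocks) on every call, roughly doubling the work. A sketch of the fast approach, limited to a single output block for illustration (real code should just call hashlib.pbkdf2_hmac):

```python
import hashlib
import hmac

def pbkdf2_one_block(password: bytes, salt: bytes, iterations: int,
                     hash_name: str = "sha512") -> bytes:
    # Precompute the HMAC key schedule ONCE; slow implementations redo
    # it on every iteration, which is the ~2x overhead Joe identified.
    mac = hmac.new(password, digestmod=hash_name)

    def prf(data: bytes) -> bytes:
        h = mac.copy()  # cheap state copy, no re-keying
        h.update(data)
        return h.digest()

    u = prf(salt + b"\x00\x00\x00\x01")   # U_1, for block index 1
    t = int.from_bytes(u, "big")
    for _ in range(iterations - 1):
        u = prf(u)                         # U_i = PRF(P, U_{i-1})
        t ^= int.from_bytes(u, "big")      # T = U_1 xor ... xor U_c
    return t.to_bytes(mac.digest_size, "big")

# Matches the library implementation for a one-block derived key.
assert pbkdf2_one_block(b"pw", b"salt", 1000) == \
    hashlib.pbkdf2_hmac("sha512", b"pw", b"salt", 1000)
```

Since the defender benchmarks iterations against wall-clock time, halving the per-iteration cost lets the same 2-second budget cover twice as many iterations, which is exactly why the FreeBSD change makes new GELI keys stronger.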
OpenBSD now uses a similar benchmark of ~2 seconds, and uses bcrypt instead of a SHA1-HMAC. Joe's research showed that the majority of implementations were done the ‘slow' way: calculating the initial part of the outer round on each iteration, instead of reusing the initial calculation over and over for each round. Joe submitted a patch to FreeBSD to solve this problem. That patch was improved, and a set of tests was added by jmg@, but then work stalled. I picked up the work, and fixed some merge conflicts in the patch that had cropped up based on work I had done that moved the HMAC code to a separate file. This work is now committed. With this change, all newly generated GELI keys will be approximately 2x as strong. Previously generated keys will take half as long to calculate, resulting in faster mounting of encrypted volumes. Users may choose to rekey, to generate a new key with the larger default number of iterations, using the geli(8) setkey command. Security of existing data is not compromised, as ~1 second per brute force attempt is still a very high threshold. If you are interested in the topic, I recommend the video of Joe's presentation from the Passwords15 conference in Las Vegas *** Quick How-To: Updating a screenshot in the TrueOS Handbook (https://www.trueos.org/blog/quick-updating-screenshot-trueos-handbook/) Docs writers, might be time to pay attention. This week we have a good walk-through of adding / updating new screenshots to the TrueOS Sphinx Documentation. For those who have not looked in the past, TrueOS and FreeNAS both have fantastic docs by the team over at iXsystems using Sphinx as their doc engine. Often we get questions from users asking what “they can do to help” but don't necessarily have programming skills to apply. The good news is that using Sphinx is relatively easy, and after learning some minimal rst syntax you can easily help fix, or even contribute to, new sections of the TrueOS (or FreeNAS) documentation. 
In this example, Tim takes us through the process of replacing an old, out-of-date screenshot in the handbook with the latest hotness. Starting with a .png file, he locates the old screenshot's name (“lumina-e.png”) and adds the updated version as “lumina-f.png”. With the file added to the tree, the relevant section of .rst code can be adjusted and the sphinx build run to verify that the output HTML looks correct. Using this method you can easily start to get involved with other aspects of documentation, and the next thing you know you'll be writing boot-loaders like Allan! *** Learn C Programming With 9 Excellent Open Source Books (https://www.ossblog.org/learn-c-programming-with-9-excellent-open-source-books/) Now that you've easily mastered all your documentation skills, you may be ready to take on a new challenge. (Come on, that boot-loader isn't going to write itself!) We wanted to point out some excellent resources to get you started on your journey into writing C. Before you think, “oh, more books to purchase”, wait: there's good news. These are the top 9 open-source books that you can download in digital form free of charge. Now we bet we've got your attention. We start the rundown with “The C Book”, by Mike Banahan, Declan Brady and Mark Doran, which will lay the groundwork with your introduction to the C language and its concepts. Next up, if you are going to do anything, do it with style, so take a read through “C Elements of Style”, which will make you popular at all the parties. (We can't vouch for that statement.) From here we have a book on using C to build your own minimal “lisp” interpreter, reference guides on GNU C, and some other excellent introduction / mastery books to help round out your programming skill set. Your C adventure awaits; hopefully these books can not only teach you good C, but also give you a proper foundation to feel confident when looking at bits of the FreeBSD world or kernel. 
*** Running a Linux VM on OpenBSD (http://eradman.com/posts/linuxvm-on-openbsd.html) Over the past few years we've talked a lot about virtualization, bhyve and OpenBSD's ‘vmm', but qemu hasn't gotten much attention. Today we have a blog post with details on how to deploy qemu to run Linux on top of an OpenBSD host system. The post starts by showing us how to first provision the storage for qemu, using the handy ‘qemu-img' command; in this example it only creates a 4GB disk, though you'll probably want more for real-world usage. Next up, the qemu command will be run; pay attention to the particular flags for network and memory setup. You'll probably want to bump it up past the recommended 256M of memory. Networking is always the fun part, as the author describes his intended setup: “I want OpenBSD and Debian to be able to obtain an IP via DHCP on their wired interfaces and I don't want external networking required for an NFS share to the VM. To accomplish this I need two interfaces since dhclient will erase any other IPv4 addresses already assigned. We can't assign an address directly to the bridge, but we can configure a virtual Ethernet device and add it.” The setup for this portion involves touching a few more files, but isn't all that painful. Some “pf” rules to enable NAT, and a dhcpd setup to assign a “fixed” IP to the VM, will get us going, along with some additional details on how to configure the networking inside the Debian VM. Once those steps are completed you should be able to mount NFS and share data from the host to the VM painlessly. 
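As a rough illustration of the two-interface approach the author describes, the host side of such a setup on OpenBSD could look something like the following. The interface names, addresses, and exact rules here are illustrative assumptions, not taken from the post:

```shell
# /etc/hostname.vether0 -- virtual Ethernet device holding the host-side
# address (we can't put an address on the bridge itself)
inet 10.0.0.1 255.255.255.0

# /etc/hostname.bridge0 -- bridge tying the virtual NIC to qemu's tap interface
add vether0
add tap0
up

# /etc/pf.conf fragment -- NAT the VM's traffic out the external interface
match out on egress from vether0:network to any nat-to (egress)
```

dhcpd would then be pointed at vether0 to hand the VM its "fixed" lease.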
Beastie Bits MacObserver: Interview with Open Source Developer & Former Apple Manager Jordan Hubbard (https://www.macobserver.com/podcasts/background-mode-jordan-hubbard/) 2016 Google Summer of Code Mentor Summit and MeetBSD Trip Report: Gavin Atkinson (https://www.freebsdfoundation.org/blog/2016-google-summer-of-code-mentor-summit-and-meetbsd-trip-report-gavin-atkinson/) Feedback/Questions Joe - BGP / Vultr Followup (http://pastebin.com/TNyHBYwT) Ryan Moreno asks about Laptops (http://pastebin.com/s4Ypezsz) ***
This week on the show, we've got all sorts of goodies to discuss. Starting with vmm, vkernels, Raspberry Pi and much more! Some iX folks are visiting from out of town. This episode was brought to you by Headlines vmm enabled (http://undeadly.org/cgi?action=article&sid=20161012092516&mode=flat&count=15) VMM, the OpenBSD hypervisor, has been imported into current It has similar hardware requirements to bhyve, an Intel Nehalem or newer CPU with the hardware virtualization features enabled in the BIOS AMD support has not been started yet OpenBSD is the only supported guest It would be interesting to hear from viewers that have tried it, and hear how it does, and what still needs more work *** vkernels go COW (http://lists.dragonflybsd.org/pipermail/commits/2016-October/624675.html) The DragonflyBSD feature, vkernels, has gained new Copy-On-Write functionality Disk images can now be mounted RO or RW, but changes will not be written back to the image file This allows multiple vkernels to share the same disk image “Note that when the vkernel operates on an image in this mode, modifications will eat up system memory and swap, so the user should be cognizant of the use-case. Still, the flexibility of being able to mount the image R+W should not be underestimated.” This is another feature we'd love to hear from viewers that have tried it out. *** Basic support for the RPI3 has landed in FreeBSD-CURRENT (https://wiki.freebsd.org/arm64/rpi3) The long awaited bits to allow FreeBSD to boot on the Raspberry Pi 3 have landed There is still a bit of work to be done, some of which is mentioned in Oleksandr's blog post: Raspberry Pi support in HEAD (https://kernelnomicon.org/?p=690) “Raspberry Pi 3 limited support was committed to HEAD. Most of drivers should work with upstream dtb, RNG requires attention because callout mode seems to be broken and there is no IRQ in upstream device tree file. SMP is work in progress. 
There are some compatibility issues with the VCHIQ driver due to some assumptions that are true only for the ARM platform.” This is exciting work. No HDMI support (yet), so if you plan on trying this out make sure you have your USB->Serial adapter cables ready to go. Full instructions to get started with your RPI 3 can be found on the FreeBSD Wiki (https://wiki.freebsd.org/arm64/rpi3) Relatively soon, I imagine there will be a RaspBSD build for the RPI3 to make it easier to get started Eventually there will be official FreeBSD images as well *** OpenBSD switches softraid crypto from PKCS5 PBKDF2 to bcrypt PBKDF. (https://github.com/openbsd/src/commit/2ba69c71e92471fe05f305bfa35aeac543ebec1f) After the discussion a few weeks ago, when a user wrote a tool to brute force their forgotten OpenBSD Full Disk Encryption password (from a password list of possible variations of their password), it was discovered that OpenBSD defaulted to using just 8192 iterations of PBKDF2 (PKCS #5) with a SHA1-HMAC for the key derivation function The number of iterations can be manually controlled by the user when creating the softraid volume By comparison, FreeBSD's GELI full disk encryption used a benchmark to pick a number of iterations that would take more than 2 seconds to complete, generally resulting in a number of iterations over 1 million on most modern hardware. The algorithm is based on a SHA512-HMAC However, inefficiency in the implementation of PBKDF2 in GELI resulted in the implementation being 50% slower than some other implementations, meaning the effective security was only about 1 second per attempt, rather than the intended 2 seconds. The improved PBKDF2 implementation is currently out for review. This commit to OpenBSD changes the default key derivation function to be based on bcrypt and a SHA512-HMAC instead. 
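The benchmark-at-creation-time idea (pick however many iterations this machine needs to spend a target amount of wall-clock time, rather than hard-coding a count) can be sketched in a few lines of Python. The function name and probe size are illustrative; GELI does this in C inside the geli tooling:

```python
import hashlib
import time

def calibrate_iterations(target_seconds: float, probe: int = 50_000) -> int:
    """Time a fixed probe of PBKDF2-HMAC-SHA512 iterations on this
    machine, then scale the count so a full key derivation takes
    roughly target_seconds of wall-clock time."""
    start = time.perf_counter()
    hashlib.pbkdf2_hmac("sha512", b"benchmark-password", b"salt", probe)
    elapsed = time.perf_counter() - start
    # Iteration cost is linear, so scale proportionally;
    # never return fewer iterations than the probe itself.
    return max(probe, int(probe * target_seconds / elapsed))
```

On hardware that can do about 500,000 iterations per second, asking for 2 seconds lands around a million iterations, in line with the numbers quoted above.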
OpenBSD also now uses a benchmark to pick a number of iterations that will take approximately 1 second per attempt “One weakness of PBKDF2 is that while its number of iterations can be adjusted to make it take an arbitrarily large amount of computing time, it can be implemented with a small circuit and very little RAM, which makes brute-force attacks using application-specific integrated circuits or graphics processing units relatively cheap. The bcrypt key derivation function requires a larger amount of RAM (but still not tunable separately, i.e. fixed for a given amount of CPU time) and is slightly stronger against such attacks, while the more modern scrypt key derivation function can use arbitrarily large amounts of memory and is therefore more resistant to ASIC and GPU attacks.” The upgrade to bcrypt, which has proven to be quite resistant to cracking by GPUs, is a significant enhancement to OpenBSD's encrypted softraid feature *** Interview - Josh Paetzel - email@email (mailto:email@email) / @bsdunix4ever (https://twitter.com/bsdunix4ever) MeetBSD ZFS Panel FreeNAS - graceful network reload Pxeboot *** News Roundup EC2's most dangerous feature (http://www.daemonology.net/blog/2016-10-09-EC2s-most-dangerous-feature.html) Colin Percival, FreeBSD's unofficial EC2 maintainer, has published a blog post about “EC2's most dangerous feature” “As a FreeBSD developer — and someone who writes in C — I believe strongly in the idea of "tools, not policy". If you want to shoot yourself in the foot, I'll help you deliver the bullet to your foot as efficiently and reliably as possible. UNIX has always been built around the idea that systems administrators are better equipped to figure out what they want than the developers of the OS, and it's almost impossible to prevent foot-shooting without also limiting useful functionality. 
The most powerful tools are inevitably dangerous, and often the best solution is to simply ensure that they come with sufficient warning labels attached; but occasionally I see tools which not only lack important warning labels, but are also designed in a way which makes them far more dangerous than necessary. Such a case is IAM Roles for Amazon EC2.” “A review for readers unfamiliar with this feature: Amazon IAM (Identity and Access Management) is a service which allows for the creation of access credentials which are limited in scope; for example, you can have keys which can read objects from Amazon S3 but cannot write any objects. IAM Roles for EC2 are a mechanism for automatically creating such credentials and distributing them to EC2 instances; you specify a policy and launch an EC2 instance with that Role attached, and magic happens making time-limited credentials available via the EC2 instance metadata. This simplifies the task of creating and distributing credentials and is very convenient; I use it in my FreeBSD AMI Builder AMI, for example. Despite being convenient, there are two rather scary problems with this feature which severely limit the situations where I'd recommend using it.” “The first problem is one of configuration: The language used to specify IAM Policies is not sufficient to allow for EC2 instances to be properly limited in their powers. For example, suppose you want to allow EC2 instances to create, attach, detach, and delete Elastic Block Store volumes automatically — useful if you want to have filesystems automatically scaling up and down depending on the amount of data which they contain. 
The obvious way to do this would be to "tag" the volumes belonging to an EC2 instance and provide a Role which can only act on volumes tagged to the instance where the Role was provided; while the second part of this (limiting actions to tagged volumes) seems to be possible, there is no way to require specific API call parameters on all permitted CreateVolume calls, as would be necessary to require that a tag is applied to any new volumes being created by the instance.” “As problematic as the configuration is, a far larger problem with IAM Roles for Amazon EC2 is access control — or, to be more precise, the lack thereof. As I mentioned earlier, IAM Role credentials are exposed to EC2 instances via the EC2 instance metadata system: In other words, they're available from http://169.254.169.254/. (I presume that the "EC2ws" HTTP server which responds is running in another Xen domain on the same physical hardware, but that implementation detail is unimportant.) This makes the credentials easy for programs to obtain... unfortunately, too easy for programs to obtain. UNIX is designed as a multi-user operating system, with multiple users and groups and permission flags and often even more sophisticated ACLs — but there are very few systems which control the ability to make outgoing HTTP requests. We write software which relies on privilege separation to reduce the likelihood that a bug will result in a full system compromise; but if a process which is running as user nobody and chrooted into /var/empty is still able to fetch AWS keys which can read every one of the objects you have stored in S3, do you really have any meaningful privilege separation? To borrow a phrase from Ted Unangst, the way that IAM Roles expose credentials to EC2 instances makes them a very effective exploit mitigation mitigation technique.” “To make it worse, exposing credentials — and other metadata, for that matter — via HTTP is completely unnecessary. 
EC2 runs on Xen, which already has a perfectly good key-value data store for conveying metadata between the host and guest instances. It would be absolutely trivial for Amazon to place EC2 metadata, including IAM credentials, into XenStore; and almost as trivial for EC2 instances to expose XenStore as a filesystem to which standard UNIX permissions could be applied, providing IAM Role credentials with the full range of access control functionality which UNIX affords to files stored on disk. Of course, there is a lot of code out there which relies on fetching EC2 instance metadata over HTTP, and trivial or not it would still take time to write code for pushing EC2 metadata into XenStore and exposing it via a filesystem inside instances; so even if someone at AWS reads this blog post and immediately says "hey, we should fix this", I'm sure we'll be stuck with the problems in IAM Roles for years to come.” “So consider this a warning label: IAM Roles for EC2 may seem like a gun which you can use to efficiently and reliably shoot yourself in the foot; but in fact it's more like a gun which is difficult to aim and might be fired by someone on the other side of the room snapping his fingers. Handle with care!” *** Open-source storage that doesn't suck? Our man tries to break TrueNAS (http://www.theregister.co.uk/2016/10/18/truenas_review/) The storage reviewer over at TheRegister got their hands on a TrueNAS and gave it a try “Data storage is difficult, and ZFS-based storage doubly so. There's a lot of money to be made if you can do storage right, so it's uncommon to see a storage company with an open-source model deliver storage that doesn't suck.” “To become TrueNAS, FreeNAS's code is feature-frozen and tested rigorously. Bleeding-edge development continues with FreeNAS, and FreeNAS comes with far fewer guarantees than does TrueNAS.” “iXsystems provided a Z20 hybrid storage array. The Z20 is a dual-controller, SAS-based, high-availability, hybrid storage array. 
The testing unit came with 2x 10GbE NICs per controller and retails at around US$24k. The unit shipped with 10x 300GB 10k RPM magnetic hard drives, an 8GB ZIL SSD and a 200GB L2ARC SSD. 50GiB of RAM was dedicated to the ARC by the system's autotune feature.” The review tests the performance of the TrueNAS, which they found acceptable for spinning rust, and they also tested the HA features While the look of the UI didn't impress them, the functionality and built-in help did “The UI contains truly excellent mouseover tooltips that provide detailed information and rationale for almost every setting. An experienced sysadmin will be able to navigate the TrueNAS UI with ease. An experienced storage admin who knows what all the terms mean won't have to refer to a wiki or the more traditional help manual, but the same can't be said for the uninitiated.” “After a lot of testing, I'd trust my data to the TrueNAS. I am convinced that it will ensure the availability of my data to within any reasonable test, and do so as a high availability solution. That's more than I can say for a lot of storage out there.” “iXsystems produce a storage array that is decent enough to entice away some existing users of the likes of EMC, NetApp, Dell or HP. Honestly, that's not something I thought possible going into this review. It's a nice surprise.” *** OpenBSD now officially on GitHub (https://github.com/openbsd) Got a couple of new OpenBSD items to bring to your attention today. First up, for those who didn't know, OpenBSD development has (always?) taken place in CVS, similar to NetBSD and previously FreeBSD. However, today Git fans can rejoice, since there is now an “official” read-only GitHub mirror of their sources for public consumption. Since this is read-only, I will assume (unless told otherwise) that pull requests and whatnot aren't taken. But this will come in handy for the “git-enabled” among us who need an easier way to check out OpenBSD sources. 
There is also not yet a guarantee about the stability of the exporter. If you base a fork on the GitHub branch, and something goes wrong with the exporter, the data may be re-exported with different hashes, making it difficult to rebase your fork. How to install LibertyBSD or OpenBSD on a libreboot system (https://libreboot.org/docs/bsd/openbsd.html) For the second part of our OpenBSD stories, we have a pretty detailed document posted over at LibreBoot.org with details on how to bootstrap OpenBSD (or LibertyBSD) using their open-source BIOS replacement. We've covered blog posts and other tidbits about this process in the past, but this seems to be the definitive version (so far) to reference. Some of the niceties include instructions on getting the USB image formatted not just on OpenBSD, but also FreeBSD, Linux and NetBSD. Instructions on how to boot without full-disk-encryption are provided, with a mention that Libreboot + GRUB does not support FDE yet. I would imagine somebody will need to port over the OpenBSD FDE crypto support to GRUB, as was done with GELI at some point. Lastly, some instructions on how to configure GRUB and troubleshoot if something goes wrong help round out this story. Give it a whirl, and let us know if you run into issues. Editorial Aside - Personally, I find the libreboot stuff fascinating. It really is one of the last areas where we don't have full open-source control of our systems. With the growth of EFI, it seems we rely on a closed-source binary / mini-OS of sorts just to boot our open-source solutions, which needs to be addressed. Hats off to the LibreBoot folks for taking on this important challenge. 
*** FreeNAS 9.10 – LAGG & VLAN Overview (https://www.youtube.com/watch?v=wqSH_uQSArQ) A video tutorial on FreeNAS's official YouTube Channel Covers the advanced networking features, Link Aggregation and VLANs Covers what the features do, and in the case of LAGG, how each of the modes works and when you might want to use it *** Beastie Bits Remote BSD Developer Position is up for grabs (https://www.cybercoders.com/bsd-developer-remote-job-305206) Isilon is hiring for a FreeBSD Security position (https://twitter.com/jeamland/status/785965716717441024) Google has ported the networked real-time multi-player BSD game hunt to the web (https://github.com/google/web-bsd-hunt) A bunch of OpenBSD Tips (http://www.vincentdelft.be) The last OpenBSD 6.0 Limited Edition CD has sold (http://www.ebay.com/itm/-/332000602939) Dan spots George Neville-Neil on TV at the Airport (https://twitter.com/DLangille/status/788477000876892162) gnn on CNN (https://www.youtube.com/watch?v=h7zlxgtBA6o) SoloBSD releases v6.0 built upon OpenBSD (http://solobsd.blogspot.com/2016/10/release-solobsd-60-openbsd-edition.html) Upcoming KnoxBug looks at PacBSD - Oct 25th (http://knoxbug.org/content/2016-10-25) Feedback/Questions Morgan - Ports and Packages (http://pastebin.com/Kr9ykKTu) Mat - ZFS Memory (http://pastebin.com/EwpTpp6D) Thomas - FreeBSD Path Length (http://pastebin.com/HYMPtfjz) Cy - OpenBSD and NetHogs (http://pastebin.com/vGxZHMWE) Lars - Editors (http://pastebin.com/5FMz116T) ***
The usefulness of source code management (SCM) systems and the PBKDF2 algorithm.
Black Hat Briefings, Las Vegas 2006 [Video] Presentations from the security conference
This talk will go in-depth into methods for breaking crypto faster using FPGAs. FPGAs are chips that have millions of gates that can be programmed and connected arbitrarily to perform any sort of task. Their inherent structure provides a perfect environment for running a variety of crypto algorithms, and doing so at speeds much faster than a conventional PC. A handful of new FPGA crypto projects will be presented, demonstrating how many algorithms can be broken much faster than people really think, and in most cases, extremely inexpensively. Breaking WPA-PSK is possible with coWPAtty, but trying to do so onsite can be time-consuming and boring. All that waiting around for things to be computed each and every time we want to check for dumb and default passwords. Well, we're impatient and like to know the password NOW! Josh Wright has recently added support for pre-computed tables to coWPAtty, but how do you create a good set of tables and not have it take 70 billion years? David Hulton has implemented the time-consuming PBKDF2 step of WPA-PSK on FPGA hardware and optimized it to run at blazing speeds specifically for cracking WPA-PSK and generating tables with coWPAtty. What about those lusers that still use WEP? Have you only collected a few hundred interesting packets and don't want to wait till the universe implodes to crack your neighbor's key? Johnycsh and David Hulton have come up with a method to offload cracking keyspaces to an FPGA, increasing the speed considerably. CheapCrack is a work in progress which follows in the footsteps of The Electronic Frontier Foundation's 1998 DES cracking machine, DeepCrack. In the intervening eight years since DeepCrack was designed, built, deployed, and won the RSA DES challenge, FPGAs have gotten smaller, faster, and cheaper. 
We wondered how feasible it would be to shrink the cost of building a DES cracking machine from $210,000 1998 dollars to around $10,000 2006 dollars, or less, using COTS FPGA hardware, tools, and HDL cores instead of custom fabricated ASICs. We'll show CheapCrack progress to date, and give estimates on how far from completion we are, as well as a live demo. Lanman hashes have been broken for a long time and everyone knows it's faster to do a Rainbow table lookup than to go through the whole keyspace. On many PCs it takes years to go through the entire typeable range, but on a small cluster of FPGAs, you can brute force that range faster than doing a Rainbow table lookup. The code for this will be briefly presented and Chipper v2.0 will be released with many new features. David Hulton and Dan Moniz will also discuss some of the aspects of algorithms that make them suitable for acceleration on FPGAs and the reasons why they run faster in hardware.
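For context on why pre-computed WPA-PSK tables are so expensive and SSID-specific: WPA derives the pairwise master key as PBKDF2-HMAC-SHA1(passphrase, SSID, 4096 iterations, 32 bytes), so the SSID acts as the salt, and a table computed for one network name is useless for any other. A quick Python illustration (the passphrase and SSIDs here are made up):

```python
import hashlib

def wpa_psk_pmk(passphrase: str, ssid: str) -> bytes:
    """Derive the WPA-PSK pairwise master key: 4096 iterations of
    PBKDF2-HMAC-SHA1 with the SSID as salt, yielding a 256-bit key."""
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(),
                               4096, 32)

# The same passphrase produces different keys on different SSIDs,
# which is why coWPAtty's tables must be generated per SSID.
pmk_a = wpa_psk_pmk("correct horse battery", "linksys")
pmk_b = wpa_psk_pmk("correct horse battery", "netgear")
```

That 4096-iteration cost per candidate passphrase is exactly the step the talk describes moving onto FPGA hardware.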
Black Hat Briefings, Las Vegas 2006 [Audio] Presentations from the security conference
"This talk will go in-depth into methods for breaking crypto faster using FPGAs. FPGA's are chips that have millions of gates that can be programmed and connected arbitrarily to perform any sort of task. Their inherent structure provides a perfect environment for running a variety of crypto algorithms and do so at speeds much faster than a conventional PC. A handful of new FPGA crypto projects will be presented and will demonstrate how many algorithms can be broken much faster than people really think, and in most cases, extremely inexpensively. Breaking WPA-PSK is possible with coWPAtty, but trying to do so onsite can be time consuming and boring. All that waiting around for things to be computed each and every time we want to check for dumb and default passwords. Well, we're impatient and like to know the password NOW! Josh Wright has recently added support for pre-computed tables to coWPAtty-but how do you create a good set of tables and not have it take 70 billion years? David Hulton has implemented the time consuming PBKDF2 step of WPA-PSK on FPGA hardware and optimized it to run at blazing speeds specifically for cracking WPA-PSK and generating tables with coWPAtty. What about those lusers that still use WEP? Have you only collected a few hundred interesting packets and don't want to wait till the universe implodes to crack your neighbor’s key? Johnycsh and David Hulton have come up with a method to offload cracking keyspaces to an FPGA and increasing the speed considerably. CheapCrack is a work in progress which follows in the footsteps of The Electronic Frontier Foundation's 1998 DES cracking machine, DeepCrack. In the intervening eight years since DeepCrack was designed, built, deployed, and won the RSA DES challenge, FPGAs have gotten smaller, faster, and cheaper. 
We wondered how feasible it would be to shrink the cost of building a DES cracking machine from $210,000 1998 dollars to around $10,000 2006 dollars, or less, using COTS FPGA hardware, tools, and HDL cores instead of custom fabricated ASICs. We'll show CheapCrack progress to date, and give estimates on how far from completion we are, as well as a live demo. Lanman hashes have been broken for a long time and everyone knows it's faster to do a Rainbow table lookup than go through the whole keyspace. On many PC's it takes years to go through the entire typeable range, but on a small cluster of FPGAs, you can brute force that range faster than doing a Rainbow table lookup. The code for this will be briefly presented and Chipper v2.0 will be released with many new features. David Hulton and Dan Moniz will also discuss some of the aspects of algorithms that make them suitable for acceleration on FPGAs and the reasons why they run faster in hardware."