After the LinkedIn password hash leak, I've been looking at our password hashing. We're using Django 1.4, which uses PBKDF2 — great, and a step up from the previous SHA1.
However, I'm curious how easily one could brute force that. I'm looking at our password complexity rules and wondering how long it'd take to crack (say) an 8-character password of lowercase ASCII letters.
This guide to cracking the LinkedIn password hashes has someone doing 430 million SHA1 hashes per second on a GPU: http://erratasec.blogspot.ie/2012/06/linkedin-vs-password-cracking.html What kind of speeds would you get for PBKDF2?
Does anyone have any rough/back-of-the-envelope/ballpark figures for how fast one could brute force PBKDF2?
PBKDF2 is tunable. You can always increase the cost value to reduce the number of possible guesses/sec for an attacker. My recommendation is to decide how fast you want it to run on your platform and set the cost value accordingly. For example, if you think 200 hashes/sec is an ideal performance/security tradeoff, then increase the cost value and test how long it takes to hash a few thousand test passwords until you are averaging about 200/sec.
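One way to do that calibration is to benchmark PBKDF2 on the actual target machine and double the iteration count until a single hash is slow enough. A minimal Python sketch — the function name, the SHA-256 choice, and the 200/sec target are all illustrative, not part of any standard:

```python
import hashlib
import os
import time

def calibrate_iterations(target_hashes_per_sec=200, start=10_000):
    """Roughly find a PBKDF2 iteration count that brings the hash rate
    down to the target on this machine. Hypothetical helper; timings
    will vary run to run, so treat the result as a ballpark."""
    password, salt = b"test-password", os.urandom(16)
    iterations = start
    while True:
        t0 = time.perf_counter()
        hashlib.pbkdf2_hmac("sha256", password, salt, iterations)
        elapsed = time.perf_counter() - t0
        if 1 / elapsed <= target_hashes_per_sec:
            return iterations
        iterations *= 2  # double until a single hash is slow enough
```

In practice you'd average over many hashes rather than timing a single one, but this shows the shape of the tuning loop.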
PBKDF2 also uses a "salt" which prevents attacks from scaling by forcing the attacker to separately attack each individual account. Combined with stretching (i.e. slowing down the algorithm), this makes it extremely difficult to recover more than a small number of accounts. An attacker can focus on one account and hope for the best or dedicate a set amount of time (an hour, a day) for each account and then move to the next one if he isn't successful.
With the LinkedIn hashes, people were able to crack more than a million hashes in a day or less. With PBKDF2 running at ~200 guesses/sec, it would take about 9 hours just to find out which of the 6.5M accounts used "linkedin" as their password. If you wanted to run a list of 1,000 common passwords against all of those hashes, it would take about a year.
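Those figures check out on the back of an envelope. In Python, using only the numbers from the paragraph above:

```python
# Back-of-the-envelope check of the LinkedIn numbers above.
accounts = 6_500_000   # leaked hashes
rate = 200             # PBKDF2 guesses/sec for the attacker

# Try one guess ("linkedin") against every hash:
one_guess_all_accounts = accounts / rate           # seconds

# Try a list of 1,000 common passwords against every hash:
wordlist = 1_000 * accounts / rate                 # seconds

hours = one_guess_all_accounts / 3600              # ~9 hours
years = wordlist / (3600 * 24 * 365)               # ~1 year
```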
There is a writeup over at agilebits from February that does the napkin calculations. The abridged version:
So, taking the erratasec article's benchmark of 430 million SHA-1 hashes per second on a GPU as a baseline, the agilebits article's metrics suggest PBKDF2 with 10k iterations would bring that down to around 100k tests per second.
Far from scientific, but gets us in the ballpark...
Specialised hardware, such as that used in bitcoin mining, can perform upwards of 50 billion hashes per second (as of early 2013; it's a moving target as hardware gets faster).
If you do 1,000 iterations of PBKDF2 then that will cut the attack down from 50 billion per second to 50 million per second. 10,000 iterations will be 5 million per second.
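That scaling is just division. A sketch, with the caveat that assuming attacker cost scales exactly linearly with iteration count is a simplification:

```python
RAW_RATE = 50_000_000_000  # raw hashes/sec on specialised hardware (early 2013)

def attacker_guesses_per_sec(iterations, raw_rate=RAW_RATE):
    """Guesses/sec an attacker gets against PBKDF2, assuming cost
    scales linearly with the iteration count (a simplification)."""
    return raw_rate / iterations

# 1,000 iterations -> 50 million guesses/sec
# 10,000 iterations -> 5 million guesses/sec
```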
A typical web server, however, will not be anywhere near that fast; it's going to be a lot slower for you. You need to do some testing on your own production server, and you may find 10,000 iterations is too slow.
So it's not really about how fast PBKDF2 can be brute forced; it's about how fast your server can verify a PBKDF2 password. You need to decide how long verification should take (half a second? a tenth of a second? a hundredth of a second?) and then adjust the number of PBKDF2 rounds to suit.
Also consider the strength of the passwords used by your customers. If they all have excellent passwords, then it hardly matters which hashing system you use. If they are all using terrible passwords, then PBKDF2 is not good enough to protect them; you would need something more exotic, such as the hardware salted hash Apple uses in the iPhone to give a 4-digit number at least some security (all hashing is forced through a dedicated, deliberately slow hardware chip, so if the data is moved to any other hardware it is impossible to decrypt).
Assuming the password is not in a dictionary (most passwords are), its strength is calculated by multiplying the size of the character alphabet by itself once for each character. So if a password uses letters (a 26-character alphabet) and digits (another 10 characters), then you have a 36-character alphabet, and if it's 6 characters long you multiply 36 by itself 6 times.
So a 6-character alphanumeric password is 36*36*36*36*36*36, or if you prefer: 36^6. That gives you about 2.1 billion possible passwords... generally we assume the hacker will find the real password about half way through, so call it 1 billion.
If you are using PBKDF2 and have 1,000 iterations, then a hacker with specialised hardware will guess 1 billion passwords in about 20 seconds. That's not very good security at all.
You can improve security either by using more rounds of PBKDF2 (which will slow your website down) or by convincing your users to pick better passwords. Simply by switching to 7 characters instead of 6, or by adding upper-case letters or even symbols, they will dramatically improve their security.
Wolfram Alpha is useful for doing the math:
((36 ^ 6) / 50 million) seconds
where 36 is the size of the alphabet, 6 is the length of the password, and 50 million is the number of guesses per second the attacker can make (50 million being a serious attacker going after PBKDF2 with 1,000 rounds). How many passwords are there in your database? If it takes 20 seconds to crack an individual password, will that be 30 days of math or 30 years? It depends how many customers you have.
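The same math works fine in Python for anyone who prefers a script to Wolfram Alpha. The function name is made up; the numbers are the ones used above:

```python
def crack_seconds(alphabet_size, length, guesses_per_sec):
    """Worst-case seconds to exhaust the whole keyspace.
    Halve the result for the average-case estimate."""
    return alphabet_size ** length / guesses_per_sec

# 6-character alphanumeric vs PBKDF2 @ 1,000 rounds on specialised hardware:
worst_case = crack_seconds(36, 6, 50_000_000)   # ~43.5 s to try all 36^6
average = worst_case / 2                        # ~20 s, matching the text
```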
Remember that bcrypt, scrypt, and PBKDF2/PKCS#5/RFC 2898 all support varying numbers of iterations; none is innately "faster" or "slower". Some take more RAM (PBKDF2 does not take much RAM), but that's about it.
As far as PBKDF2 iterations specifically, one popular GPU-based cracking program, on a kitted-out modern desktop with 8 GPUs, can manage about 1 million tries a second against WPA2. As WPA2 is essentially PBKDF2(HMAC-SHA1, passphrase, ssid, 4096, 256), that tells us that one machine can test somewhat over 4 billion HMAC-SHA1 PBKDF2 iterations per second. Ten such machines, of course, would test over 40 billion such iterations per second.
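The arithmetic behind "somewhat over 4 billion" is just the guess rate times WPA2's iteration count (variable names are illustrative; this ignores the extra iterations WPA2 spends deriving a 256-bit output):

```python
wpa2_guesses_per_sec = 1_000_000   # one 8-GPU rig against WPA2
wpa2_iterations = 4_096            # PBKDF2 iteration count in WPA2

# Underlying HMAC-SHA1 PBKDF2 iterations tested per second, per machine:
iterations_per_sec = wpa2_guesses_per_sec * wpa2_iterations  # ~4.1 billion
```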
The OWASP Password Storage Cheat Sheet (https://www.owasp.org/index.php/Password_Storage_Cheat_Sheet) recommends at least 64,000 iterations as of 2012, doubling every two years — i.e. 90,510 iterations in 2013.
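That schedule can be sketched as a one-liner. The helper is hypothetical, and the square-root-of-2-per-year interpolation is just one reading of "doubling every two years":

```python
def owasp_iterations(year, base=64_000, base_year=2012):
    """OWASP-style iteration target: double every two years from the
    2012 baseline, interpolating by sqrt(2) for odd years."""
    return round(base * 2 ** ((year - base_year) / 2))

# owasp_iterations(2012) -> 64,000
# owasp_iterations(2013) -> 90,510
```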