We have three web applications (standard Spring MVC + Hibernate) running in a JBoss 6.1 server. All three applications share a common authentication method, which is compiled as a JAR and included in each WAR file. Our authentication method uses org.springframework.security.crypto.bcrypt.BCrypt to hash user passwords, please see below:
hashedPassword.equals(BCrypt.hashpw(plainTextPassword, salt));
JBoss startup options:

set "JAVA_OPTS=-Xms2048m -Xmx4096m -XX:PermSize=256m -XX:MaxPermSize=512m -XX:+HeapDumpOnOutOfMemoryError -verbosegc -XX:+PrintGCTimeStamps -XX:+PrintGCDetails -Xloggc:gc.txt -XX:+UseParallelOldGC"
Problem: After the server is restarted, `BCrypt.hashpw` takes about 100 ms to hash a password. However, after some time (there is no discernible pattern) the `BCrypt.hashpw` call suddenly spikes from 100 ms to tens of seconds. There is no obvious reason for this.
More information:

- Hibernate version: 4.2.4.Final
- Spring version: 4.0.5.RELEASE
- Spring Security version: 3.2.4.RELEASE
Has anyone else seen this problem before?
One possible explanation is that the `SeedGenerator` of `SecureRandom` is causing the delays. Spring's BCrypt implementation uses `SecureRandom`, which in turn uses a `SeedGenerator`, which may read from the blocking `/dev/random`. Here is a good description of those classes. That bug report also describes performance problems in BCrypt and traces them back to the seed generator, showing full stack traces. The BCrypt implementation there is different, but the `SecureRandom` layer underneath it must be identical to the Spring implementation. Their solution was to reduce BCrypt's reseed frequency.

Note that switching the seed source to urandom only works on JDK 8 or above: we faced this for a long time, and changing to urandom on 1.7 didn't help, but on 1.8 it did solve the issue.
The problem is that `/dev/random` sometimes blocks, and when it does, the blocking appears to be random :) The more confusing thing is that while trying to test how it works you'll run up against the Observer Effect: in the act of observing random behavior you're generating entropy, which can lead to a ton of confusion (my results won't be the same as yours, and so on). This is also why it looks like there's no pattern.

I'll demonstrate the problem, show you how to recreate it (within reason) on your own servers so you can test solutions, and provide a couple of fixes. Note that this is on Linux, but the same problem will happen on any system that requires entropy to generate random numbers and runs out of it.
On Linux, `/dev/random` is a stream of random bytes. As you read from the stream, you deplete the available entropy pool; when it drops below a certain point, reads from `/dev/random` block. You can see the available entropy by reading `/proc/sys/kernel/random/entropy_avail`. If you run a bash script that consumes `/dev/random` while monitoring `entropy_avail`, you'll notice that the available entropy dips dramatically as the script consumes it. This should also give you a hint on how to recreate the problem on your own servers: run such a script to drain the available entropy, and the problem will manifest itself.
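The original post's bash script isn't reproduced here; a minimal sketch that drains the pool while printing the remaining entropy estimate (any loop that reads `/dev/random` has the same effect) looks like this:

```shell
#!/bin/sh
# Drain the kernel entropy pool by repeatedly reading /dev/random,
# printing the kernel's remaining entropy estimate after each read.
for i in $(seq 1 10); do
    head -c 64 /dev/random > /dev/null
    cat /proc/sys/kernel/random/entropy_avail
done
```

(On kernels 5.6 and later, `/dev/random` no longer blocks, so the dramatic dip is only visible on older kernels.)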
If you want to see just how many bytes per second your system is producing, you can use `pv` to measure it, e.g. `pv /dev/random > /dev/null`. If you leave `pv` running, it has an effect of its own: it is consuming the random byte stream, which means other services might start to block. (Note that `pv` is also displaying its output, so it might be adding entropy back to the system :).) On systems with little or no entropy, `pv /dev/random` will seem glacially slow. I've also experienced that VMs sometimes have major issues generating entropy. To recreate the issue, use the following class...
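The original `RandTest` class isn't shown; a minimal stand-in (my sketch, using `java.security.SecureRandom` directly rather than the downloaded BCrypt source, since the blocking happens in the seeding layer either way) could look like this:

```java
import java.security.SecureRandom;

// Times how long SecureRandom takes to produce seed bytes.
// generateSeed() reads from the configured seed source, which is
// /dev/random by default on older JDKs, so it stalls when the
// entropy pool runs dry.
public class RandTest {
    public static void main(String[] args) {
        SecureRandom random = new SecureRandom();
        for (int i = 0; i < 5; i++) {
            long start = System.nanoTime();
            random.generateSeed(16); // may block waiting for entropy
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println("generateSeed(16) took " + elapsedMs + " ms");
        }
    }
}
```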
I downloaded the BCrypt source to a local directory, then compiled and ran the test class with `javac RandTest.java` followed by `java RandTest`.
If you then run the bash script from earlier while running `RandTest`, you'll see long pauses where the JVM is blocking, waiting for more entropy. If you run it under `strace`, you'll see that the program is reading from `/dev/random`. Again, the problem with testing entropy is that you might be generating more of it while trying to test it, i.e. the Observer Effect.

Fixes
The first fix is to change the JVM's random seed source from `/dev/random` to `/dev/urandom`.
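One way to do that (a sketch of the standard JVM setting, not necessarily the original poster's exact change) is the `java.security.egd` system property, appended to the startup options; the `/dev/./urandom` spelling works around older JDKs ignoring a plain `/dev/urandom` value:

```shell
# Append the seed-source override to the existing JVM startup options.
JAVA_OPTS="$JAVA_OPTS -Djava.security.egd=file:/dev/./urandom"
echo "$JAVA_OPTS"
```

The equivalent permanent change is the `securerandom.source` entry in `$JAVA_HOME/jre/lib/security/java.security`. As noted above, this was unreliable on JDK 7 but behaves as expected on JDK 8.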
An alternative fix is to recreate the `/dev/random` device as a `/dev/urandom` device. You can find how to do this from the man page: instead of creating both devices the normal way, we delete one and fake it, so that `/dev/random` is now actually `/dev/urandom`.
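Concretely (a sketch based on the device numbers documented in the random(4) man page; this requires root and changes a system device, so treat it as configuration to apply deliberately):

```shell
# random(4) documents the devices as:
#   mknod -m 644 /dev/random c 1 8
#   mknod -m 644 /dev/urandom c 1 9
# Delete /dev/random and recreate it with urandom's minor number (9),
# so reads from /dev/random actually hit the non-blocking urandom pool.
rm /dev/random
mknod -m 644 /dev/random c 1 9
```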
The key thing to remember is that testing random data that draws entropy from the system you're testing on is difficult because of the Observer Effect.