Disclaimer
I know that artificial benchmarks are evil. They can show results only for a very specific, narrow situation. I don't assume that one language is better than another because of some silly benchmark. However, I do wonder why the results are so different. Please see my questions at the bottom.
Math benchmark description
The benchmark is a simple math calculation: find pairs of prime numbers that differ by 6 (so-called sexy primes).
E.g. the sexy primes below 100 would be: (5 11) (7 13) (11 17) (13 19) (17 23) (23 29) (31 37) (37 43) (41 47) (47 53) (53 59) (61 67) (67 73) (73 79) (83 89) (97 103)
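For reference, a minimal Python sketch of the definition; note from the sample output that the bound applies to the smaller member of each pair (97 is below 100, its partner 103 is not):

```python
def is_prime(n):
    # Trial division up to sqrt(n); 0 and 1 are not prime.
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def sexy_primes_below(limit):
    # Pairs (p, p + 6) with the smaller prime p below the limit.
    return [(p, p + 6) for p in range(2, limit)
            if is_prime(p) and is_prime(p + 6)]

print(sexy_primes_below(100))
```

This reproduces the sixteen pairs listed above.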
Results table
In the table: calculation time in seconds. Everything except Factor was run in VirtualBox (Debian unstable amd64 guest, Windows 7 x64 host). CPU: AMD A4-3305M.
Sexy primes up to:         10k     20k     30k    100k
Bash                     58.00  200.00    [*1]    [*1]
C                         0.20    0.65    1.42   15.00
Clojure1.4                4.12    8.32   16.00  137.93
Clojure1.4 (optimized)    0.95    1.82    2.30   16.00
Factor                     n/a     n/a   15.00  180.00
Python2.7                 1.49    5.20   11.00  119.00
Ruby1.8                   5.10   18.32   40.48  377.00
Ruby1.9.3                 1.36    5.73   10.48  106.00
Scala2.9.2                0.93    1.41    2.73   20.84
Scala2.9.2 (optimized)    0.32    0.79    1.46   12.01
[*1] - I'm afraid to imagine how long that would take
Code listings
C:
#include <stdio.h>

int isprime(int x) {
    int i;
    for (i = 2; i < x; ++i)
        if (x % i == 0) return 0;
    return 1;
}

void findprimes(int m) {
    int i;
    for (i = 11; i < m; ++i)
        if (isprime(i) && isprime(i - 6))
            printf("%d %d\n", i - 6, i);
}

int main(void) {
    findprimes(10 * 1000);
    return 0;
}
Ruby:
def is_prime?(n)
  (2...n).all? { |m| n % m != 0 }
end

def sexy_primes(x)
  (9..x).map do |i|
    [i - 6, i]
  end.select do |pair|
    pair.all? { |p| is_prime? p }  # renamed to avoid shadowing the outer block variable
  end
end

a = Time.now
p sexy_primes(10 * 1000)
b = Time.now
puts "#{(b - a) * 1000} mils"
Scala:
def isPrime(n: Int) =
  (2 until n) forall { n % _ != 0 }

def sexyPrimes(n: Int) =
  (11 to n) map { i => List(i - 6, i) } filter { _ forall (isPrime(_)) }

val a = System.currentTimeMillis()
println(sexyPrimes(100 * 1000))
val b = System.currentTimeMillis()
println((b - a).toString + " mils")
Scala optimized isPrime (the same idea as in the Clojure optimization):
import scala.annotation.tailrec

@tailrec // Not required, but will warn if optimization doesn't work
def isPrime(n: Int, i: Int = 2): Boolean =
  if (i == n) true
  else if (n % i != 0) isPrime(n, i + 1)
  else false
Clojure:
(defn is-prime? [n]
  (every? #(> (mod n %) 0)
          (range 2 n)))

(defn sexy-primes [m]
  (for [x (range 11 (inc m))
        :let [z (list (- x 6) x)]
        :when (every? #(is-prime? %) z)]
    z))

(let [a (System/currentTimeMillis)]
  (println (sexy-primes (* 10 1000)))
  (let [b (System/currentTimeMillis)]
    (println (- b a) "mils")))
Clojure optimized is-prime?:
(defn ^:static is-prime? [^long n]
  (loop [i (long 2)]
    (if (= (rem n i) 0)
      false
      (if (>= (inc i) n) true (recur (inc i))))))
Python:
import time as time_

def is_prime(n):
    return all((n % j > 0) for j in xrange(2, n))

def primes_below(x):
    return [[j - 6, j] for j in xrange(9, x + 1) if is_prime(j) and is_prime(j - 6)]

a = int(round(time_.time() * 1000))
print(primes_below(10 * 1000))
b = int(round(time_.time() * 1000))
print(str(b - a) + " mils")
Factor:
MEMO:: prime? ( n -- ? )
n 1 - 2 [a,b] [ n swap mod 0 > ] all? ;
MEMO: sexyprimes ( n n -- r r )
[a,b] [ prime? ] filter [ 6 + ] map [ prime? ] filter dup [ 6 - ] map ;
5 10 1000 * sexyprimes . .
Bash (zsh):
#!/usr/bin/zsh
function prime {
  for (( i = 2; i < $1; i++ )); do
    if [[ $[$1%i] == 0 ]]; then
      echo 1
      exit
    fi
  done
  echo 0
}

function sexy-primes {
  for (( i = 9; i <= $1; i++ )); do
    j=$[i-6]
    if [[ $(prime $i) == 0 && $(prime $j) == 0 ]]; then
      echo $j $i
    fi
  done
}

sexy-primes 10000
Questions
- Why is Scala so fast? Is it because of static typing, or is it just using the JVM very efficiently?
- Why is there such a huge difference between Ruby and Python? I thought the two were not all that different; maybe my code is wrong. Please enlighten me! Thanks. UPD: Yes, there was an error in my code. Python and Ruby 1.9 are pretty much equal.
- Really impressive performance jump between the Ruby versions, by the way.
- Can I optimize the Clojure code by adding type declarations? Will it help?
Don't forget Fortran! (Mostly joking, but I would expect performance similar to C.) The statements with exclamation points are optional, but good style (! is a comment character in Fortran 90).

Here's a fast Clojure version, using the same basic algorithms:
It runs about 20x faster than your original on my machine. And here's a version that leverages the new reducers library in 1.5 (requires Java 7 or JSR 166):
This runs about 40x faster than your original. On my machine, that's 100k in 1.5 seconds.
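The answer's Clojure listings lean on the reducers library for parallelism; as a hedged illustration of the same shape of speedup in Python terms, pair an early-exit primality test with a parallel map over the candidate range (multiprocessing.Pool stands in for reducers here; the function names are mine, not the answer's):

```python
from multiprocessing import Pool

def is_prime(n):
    # Early-exit trial division, same idea as the optimized is-prime?.
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def sexy_pair(i):
    # The pair (i-6, i) when both ends are prime, else None.
    return (i - 6, i) if is_prime(i) and is_prime(i - 6) else None

def sexy_primes(m):
    # Fan the independent primality checks out over all cores,
    # analogous to what the reducers version does on the JVM.
    with Pool() as pool:
        return [p for p in pool.map(sexy_pair, range(11, m + 1)) if p]

if __name__ == "__main__":
    print(sexy_primes(100))
```

The checks are embarrassingly parallel, which is why both reducers and a process pool help: each candidate can be tested with no shared state.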
Just for the fun of it, here is a parallel Ruby version.
On my 1.8GHz Core i5 MacBook Air, the performance results are:
It looks like the JVM's JIT is giving Ruby a nice performance boost in the default case, while true multithreading helps JRuby perform 50% faster in the threaded case. What's more interesting is that JRuby 1.7 improves the JRuby 1.6 score by a healthy 17%!
The answer to your question #1 is: yes, the JVM is incredibly fast, and yes, static typing helps.
The JVM should be faster than C in the long run, possibly even faster than "normal" assembly language. Of course you can always hand-optimize assembly to beat anything, by doing manual runtime profiling and creating a separate version for each CPU; you just have to be amazingly good and knowledgeable.
The reasons for Java's speed are:
The JVM can analyze your code while it runs and hand-optimize it. For instance, if you had a method that could be statically analyzed at compile time to be a true function, and the JVM noticed that you were often calling it with the same parameters, it COULD actually eliminate the call completely and just inject the result from the last call (I'm not sure if Java does this exactly, but it does a lot of stuff like this).
Due to static typing, the JVM can know a lot about your code at compile time; this lets it pre-optimize quite a bit of stuff. It also lets the compiler optimize each class individually, without knowledge of how another class is planning to use it. Also, Java doesn't have arbitrary pointers to memory locations; it KNOWS which values in memory may and may not be changed, and can optimize accordingly.
Heap allocation is MUCH more efficient than in C; Java's heap allocation is closer to C's stack allocation in speed, yet more versatile. A lot of time has gone into the different algorithms used here; it's an art. For instance, all the objects with a short lifespan (like C's stack variables) are allocated to a "known" free location (no searching for a free spot with enough space) and are all freed together in a single step (like a stack pop).
The JVM can know quirks about your CPU architecture and generate machine code specifically for a given CPU.
The JVM can speed up your code long after you ship it. Much as moving a program to a new CPU can speed it up, moving it to a new version of the JVM can also give you huge speed improvements tailored to CPUs that didn't even exist when you initially compiled your code, something C physically cannot do without a recompile.
By the way, most of Java's bad rep for speed comes from the long startup time to load the JVM (someday someone will build the JVM into the OS and this will go away!) and from the fact that many developers are really bad at writing GUI code (especially threaded), which caused Java GUIs to often become unresponsive and glitchy. Simple-to-use languages like Java and VB have their faults amplified by the fact that the capabilities of the average programmer tend to be lower than in more complicated languages.
Here is my Scala version in both parallel and non-parallel form, just for fun. (On my dual-core computer, the parallel version takes 335 ms while the non-parallel version takes 655 ms.)
EDIT: Following Emil H's suggestion, I have changed my code to avoid the effects of IO and JVM warm-up:
The result on my computer is:
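Emil H's measurement advice generalizes beyond Scala: warm the VM up first, and keep IO out of the timed region. A minimal Python sketch of that benchmarking pattern (my illustration, not the answer's code):

```python
import time

def is_prime(n):
    return all(n % j > 0 for j in range(2, n))

def sexy_primes(x):
    return [(j - 6, j) for j in range(11, x + 1)
            if is_prime(j) and is_prime(j - 6)]

# Warm-up runs: give JIT-style runtimes (JVM, PyPy) a chance to settle
# before anything is measured.
for _ in range(3):
    sexy_primes(1000)

start = time.time()
result = sexy_primes(1000)             # timed region: pure computation
elapsed_ms = (time.time() - start) * 1000

# IO happens only after the clock is stopped.
print("%d pairs in %.1f mils" % (len(result), elapsed_ms))
```

Timing the print itself would fold terminal/pipe throughput into the number, which is exactly the distortion the edit removes.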
I'll answer just #2, since it's the only one I've got anything remotely intelligent to say about. In your Python code, you're creating an intermediate list in is_prime, whereas you're using .map in your all in Ruby, which just iterates. If you change your is_prime so it iterates lazily instead of building that list, they're on par.
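Presumably the change is to feed all a lazy generator expression instead of a materialized list, so the scan stops at the first divisor found. A sketch of that shape (my guess at the change, using Python 3's range; Python 2 would use xrange):

```python
def is_prime(n):
    # all() consumes the generator lazily and short-circuits on the
    # first divisor found, so no intermediate list is ever built.
    # (Like the question's version, n < 2 falls through to True.)
    return all(n % j > 0 for j in range(2, n))
```

With an all([...]) list comprehension, every divisor in 2..n-1 is computed before all() even runs; the generator form bails out at the first composite witness.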
I could optimize the Python further, but my Ruby isn't good enough to know when I've given one side more of an advantage (e.g., using xrange makes Python win on my machine, but I don't remember whether the Ruby range you used creates an entire range in memory or not).

EDIT: Without being too silly, a further tweak to the Python code, which doesn't change much more, puts it at 1.5 s for me; and, being extra silly, running it under PyPy puts it at 0.3 s for 10K and 21 s for 100K.
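As a hedged guess at the kind of rewrite that lands in that performance range, bound the trial division at the square root and skip even divisors (my sketch, not the answer's code):

```python
def is_prime(n):
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2          # 2 is the only even prime
    # Odd divisors up to sqrt(n) suffice: any factor above sqrt(n)
    # implies a cofactor below it.
    return all(n % j > 0 for j in range(3, int(n ** 0.5) + 1, 2))

def sexy_primes(x):
    return [(j - 6, j) for j in range(11, x + 1)
            if is_prime(j) and is_prime(j - 6)]

print(sexy_primes(100)[:3])    # → [(5, 11), (7, 13), (11, 17)]
```

The square-root bound turns the O(n) scan per candidate into O(sqrt(n)), which is the bulk of the speedup; PyPy's tracing JIT then does the rest.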