I saw the question Why does Process.fork make stuff slower in Ruby on OS X? and was able to determine that Process.fork does not, in general, make tasks slower. However, it does seem to make Time.utc, in particular, much slower.
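For context, a plain CPU-bound workload shows essentially the same numbers in the parent and in the forked child. Here is a minimal sketch of that kind of check (the workload and iteration count are arbitrary; the point is just that it never touches Time):

require 'benchmark'

# Any CPU-only work will do; this just exercises Float math.
def cpu_work
  100000.times { |i| Math.sqrt(i + 1) }
end

puts "main: #{Benchmark.measure { cpu_work }}"
Process.fork do
  puts "fork: #{Benchmark.measure { cpu_work }}"
end
Process.wait

The Time.utc case, on the other hand, looks like this: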
require 'benchmark'

def do_stuff
  50000.times { Time.utc(2016) }
end

puts "main: #{Benchmark.measure { do_stuff }}"
Process.fork do
  puts "fork: #{Benchmark.measure { do_stuff }}"
end
Here are some results (user, system, total, and real time, as reported by Benchmark.measure):
main: 0.100000 0.000000 0.100000 ( 0.103762)
fork: 0.530000 3.210000 3.740000 ( 3.765203)
main: 0.100000 0.000000 0.100000 ( 0.104218)
fork: 0.540000 3.280000 3.820000 ( 3.858817)
main: 0.100000 0.000000 0.100000 ( 0.102956)
fork: 0.520000 3.280000 3.800000 ( 3.831084)
One clue might be that the results above were produced on OS X; on Ubuntu, there doesn't seem to be any difference:
main: 0.100000 0.070000 0.170000 ( 0.166505)
fork: 0.090000 0.070000 0.160000 ( 0.169578)
main: 0.090000 0.080000 0.170000 ( 0.167889)
fork: 0.100000 0.060000 0.160000 ( 0.169160)
main: 0.100000 0.070000 0.170000 ( 0.170839)
fork: 0.100000 0.070000 0.170000 ( 0.176146)
Can anyone explain this oddity?
Further investigation:
@tadman suggested that it might be a bug in the macOS / OS X time code, so I wrote a similar test in Python:
from timeit import timeit
from os import fork

print timeit("datetime.datetime.utcnow()", setup="import datetime")

if fork() == 0:
    print timeit("datetime.datetime.utcnow()", setup="import datetime")
else:
    pass
Again, on Ubuntu, the benchmarks are the same for the forked/main processes. On OS X, however, the forked process is now slightly faster than the main process, which is the opposite of the behavior in Ruby.
This leads me to believe that the source of the "fork penalty" is in the Ruby implementation and not in the OS X time implementation.
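In case it helps anyone narrow this down, here is a sketch comparing a few time-related calls in the parent and the forked child (the set of methods and the CASES/run names are just my guesses at useful comparisons, not something I've profiled exhaustively):

require 'benchmark'

# Hypothetical comparison: which Time constructors pay the post-fork penalty?
CASES = {
  'Time.utc'   => -> { Time.utc(2016) },
  'Time.local' => -> { Time.local(2016) },
  'Time.now'   => -> { Time.now },
  'Time.at'    => -> { Time.at(0) }
}

def run(label)
  CASES.each do |name, block|
    puts "#{label} #{name}: #{Benchmark.measure { 50000.times { block.call } }}"
  end
end

run('main')
Process.fork { run('fork') }
Process.wait

If only some of these calls slow down after the fork, that would at least point at which code path inside Ruby (or whatever it calls into) is responsible.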