Can someone explain to me why Process.fork
makes stuff so much slower in Ruby? I'm using Ruby 2.3.1 on OS X El Capitan.
require 'time'
require 'benchmark'

def do_stuff
  50000.times { Time.parse(Time.utc(2016).iso8601) }
end

puts Benchmark.measure { do_stuff } # => 1.660000 0.010000 1.670000 ( 1.675466)

Process.fork do
  puts Benchmark.measure { do_stuff } # => 3.170000 6.250000 9.420000 ( 9.508235)
end
EDIT: Just noticed that running the same code on Linux (tested on Debian and Ubuntu) shows no performance penalty in the forked process.
"Why does Process.fork make stuff slower in Ruby on OS X?"
Step one in getting to the bottom of this is to reduce the number of variables.
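One quick way to do that (a sketch of my own, not from the original post; the `profile` helper and the reduced iteration count are mine) is to time each sub-expression of the original benchmark separately on both sides of the fork:

```ruby
require 'time'
require 'benchmark'

ITERATIONS = 10_000 # fewer than the original 50,000, to keep the run short

# Time each sub-expression of Time.parse(Time.utc(2016).iso8601)
# separately, so we can see which call accounts for any slowdown.
def profile(label)
  utc   = Benchmark.measure { ITERATIONS.times { Time.utc(2016) } }
  iso   = Benchmark.measure { ITERATIONS.times { Time.utc(2016).iso8601 } }
  parse = Benchmark.measure { ITERATIONS.times { Time.parse(Time.utc(2016).iso8601) } }
  puts "#{label} utc:   #{utc}"
  puts "#{label} iso:   #{iso}"
  puts "#{label} parse: #{parse}"
end

profile('main')
pid = Process.fork { profile('fork') }
Process.wait(pid) # wait so the child's output is not lost if the parent exits first
```

If only one of the three lines blows up inside the fork, that call is the variable worth chasing.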
Your example of running Time.parse(Time.utc(2016).iso8601)
fifty thousand times seems oddly specific. I reformulated the benchmark test using a different "slow" Ruby task:
require 'benchmark'

def do_stuff
  a = [nil] * 200
  10.times do
    a.each { |x| a.each { |y| a.each { |z| } } }
  end
end

puts "main: #{Benchmark.measure { do_stuff }}"

Process.fork do
  puts "fork: #{Benchmark.measure { do_stuff }}"
end
Here I've replaced your Time calls with a no-op nested loop over a large array.
The results:
main: 4.020000 0.010000 4.030000 ( 4.050664)
fork: 3.940000 0.000000 3.940000 ( 3.962207)
main: 3.840000 0.010000 3.850000 ( 3.856188)
fork: 3.850000 0.000000 3.850000 ( 3.865250)
main: 3.930000 0.000000 3.930000 ( 3.937741)
fork: 3.970000 0.000000 3.970000 ( 3.986397)
main: 4.340000 0.010000 4.350000 ( 4.370009)
fork: 4.300000 0.000000 4.300000 ( 4.308156)
No noticeable pattern of the forked process being slower or faster emerges. I've tested with Ruby 1.9, 2.0, and 2.3 on both OS X and Ubuntu, and the results are the same.
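One practical note when collecting main/fork pairs like the ones above: without `Process.wait` the parent can exit before the child prints, and the lines can interleave. A small harness (my own sketch, with smaller parameters than the original so several runs finish quickly):

```ruby
require 'benchmark'

# Same shape as the nested-loop benchmark above, but with a smaller
# array and fewer repetitions so several runs finish quickly.
def do_stuff
  a = [nil] * 50
  2.times do
    a.each { |x| a.each { |y| a.each { |z| } } }
  end
end

4.times do
  puts "main: #{Benchmark.measure { do_stuff }}"
  pid = Process.fork { puts "fork: #{Benchmark.measure { do_stuff }}" }
  Process.wait(pid) # keep each main/fork pair together in the output
end
```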
The answer to your question is:
Process.fork
does not, in general, make stuff slower in Ruby on OS X.
However, there is a different, interesting question here, which is: "Why is `Time.utc` slower in a forked process in Ruby on OS X (and not in Python)?"
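To see that narrower question in isolation, strip the benchmark down to `Time.utc` alone (a sketch of my own; per the question's numbers the OS X slowdown shows up mostly as sys time, and per the edit Linux shows no difference):

```ruby
require 'benchmark'

# Time only Time.utc, the suspected culprit, on both sides of the fork.
puts "main: #{Benchmark.measure { 50_000.times { Time.utc(2016) } }}"

pid = Process.fork do
  puts "fork: #{Benchmark.measure { 50_000.times { Time.utc(2016) } }}"
end
Process.wait(pid)
```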