log doesn't work in production with delayed_job

Asked 2019-02-07 19:17

I'm hitting a weird issue where my delayed_job jobs are failing in production. I finally narrowed it down to the logger: if I comment out my log function calls, everything works. But if I try to log, I get this in the delayed_job handler:

 --- !ruby/struct:Delayed::PerformableMethod 
object: AR:User:1
method: :load_and_update_without_send_later
args: []

 | closed stream
/opt/ruby/lib/ruby/1.8/logger.rb:504:in `write'
/opt/ruby/lib/ruby/1.8/logger.rb:504:in `write'
/opt/ruby/lib/ruby/1.8/monitor.rb:242:in `synchronize'
/opt/ruby/lib/ruby/1.8/logger.rb:496:in `write'
/opt/ruby/lib/ruby/1.8/logger.rb:326:in `add'
/opt/ruby/lib/ruby/1.8/logger.rb:374:in `info'
/home/rails/myapp.com/releases/20100203203031/app/models/gmail.rb:35:in `log'

My logger looks like this:

require 'logger'  # loaded by Rails anyway; shown for completeness

@@error_log_file = File.open("#{RAILS_ROOT}/log/error.log", 'a')
@@error_log_file.sync = true
@@error_log = Logger.new(@@error_log_file)

def log(msg)
  msg.each do |line|       # String#each iterates lines in Ruby 1.8
    line = "#{Time.now.strftime('%H:%M:%S')}  #{line}"
    @@error_log.info(line) # this is line 35, where it's failing
    puts line
  end
end

If I comment out the line "@@error_log_file.sync = true", it also works.
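(Aside: the manual timestamp prefixing above can also be done by Logger itself. This is a hypothetical sketch, not the poster's code, using a StringIO as a stand-in for the log file:)

```ruby
require 'logger'
require 'stringio'

buffer = StringIO.new        # stand-in for the real log file
logger = Logger.new(buffer)

# Let Logger prepend the timestamp, instead of string-prefixing each line.
logger.formatter = proc do |_severity, time, _progname, msg|
  "#{time.strftime('%H:%M:%S')}  #{msg}\n"
end

logger.info("job started")
puts buffer.string           # e.g. "12:34:56  job started"
```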

Is this a delayed_job problem, or could it be related to my log directory being a symbolic link (as set up by a standard Capistrano deploy)?

Also, nothing is being written to my error.log file, and nothing is being written to delayed_job.log. Totally stumped...

3 Answers
做自己的国王
Answer 2 · 2019-02-07 19:23

It looks like you're trying to log to a closed file. Have you considered the SyslogLogger gem instead?

Here's an article on how to use it with Rails, which should help get you started.

混吃等死
Answer 3 · 2019-02-07 19:23

I've had a similar issue. The logging in the collectiveidea fork is being rewritten (see this issue for more info). Perhaps try the newest version to see if that fixes it for you.

闹够了就滚
Answer 4 · 2019-02-07 19:34

Woody Peterson found the problem here: http://groups.google.com/group/delayed_job/browse_thread/thread/f7d0534bb6c7c83f/37b4e8ed7bfaba42

The problem is:

DJ is using Rails' buffered log in production, and flushing the buffer is not being triggered for some reason (don't know if it's flushed by buffer size or explicitly flushed after a request).
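The buffering behavior described here can be reproduced with plain Ruby IO, as a simplified stand-in for Rails' buffered logger (this is an illustration, not delayed_job code):

```ruby
require 'tempfile'

tmp  = Tempfile.new('buffered_log')  # throwaway file for the demo
path = tmp.path

io = File.open(path, 'a')
io.sync = false                      # buffered writes, like the production logger

io.write("job 1 finished\n")
before_flush = File.read(path)       # still "" -- the data sits in the userspace buffer

io.flush                             # this is what flushing the buffer does
after_flush = File.read(path)        # now "job 1 finished\n"
io.close
```

Until `flush` is called, a crashed or killed worker loses everything sitting in the buffer, which is consistent with the empty log files in the question.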

The temporary fix (credit to Nathan Phelps) is:

In production, the buffered log is set to an auto_flushing value of 1000, which means flush is not called until 1000 messages have been logged. Assuming you're using collectiveidea's fork of delayed_job, you can address this by setting auto_flushing to a more reasonable value in command.rb, right after the logger is initialized on line 64, i.e.:

Delayed::Worker.logger = Rails.logger
Delayed::Worker.logger.auto_flushing = 1 # or whatever

Works for me perfectly!
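If you'd rather not edit the vendored command.rb, the same setting could be applied from your own app code, for example in an initializer. This is a config-fragment sketch (the file name is hypothetical; `auto_flushing=` is the Rails 2.x BufferedLogger API the answer refers to):

```ruby
# config/initializers/delayed_job_logging.rb (hypothetical file name)
# Flush the buffered logger after every message so worker output
# reaches disk immediately instead of waiting for 1000 messages.
if Rails.logger.respond_to?(:auto_flushing=)
  Rails.logger.auto_flushing = 1
end
```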
