Rails 3.2.2 log files unordered, requests intertwined

Asked 2019-03-18 13:58

I recollect getting log files that were nicely ordered, so that you could follow one request, then the next, and so on.

Now the log files are, as my 4-year-old says, "all scroggled up": they are no longer separate, distinct chunks of text, and log lines from two requests get intertwined.

For instance:

Started GET /foobar
...
Completed 200 OK in 2ms (Views: 0.4ms | ActiveRecord: 0.8ms)
Patient Load (wait, that's from another request that has nothing to do with foobar!)
[ blank space ]
Something else

This is maddening, because I can't tell what's happening within one single request.

This is running on Passenger.

6 Answers
三岁会撩人
#2 · 2019-03-18 14:16

Well, for me the TaggedLogging solution is a no-go. I can live with some logs getting lost if the server crashes badly, but I want my logs perfectly ordered. So, following advice from the issue comments, I'm applying this to my app:

# lib/sequential_logs.rb
module ActiveSupport
  class BufferedLogger
    # Push whatever Ruby has buffered for this process out to disk.
    def flush
      @log_dest.flush
    end

    def respond_to?(method, include_private = false)
      super
    end
  end
end

# config/initializers/sequential_logs.rb
require 'sequential_logs.rb'

# With sync off, each request's lines sit in the process buffer and reach
# the file as one contiguous chunk when flush is called.
Rails.logger.instance_variable_get(:@logger).instance_variable_get(:@log_dest).sync = false

As far as I can tell this hasn't affected my app: it is still running, and now my logs make sense again.
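The `sync = false` trick relies on plain Ruby IO buffering: writes accumulate inside the process until `flush`, so one request's lines reach the file as a single contiguous chunk. A standalone sketch of that behavior:

```ruby
require 'tempfile'

f = Tempfile.new('log')
f.sync = false           # buffer writes inside this process
f.write("line 1\n")
f.write("line 2\n")
f.flush                  # both lines reach the file in one contiguous write
content = File.read(f.path)
puts content
```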

放我归山
#3 · 2019-03-18 14:22

I haven't used it, but I believe Lumberjack's unit_of_work method may be what you're looking for. You call:

Lumberjack.unit_of_work do
  yield
end

All logging done either in that block or in the yielded block is tagged with a unique ID.

仙女界的扛把子
#4 · 2019-03-18 14:23

Yep, they have made some changes to ActiveSupport::BufferedLogger, so it no longer waits until the request has ended to flush the logs.

But they have added ActiveSupport::TaggedLogging, which is very handy: you can stamp every log line with any kind of mark you want.

In your case it could be good to stamp the logs with the request UUID, like this:

# config/application.rb
config.log_tags = [:uuid]

Then even if the logs are interleaved, you can still tell which lines correspond to the request you are following.

You can do more clever things with this feature to help you analyze your logs.

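Besides symbols like `:uuid`, `config.log_tags` also accepts lambdas that receive the request object, so you can tag lines with anything derived from it. A sketch (the particular tag choices here are illustrative, not from the original answer):

```ruby
# config/application.rb -- each lambda is called with the request
config.log_tags = [
  :uuid,                              # unique id per request
  ->(request) { request.remote_ip }   # client address, for example
]
```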
仙女界的扛把子
#5 · 2019-03-18 14:28

If you value production log readability over everything else you can use the

PassengerMaxInstancesPerApp 1

configuration. It might cause some scaling issues. Alternatively, you could put something like this in application.rb:

# Give each server process its own log file, so requests handled by
# different processes can never interleave in one file.
process_log_filename = Rails.root + "log/#{Rails.env}-#{Process.pid}.log"
log_file = File.open(process_log_filename, 'a')
Rails.logger = ActiveSupport::BufferedLogger.new(log_file)
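The same per-process idea works with only the standard library (ActiveSupport::BufferedLogger was removed in later Rails versions; the filename scheme below is illustrative):

```ruby
require 'logger'
require 'tmpdir'

# One log file per process: concurrent requests handled by different
# workers can never interleave inside a single file.
log_path = File.join(Dir.tmpdir, "myapp-#{Process.pid}.log")
logger   = Logger.new(File.open(log_path, 'a'))

logger.info("Started GET /foobar")
logger.close

puts File.read(log_path).include?("Started GET /foobar")  # => true
```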
Viruses.
#6 · 2019-03-18 14:31

I tried to search for the same answer but couldn't find any good info. I'm not sure whether you should fix the server or the Rails code.

If you want more info about the issue, here is the commit that removed the old way of logging: https://github.com/rails/rails/commit/04ef93dae6d9cec616973c1110a33894ad4ba6ed

Root(大扎)
#7 · 2019-03-18 14:34

They should add a quasi-random request id and write it on every line belonging to a single request. That way you won't get confused.

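That is essentially what Rails' `config.log_tags` does. A minimal standalone sketch of the idea with the standard library (the helper name and id format are made up for illustration):

```ruby
require 'logger'
require 'securerandom'
require 'stringio'

buffer = StringIO.new
logger = Logger.new(buffer)

# Prefix every line logged inside the block with one random request id.
def with_request_id(logger)
  rid = SecureRandom.hex(4)
  logger.formatter = ->(_sev, _time, _prog, msg) { "[#{rid}] #{msg}\n" }
  yield
ensure
  logger.formatter = nil
end

with_request_id(logger) do
  logger.info("Started GET /foobar")
  logger.info("Completed 200 OK")
end

ids = buffer.string.lines.map { |l| l[/\A\[\h{8}\]/] }
puts ids.uniq.size == 1   # every line carries the same id => true
```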