Centralised Java Logging

Published 2019-01-17 00:06

Question:

I'm looking for a way to centralise the logging concerns of distributed software (written in Java), which would be quite easy since the system in question has only one server. But keeping in mind that more instances of that server are very likely to run in the future (and more applications will need this), there would have to be something like a Logging-Server which takes care of incoming logs and makes them accessible to the support team.

The situation right now is that several Java applications use log4j, which writes its data to local files, so if a client experiences problems the support team has to ask for the logs, which isn't always easy and takes a lot of time. In the case of a server fault the diagnosis problem is not as big, since there is remote access anyway, but even so, monitoring everything through a Logging-Server would still make a lot of sense.

While going through the questions regarding "centralised logging" I found another question (actually the only one with a usable answer in this case). The problem is that all applications run in a closed environment (within one network) and the security guidelines do not permit anything concerning internal software to leave the environment's network.

I also found a wonderful article about how one would implement such a Logging-Server. Since the article was written in 2001, I would have thought that someone might have already solved this particular problem, but my search results came up with nothing.

My question: Is there a logging framework that handles logging over the network to a centralised server which can be accessed by the support team?

Specification:

  • Availability
  • Server has to be run by us.
  • Java 1.5 compatibility
  • Compatibility with a heterogeneous network.
  • Best-Case: Protocol uses HTTP to send logs (to avoid firewall-issues)
  • Best-Case: Uses log4j or LogBack or basically anything that implements slf4j

Not necessary, but nice to have

  • Authentication and security are of course an issue, but could be set aside for at least a while (if it is open-source software we would extend it to our needs; OT: we always give back to the projects).
  • Data mining and analysis is something which is very helpful to make software better, but that could as well be an external application.

My worst-case scenario is that there is no software like that. In that case we would probably implement it ourselves, but if there is such a client-server application I would very much appreciate not having to do this particularly problematic bit of work.

Thanks in advance

Update: The solution has to run on several Java-enabled platforms. (Mostly Windows, Linux, some HP-UX.)

Update: After a lot more research we actually found a solution we were able to acquire. clusterlog.net (offline since at least mid-2015) provides logging services for distributed software and is compatible with log4j and logback (which natively implements slf4j). It lets us analyse every single user's way through the application, making it very easy to reproduce reported bugs (or even unreported ones). It also notifies us of important events by email and has a report system where logs of the same origin are summarised into an easily accessible format. They deployed it here just a couple of days ago (the deployment was flawless) and it is running great.

Update (2016): this question still gets a lot of traffic, but the site I referred to does not exist anymore.

Answer 1:

You can use Log4j with the SocketAppender; you then have to write the server part that processes the incoming logging events. See http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/net/SocketAppender.html
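A minimal sketch of what the receiving side could look like, assuming log4j 1.2 on both ends (hostname, port and class name are made up for illustration; log4j also ships a ready-made org.apache.log4j.net.SimpleSocketServer that does essentially this, but multi-threaded):

import java.io.EOFException;
import java.io.ObjectInputStream;
import java.net.ServerSocket;
import java.net.Socket;
import org.apache.log4j.Logger;
import org.apache.log4j.spi.LoggingEvent;

// Client side (log4j.properties), roughly:
//   log4j.rootLogger=INFO, remote
//   log4j.appender.remote=org.apache.log4j.net.SocketAppender
//   log4j.appender.remote.RemoteHost=log-server.example.com
//   log4j.appender.remote.Port=4712
//   log4j.appender.remote.ReconnectionDelay=10000
public class MiniLogServer {

    public static void main(String[] args) throws Exception {
        ServerSocket server = new ServerSocket(4712);   // port the SocketAppenders connect to
        while (true) {
            Socket client = server.accept();
            ObjectInputStream in = new ObjectInputStream(client.getInputStream());
            try {
                while (true) {
                    // SocketAppender sends serialized LoggingEvent objects
                    LoggingEvent event = (LoggingEvent) in.readObject();
                    // re-dispatch the event to the local log4j configuration
                    // (file appender, database appender, whatever the support team uses)
                    Logger.getLogger(event.getLoggerName()).callAppenders(event);
                }
            } catch (EOFException clientDisconnected) {
                // client closed the connection; wait for the next one
            } finally {
                client.close();
            }
        }
    }
}

In practice you would handle each client in its own thread (which is what SimpleSocketServer does) and point the server's own log4j configuration at files or a database the support team can reach.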



Answer 2:

NXLOG or LogStash or Graylog2

or

LogStash + ElasticSearch (+optionally Kibana)

Example:

1) http://logstash.net/docs/1.3.3/tutorials/getting-started-simple

2) http://logstash.net/docs/1.3.3/tutorials/getting-started-centralized



Answer 3:

Have a look at logFaces; it looks like your specifications are met. http://www.moonlit-software.com/

  • Availability (check)
  • Server has to be run by us. (check)
  • Java 1.5 compatibility (check)
  • Compatibility to a heterogeneous network. (check)
  • Best-Case: Protocol uses HTTP to send logs (to avoid firewall-issues) (almost: it uses TCP/UDP)
  • Best-Case: Uses log4j or LogBack or basically anything that implements slf4j (check)
  • Authentication (check)
  • Data mining and analysis (possible through extension api)


Answer 4:

There's a ready-to-use solution from Facebook - Scribe - that uses Apache Hadoop under the hood. However, most companies I'm aware of still tend to develop in-house systems for that. I worked in one such company and dealt with logs there about two years ago. We also used Hadoop. In our case we had the following setup:

  • We had a small dedicated cluster of machines for log aggregation.
  • Workers mined logs from the production services and parsed the individual lines.
  • Then reducers would aggregate the necessary data and prepare reports.

We had a small and fixed number of reports that we were interested in. In rare cases, when we wanted to perform a different kind of analysis, we would simply add specialized reducer code for that and optionally run it against old logs.
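A rough sketch of what such a specialized job might look like, assuming the standard Hadoop MapReduce API and one log event per line with the severity level in the third whitespace-separated field (the class names and field layout are hypothetical, not the setup described above):

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Counts log events per severity level, e.g. lines like
// "2014-02-01 12:00:01 ERROR OrderService - could not persist order 4711"
public class LogLevelCount {

    public static class LevelMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);

        @Override
        protected void map(LongWritable offset, Text line, Context ctx)
                throws IOException, InterruptedException {
            String[] fields = line.toString().split("\\s+");
            if (fields.length > 2) {
                ctx.write(new Text(fields[2]), ONE);   // third field assumed to be the level
            }
        }
    }

    public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text level, Iterable<IntWritable> counts, Context ctx)
                throws IOException, InterruptedException {
            int total = 0;
            for (IntWritable c : counts) {
                total += c.get();
            }
            ctx.write(level, new IntWritable(total));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "log level count");
        job.setJarByClass(LogLevelCount.class);
        job.setMapperClass(LevelMapper.class);
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));     // directory with raw logs
        FileOutputFormat.setOutputPath(job, new Path(args[1]));   // directory for the report
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

New reports of this kind can be added the same way and, as noted, re-run against old logs when needed.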

If you can't decide in advance what kinds of analyses you are interested in, it is better to store the structured data prepared by the workers in HBase or some other NoSQL database (here, for example, people use MongoDB). That way you won't need to re-aggregate data from the raw logs and can query the datastore instead.
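For illustration, a small sketch of a worker persisting one parsed log line as a structured document, using the classic MongoDB Java driver API (2.x); the database, collection and field names are made up:

import java.util.Date;
import com.mongodb.BasicDBObject;
import com.mongodb.DB;
import com.mongodb.DBCollection;
import com.mongodb.MongoClient;

public class LogEventStore {

    public static void main(String[] args) throws Exception {
        MongoClient mongo = new MongoClient("localhost", 27017);
        DB db = mongo.getDB("logging");
        DBCollection events = db.getCollection("events");

        // one parsed log line becomes one structured document
        BasicDBObject event = new BasicDBObject("timestamp", new Date())
                .append("host", "app-server-01")
                .append("level", "ERROR")
                .append("logger", "com.example.orders.OrderService")
                .append("message", "Order 4711 could not be persisted");
        events.insert(event);

        // later: query the datastore instead of re-aggregating the raw logs
        long errorsFromHost = events.count(new BasicDBObject("level", "ERROR")
                .append("host", "app-server-01"));
        System.out.println("errors from app-server-01: " + errorsFromHost);

        mongo.close();
    }
}

The point of the structured store is exactly the ad-hoc query at the end: new questions can be answered without touching the raw log files again.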

There are a number of good articles about such log-aggregation solutions, for example about using Pig to query the aggregated data. Pig lets you query large Hadoop-based datasets with SQL-like queries.