Parsing YARN job logs stored in HDFS

Posted 2019-06-02 15:28

Question:

Is there any parser I can use to parse the JSON in YARN job history logs (.jhist files) stored in HDFS, so I can extract information from them?

Answer 1:

The second line of a .jhist file is the Avro schema for the JSON records that follow, which means you can convert the .jhist file into Avro data. For this you can use avro-tools-1.7.7.jar:
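If you still need to pull the file out of HDFS first, a minimal sketch follows; the directory is an assumption (it is controlled by mapreduce.jobhistory.done-dir on your cluster) and the angle-bracket parts are placeholders:

# .jhist files live under the job history server's "done" directory in HDFS;
# /mr-history/done is only an example, check mapreduce.jobhistory.done-dir
hdfs dfs -ls /mr-history/done

# copy one history file to the local filesystem (placeholders are hypothetical)
hdfs dfs -get "/mr-history/done/<year>/<month>/<day>/<serial>/<job_id>*.jhist" file.jhist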

# the schema is the second line of the file
sed -n '2p;3q' file.jhist > schema.avsc

# the remaining lines are the JSON event records; drop the first two
sed '1,2d' file.jhist > pfile.jhist

# finally, convert the JSON records to Avro data
java -jar avro-tools-1.7.7.jar fromjson --schema-file schema.avsc pfile.jhist > file.avro
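To sanity-check the result, the same jar can dump the Avro data back to JSON with its tojson subcommand:

# optional sanity check: print the first few records back as JSON
java -jar avro-tools-1.7.7.jar tojson file.avro | head -n 3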

You now have Avro data, which you can, for example, load into a Hive table and run queries against.
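For the Hive route, a minimal sketch is below; the HDFS locations and the table name jhist_events are assumptions, and Hive picks up the schema from the .avsc file through the AvroSerDe:

# put the Avro data and schema somewhere HDFS-side Hive can reach (paths are examples)
hdfs dfs -mkdir -p /tmp/jhist-avro
hdfs dfs -put file.avro /tmp/jhist-avro/
hdfs dfs -put schema.avsc /tmp/

# create an external table backed by the Avro file (table name is hypothetical)
hive -e "CREATE EXTERNAL TABLE jhist_events
  ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
  STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
  OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
  LOCATION '/tmp/jhist-avro'
  TBLPROPERTIES ('avro.schema.url'='hdfs:///tmp/schema.avsc');"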



Answer 2:

You can check out Rumen, a parsing tool from the Apache Hadoop ecosystem. Alternatively, open the web UI, go to the job history, and find the job whose .jhist file you want to read. Click the Counters link on the left; you will see an API that returns all the parameters and their values (CPU time in milliseconds, etc.), which are read from the .jhist file itself. A sketch of both options follows.
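For illustration, a hedged sketch of both: Rumen's TraceBuilder turns history files into a JSON job trace, and the History Server exposes the counters over REST. The paths, host name, and job id are placeholders, and the default REST port 19888 may differ on your cluster:

# Rumen: build a JSON job trace and cluster topology from the history directory;
# you may need to add the rumen jar from share/hadoop/tools/lib to HADOOP_CLASSPATH
hadoop org.apache.hadoop.tools.rumen.TraceBuilder \
    file:///tmp/job-trace.json file:///tmp/topology.json \
    hdfs:///mr-history/done

# History Server REST API: the same counters shown in the web UI
curl "http://<historyserver-host>:19888/ws/v1/history/mapreduce/jobs/<job_id>/counters"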