Why is there no 'hadoop fs -head' shell command?

Published 2019-01-21 08:43

A fast method for inspecting files on HDFS is to use tail:

~$ hadoop fs -tail /path/to/file

This displays the last kilobyte of data in the file, which is extremely helpful. However, the opposite command, head, does not appear to be part of the shell command collection. I find this very surprising.

My hypothesis is that since HDFS is built for very fast streaming reads of very large files, there is some access-oriented issue that makes reading the head of a file awkward. This makes me hesitant to try to access the head at all. Does anyone have an answer?

Tags: hadoop hdfs
5 answers
在下西门庆
#2 · 2019-01-21 09:24

I would say it's more to do with efficiency - a head can easily be replicated by piping the output of a hadoop fs -cat through the linux head command.

hadoop fs -cat /path/to/file | head

This is efficient, as head will close the underlying stream after the desired number of lines has been output.

Using tail in this manner would be considerably less efficient, as you'd have to stream the entire file (all HDFS blocks) to find the final x lines.

hadoop fs -cat /path/to/file | tail

The hadoop fs -tail command, as you note, works on the last kilobyte: hadoop can efficiently seek to the last block, skip to the position of the final kilobyte, and then stream the output. Piping through tail can't easily do this.
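The early-exit behavior that makes the head pipe cheap is easy to observe locally, with no cluster needed. A minimal sketch, using `seq` as a stand-in for `hadoop fs -cat` streaming a huge file:

```shell
# head prints only the lines it needs, then closes its end of the pipe;
# the upstream producer receives SIGPIPE and stops almost immediately,
# so this finishes instantly despite the billion-line input.
seq 1 1000000000 | head -n 3

# By contrast, `seq 1 1000000000 | tail -n 3` would force the producer
# to emit every line before tail could print the last three.
```

The same mechanism applies when the producer is `hadoop fs -cat`: once head exits, the cat process is killed and no further HDFS blocks are fetched.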

劳资没心,怎么记你
#3 · 2019-01-21 09:26

You can try the following command:

hadoop fs -cat /path | head -n <N>

where <N> is the number of records to view.

劫难
#4 · 2019-01-21 09:26

In Hadoop v2:

hdfs dfs -cat /file/path | head

In Hadoop v1 and v3:

hadoop fs -cat /file/path | head
欢心
#5 · 2019-01-21 09:41

hdfs dfs -cat /path | head

is a good way to solve the problem.

时光不老,我们不散
#6 · 2019-01-21 09:42

Starting with version 3.1.0 we now have it:

Usage: hadoop fs -head URI

Displays the first kilobyte of the file to stdout.

See the Hadoop FileSystem Shell documentation.
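On versions older than 3.1.0, the same first-kilobyte behavior can be approximated by piping through `head -c`. A sketch against a local file standing in for an HDFS path (the file name is hypothetical):

```shell
# Create a sample file larger than 1 KB to stand in for an HDFS file.
seq 1 1000 > /tmp/head_demo.txt

# `hadoop fs -head URI` prints the first 1024 bytes. The pre-3.1.0
# workaround on a real cluster would be:
#   hadoop fs -cat /path/to/file | head -c 1024
# Locally, the equivalent truncation is:
head -c 1024 /tmp/head_demo.txt | wc -c
```

Like the `head -n` pipe above, `head -c` closes the stream after 1024 bytes, so only the first HDFS block would ever be read.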
