Hadoop fs lookup for block size?

Posted 2019-03-19 03:04

Question:

In Hadoop fs, how do I look up the block size for a particular file?

I am primarily interested in a command-line solution, something like:

hadoop fs ... hdfs://fs1.data/...

But it looks like that does not exist. Is there a Java solution?

Answer 1:

It seems hadoop fs doesn't have an option to do this.

But hadoop fsck can.

You can try this:

$HADOOP_HOME/bin/hadoop fsck /path/to/file -files -blocks
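
If you need the same block listing from Java rather than the shell, a minimal sketch using the stock FileSystem API could look like the following (the path and class name are placeholders, and a configured HDFS client is assumed to be on the classpath):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListBlocks {
    public static void main(String[] args) throws Exception {
        // Picks up core-site.xml/hdfs-site.xml from HADOOP_CONF_DIR on the classpath
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        Path path = new Path("/path/to/file");  // placeholder path
        FileStatus status = fs.getFileStatus(path);

        // One BlockLocation per block of the file
        BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
        for (BlockLocation block : blocks) {
            System.out.printf("offset=%d length=%d hosts=%s%n",
                    block.getOffset(), block.getLength(),
                    String.join(",", block.getHosts()));
        }
    }
}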


Answer 2:

The fsck commands in the other answers list the blocks and let you see how many there are. However, to see just the block size, with no extra cruft, use:

hadoop fs -stat %o /filename

To get the default block size:

hdfs getconf -confKey dfs.blocksize
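
To answer the Java part of the question: a minimal sketch of the programmatic equivalent of both commands, using the standard FileSystem API (the path and class name are placeholders):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockSize {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path path = new Path("/filename");  // placeholder path

        // Block size of this particular file, same value as `hadoop fs -stat %o`
        long fileBlockSize = fs.getFileStatus(path).getBlockSize();

        // Default block size for new files, corresponding to dfs.blocksize
        long defaultBlockSize = fs.getDefaultBlockSize(path);

        System.out.println("file block size: " + fileBlockSize);
        System.out.println("default block size: " + defaultBlockSize);
    }
}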


Answer 3:

I think it should be doable with:

hadoop fsck /filename -blocks

but I get Connection refused. (That error usually means the client cannot reach the NameNode; check that fs.defaultFS points to a running NameNode.)



Tags: hadoop hdfs