I have files in HDFS as:
drwxrwx--- - root supergroup 0 2016-08-19 06:21 /tmp/logs/root/logs/application_1464962104018_1639064
drwxrwx--- - root supergroup 0 2016-08-19 06:21 /tmp/logs/root/logs/application_1464962104018_1639065
Now, the /tmp/logs/root/logs/ directory will continuously receive new files.
I want to get the files that were created in the last five minutes, taking the current time into account. Then I need to copy these files to my local machine.
How about this:
hdfs dfs -ls /tmp | tr -s " " | cut -d' ' -f6-8 | grep "^[0-9]" | awk 'BEGIN{ MIN=5; LAST=60*MIN; "date +%s" | getline NOW } { cmd="date -d'\''"$1" "$2"'\'' +%s"; cmd | getline WHEN; DIFF=NOW-WHEN; if(DIFF < LAST){ print $3 }}'
Explanation:
List all the files:
hdfs dfs -ls /tmp
Squeeze repeated spaces into a single space:
tr -s " "
Get the required columns:
cut -d' ' -f6-8
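With the listing from the question, each remaining line at this point would look roughly like this (the path is taken from the question's example):
2016-08-19 06:21 /tmp/logs/root/logs/application_1464962104018_1639064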
Keep only the rows that start with a digit (the date column), dropping header lines such as "Found N items":
grep "^[0-9]"
Process the remaining fields with awk:
awk
Initialize the time window (in seconds) and capture the current epoch time:
MIN=5; LAST=60*MIN; "date +%s" | getline NOW
Build a date command that converts the HDFS file's timestamp to an epoch value:
cmd="date -d'\''"$1" "$2"'\'' +%s";
Execute the command to get the epoch value for the HDFS file:
cmd | getline WHEN;
Get the time difference:
DIFF=NOW-WHEN;
Print the file path if the difference is within the window:
if(DIFF < LAST){ print $3 }
You just need to change the value of the MIN variable to suit your requirement (here it is 5 minutes).
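If you also need to copy the matched files to the local machine (as asked in the question), one rough sketch is to feed the printed paths into hdfs dfs -copyToLocal via xargs; /path/to/local/dir below is just a placeholder for your target directory:
# list paths modified in the last five minutes, then copy each to a local directory
hdfs dfs -ls /tmp \
  | tr -s " " | cut -d' ' -f6-8 | grep "^[0-9]" \
  | awk 'BEGIN{ MIN=5; LAST=60*MIN; "date +%s" | getline NOW }
         { cmd = "date -d\"" $1 " " $2 "\" +%s"; cmd | getline WHEN; close(cmd);
           if (NOW - WHEN < LAST) print $3 }' \
  | xargs -I{} hdfs dfs -copyToLocal {} /path/to/local/dir
On GNU xargs you can also add -r so that nothing runs when no file matches.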
HTH
I have done it using the command below; it gives me the files that were created within a five-minute window:
hadoop fs -ls /tmp/logs/root/logs | awk '{ if ((($6 == "'"2016-08-18"'" && $7 <= "'"21:00"'") && ($6 == "'"2016-08-18"'" && $7 >= "'"20:55"'"))) print $8 } '
It can be modified accordingly to use the current timestamp, as in the sketch below.
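For example, a rough sketch assuming GNU date is available: compute the window boundaries from the current time and pass them into the same awk filter (the string comparisons work because the YYYY-MM-DD and HH:MM formats sort lexically):
# window boundaries: five minutes ago up to now (GNU date assumed)
START_DATE=$(date -d "5 minutes ago" +%Y-%m-%d)
START_TIME=$(date -d "5 minutes ago" +%H:%M)
END_DATE=$(date +%Y-%m-%d)
END_TIME=$(date +%H:%M)
hadoop fs -ls /tmp/logs/root/logs | awk -v sd="$START_DATE" -v st="$START_TIME" -v ed="$END_DATE" -v et="$END_TIME" \
  '($6 > sd || ($6 == sd && $7 >= st)) && ($6 < ed || ($6 == ed && $7 <= et)) { print $8 }'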