How to export daily disk usage to CSV format in sh

Published 2019-06-13 16:08

Question:

My script is below. When run, it saves the disk space usage into separate CSV cells.

SIZES_1=`df -h | awk 'FNR == 1 {print $1","$2","$3","$4","$5","$6}'`
SIZES_2=`df -h | awk 'FNR == 2 {print $1","$2","$3","$4","$5","$6}'`
SIZES_3=`df -h | awk 'FNR == 3 {print $1","$2","$3","$4","$5","$6}'`
SIZES_4=`df -h | awk 'FNR == 4 {print $1","$2","$3","$4","$5","$6}'`
SIZES_5=`df -h | awk 'FNR == 5 {print $1","$2","$3","$4","$5","$6}'`
SIZES_6=`df -h | awk 'FNR == 6 {print $1","$2","$3","$4","$5","$6}'`
SIZES_7=`df -h | awk 'FNR == 7 {print $1","$2","$3","$4","$5","$6}'`
SIZES_8=`df -h | awk 'FNR == 8 {print $1","$2","$3","$4","$5","$6}'`
echo `date +%Z-%Y-%m-%d_%H-%M-%S` >>/home/jeevagan/test_scripts/sizes/excel.csv
echo "$SIZES_1" >> /home/jeevagan/test_scripts/sizes/excel.csv
echo "$SIZES_2" >> /home/jeevagan/test_scripts/sizes/excel.csv
echo "$SIZES_3" >> /home/jeevagan/test_scripts/sizes/excel.csv
echo "$SIZES_4" >> /home/jeevagan/test_scripts/sizes/excel.csv
echo "$SIZES_5" >> /home/jeevagan/test_scripts/sizes/excel.csv
echo "$SIZES_6" >> /home/jeevagan/test_scripts/sizes/excel.csv
echo "$SIZES_7" >> /home/jeevagan/test_scripts/sizes/excel.csv
echo "$SIZES_8" >> /home/jeevagan/test_scripts/sizes/excel.csv

This script works fine on my machine.

My concern is that if somebody else's machine has more file systems, my script won't fetch the usage for all of them. How can I make it grab all of them automatically?

Answer 1:

Assuming you want all filesystems, you can simplify that to:

printf '%s\n' "$(date +%Z-%Y-%m-%d_%H-%M-%S)" >> excel.csv
df -h | awk '{print $1","$2","$3","$4","$5","$6}' >> excel.csv
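Putting that together as a small standalone script, it could look something like this (the output path is taken from the question; the OUT variable is only for illustration):

#!/bin/sh
# Minimal sketch: append a timestamp line, then one comma-separated line per df row.
OUT=/home/jeevagan/test_scripts/sizes/excel.csv
date +%Z-%Y-%m-%d_%H-%M-%S >> "$OUT"
df -h | awk '{print $1","$2","$3","$4","$5","$6}' >> "$OUT"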


Answer 2:

I would simplify this to

{ date +%Z-%F_%H-%M-%S; df -h | tr -s ' ' ','; } >> excel.csv
  • Group commands so only a single redirect is needed (an expanded equivalent is shown after this list)
  • Squeeze spaces and replace them with a single comma using tr
  • No need for echo `date` or similar: it's the same as just date
  • date +%Y-%m-%d is the same as date +%F
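
For comparison, without the command grouping the same appended output would need one redirect per command, along these lines:

# same output, just with a separate redirect for each command
date +%Z-%F_%H-%M-%S >> excel.csv
df -h | tr -s ' ' ',' >> excel.csv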

Note that this has a small flaw: the first line of the output of df -h, which originally looks something like this,

Filesystem          Size  Used Avail Use% Mounted on

has a space in the heading of the last column, so it becomes

Filesystem,Size,Used,Avail,Use%,Mounted,on

with an extra comma. The original awk solution just cut off the last word of the line, though. Similarly, spaces in paths would trip up this solution.

To fix the comma problem, you could for example run

sed -i 's/Mounted,on$/Mounted on/' excel.csv

every now and then.
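
So a daily export with that cleanup applied right afterwards could be sketched as (output path taken from the question):

# append today's snapshot, then repair the "Mounted on" heading
{ date +%Z-%F_%H-%M-%S; df -h | tr -s ' ' ','; } >> /home/jeevagan/test_scripts/sizes/excel.csv
sed -i 's/Mounted,on$/Mounted on/' /home/jeevagan/test_scripts/sizes/excel.csv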


As an aside, to replace all field separators in awk, instead of

awk '{print $1","$2","$3","$4","$5","$6}'

you can use

awk 'BEGIN { OFS = "," } { $1 = $1; print }'

or, shorter,

awk -v OFS=',' '{$1=$1}1'
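
Here the assignment $1 = $1 forces awk to rebuild the record with the new output field separator, and the trailing 1 is an always-true pattern whose default action is to print the rebuilt record. Used in the same way as the earlier snippets, a sketch would be:

# append a timestamp and all df rows, comma-separated
date +%Z-%F_%H-%M-%S >> excel.csv
df -h | awk -v OFS=',' '{$1=$1}1' >> excel.csv

The header-line caveat from above still applies here, since "Mounted on" again becomes two comma-separated fields.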