Amazon EC2 - disk full [closed]

Published 2020-02-16 05:53

Question:



Closed 3 years ago.

When I run df -h on my Amazon EC2 server, this is the output:

[ec2-user@ip-XXXX ~]$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/xvda1             25G   25G     0 100% /
tmpfs                 4.0G     0  4.0G   0% /dev/shm

For some reason something is eating up my storage space.

I'm trying to find all of the big files/folders, and this is what I get back:

[ec2-user@ip-XXXX ~]$ sudo du -a / | sort -n -r | head -n 10
993580  /
639296  /usr
237284  /usr/share
217908  /usr/lib
206884  /opt
150236  /opt/app
150232  /opt/app/current
150224  /opt/app/current/[deleted].com
113432  /usr/lib64

How can I find out what's eating my storage space?

Answer 1:

Well, I think it's one (or more) log files that have grown too large and need to be removed or backed up. I would suggest going after the big files first, so find all files larger than 10 MB (10 MB is a big enough threshold to start with; you can similarly use +1M for 1 MB):

sudo find / -type f -size +10M -exec ls -lh {} \;

and now you can identify which ones are causing the trouble and deal with them accordingly.

As for your original du -a / | sort -n -r | head -n 10 command, that won't help much: since it sorts by size, every ancestor directory of a large file rises to the top of the list, while the individual file itself will most probably be pushed out of the top 10.
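
If you'd rather rank individual files instead of directories, a find-based variant along these lines should work (a sketch assuming GNU find, which Amazon Linux ships; -xdev keeps the scan on the root filesystem and 2>/dev/null hides permission noise):

sudo find / -xdev -type f -printf '%s %p\n' 2>/dev/null | sort -nr | head -n 10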

Note: it should be pretty simple to spot other similar log files or binaries in the same location as the files you find, so as a follow-up, cd into the directory containing the original file to clean up more files of the same kind. You can then iterate the command with smaller thresholds, for example files greater than 1 MB, and so on.
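
For example, if the big files turn out to live under /var/log (just a guess for illustration), listing that directory sorted by size shows its siblings immediately:

cd /var/log
sudo ls -lhS | head -n 10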



Answer 2:

If you are not able to find any gigantic file, killing some processes might solve the issue (it worked for me; read the full answer to see why).

Earlier:

/dev/xvda1       8256952 7837552         0 100% /

Now:

/dev/xvda1       8256952 1062780   6774744  14% /

Reason: if you rm <filename> while the file is still open by some process, the directory entry goes away but the space is not released, and the process can keep writing to the file. These ghost files can't be found by the find command, and their space isn't freed until the last process holding them open lets go. Use this command to find out which processes are using deleted files:

lsof +L1

Kill the processes to release the files. Sometimes it's difficult to kill all the processes using the file; in that case, try restarting the system (I don't feel great about it, but it's a quick solution that makes sure no process still uses the deleted file).
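
A minimal sketch of that workflow, assuming the culprit is a hypothetical service called myapp (substitute whatever lsof actually reports, and use sudo service myapp restart on older, non-systemd Amazon Linux):

sudo lsof +L1                    # files that are deleted but still held open
sudo systemctl restart myapp     # restarting the owning service closes the handle
df -h                            # the space should now be reported as free

If you cannot restart the process, truncating the deleted file through its open descriptor, e.g. sudo truncate -s 0 /proc/<PID>/fd/<FD> with the PID and FD columns taken from the lsof output, also reclaims the space.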

Read This: https://serverfault.com/questions/232525/df-in-linux-not-showing-correct-free-space-after-file-removal/232526



Answer 3:

At /, type du -hs * as root:

$ sudo su -
cd /; du -hs *

You will see the total size of each top-level directory and can identify the bigger ones.
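
If you want that output ranked largest-first, a variant along these lines works with GNU coreutils (2>/dev/null just hides permission and /proc noise):

sudo du -hs /* 2>/dev/null | sort -hr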



Answer 4:

In my case, this space was consumed by mail notifications.

You can check by running:

sudo find / -type f -size +1000M -exec ls -lh {} \;

This will list files larger than 1000 MB.

In my case the result included the file

/var/mail/username

You can free that space by truncating the file with the following command:

> /var/mail/username

Note that the greater-than (>) symbol is not a prompt; you have to run the command with it (it truncates the file to zero bytes).
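
Also note that the redirection in > /var/mail/username runs as your own user, so prefixing it with sudo does not help if you lack write permission on the file; either of these equivalents (a sketch, with username standing in for your actual user) truncates it as root:

sudo truncate -s 0 /var/mail/username
sudo sh -c '> /var/mail/username'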

Now check your free space with:

df -h

Now you have enough free space. Enjoy! :)



Answer 5:

ansh0l's answer is the way to go to find large files. But if you want to see how much space each directory in your file system is consuming, cd to the root directory, then run du -k --max-depth=1. This will show you how much space is being consumed by each subdirectory of the root directory. When you spot the culprit, cd into that directory, run the same command again, and repeat until you find the files that are consuming all of the space (see the sketch below).
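
A minimal sketch of that loop (the /var step is only an example of where it might lead):

cd / && sudo du -k --max-depth=1 2>/dev/null | sort -nr | head
cd /var && sudo du -k --max-depth=1 2>/dev/null | sort -nr | head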



Answer 6:

If you have any snapshots against the file system, the usage doesn't show up in the OS.

So the longer you leave your snapshot, the more disk it will consume on your current volume. If you delete the snapshot and then reboot, the missing disk capacity will reappear.