Amazon EC2 and EBS disk space problem

Published: 2019-03-11 03:48

Question:

I am having a problem reconciling the space available on my EBS volume. According to the AWS console the volume is 50GB and is attached to an instance.

If I ssh to this instance and do a df -h, I get the following output:

Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1              15G   13G  3.0G  81% /
udev                  858M   76K  858M   1% /dev
none                  858M     0  858M   0% /dev/shm
none                  858M   72K  858M   1% /var/run
none                  858M     0  858M   0% /var/lock
none                  858M     0  858M   0% /lib/init/rw

I am pretty new to AWS. I read this as: "there is a device attached and it has 15 GB capacity. What's more, you're nearly out of space!"

Can anyone point out the cause of the apparent discrepancy between the space advertised in the console and what is displayed on the instance?

Many thanks in advance

S

Answer 1:

Perhaps the original 15 GB volume was cloned into a 50 GB volume, but the filesystem on it was never resized to match?

Please see this tutorial on how to clone and resize: How to increase disk space on existing AWS EC2 Linux (Ubuntu) Instance without losing data
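A quick way to confirm this on the instance is to compare the size of the block device with the size of the filesystem (a minimal check; /dev/sda1 is taken from the df output above):

lsblk         # size of the underlying block device -- should report 50G
df -h /       # size of the filesystem on it -- here only 15G

If lsblk shows 50G while df only shows 15G, the volume was enlarged (or cloned larger) but the filesystem was never grown to fill it.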

Hope that helps.



Answer 2:

Yes, the issue is simple: the volume is only attached to the instance, but not mounted.

Check in the AWS console which device it is attached as -- most likely /dev/sdf.

Then (on ubuntu):

sudo mkfs.ext3 /dev/sdf
sudo mkdir /ebs
sudo mount /dev/sdf /ebs

The first command formats the volume with the ext3 file system. That is a fairly standard choice, but depending on your usage (e.g. app server, database server, ...) you could also pick another one such as ext4 or xfs.

The second command creates a mount point and the third mounts the volume onto it. Effectively, the new volume will be available at /ebs, and it should also show up in df now.

Last but not least, consider adding an entry to /etc/fstab so the mount survives a reboot.
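For example, a minimal /etc/fstab line for the setup above (assuming /dev/sdf and ext3 as in the commands; the nofail option keeps the instance bootable even if the volume is missing):

/dev/sdf   /ebs   ext3   defaults,nofail   0   2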



Answer 3:

Here is the simple way...

Assuming that you are using a Linux AMI, you have an easy method for increasing the size of the file system:

1) Stop the instance
2) Detach the root volume
3) Snapshot the volume
4) Create a new volume from the snapshot, using the new size
5) Attach the new volume to the instance at the same device name as the original one
6) Start the instance, stop all services except ssh, and set the root filesystem read-only
7) Enlarge the filesystem (using, for example, resize2fs) and/or the partition if needed
8) Reboot
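Roughly, steps 1 to 7 could be done with the AWS CLI like this (only a sketch -- the instance, volume and snapshot IDs, the availability zone, and the device name are placeholders to replace with your own values):

aws ec2 stop-instances  --instance-ids i-0123456789abcdef0
aws ec2 detach-volume   --volume-id vol-0aaaaaaaaaaaaaaaa
aws ec2 create-snapshot --volume-id vol-0aaaaaaaaaaaaaaaa --description "root volume before resize"
# wait for the snapshot to reach the "completed" state (aws ec2 describe-snapshots), then:
aws ec2 create-volume   --snapshot-id snap-0bbbbbbbbbbbbbbbb --size 50 --availability-zone us-east-1a
aws ec2 attach-volume   --volume-id vol-0ccccccccccccccccc --instance-id i-0123456789abcdef0 --device /dev/sda1
aws ec2 start-instances --instance-ids i-0123456789abcdef0
# on the instance, grow the filesystem to fill the larger volume:
sudo resize2fs /dev/sda1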

As an alternative, you can also launch a new instance and use the instance storage, or you can create a new AMI combining the two previous steps.



Answer 4:

The rest of your space is mounted by default at /mnt.
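A quick way to check what, if anything, is mounted there:

df -h /mnt    # shows the device and size backing /mnt, if it is a separate mount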



Answer 5:

See Resizing the Root Disk on a Running EBS Boot EC2 Instance



Answer 6:

Simply rebooting the instance solved my problem.

Earlier:

/dev/xvda1       8256952 7837552         0 100% /
udev              299044       8    299036   1% /dev
tmpfs             121892     164    121728   1% /run
none                5120       0      5120   0% /run/lock
none              304724       0    304724   0% /run/shm

Now:

/dev/xvda1       8256952 1062780   6774744  14% /
udev              299044       8    299036   1% /dev
tmpfs             121892     160    121732   1% /run
none                5120       0      5120   0% /run/lock
none              304724       0    304724   0% /run/shm