I had some unknown issue with my old EC2 instance and can no longer SSH into it. So I created a new EBS volume from a snapshot of the old volume and tried to attach and mount it on a new instance. Here is what I did:
- Created a new volume from a snapshot of the old one.
- Created a new EC2 instance and attached the volume to it as /dev/xvdf (or /dev/sdf).
- SSHed into the instance and attempted to mount the old volume with:
$ sudo mkdir -m 000 /vol
$ sudo mount /dev/xvdf /vol
And the output was:
mount: block device /dev/xvdf is write-protected, mounting read-only
mount: you must specify the filesystem type
Now, I know I should specify the filesystem type as ext4, but since the volume contains a lot of important data, I cannot just format it with $ sudo mkfs -t ext4 /dev/xvdf. Still, I know of no other way of preserving the data while specifying the filesystem at the same time. I've searched a lot about this and I'm currently at a loss.
By the way, the mounting as 'read-only' also worries me, but I haven't looked into it yet since I can't mount the volume at all.
Thanks in advance!
Edit:
When I do sudo mount /dev/xvdf /vol -t ext4
(no formatting) I get:
mount: wrong fs type, bad option, bad superblock on /dev/xvdf,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so
And dmesg | tail
gives me:
[ 1433.217915] EXT4-fs (xvdf): VFS: Can't find ext4 filesystem
[ 1433.222107] FAT-fs (xvdf): bogus number of reserved sectors
[ 1433.226127] FAT-fs (xvdf): Can't find a valid FAT filesystem
[ 1433.260752] EXT4-fs (xvdf): VFS: Can't find ext4 filesystem
[ 1433.265563] EXT4-fs (xvdf): VFS: Can't find ext4 filesystem
[ 1433.270477] EXT4-fs (xvdf): VFS: Can't find ext4 filesystem
[ 1433.274549] FAT-fs (xvdf): bogus number of reserved sectors
[ 1433.277632] FAT-fs (xvdf): Can't find a valid FAT filesystem
[ 1433.306549] ISOFS: Unable to identify CD-ROM format.
[ 2373.694570] EXT4-fs (xvdf): VFS: Can't find ext4 filesystem
I encountered this problem, and I've figured it out now: you should mount the partition, not the disk.
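For example, assuming the data lives on a partition /dev/xvdf1 (check with lsblk first; the partition name is an assumption):

```shell
# List block devices; a partitioned disk shows child entries such as xvdf1
lsblk
# Mount the partition (xvdf1), not the whole disk (xvdf)
sudo mount /dev/xvdf1 /vol
```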
I encountered this problem too after adding a new 16 GB volume and attaching it to an existing instance. First of all, you need to know what disks are present.
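A typical way to list them (assuming a standard Linux AMI with util-linux installed):

```shell
# Show every attached block device with its size and mount point
lsblk
```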
You'll get output detailing information about your disks (volumes).
As you can see, the newly added disk /dev/xvdf is present. To make it available, you need to create a filesystem on it and mount it at a mount point.
Note that making a new filesystem clears everything on the volume, so only do this on a fresh volume without important data.
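For example, to create an ext4 filesystem (the device name here is assumed from the listing above):

```shell
# DESTRUCTIVE: writes a brand-new ext4 filesystem over /dev/xvdf,
# erasing anything already on the volume
sudo mkfs -t ext4 /dev/xvdf
```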
Then mount it, perhaps at a directory under the /mnt folder.
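A sketch, with /mnt/data as an arbitrary example mount point:

```shell
# Create the mount point and mount the freshly formatted volume there
sudo mkdir -p /mnt/data
sudo mount /dev/xvdf /mnt/data
```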
Confirm that you have mounted the volume to the instance by listing the mounted filesystems; the new mount point should appear in the output.
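For instance:

```shell
# List mounted filesystems with human-readable sizes;
# the new mount point should appear in the output
df -h
```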
And that's it: the volume is attached to your existing instance and ready for use.
The One Liner
Use this command to mount it if your filesystem type is ext4:
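Assuming the device and mount point from the question:

```shell
sudo mount /dev/xvdf /vol -t ext4
```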
Many people have success with the following (if disk is partitioned):
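Note the trailing 1, which selects the first partition instead of the raw disk:

```shell
sudo mount /dev/xvdf1 /vol -t ext4
```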
where:
- /dev/xvdf is changed to the EBS volume device being mounted
- /vol is changed to the folder you want to mount to
- ext4 is the filesystem type of the volume being mounted

Common Mistakes How To:
Attached Devices List
Check your mount command for correct EBS Volume device names and filesystem types. The following will list them all:
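For example (the -f flag is an addition here; it adds a FSTYPE column showing each device's filesystem):

```shell
# List block devices along with their filesystem types
lsblk -f
```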
If your EBS volume displays with an attached partition, mount the partition, not the disk. If it doesn't show at all, you didn't attach your EBS volume in the AWS web console.

Auto Remounting on Reboot
These devices become unmounted again if the EC2 Instance ever reboots.
A way to make them mount again upon startup is to edit a startup script on the server and insert just the single mount command that you originally used.
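On many distributions that startup script is /etc/rc.local (an assumption here, suggested by the exit 0 reference), which would end up looking roughly like:

```shell
#!/bin/sh -e
# /etc/rc.local: runs as root at the end of boot, so no sudo is needed
mount /dev/xvdf /vol -t ext4
exit 0
```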
(Place your change above exit 0, the last line in this file.)

I had a different issue. When I checked the dmesg logs, the problem was that the UUID of this volume was the same as the UUID of the root volume of another EC2 instance. To fix this, I mounted it on a different Linux-type EC2 instance, and it worked.
I noticed that for some reason the volume was located at /dev/xvdf1, not /dev/xvdf. Using the partition device instead worked like a charm.
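Presumably something like:

```shell
sudo mount /dev/xvdf1 /vol -t ext4
```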
You do not need to create a filesystem on a volume newly created from a snapshot. Simply attach the volume and mount it at whatever folder you want. I attached the new volume at the same location as the previously deleted volume and it worked fine.