We currently use multiple webservers accessing one mysql server and fileserver. Looking at moving to the cloud, can I use this same setup and attach the EBS to multiple machine instances or what's another solution?
The short answer is a categorical "No". Others have said it above.
Those who said "yes" did not answer this question, but a different one. If EFS is just an NFS service, then it isn't the answer to the question as originally stated. And it doesn't matter whether EFS is "rolled out in all zones" or not, because you can run your own NFS instance quite easily and have multiple servers mount it. That isn't anything new; we were already doing it in 1992. SMB and sshfs are likewise just ways to mount drives as a remote file system.
Those who said "why would you want to do that" or "it will all end in tears" are wrong. We have been mounting multiple disks to multiple servers for decades. If you have ever worked with a SAN (Storage Area Network), you know that attaching the same device to multiple nodes, usually over FibreChannel, is completely normal. So anyone who ran servers a decade ago, before virtualization and cloud servers became ubiquitous, has had some exposure to that.
Soon there were clustered file systems, where two systems can read and write the very same volume. I believe this started back in the VAX and Alpha VMS era. Clustered file systems use a distributed mutual-exclusion scheme so that the nodes can manipulate blocks directly.
The advantage of mounting the same disk to multiple nodes is speed and reducing single points of failure.
Now, clustered file systems have not become hugely popular in the "consumer" hosting business, that is true. They are complicated and have some pitfalls. But you don't even need a clustered file system to make use of a disk attached to multiple compute nodes. What if you want a read-only drive? You just put the same physical device into your /etc/fstab as read-only (ro), mount it on 2 or 10 EC2 servers, and all of them can read directly from that device!
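As a sketch of that read-only setup (the device name /dev/xvdf and the mount point are placeholder assumptions; this is a configuration fragment, not something specific to any one distro):

```shell
# Hypothetical: the shared volume appears as /dev/xvdf on each instance.
# Put the same entry in /etc/fstab on every server that attaches it:
#
#   /dev/xvdf  /mnt/shared  ext4  ro  0  0
#
# Then mount it read-only on each one:
sudo mkdir -p /mnt/shared
sudo mount -o ro /dev/xvdf /mnt/shared
```

The `ro` option is what keeps this safe: no node ever writes, so there is no cache-coherency problem to solve.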
There is an obvious use case for this in the world of cloud servers when building rapidly scaling farms. You can have your main system disk fully prepared and use just a very small boot and configuration disk for each of the servers. You can even have all of them boot from the same boot disk, and right before the remount of / in read-write mode, insert a Union-FS with 3 layers: the shared read-only system disk at the bottom, a small per-instance configuration disk, and a writable scratch layer (e.g. a tmpfs) on top.
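One way to sketch such a three-layer stack on Linux is with overlayfs: a read-only shared system disk, a per-instance config disk, and a writable tmpfs on top. All paths here are placeholder assumptions, and the real boot-time plumbing would live in an initramfs script:

```shell
# Assumed already mounted read-only:
#   /mnt/sysdisk  - the shared system disk (bottom layer)
#   /mnt/config   - the small per-instance config disk (middle layer)
# Create a writable tmpfs for the top layer plus overlayfs's work dir:
mount -t tmpfs tmpfs /run/rw
mkdir -p /run/rw/upper /run/rw/work
# Stack the layers; in lowerdir, the leftmost entry sits on top:
mount -t overlay overlay \
  -o lowerdir=/mnt/config:/mnt/sysdisk,upperdir=/run/rw/upper,workdir=/run/rw/work \
  /mnt/newroot
# /mnt/newroot can now be switched to as the root filesystem.
```

Writes land only in the per-instance tmpfs, so every server can share one pristine system disk.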
So, yes, the question made a lot of sense, and unfortunately the answer is (still) "No". And no, NFS is not a great replacement for that use case, because it penalizes all read activity from the system disk. Network boot from an NFS system disk is the only alternative for implementing the use case I described above, and unfortunately setting up a network boot agent and NFS is much trickier than just accessing the same physical block device.
PS: I would have liked to submit a shorter version of this as a comment, but I cannot because of the silly 51-point threshold, so I have to write an answer with the same essential "No", but including my point about why this is a relevant question that has not received the answer it deserves.
PPS: I just saw someone over at StackExchange mention iSCSI. iSCSI works over the network somewhat like NFS, but logically it is like a FibreChannel SAN: you get to access (and share) physical block devices. It would make boot-disk sharing easier, since you wouldn't need to set up bootp network booting, which can be finicky. But then, AWS doesn't offer network booting either.
Why not create one instance with a volume and use sshfs to mount that volume on the other instances?
You can totally use one drive on multiple servers in AWS. I use sshfs to mount an external drive and share it with multiple EC2 servers.
The reason I needed to connect a single drive to multiple servers was to have a single place to put all my backups before pulling them down locally.
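A sketch of that sshfs setup (the host name `storage-host`, user, and paths are placeholders; each server runs something like this against the instance that owns the volume):

```shell
# Mount the storage instance's backup directory over SSH.
# reconnect + ServerAliveInterval keep the mount alive across network blips.
sshfs backup@storage-host:/srv/backups /mnt/backups \
  -o reconnect,ServerAliveInterval=15

# ... write backups into /mnt/backups as if it were local ...

# Unmount when finished:
fusermount -u /mnt/backups
```

Note this is still a network file system under the hood, so it shares NFS's latency characteristics; it's just much less setup.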
There is something in the IT world known as a clustered file system: Red Hat GFS, Oracle OCFS2, Veritas CFS...
No; that would be like plugging a single hard drive into two computers at once.
If you want shared data, you can set up a server that all your instances can access. If you want a simple storage area for all your instances, you can use Amazon's S3 storage service, which is distributed and scalable.
Moving to the cloud, you can keep the exact same setup, but you could possibly replace the fileserver with S3, or have all your instances connect to your fileserver.
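If you go the S3 route, moving files is a couple of AWS CLI commands (the bucket name `my-shared-assets` and paths are placeholders, and this assumes the CLI is already configured with credentials):

```shell
# Copy a single file up to the bucket:
aws s3 cp /var/www/uploads/report.pdf s3://my-shared-assets/uploads/report.pdf

# Or push a whole directory, copying only what changed:
aws s3 sync /var/www/uploads s3://my-shared-assets/uploads
```

Every web server can then read the same objects over HTTP without any shared block device at all.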
You have a lot of options, but sharing a hard drive between instances is probably not the best option.
No, according to the EBS docs: "A volume can only be attached to one instance at a time".
How are you using the shared storage currently? If it's just for serving files from the fileserver, have you considered setting up a system so that you could proxy certain requests to a process on the fileserver rather than having the webservers serve those files?