Is there a way to connect to an Amazon S3 bucket with FTP or SFTP rather than the built-in Amazon file transfer interface in the AWS console? Seems odd that this isn't a readily available option.
Well, S3 isn't FTP. There are lots and lots of clients that support S3, however.
Pretty much every notable FTP client on OS X has support, including Transmit and Cyberduck.
If you're on Windows, take a look at Cyberduck or CloudBerry.
Or spin up a Linux instance running SFTP Gateway in your AWS infrastructure that saves uploaded files to your Amazon S3 bucket. It is supported by Thorntech.
FileZilla just released a Pro version of their FTP client. It connects to S3 buckets in a streamlined, FTP-like experience. I use it myself (no affiliation whatsoever) and it works great.
Update
AWS now offers a fully-managed SFTP gateway service for S3 (AWS Transfer for SFTP) that integrates with IAM and can be administered using aws-cli.
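For example, a rough aws-cli sketch of setting one up (the server ID, role ARN, bucket path, and public key below are placeholders):

```
# Create a managed SFTP endpoint with service-managed users.
aws transfer create-server --identity-provider-type SERVICE_MANAGED

# Note the generated server ID (s-xxxxxxxxxxxx).
aws transfer list-servers

# Add an SFTP user backed by an IAM role that grants access to the bucket.
aws transfer create-user \
  --server-id s-1234567890abcdef0 \
  --user-name alice \
  --role arn:aws:iam::123456789012:role/my-sftp-s3-role \
  --home-directory /my-bucket/home/alice \
  --ssh-public-key-body "ssh-rsa AAAA... alice@example.com"
```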
There are theoretical and practical reasons why this isn't a perfect solution, but it does work...
You can install an FTP/SFTP service (such as proftpd) on a Linux server, either in EC2 or in your own data center... then mount a bucket into the filesystem where the FTP server is configured to chroot, using s3fs.
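A minimal sketch of that setup, assuming Debian/Ubuntu package names and a hypothetical bucket my-bucket mounted at /srv/ftp as the chroot target:

```
# Install the FTP daemon and the s3fs FUSE client.
sudo apt-get install proftpd s3fs

# Store the bucket credentials where s3fs expects them.
echo 'ACCESS_KEY_ID:SECRET_ACCESS_KEY' | sudo tee /etc/passwd-s3fs
sudo chmod 600 /etc/passwd-s3fs

# Mount the bucket at the directory proftpd will chroot users into.
sudo mkdir -p /srv/ftp
sudo s3fs my-bucket /srv/ftp -o passwd_file=/etc/passwd-s3fs -o allow_other
```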
I have a client that serves content out of S3, and the content is provided to them by a 3rd party who only supports ftp pushes... so, with some hesitation (due to the impedance mismatch between S3 and an actual filesystem) but lacking the time to write a proper FTP/S3 gateway server software package (which I still intend to do one of these days), I proposed and deployed this solution for them several months ago and they have not reported any problems with the system.
As a bonus, since proftpd can chroot each user into their own home directory and "pretend" (as far as the user can tell) that files owned by the proftpd user are actually owned by the logged in user, this segregates each ftp user into a "subdirectory" of the bucket, and makes the other users' files inaccessible.
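The ProFTPd side of that is roughly the following directives (a sketch against a stock proftpd.conf; DirFakeUser/DirFakeGroup are what mask the real file ownership in listings):

```
# /etc/proftpd/proftpd.conf (excerpt)
DefaultRoot  ~        # chroot each user into their own home directory
DirFakeUser  on ~     # display files as owned by the logged-in user
DirFakeGroup on ~
```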
There is a problem with the default configuration, however.
Once you start to get a few tens or hundreds of files, the problem will manifest itself when you pull a directory listing, because ProFTPd will attempt to read the .ftpaccess files over, and over, and over again, and for each file in the directory, .ftpaccess is checked to see if the user should be allowed to view it.
You can disable this behavior in ProFTPd, but I would suggest that the most correct configuration is to configure the additional options -o enable_noobj_cache -o stat_cache_expire=30 in s3fs.
Without the stat_cache_expire option, you'll make fewer requests to S3, but you also will not always reliably discover changes made to objects if external processes or other instances of s3fs are also modifying the objects in the bucket. The value "30" in my system was selected somewhat arbitrarily.
The enable_noobj_cache option allows s3fs to remember that .ftpaccess wasn't there.
Unrelated to the performance issues that can arise with ProFTPd, which are resolved by the above changes, you also need to enable -o enable_content_md5 in s3fs.
This is an option which never should have been an option -- it should always be enabled, because not doing so bypasses a critical integrity check for only a negligible performance benefit. When an object is uploaded to S3 with a Content-MD5: header, S3 will validate the checksum and reject the object if it's corrupted in transit. However unlikely that might be, it seems short-sighted to disable this safety check.
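Putting those pieces together, the s3fs mount for the FTP tree ends up looking roughly like this (bucket name, mount point, and credentials file path are placeholders):

```
# Mount the bucket with the cache and integrity options discussed above.
s3fs my-bucket /srv/ftp \
  -o passwd_file=/etc/passwd-s3fs \
  -o allow_other \
  -o enable_noobj_cache \
  -o stat_cache_expire=30 \
  -o enable_content_md5
```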
There are three options. You can use the native managed SFTP service recently added by Amazon (which is easier to set up). Or you can mount the bucket to a file system on a Linux server and access the files over SFTP just like any other files on the server (which gives you greater control). Or you can just use a (GUI) client that natively supports the S3 protocol.
Managed SFTP Service
In your Amazon AWS Console, go to AWS Transfer for SFTP and create a new server.
On the SFTP server page, add a new SFTP user (or users).
Permissions of users are governed by an associated AWS role in the IAM service (for a quick start, you can use the AmazonS3FullAccess policy).
The role must have a trust relationship to transfer.amazonaws.com.
For details, see my guide Setting up an SFTP access to Amazon S3.
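For reference, a sketch of the IAM side with aws-cli (the role name is illustrative; AmazonS3FullAccess is only meant for a quick start):

```
# Create a role that AWS Transfer for SFTP is allowed to assume.
aws iam create-role \
  --role-name my-sftp-s3-role \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": { "Service": "transfer.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }]
  }'

# Grant the role S3 access (full access for a quick start; scope it down later).
aws iam attach-role-policy \
  --role-name my-sftp-s3-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
```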
Mounting Bucket to Linux Server
Just mount the bucket using the s3fs file system (or similar) to a Linux server (e.g. Amazon EC2) and use the server's built-in SFTP server to access the bucket:
- Install s3fs.
- Add your security credentials in the form access-key-id:secret-access-key to /etc/passwd-s3fs.
- Add a bucket mounting entry to fstab (see the sketch below).
For details, see my guide Setting up an SFTP access to Amazon S3.
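For illustration, a possible fstab entry for a hypothetical bucket my-bucket mounted at /mnt/my-bucket:

```
# /etc/fstab -- mount the bucket via s3fs on boot
my-bucket /mnt/my-bucket fuse.s3fs _netdev,allow_other 0 0
```

After adding the entry, create the mount point and run "sudo mount /mnt/my-bucket" to test it.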
Use S3 Client
Or use any free "FTP/SFTP client" that's also an "S3 client", and you do not have to set up anything on the server side. For example, my WinSCP or Cyberduck.