Question:
I used to be a happy s3cmd user. However, recently when I try to transfer a large zip file (~7 GB) to Amazon S3, I get this error:
$> s3cmd put thefile.tgz s3://thebucket/thefile.tgz
....
20480 of 7563176329 0% in 1s 14.97 kB/s failed
WARNING: Upload failed: /thefile.tgz ([Errno 32] Broken pipe)
WARNING: Retrying on lower speed (throttle=1.25)
WARNING: Waiting 15 sec...
thefile.tgz -> s3://thebucket/thefile.tgz [1 of 1]
8192 of 7563176329 0% in 1s 5.57 kB/s failed
ERROR: Upload of 'thefile.tgz' failed too many times. Skipping that file.
I am using the latest s3cmd on Ubuntu.
Why is this happening, and how can I solve it? If it can't be resolved, what alternative tool can I use?
Answer 1:
And now, in 2014, the AWS CLI can upload big files, in place of s3cmd.
http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-set-up.html has install / configure instructions, or often:
$ wget https://s3.amazonaws.com/aws-cli/awscli-bundle.zip
$ unzip awscli-bundle.zip
$ sudo ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws
$ aws configure
followed by
$ aws s3 cp local_file.tgz s3://thereoncewasans3bucket
will get you satisfactory results.
Answer 2:
I've just come across this problem myself. I had a 24 GB .tar.gz file to put into S3.
Uploading smaller pieces will help.
There is also a ~5 GB file size limit, so I split the file into pieces that can be re-assembled when they are downloaded later.
split -b100m ../input-24GB-file.tar.gz input-24GB-file.tar.gz-
The last part of that line is a 'prefix'. split will append 'aa', 'ab', 'ac', etc. to it. The -b100m option means 100 MB chunks. A 24 GB file will end up with about 240 100 MB parts, named 'input-24GB-file.tar.gz-aa' through 'input-24GB-file.tar.gz-jf'.
To combine them later, download them all into a directory and:
cat input-24GB-file.tar.gz-* > input-24GB-file.tar.gz
It is also valuable to take md5sums of the original and split files and store them in the S3 bucket, or better (if the archive is not too big) to use a system like parchive, so you can check for, and even fix, some download problems.
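As a sanity check, the split-and-reassemble workflow above can be verified end to end. This Python sketch (file names and sizes are illustrative; in practice you would use split and cat as shown) does the splitting and the md5 comparison in-process:

```python
import hashlib
import os

def md5_of(path, chunk=1 << 20):
    """Compute the md5 hex digest of a file, reading in 1 MB chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

# Create a small sample file standing in for the large archive.
with open("input.bin", "wb") as f:
    f.write(os.urandom(3 * 1024 * 1024))
orig_sum = md5_of("input.bin")  # the checksum you would store in the bucket

# Split into 1 MB parts (the equivalent of `split -b1m`).
part_size = 1024 * 1024
parts = 0
with open("input.bin", "rb") as src:
    while True:
        data = src.read(part_size)
        if not data:
            break
        with open("input.bin-%02d" % parts, "wb") as part:
            part.write(data)
        parts += 1

# Reassemble (the equivalent of `cat input.bin-* > restored.bin`).
with open("restored.bin", "wb") as dst:
    for j in range(parts):
        with open("input.bin-%02d" % j, "rb") as part:
            dst.write(part.read())

assert md5_of("restored.bin") == orig_sum  # checksums match after reassembly
```

The same comparison against the stored checksum after downloading the parts confirms the transfer was lossless.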
Answer 3:
I tried all of the other answers but none worked. It looks like s3cmd is fairly sensitive.
In my case the s3 bucket was in the EU. Small files would upload but when it got to ~60k it always failed.
When I changed ~/.s3cfg it worked.
Here are the changes I made:
host_base = s3-eu-west-1.amazonaws.com
host_bucket = %(bucket)s.s3-eu-west-1.amazonaws.com
Answer 4:
I had the same problem with s3cmd on Ubuntu.
s3cmd --guess-mime-type --acl-public put test.zip s3://www.jaumebarcelo.info/teaching/lxs/test.zip
test.zip -> s3://www.jaumebarcelo.info/teaching/lxs/test.zip [1 of 1]
13037568 of 14456364 90% in 730s 17.44 kB/s failed
WARNING: Upload failed: /teaching/lxs/test.zip (timed out)
WARNING: Retrying on lower speed (throttle=0.00)
WARNING: Waiting 3 sec...
test.zip -> s3://www.jaumebarcelo.info/teaching/lxs/test.zip [1 of 1]
2916352 of 14456364 20% in 182s 15.64 kB/s failed
WARNING: Upload failed: /teaching/lxs/test.zip (timed out)
WARNING: Retrying on lower speed (throttle=0.01)
WARNING: Waiting 6 sec...
The solution was to update s3cmd with the instructions from s3tools.org:
Debian & Ubuntu
Our DEB repository has been carefully created in the most compatible
way – it should work for Debian 5 (Lenny), Debian 6 (Squeeze), Ubuntu
10.04 LTS (Lucid Lynx) and for all newer and possibly for some older Ubuntu releases. Follow these steps from the command line:
Import S3tools signing key:
wget -O- -q http://s3tools.org/repo/deb-all/stable/s3tools.key | sudo apt-key add -
Add the repo to sources.list:
sudo wget -O/etc/apt/sources.list.d/s3tools.list http://s3tools.org/repo/deb-all/stable/s3tools.list
Refresh package cache and install the newest s3cmd:
sudo apt-get update && sudo apt-get install s3cmd
Answer 5:
This error occurs when Amazon returns an error: it seems to then disconnect the socket to keep you from uploading gigabytes of request only to get back "no, that failed" in response. This is why some people get it due to clock skew, some due to policy errors, and others run into size limitations requiring the multipart upload API. It isn't that everyone is wrong, or even looking at different problems: these are all different symptoms of the same underlying behavior in s3cmd.
As most error conditions are deterministic, s3cmd's behavior of throwing away the error message and retrying more slowly is unfortunate. To get the actual error message, you can go into /usr/share/s3cmd/S3/S3.py (remembering to delete the corresponding .pyc so your changes are used) and add a print e in the send_file function's except Exception, e: block.
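The effect of that one-line patch can be illustrated with a toy retry loop. This is a hypothetical analogue, not s3cmd's actual code; the point is simply that printing the caught exception before retrying surfaces the real error:

```python
def send_file(upload, retries=3):
    """Toy analogue of s3cmd's send_file retry loop (names illustrative)."""
    for _ in range(retries):
        try:
            return upload()
        except Exception as e:
            print(e)  # the added line: show why this attempt actually failed
    raise RuntimeError("Upload failed too many times")
```

With the print in place, a signature or policy error shows up on every attempt instead of being silently swallowed by the retry-slower logic.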
In my case, I was trying to set the Content-Type of the uploaded file to "application/x-debian-package". Apparently, s3cmd's S3.object_put 1) does not honor a Content-Type passed via --add-header and yet 2) fails to overwrite the Content-Type added via --add-header as it stores headers in a dictionary with case-sensitive keys. The result is that it does a signature calculation using its value of "content-type" and then ends up (at least with many requests; this might be based on some kind of hash ordering somewhere) sending "Content-Type" to Amazon, leading to the signature error.
In my specific case today, it seems like -M would cause s3cmd to guess the right Content-Type, but it seems to do that based on filename alone... I would have hoped that it would use the mimemagic database based on the contents of the file. Honestly, though: s3cmd doesn't even manage to return a failed shell exit status when it fails to upload the file, so combined with all of these other issues it is probably better to just write your own one-off tool to do the one thing you need... it is almost certain that in the end it will save you time when you get bitten by some corner-case of this tool :(.
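The case-sensitivity pitfall described above is easy to reproduce with a plain dictionary, which is presumably how s3cmd stores its headers (the header values here are illustrative):

```python
# A plain dict treats "content-type" and "Content-Type" as different keys,
# so the --add-header value does not replace the default; both survive.
headers = {"content-type": "binary/octet-stream"}         # tool's default key
headers["Content-Type"] = "application/x-debian-package"  # user's --add-header
print(len(headers))  # 2 entries: the intended override silently failed
```

If the signature is then computed over one key while the other is sent on the wire, the request signature no longer matches, which is consistent with the behavior described above.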
Answer 6:
s3cmd 1.0.0 does not support multi-part yet. I tried 1.1.0-beta and it works just fine. You can read about the new features here: http://s3tools.org/s3cmd-110b2-released
Answer 7:
In my case the reason for the failure was the server's time being ahead of the S3 time: my server (located in US East) was set to GMT+4 while I was using Amazon's US East storage facility.
After adjusting my server to US East time, the problem was gone.
Answer 8:
I experienced the same issue; it turned out to be a bad bucket_location value in ~/.s3cfg.
This blog post led me to the answer:
"If the bucket you're uploading to doesn't exist (or you mistyped it) it'll fail with that error. Thank you, generic error message." (http://jeremyshapiro.com/blog/2011/02/errno-32-broken-pipe-in-s3cmd/)
After inspecting my ~/.s3cfg I saw that it had:
bucket_location = Sydney
Rather than:
bucket_location = ap-southeast-2
Correcting this value to use the proper name(s) solved the issue.
Answer 9:
For me, the following worked:
In .s3cfg, I changed the host_bucket setting:
host_bucket = %(bucket)s.s3-external-3.amazonaws.com
Answer 10:
s3cmd version 1.1.0-beta3 or newer will automatically use multipart uploads to allow sending arbitrarily large files (source). You can control the chunk size it uses, too, e.g.
s3cmd --multipart-chunk-size-mb=1000 put hugefile.tar.gz s3://mybucket/dir/
This will do the upload in 1 GB chunks.
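As a quick estimate of how many parts a given chunk size produces (the 7 GB figure is taken from the question; the helper name is made up for illustration):

```python
import math

def num_parts(file_size_bytes, chunk_mb):
    """How many multipart chunks a file needs at a given chunk size in MB."""
    return math.ceil(file_size_bytes / (chunk_mb * 1024 * 1024))

print(num_parts(7563176329, 1000))  # the ~7 GB file from the question -> 8
```

Smaller chunks mean more parts but cheaper retries when an individual part fails.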
Answer 11:
I encountered the same broken pipe error because the bucket's security policy was set up incorrectly. I blame the S3 documentation.
I wrote about how to set the policy correctly on my blog; it is:
{
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:ListBucket",
"s3:GetBucketLocation",
"s3:ListBucketMultipartUploads"
],
"Resource": "arn:aws:s3:::example_bucket",
"Condition": {}
},
{
"Effect": "Allow",
"Action": [
"s3:AbortMultipartUpload",
"s3:DeleteObject",
"s3:DeleteObjectVersion",
"s3:GetObject",
"s3:GetObjectAcl",
"s3:GetObjectVersion",
"s3:GetObjectVersionAcl",
"s3:PutObject",
"s3:PutObjectAcl",
"s3:PutObjectVersionAcl"
],
"Resource": "arn:aws:s3:::example_bucket/*",
"Condition": {}
}
]
}
Answer 12:
In my case, I fixed this just by adding the right permissions:
Bucket > Properties > Permissions
"Authenticated Users"
- List
- Upload/Delete
- Edit Permissions
Answer 13:
I encountered a similar error which eventually turned out to be caused by a time drift on the machine. Correctly setting the time fixed the issue for me.
Answer 14:
Look for the .s3cfg file, generally in your home folder.
If you have it, you've found the villain. Changing the following two parameters should help you:
socket_timeout = 1000
multipart_chunk_size_mb = 15
Answer 15:
I addressed this by simply not using s3cmd. Instead, I've had great success with the Python project S3-Multipart on GitHub. It does uploading and downloading, using as many threads as desired.