I used to be a happy s3cmd user. However, recently when I try to transfer a large zip file (~7 GB) to Amazon S3, I get this error:
$> s3cmd put thefile.tgz s3://thebucket/thefile.tgz
....
20480 of 7563176329 0% in 1s 14.97 kB/s failed
WARNING: Upload failed: /thefile.tgz ([Errno 32] Broken pipe)
WARNING: Retrying on lower speed (throttle=1.25)
WARNING: Waiting 15 sec...
thefile.tgz -> s3://thebucket/thefile.tgz [1 of 1]
8192 of 7563176329 0% in 1s 5.57 kB/s failed
ERROR: Upload of 'thefile.tgz' failed too many times. Skipping that file.
I am using the latest s3cmd on Ubuntu.
Why is this happening, and how can I solve it? If it cannot be resolved, what alternative tool can I use?
I tried all of the other answers, but none worked. It looks like s3cmd is fairly sensitive. In my case the S3 bucket was in the EU. Small files would upload, but once the upload reached ~60 kB it always failed.
When I changed ~/.s3cfg it worked.
Here are the changes I made:
host_base = s3-eu-west-1.amazonaws.com
host_bucket = %(bucket)s.s3-eu-west-1.amazonaws.com
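If you are not sure which region a bucket actually lives in, recent s3cmd versions can tell you; the Location line in the output is what you want (bucket name below is just the one from the question):
$> s3cmd info s3://thebucket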
I encountered a similar error which eventually turned out to be caused by time drift on the machine. Correctly setting the time fixed the issue for me.
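For background, S3 rejects requests whose timestamp is too far from the server's clock (a RequestTimeTooSkewed error once the skew passes roughly 15 minutes), and s3cmd can surface that as a generic failed upload. A quick fix on Ubuntu, assuming the ntpdate package is acceptable on your machine:
$> sudo apt-get install ntpdate
$> sudo ntpdate pool.ntp.org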
And now, in 2014, the aws cli can upload big files in lieu of s3cmd.
http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-set-up.html has install / configure instructions; often, installing the CLI and then copying the file with it (see the sketch below) will get you satisfactory results.
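A minimal sketch, assuming pip is available and reusing the file and bucket names from the question (the CLI splits large uploads into multipart requests on its own):
$> sudo pip install awscli    # or: sudo apt-get install awscli
$> aws configure              # enter access key, secret key, and default region
$> aws s3 cp thefile.tgz s3://thebucket/thefile.tgz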
In my case, I fixed this just by adding the right permissions.
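For reference, "the right permissions" typically means the credentials s3cmd uses must be allowed s3:PutObject on the target bucket. A minimal IAM policy sketch, with the bucket name taken from the question (deliberately broad; trim it to what you actually need):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetObject", "s3:PutObject"],
      "Resource": ["arn:aws:s3:::thebucket", "arn:aws:s3:::thebucket/*"]
    }
  ]
}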
I addressed this by simply not using s3cmd. Instead, I've had great success with the Python project S3-Multipart on GitHub. It handles both uploading and downloading, and can use as many threads as desired.
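A usage sketch; the script name and the -np (number of parallel processes) flag below are my recollection of that project and may differ, so check its README:
$> python s3-mp-upload.py -np 8 thefile.tgz s3://thebucket/thefile.tgz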
s3cmd 1.0.0 does not support multipart uploads yet. I tried 1.1.0-beta and it works just fine. You can read about the new features here: http://s3tools.org/s3cmd-110b2-released
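If you go the 1.1.0-beta route, you can also tune the part size with the --multipart-chunk-size-mb option it introduced (the default is 15 MB); for example:
$> s3cmd --version    # confirm you are on 1.1.0-beta or newer
$> s3cmd put --multipart-chunk-size-mb=50 thefile.tgz s3://thebucket/thefile.tgz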