I used to be a happy s3cmd user. However, recently when I try to transfer a large archive (~7 GB) to Amazon S3, I get this error:
$> s3cmd put thefile.tgz s3://thebucket/thefile.tgz
....
20480 of 7563176329 0% in 1s 14.97 kB/s failed
WARNING: Upload failed: /thefile.tgz ([Errno 32] Broken pipe)
WARNING: Retrying on lower speed (throttle=1.25)
WARNING: Waiting 15 sec...
thefile.tgz -> s3://thebucket/thefile.tgz [1 of 1]
8192 of 7563176329 0% in 1s 5.57 kB/s failed
ERROR: Upload of 'thefile.tgz' failed too many times. Skipping that file.
I am using the latest s3cmd on Ubuntu.
Why is this happening, and how can I solve it? If it is unresolvable, what alternative tool can I use?
I had the same problem with the Ubuntu s3cmd package.
The solution was to update s3cmd following the instructions from s3tools.org:
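The exact commands are not preserved in this answer; as a sketch, assuming the Debian/Ubuntu repository that s3tools.org published at the time (the URLs may have changed since), the update looked roughly like this:

$> wget -O- -q http://s3tools.org/repo/deb-all/stable/s3tools.key | sudo apt-key add -
$> sudo wget -O/etc/apt/sources.list.d/s3tools.list http://s3tools.org/repo/deb-all/stable/s3tools.list
$> sudo apt-get update && sudo apt-get install s3cmd

Newer s3cmd releases added multipart upload support, which is needed for objects above S3's 5 GB single-PUT limit, and therefore for the ~7 GB file in the question.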
I experienced the same issue; it turned out to be a bad bucket_location value in ~/.s3cfg. This blog post led me to the answer.
After inspecting my ~/.s3cfg I saw that it contained an invalid location name rather than one of the names s3cmd accepts. Correcting this value to use the proper name(s) solved the issue.
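The original bad and good values were not preserved in this answer; as a purely hypothetical illustration of the kind of mismatch (these region names are assumptions, not the answerer's actual values):

# ~/.s3cfg, broken: not a location constraint name s3cmd recognizes
bucket_location = US-West

# ~/.s3cfg, working: a valid constraint for the bucket's region
bucket_location = us-west-1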
In my case the failure was caused by my server's clock being ahead of S3's: the server (located in US East) was set to GMT+4 while I was using Amazon's US East storage facility.
After adjusting my server to US East time, the problem was gone.
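For context (not part of the original answer): S3 rejects requests whose timestamp drifts too far from Amazon's clock, returning a RequestTimeTooSkewed error, and s3cmd can surface that as a broken pipe. A common way to correct clock skew on Ubuntu at the time was to sync against an NTP server:

$> sudo ntpdate pool.ntp.org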