My app users upload their files to one bucket. How can I ensure that each object in my S3 bucket has a unique key to prevent objects from being overwritten?
At the moment I'm encrypting filenames with a random string in my php script before sending the file to S3.
For the sake of discussion, let's suppose the uploader finds a way to manipulate the filename on upload and wants to replace all the images on my site with a picture of a banana. What is a good way to prevent files in S3 from being overwritten if the encryption fails?
Edit: I don't think versioning will work because I can't specify a version id in an image URL when displaying images from my bucket.
Are you encrypting, or hashing? If you are deriving names with md5 or sha1 hashes, note that neither is collision-resistant any more: an attacker could find a hash collision and make you slip on a banana skin. If you are encrypting without a random initialization vector, identical filenames produce identical ciphertexts, and an attacker who uploads enough files may be able to spot patterns or even recover information about your key. Either way, encryption is probably not the best tool here: it is computationally expensive, easy to get wrong, and you can get a safer mechanism for this job with less effort.
If you prepend a random string to each filename, drawn from a reasonably reliable source of entropy (a CSPRNG, not a plain pseudo-random generator), you shouldn't have collisions in practice, but you should still check whether the object already exists. Coding a loop that checks with S3::GetObject and generates a new random string on a hit may seem like a lot of effort for something that will almost never need to run, but "almost never" still means it will probably happen eventually.
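As a sketch of the "random prefix from a good entropy source" idea (shown in Python rather than PHP for brevity — the function name is made up, and PHP's `random_bytes()` would play the same role as `secrets` here):

```python
import os
import secrets


def random_object_key(original_filename: str) -> str:
    """Build an S3 key with a URL-safe random prefix from a CSPRNG."""
    # secrets draws from the OS entropy pool, unlike a plain PRNG,
    # so an uploader cannot predict or force a particular key.
    prefix = secrets.token_urlsafe(16)  # 128 bits of entropy
    # basename() strips any path components a malicious client sends.
    return f"{prefix}/{os.path.basename(original_filename)}"
```

With 128 bits of randomness per key, even a deliberate attacker cannot guess an existing object's name to overwrite it.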
Checking for an object with that key before uploading would work: if it already exists, re-randomize the name and try again. Note that a check-then-upload sequence is not atomic, so a tiny race window remains, but with enough entropy in the key the odds of two uploads colliding inside that window are negligible.
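The check-and-retry loop might look like this (a Python sketch; `exists` and `put` are hypothetical stand-ins for the real S3 calls, e.g. boto3's `head_object` and `put_object`, so the loop logic can be shown without a live bucket):

```python
import secrets


def upload_with_unique_key(filename, exists, put, max_attempts=5):
    """Re-randomize the key until it is unused, then upload.

    `exists(key)` should return True if the object is already in the
    bucket; `put(key)` performs the actual upload. Both are injected
    here so the retry logic is independent of any particular SDK.
    """
    for _ in range(max_attempts):
        key = f"{secrets.token_urlsafe(16)}/{filename}"
        if not exists(key):
            put(key)
            return key
    # With 128-bit keys this is effectively unreachable, but a hard
    # failure beats silently overwriting someone else's object.
    raise RuntimeError("could not find an unused key")
```

In practice the retry branch should essentially never execute, but having it there turns "almost never collides" into "never overwrites".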