AngularJs Image upload to S3

Published 2019-02-18 12:03

Question:

I am:
- Creating a Web Application
- AngularJS front end with ng-file upload (https://github.com/danialfarid/ng-file-upload)
- Node.js backend
- Want to be able to upload images to my Amazon S3 bucket

I'm attempting to follow this tutorial: https://github.com/danialfarid/ng-file-upload/wiki/Direct-S3-upload-and-Node-signing-example

Essentially, the program flow is: select the file, click a button, request a signature from the backend, and then upload directly to S3.

I receive the signature from the backend with status 200, but when the frontend attempts to upload the image I see this in the developer console:
OPTIONS https://mybucket.name.s3-us-east-1.amazonaws.com/ net::ERR_NAME_NOT_RESOLVED

Is it my code that is the problem or the way I set up my bucket?

Code, in case it's needed:

CORS on my S3 bucket

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <AllowedHeader>Authorization</AllowedHeader>
    </CORSRule>
    <CORSRule>
        <AllowedOrigin>my.computers.IP.Address</AllowedOrigin>
        <AllowedMethod>PUT</AllowedMethod>
        <AllowedMethod>POST</AllowedMethod>
        <AllowedMethod>DELETE</AllowedMethod>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
</CORSConfiguration>

Node.js Backend Code

app.post('/signing', function(req, res) {
    var request = req.body;
    var fileName = request.filename;

    var extension = fileName.substring(fileName.lastIndexOf('.'));
    var today = new Date();
    var path = '/' + today.getFullYear() + '/' + (today.getMonth() + 1) + '/' + today.getDate() + '/' + uuid.v4() + extension; // getMonth() is zero-based

    var readType = 'private';

    var expiration = moment().add(5, 'm').toDate(); // 5 minutes

    var s3Policy = {
        'expiration': expiration,
        'conditions': [{
                'bucket': aws.bucket
            },
            ['starts-with', '$key', path], 
            {
                'acl': readType
            },
            {
              'success_action_status': '201'
            },
            ['starts-with', '$Content-Type', request.type],
            ['content-length-range', 2048, 10485760], //min and max
        ]
    };

    var stringPolicy = JSON.stringify(s3Policy);
    var base64Policy = Buffer.from(stringPolicy, 'utf-8').toString('base64');

    // sign policy
    var signature = crypto.createHmac('sha1', aws.secret)
        .update(Buffer.from(base64Policy, 'utf-8')).digest('base64');

    var credentials = {
        url: s3Url,
        fields: {
            key: path,
            AWSAccessKeyId: aws.key,
            acl: readType,
            policy: base64Policy,
            signature: signature,
            'Content-Type': request.type,
            success_action_status: '201' // string, to match the policy condition
        }
    };
    res.jsonp(credentials);
});

AngularJS frontend code

App.controller('MyCtrl2', ['$scope', '$http', 'Upload', '$timeout', function ($scope, $http, Upload, $timeout) {
    $scope.uploadPic = function(file) {
        var filename = file.name;
        var type = file.type;
        var query = {
            filename: filename,
            type: type
        };
        $http.post('/signing', query)
            .success(function(result) {
                Upload.upload({
                    url: result.url, //s3Url
                    transformRequest: function(data, headersGetter) {
                        var headers = headersGetter();
                        delete headers.Authorization;
                        return data;
                    },
                    fields: result.fields, //credentials
                    method: 'POST',
                    file: file
                }).progress(function(evt) {
                    console.log('progress: ' + parseInt(100.0 * evt.loaded / evt.total));
                }).success(function(data, status, headers, config) {
                    // file is uploaded successfully
                    console.log('file ' + config.file.name + ' is uploaded successfully. Response: ' + data);
                }).error(function() {

                });
            })
            .error(function(data, status, headers, config) {
                // called asynchronously if an error occurs
                // or server returns response with an error status.
        });
    };
}]);

Answer 1:

The example contains something I would consider an error of oversimplification.

var s3Url = 'https://' + aws.bucket + '.s3-' + aws.region + '.amazonaws.com';

This works much of the time, but it is not a consistently valid way of crafting a URL to an object in S3.

It breaks in at least two cases, one of which is the one you've encountered.

The endpoint for the US Standard region of S3, which is located in the us-east-1 region of AWS, is not s3-us-east-1.amazonaws.com, as it would be if it followed the same format as all the other regions. For what appear to be legacy/evolutionary reasons, it is simply s3.amazonaws.com, but it can also be written s3-external-1.amazonaws.com. (Remember, S3 has been around for almost ten years as of this writing, and the service has expanded and evolved while maintaining backwards compatibility -- a noteworthy feat, but bound to result in some conventions that seem confusing at first glance.)

However, all buckets -- including those not in us-east-1 -- but excluding those that would break the above code for the second reason (which I haven't gotten to yet) -- can be addressed simply as bucket-name.s3.amazonaws.com -- if you think about the hierarchical nature of DNS, you can see how this might work: S3, within a few minutes after a bucket is created, remaps that specific hostname, in DNS, to send the request to the correct regional S3 endpoint.

So the + '.s3-' + aws.region + should work if written, simply, + '.s3' +.
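A minimal sketch of that correction, using the same string-concatenation style as the tutorial (the `aws` object here is a placeholder for your own configuration, as in the backend example above):

```javascript
// Sketch: build the S3 endpoint in the region-agnostic
// bucket-name.s3.amazonaws.com form; DNS routes the request to the
// bucket's actual regional endpoint. Assumes the bucket name has no dots.
var aws = { bucket: 'mybucket' }; // placeholder config object

// Instead of 'https://' + aws.bucket + '.s3-' + aws.region + '.amazonaws.com':
var s3Url = 'https://' + aws.bucket + '.s3.amazonaws.com';

console.log(s3Url); // https://mybucket.s3.amazonaws.com
```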

...unless, of course, you have created a bucket with dots in its name. In this case, you will have a problem with https (this is problem #2, alluded to above), because a bucket with dots in the name will not match the wildcard SSL (TLS) certificate presented by S3 (a design limitation in SSL wildcard certificates, not S3 itself).

If this is an issue, and using a bucket with no dots in the name is undesirable for some reason, your URLs must be crafted in what's called path-style format. This is the alternative to virtual-style format, where the bucket name is in the hostname. Here, the bucket name is the first part of the path:

https://s3[-region].amazonaws.com/bucket.name.with.dots/key-path

...and, again, the first component is just s3 or s3-external-1 for US Standard (us-east-1)... but using this format requires you to match the region, unlike the above, where DNS handles the request routing... otherwise S3 will throw a permanent redirect error.

http://docs.aws.amazon.com/AmazonS3/latest/dev/VirtualHosting.html

http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region

This is a lot of info, but hopefully useful in explaining not only what you need to do differently, but also why.



Answer 2:

Your front-end machine is failing to resolve the hostname via DNS. The most likely problem is that you used the wrong hostname for the server you are uploading to:

mybucket.name.s3-us-east-1.amazonaws.com

If the hostname being used is correct, verify that the DNS information for the host you are trying to reach is available from your client -- for example, with the dig command under Linux or nslookup under Windows.