Best practices when running Node.js with port 80

Published 2019-01-01 01:21

Question:

I am setting up my first Node.js server on a cloud Linux node and I am fairly new to the details of Linux admin. (BTW I am not trying to use Apache at the same time.)

Everything is installed correctly, but I found that unless I log in as root, I am not able to listen on port 80 with Node. However, I would rather not run it as root for security reasons.

What is the best practice to:

  1. Set good permissions / user for node so that it is secure / sandboxed?
  2. Allow port 80 to be used within these constraints.
  3. Start up node and run it automatically.
  4. Handle log information sent to console.
  5. Any other general maintenance and security concerns.

Should I be forwarding port 80 traffic to a different listening port?

Thanks

Answer 1:

Port 80

On my cloud instances, I redirect port 80 to port 3000 with this command:

sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3000

Then I launch my Node.js on port 3000. Requests to port 80 will get mapped to port 3000.

You should also edit your /etc/rc.local file and add that line (minus the sudo). That re-adds the redirect when the machine boots. You don't need sudo in /etc/rc.local because the commands there are run as root when the system boots.

Logs

Launch your Node.js app with the forever module. It will restart the app if it ever crashes, and it will redirect console logs to a file.

Launch on Boot

Add your Node.js start script to the file you edited for port redirection, /etc/rc.local. That will run your Node.js launch script when the system starts.

Digital Ocean & other VPS

This applies not only to Linode, but to DigitalOcean, AWS EC2, and other VPS providers as well. Note that on RedHat-based systems, /etc/rc.local is /etc/rc.d/rc.local.



Answer 2:

Give Safe User Permission To Use Port 80

Remember, we do NOT want to run your applications as the root user, but there is a hitch: your safe user does not have permission to use the default HTTP port (80). Your goal is to publish a website that visitors can reach at an easy-to-use URL like http://ip/ (with port 80 implied).

Unfortunately, unless you sign on as root, you'll normally have to use a URL like http://ip:port, where the port number is greater than 1024.

A lot of people get stuck here, but the solution is easy. There are a few options, but this is the one I like. Type the following commands:

sudo apt-get install libcap2-bin
sudo setcap cap_net_bind_service=+ep `readlink -f \`which node\``

Now, when you tell a Node application that you want it to run on port 80, it will not complain.

Check this reference link



Answer 3:

Drop root privileges after you bind to port 80 (or 443).

This allows port 80/443 to remain protected, while still preventing you from serving requests as root:

function drop_root() {
    process.setgid('nobody');
    process.setuid('nobody');
}

A full working example using the above function:

var process = require('process');
var http = require('http');
var server = http.createServer(function(req, res) {
    res.write("Success!");
    res.end();
});

server.listen(80, null, null, function() {
    console.log('User ID:', process.getuid() + ', Group ID:', process.getgid());
    drop_root();
    console.log('User ID:', process.getuid() + ', Group ID:', process.getgid());
});

See more details at this full reference.



Answer 4:

For port 80 (which was the original question), Daniel is exactly right. I recently moved to https and had to switch from iptables to a light nginx proxy managing the SSL certs. I found a useful answer along with a gist by gabrielhpugliese on how to handle that. Basically I

  • Created an SSL Certificate Signing Request (CSR) via OpenSSL

    openssl genrsa 2048 > private-key.pem
    openssl req -new -key private-key.pem -out csr.pem
    
  • Got the actual cert from a certificate authority (I happened to use Comodo)
  • Installed nginx
  • Changed the location in /etc/nginx/conf.d/example_ssl.conf to

    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header X-Real-IP $remote_addr;
    }
    
  • Formatted the cert for nginx by cat-ing the individual certs together into a bundle, and pointed to it in my nginx example_ssl.conf file (also uncommented the relevant lines, removed 'example' from the name, ...)

    ssl_certificate /etc/nginx/ssl/cert_bundle.cert;
    ssl_certificate_key /etc/nginx/ssl/private-key.pem;
    

Hopefully that can save someone else some headaches. I'm sure there's a pure-Node way of doing this, but nginx was quick and it worked.



Answer 5:

Does Linode provide a firewall or "front wall" where you must open ports for the machine? That might be a better place to solve this than routing on every machine. When I deploy a server on Azure, I must define so-called endpoints. An endpoint consists of a public port, a private port (on the machine), and a protocol (TCP/UDP). So if your app is running on port 3000 on the server, it can be reachable on port 80, with the routing done by the platform, not the machine. I can also set ACLs on endpoints.