Parallel requests in Node.js web server

Published 2019-02-19 09:19

Question:

If I have a web server that runs Node.js, will it be able to serve multiple requests at the same time? From my preliminary tests I can see that Node, being primarily single threaded, can process only one HTTP request at a time. But if one request takes too long to complete (for example, uploading a large file), then all the other requests have to wait.

Is there a workaround for this situation? Can we write the code so that it can serve multiple HTTP requests at the same time?

Answer 1:

The fact that Node is single threaded does not necessarily mean it can only process 1 request at a time.

A lot of things in Node are purposely asynchronous, such as many file system operations, DB queries, etc. This is the mindset of:

I know you're going to take some time to do this, so call me when you're done and in the meantime I'm going to put my single-thread of operation to some better use, rather than waiting for you to complete.

It is at that point that other work (which can be other requests) is processed. Of course, if that work then has to perform an asynchronous operation of its own, the flow of execution might return to where we suspended ourselves earlier, because that earlier operation has completed.

In your situation where a large upload is taking place, the incoming stream is processed asynchronously and is managed through the data event. If you want to write this stream to disk, again, the writing can be performed asynchronously; i.e. there are points during the file upload process where other requests could be processed.

Having multiple threads of operation is only beneficial if you have multiple CPU cores; then each thread of operation can run on a different core. For this, Node has the cluster module, which you should look at.



Answer 2:

As we can see, node.js is a non-blocking I/O server platform, so we can serve multiple requests at the same time. For example:

Consider file handling as a case:

var fs = require("fs");
console.log("starting");
fs.readFile("path-to-file", function(error, data) {
    console.log("Content: " + data);
});
console.log("Carry on executing");

OUTPUT:

starting
Carry on executing
Content: <file contents>

So we can see that, while we are waiting for the contents of the file, our code carries on executing.



Answer 3:

When a request takes a long time, that's not because the processor is taking a long time to process the request. It's because it spends a long time waiting for each chunk of data to come in. It's in this waiting time that node.js can process other requests, which makes it very scalable, and far more efficient than the threaded model that most other platforms use. For a detailed discussion, see the C10k problem by Dan Kegel.

JavaScript events work by having an event queue. This queue gets added to every time an event (such as a file read operation or a data chunk from a server request) fires. As long as the resulting event handling code isn't too processor intensive, the queue doesn't typically get very long, and code is executed almost immediately. That is why almost everything in node.js is async.
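A tiny example of this queueing behaviour: the synchronous code runs to completion first, and the queued callbacks run afterwards, each one picked off the queue in turn:

```javascript
const order = [];

// These callbacks go onto queues instead of running immediately.
setImmediate(() => order.push("immediate callback"));
process.nextTick(() => order.push("next tick"));

// The synchronous code always finishes before any queued callback runs.
order.push("synchronous code");

setImmediate(() => {
  // By the time this runs, the earlier queued callbacks have fired.
  console.log(order.join(" -> "));
  // prints: synchronous code -> next tick -> immediate callback
});
```

Because handlers are short and non-blocking, the queue drains almost as fast as events arrive, which is why the model scales so well.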