Why is creating a new process more expensive on Windows than Linux?

Posted 2019-01-05 10:15

I've heard that creating a new process on a Windows box is more expensive than on Linux. Is this true? Can somebody explain the technical reasons why it's more expensive, and the historical reasons behind those design decisions?

10 Answers
家丑人穷心不美
#2 · 2019-01-05 10:35

As there seems to be some justification of MS-Windows in some of the answers, e.g.:

  • "NT kernel and Win32 are not the same thing. If you program to the NT kernel then it is not so bad." True, but unless you are writing a POSIX subsystem, who cares? You will be writing to Win32.
  • "It is not fair to compare fork with CreateProcess, as they do different things, and Windows does not have fork." True, but fork is very, very useful: if you want process isolation (e.g. between tabs in a web browser), it is the easiest way to get it. (See the sketch after this list for how the two calls are used.)
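
A minimal sketch of how the two calls are used, assuming placeholder target programs (/bin/ls, notepad.exe) just to show the shape of each API; this is an illustration, not the benchmark code:

```c
/* Minimal sketch: spawn a child process and wait for it on either system.
 * The target programs ("/bin/ls", "notepad.exe") are just placeholders. */
#ifdef _WIN32
#include <windows.h>

int main(void) {
    STARTUPINFOA si = { sizeof(si) };
    PROCESS_INFORMATION pi;
    char cmd[] = "notepad.exe";          /* CreateProcess may modify this buffer */

    /* CreateProcess builds a new address space from scratch: it maps the
     * executable, loads its DLLs, and sets up the Win32 subsystem state. */
    if (CreateProcessA(NULL, cmd, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi)) {
        WaitForSingleObject(pi.hProcess, INFINITE);
        CloseHandle(pi.hThread);
        CloseHandle(pi.hProcess);
    }
    return 0;
}
#else
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                  /* copy-on-write clone of the caller */
    if (pid == 0) {                      /* child: optionally replace the image */
        execl("/bin/ls", "ls", "-l", (char *)NULL);
        _exit(127);                      /* only reached if exec fails */
    }
    if (pid > 0)
        waitpid(pid, NULL, 0);           /* parent waits for the child */
    return 0;
}
#endif
```

The point of the comparison: the fork() path starts from an already-constructed address space, while CreateProcess has to build one from nothing.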

Now let us look at the facts: what is the difference in performance?

Data summarised from http://www.bitsnbites.eu/benchmarking-os-primitives/.
Because bias is inevitable when summarising, I did it in favour of MS-Windows.
Hardware for most tests: i7, 8 cores, 3.2 GHz. This is only relevant when comparing MS-Windows with a Raspberry-Pi running GNU/Linux.

In order of speed, fastest to slowest (numbers are times; smaller is better).

  • Linux CreateThread 12
  • Mac CreateThread 15
  • Linux Fork 19
  • Windows CreateThread 25
  • Linux CreateProcess (fork+exec) 45
  • Mac Fork 105
  • Mac CreateProcess (fork+exec) 453
  • Raspberry-Pi CreateProcess (fork+exec) 501
  • Windows CreateProcess 787
  • Windows CreateProcess With virus scanner 2850
  • Windows Fork (simulated with CreateProcess + fixup) greater than 2850

Note: on Linux, fork is faster than MS-Windows' preferred method, CreateThread.
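
For context, numbers like these are typically gathered with a tight loop around the primitive being measured. This is not the harness used at bitsnbites.eu, just a rough sketch of the idea for the POSIX fork+exec case; the iteration count and the use of /bin/true are arbitrary choices:

```c
/* Rough micro-benchmark sketch: average cost of fork + exec + wait on a
 * POSIX system. Iteration count and /bin/true are arbitrary choices. */
#include <stdio.h>
#include <sys/wait.h>
#include <time.h>
#include <unistd.h>

int main(void) {
    const int iters = 1000;
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < iters; i++) {
        pid_t pid = fork();
        if (pid == 0) {
            execl("/bin/true", "true", (char *)NULL);
            _exit(127);                  /* only reached if exec fails */
        }
        waitpid(pid, NULL, 0);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double us = ((t1.tv_sec - t0.tv_sec) * 1e9 +
                 (t1.tv_nsec - t0.tv_nsec)) / 1e3 / iters;
    printf("fork+exec+wait: %.1f us per process\n", us);
    return 0;
}
```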

Now for some other figures

  • Creating a file.
    • Linux 13
    • Mac 113
    • Windows 225
    • Raspberry-Pi (with slow SD card) 241
    • Windows with defender and virus scanner etc 12950
  • Allocating memory
    • Linux 79
    • Windows 93
    • Mac 152
爷、活的狠高调
#3 · 2019-01-05 10:39

All that, plus the fact that on the Windows machine antivirus software will most probably kick in during CreateProcess... That's usually the biggest slowdown.

【Aperson】
#4 · 2019-01-05 10:41

It's also worth noting that the security model in Windows is vastly more complicated than in Unix-based OSes, which adds a lot of overhead during process creation. Yet another reason why multithreading is preferred to multiprocessing on Windows.

爱情/是我丢掉的垃圾
#5 · 2019-01-05 10:42

Unix has a 'fork' system call which 'splits' the current process into two, and gives you a second process that is identical to the first (modulo the return value from the fork call). Since the address space of the new process is already up and running, this should be cheaper than calling 'CreateProcess' in Windows and having it load the exe image, associated DLLs, etc.

In the fork case the OS can use 'copy-on-write' semantics for the memory pages shared by the two processes, to ensure that each one gets its own copy of any page it subsequently modifies.
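
A minimal sketch of that behaviour, using only standard POSIX calls: both processes resume from the fork() call, distinguished only by its return value, and the page holding `counter` is only physically copied once one side writes to it.

```c
/* Sketch of fork() semantics: parent and child share the same copy-on-write
 * pages until one of them writes, at which point the kernel copies the page. */
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int counter = 0;                     /* one logical variable, soon two copies */
    pid_t pid = fork();
    if (pid == 0) {                      /* child */
        counter++;                       /* write triggers the copy-on-write */
        printf("child:  counter = %d\n", counter);   /* prints 1 */
        return 0;
    }
    waitpid(pid, NULL, 0);               /* parent */
    printf("parent: counter = %d\n", counter);       /* still 0 */
    return 0;
}
```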

仙女界的扛把子
#6 · 2019-01-05 10:45

Uh, there seems to be a lot of "it's better this way" sort of justification going on.

I think people could benefit from reading "Showstopper", the book about the development of Windows NT.

The whole reason the services run as DLLs in one process on Windows NT was that they were too slow as separate processes.

If you got down and dirty you'd find that the library loading strategy is the problem.

On Unices (in general) the shared libraries' (DLLs') code segments are actually shared between processes.

Windows NT loads a copy of the DLL per process, because it manipulates the library code segment (and executable code segment) after loading: fixups that tell the code where its data is.

This results in code segments in libraries that are not reusable.

So, NT process creation is actually pretty expensive. And on the downside, DLLs give no appreciable saving in memory, just a chance for inter-app dependency problems.

Sometimes it pays in engineering to step back and say, "now, if we were going to design this to really suck, what would it look like?"

I worked with an embedded system that was quite temperamental once upon a time, and one day looked at it and realized it was a cavity magnetron, with the electronics in the microwave cavity. We made it much more stable (and less like a microwave) after that.

#7 · 2019-01-05 10:46

The key to this matter is the historical usage of both systems, I think. Windows (and DOS before it) were originally single-user systems for personal computers. As such, these systems typically didn't have to create a lot of processes all the time; (very) simply put, a process was only created when that one lonely user requested it (and we humans don't operate very fast, relatively speaking).

Unix-based systems were originally multi-user systems and servers. Especially for the latter it is not uncommon to have processes (e.g. mail or HTTP daemons) that split off child processes to handle specific jobs (e.g. taking care of one incoming connection); see the sketch below. An important factor in doing this is the cheap fork method (which, as mentioned by Rob Walker (47865), initially uses the same memory for the newly created process), which is very useful as the new process immediately has all the information it needs.
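
A sketch of the classic fork-per-connection pattern alluded to above; the port (8080), backlog, and canned reply are arbitrary, and error handling is omitted for brevity:

```c
/* Classic Unix fork-per-connection server loop (illustrative sketch only). */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <signal.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);
    bind(srv, (struct sockaddr *)&addr, sizeof(addr));
    listen(srv, 16);
    signal(SIGCHLD, SIG_IGN);            /* let the kernel reap exited children */

    for (;;) {
        int client = accept(srv, NULL, NULL);
        if (client < 0)
            continue;
        if (fork() == 0) {               /* child inherits the open connection */
            close(srv);                  /* child does not need the listener */
            const char msg[] = "hello\n";
            write(client, msg, sizeof(msg) - 1);
            close(client);
            _exit(0);
        }
        close(client);                   /* parent keeps only the listening socket */
    }
}
```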

It is clear that at least historically the need for Unix-based systems to have fast process creation is far greater than for Windows systems. I think this is still the case because Unix-based systems are still very process oriented, while Windows, due to its history, has probably been more thread oriented (threads being useful to make responsive applications).

Disclaimer: I'm by no means an expert on this matter, so forgive me if I got it wrong.
