Back in the 90s when I first started out with MFC, I used to dynamically link my apps and ship the relevant MFC DLLs. This caused me a few issues (DLL hell!) and I switched to static linking instead - not just for MFC, but for the CRT and ATL. Other than larger EXE files, static linking has never caused me any problems at all - so are there any downsides that other people have come across? Is there a good reason for revisiting dynamic linking again? My apps are mainly STL/Boost nowadays, FWIW.
There are some downsides:
We do static linking for our Windows apps, primarily because it allows xcopy deployment, which just isn't practical if you rely on installing SxS DLLs: the process and mechanism are not well documented and not easily done remotely. Using local DLLs in the install directory kind of works, but it's not well supported. The inability to easily do a remote installation without going through an MSI on the remote system is the primary reason why we don't use dynamic linking, but (as you pointed out) there are many other benefits to static linking. There are pros and cons to each; hopefully this helps enumerate them.
Most definitely.
Allocation is done on a 'static' heap. Since allocation and deallocation should be done on the same heap, this means that if you ship a library, you should take care that client code cannot call 'your'

p = new LibClass()

and then delete that object itself using delete p;.

My conclusion: either shield allocation and deallocation from client code, or dynamically link the CRT.
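One way to shield allocation and deallocation is to export a create/destroy pair so that both new and delete run inside the library and therefore hit the same heap. A minimal sketch (LibClass and the function names are hypothetical, and the DLL and client code are shown together only for brevity):

```cpp
// Hypothetical library interface (lib.h)
class LibClass {
public:
    void DoWork() {}
};

// Exported factory functions: both new and delete execute inside the
// DLL, so the object always lives on the library's heap.
extern "C" __declspec(dllexport) LibClass* CreateLibClass();
extern "C" __declspec(dllexport) void DestroyLibClass(LibClass* p);

// Inside the DLL (lib.cpp)
LibClass* CreateLibClass()        { return new LibClass(); }
void DestroyLibClass(LibClass* p) { delete p; }

// Client code: never calls new/delete on the library's objects.
int main() {
    LibClass* p = CreateLibClass();
    p->DoWork();
    DestroyLibClass(p);   // not "delete p;"
}
```

On the client side you can wrap the pair in a std::unique_ptr with DestroyLibClass as a custom deleter, which keeps the same guarantee without manual calls.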
One good feature of using DLLs is that if multiple processes load the same DLL, its code can be shared between them. This can save memory and shorten loading times for an application that loads a DLL already in use by another program.
As long as you keep your usage limited to certain libraries and do not use any DLLs, you should be fine.
Unfortunately, there are some libraries that you cannot link statically. The best example I have is OpenMP. If you take advantage of Visual Studio's OpenMP support, you will have to make sure the runtime is installed (in this case vcomp.dll).
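For example, even a one-line use of OpenMP makes an otherwise statically linked EXE depend on that runtime. A minimal sketch, assuming the program is built with MSVC's /openmp switch:

```cpp
// omp_dep.cpp -- build with: cl /openmp omp_dep.cpp
// The pragma below pulls in Visual C++'s OpenMP runtime, which ships
// only as a DLL (vcomp*.dll), so the EXE gains a dynamic dependency
// even if the CRT itself is linked statically.
#include <cstdio>
#include <omp.h>

int main() {
    int sum = 0;
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < 10000; ++i)
        sum += i;
    std::printf("sum = %d, max threads = %d\n", sum, omp_get_max_threads());
    return 0;
}
```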
If you do use DLLs, then you can't pass some items back and forth without some serious gymnastics; std::strings come to mind. If your exe and DLL both link the CRT dynamically, the allocation takes place in a single shared CRT. Otherwise your program may try to allocate the string on one side and deallocate it on the other. Bad things ensue...
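A common way around this (a sketch, with hypothetical function names; the DLL and exe sides are shown in one snippet only for illustration) is to keep each std::string entirely on one side of the boundary and pass plain character buffers across the exported interface instead:

```cpp
#include <cstring>
#include <string>

// Hypothetical DLL export: fills a caller-supplied buffer instead of
// returning a std::string, so no CRT-allocated object crosses the boundary.
extern "C" __declspec(dllexport)
int GetName(char* buffer, int bufferSize)
{
    static const char name[] = "example";
    if (bufferSize < static_cast<int>(sizeof(name)))
        return -1;                                  // buffer too small
    std::memcpy(buffer, name, sizeof(name));
    return static_cast<int>(sizeof(name)) - 1;      // length without the NUL
}

// Exe side: the std::string is built and destroyed in the exe's own CRT.
std::string CallGetName()
{
    char buf[64];
    int len = GetName(buf, static_cast<int>(sizeof(buf)));
    return (len >= 0) ? std::string(buf, len) : std::string();
}
```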
That said, I still statically link my EXEs and DLLs. It does reduce a lot of the variability in the install, and I consider that well worth the few limitations.
Most of the answers I hear about this involve sharing your DLLs with other programs, or having those DLLs be updated without the need to patch your software.

Frankly, I consider those to be downsides, not upsides. When a third-party DLL is updated, it can change enough to break your software. And these days hard drive space isn't as precious as it once was; an extra 500k in your executable? Who cares?

The upsides far outweigh the downsides, in my opinion.
There are some software licenses such as LGPL that require you to either use a DLL or distribute your application as object files that the user can link together. If you are using such a library, you'll probably want to use it as a DLL.