At my company, we recently switched from VC9 to VC10.
We migrated our projects, but then the person in charge told us we would have to keep some common base DLLs compiled with VC9 on our production machines for some time.
These DLLs make use of custom structures, some of which contain std::vector, std::map, and so on. Now, it has come to my attention that the size of the standard containers changed between the two runtimes: some got bigger, some got smaller. As a result, the size of our custom structures changed as well.
To work around the size change, a colleague of mine suggested artificially padding our structures: reserve extra bytes now so that future changes in member sizes can be absorbed, keeping each structure the same size under whatever runtime we use and preventing stack corruption on function calls. The sketch below illustrates the idea.
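For illustration, the trick would look roughly like this (a minimal sketch; the 128-byte budget, the structure, and all names are invented):

```cpp
// Hypothetical sketch of the padding trick.  The 128-byte budget and
// all names are made up; real container sizes depend on the compiler
// version and on settings such as _SECURE_SCL / _ITERATOR_DEBUG_LEVEL.
#include <map>
#include <string>
#include <vector>

struct Record
{
    std::vector<int>           samples;
    std::map<int, std::string> labels;
    // The reserved block shrinks or grows per runtime so that
    // sizeof(Record) stays at the agreed 128 bytes -- but note that
    // the members' offsets and internal layout can still differ.
    char reserved[128
                  - sizeof(std::vector<int>)
                  - sizeof(std::map<int, std::string>)];
};
```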
Personally, I feel that this "solution" is horrible because, while the size matters, so does the layout of the structures. To me, inflating the memory footprint of every structure to work around an organizational decision seems really wrong.
To make it short, my question is: is it even possible to use two different runtimes simultaneously (with the described trick or any other) while non-C types appear in the function prototypes? Do you have any good or bad experience with a similar situation?
The STL has never guaranteed binary compatibility between major versions. So if you have DLLs with STL classes at the interface, you should use the same compiler and the same flavor of the CRT for both the DLL and its clients.
If you want to build DLLs that can safely be used across compiler versions, you have some options, like:
- Expose a pure C interface (the DLL can be written in C++, but the interface must be pure C, and C++ exceptions can't cross DLL boundaries); see the first sketch after this list.
- Expose abstract interfaces at the DLL boundary, as explained in this article; see the second sketch below.
- Use COM.
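For the first option, the usual pattern is an opaque handle behind extern "C" functions. A minimal sketch, with invented names and the export/import macro plumbing omitted (shown as one listing for brevity; in practice the header and implementation would be separate files):

```cpp
// widget_api.h -- shared header: only C types cross the boundary.
#ifdef __cplusplus
extern "C" {
#endif

typedef struct Widget Widget;   /* opaque handle */

__declspec(dllexport) Widget* widget_create(void);
__declspec(dllexport) void    widget_destroy(Widget* w);
__declspec(dllexport) int     widget_add(Widget* w, int value);
__declspec(dllexport) int     widget_count(const Widget* w);

#ifdef __cplusplus
}
#endif

// widget_api.cpp -- inside the DLL, any C++ goes, but exceptions
// must be caught before they reach the boundary.
#include <vector>

struct Widget
{
    std::vector<int> values;    // never visible to the client
};

Widget* widget_create(void)
{
    try { return new Widget(); } catch (...) { return 0; }
}

void widget_destroy(Widget* w) { delete w; }

int widget_add(Widget* w, int value)
{
    try { w->values.push_back(value); return 0; } catch (...) { return -1; }
}

int widget_count(const Widget* w)
{
    return static_cast<int>(w->values.size());
}
```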
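The second option keeps a C++ flavor: the DLL hands out only an abstract base class through a C factory function, so no CRT type ever crosses the boundary and each side allocates and frees with its own runtime. This is a sketch of the general idea, not necessarily exactly what the linked article describes; the names are invented:

```cpp
// iwidget.h -- shared header: only a vtable crosses the boundary.
// Relies on MSVC's stable COM-style vtable layout.
class IWidget
{
public:
    virtual void AddValue(int value) = 0;
    virtual int  Count() const = 0;
    virtual void Destroy() = 0;     // frees on the DLL's side of the fence
protected:
    ~IWidget() {}                   // clients call Destroy(), never delete
};

extern "C" __declspec(dllexport) IWidget* CreateWidget();

// widget.cpp -- inside the DLL.
#include <vector>

class Widget : public IWidget
{
public:
    virtual void AddValue(int value) { values_.push_back(value); }
    virtual int  Count() const { return static_cast<int>(values_.size()); }
    virtual void Destroy()     { delete this; }  // same CRT news and deletes
private:
    std::vector<int> values_;
};

extern "C" IWidget* CreateWidget()
{
    try { return new Widget(); } catch (...) { return 0; }
}
```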
You'd have to make sure that anything needing to use those old libraries was linked against them and compiled against the header files that came with that version of the libraries. There's no other way to do it, because C++ has to see the header files to know how the data structures are laid out.
I get the impression from your question that you'll be linking against some libraries which are in turn compiled and linked against the VC9 runtime. In that case it may well be possible to link the rest of the code against VC10's, as long as the libraries don't expose any VC9 library types in their interfaces. I say 'may well be' because this is an area fraught with pitfalls and traps; generally you should use the same runtime throughout whenever possible. The last thing you need is the compiler getting confused about which version of std::vector you're talking about (and you can guarantee the programmers will get confused, even if you can persuade the compiler and linker to figure it out).
It's nastier, but easier, to just stick with the older runtime until it's no longer required on any target machine.
I've actually done this before, padding structures out in a similar manner. Yes, you can use two different runtimes, and it should work fine as long as the ABIs are the same; but that's exactly where you hit the wall once structures start changing sizes, and moving C++ (where the ABI is all over the place) across DLL boundaries is really, really messy. Especially given that VC10 has quite a few changes in anticipation of C++11. I use C where DLLs are concerned, entirely for the guarantees it gives me in terms of binary compatibility.
It's hard for me to offer a specific case where things will really eat it, but let me put it to you this way: it's the bugs you don't anticipate that will get you, and this is a real hornets' nest.
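If you do end up padding structures anyway, one cheap guard is to make any size drift a compile error instead of a runtime stack smash. VC9 predates static_assert, so a C++03-style compile-time assertion in a shared header works under both runtimes; a sketch (the macro name is mine, and Record is a placeholder for whatever structure you share):

```cpp
// Compile-time size guard usable on both VC9 and VC10.
// An array of negative size is ill-formed, so a wrong size breaks the build.
#define ASSERT_SIZE(type, expected) \
    typedef char assert_size_##type[(sizeof(type) == (expected)) ? 1 : -1]

ASSERT_SIZE(Record, 128);   // the value every runtime has agreed on
```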