If two C++ files contain different definitions of a class with the same name, then when they are compiled and linked together, one of the definitions is silently discarded, without so much as a warning. For example,
// a.cc
#include <iostream>
#include <string>

class Student {
public:
    std::string foo() { return "A"; }
};

void foo_a()
{
    Student stu;
    std::cout << stu.foo() << std::endl;
}
// b.cc
#include <iostream>
#include <string>

class Student {
public:
    std::string foo() { return "B"; }
};

void foo_b()
{
    Student stu;
    std::cout << stu.foo() << std::endl;
}
When these are compiled and linked together using g++, both foo_a() and foo_b() output "A" (assuming a.cc precedes b.cc on the command line).
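For reference, a minimal driver to reproduce this could look like the following (main.cc is a hypothetical name; build with something like g++ a.cc b.cc main.cc):
// main.cc (hypothetical driver, for reproduction only)
void foo_a();
void foo_b();

int main()
{
    foo_a(); // prints "A"
    foo_b(); // also prints "A": the linker kept a.cc's Student::foo()
}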
A similar topic is here. I see that namespaces would solve this problem, but I don't understand why the linker doesn't even emit a warning. And what if one definition of the class has an extra function that isn't defined in the other, say if b.cc is updated as:
// b.cc
#include <iostream>
#include <string>

class Student {
public:
    std::string foo() { return "B"; }
    std::string bar() { return "K"; }
};

void foo_b()
{
    Student stu;
    std::cout << stu.foo() << stu.bar() << std::endl;
}
Then stu.bar() works fine. Thanks to anyone who can explain how the compiler and linker behave in such a situation.
As an extra question: if classes are defined in header files, should they always be wrapped in an unnamed namespace to avoid this situation? Are there any side effects?
This is a violation of the one definition rule (C++03, 3.2/5 "One definition rule"), which says (among other things):
There can be more than one definition of a class type (clause 9), ... in a program provided that each definition appears in a different translation unit, and provided the definitions satisfy the following requirements. Given such an entity named D defined in more than one translation unit, then

- each definition of D shall consist of the same sequence of tokens;
If you violate the one definition rule, the behavior is undefined (which means that strange things can happen).
The linker sees multiple definitions of Student::foo(): one in a.cc's object file and one in b.cc's. However, it doesn't complain about this; it just selects one of the two (as it happens, the first one it comes across). This 'soft' handling of duplicate functions happens only for inline functions, and note that Student::foo() is implicitly inline because it is defined inside the class body. (Your stu.bar() call works for the same reason: Student::bar() exists only in b's object file, so there is just one definition for the linker to pick.) For non-inline functions, the linker will complain about multiple definitions and refuse to produce an executable (there may be options that relax this restriction). Both GNU ld and MSVC's linker behave this way.
The behavior makes some sense: inline functions need to be available in every translation unit they're used in, and in the general case an out-of-line copy also has to exist (in case a call isn't inlined, or the function's address is taken). inline is really just a free pass around the one definition rule, but for it to work, all the inline definitions need to be the same.
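To see the contrast, here is a sketch (with hypothetical files a2.cc and b2.cc) in which foo() is defined outside the class body, so it is no longer implicitly inline. Linking these two together produces a hard error, something like "multiple definition of Student::foo()", instead of a silent merge:
// a2.cc
// foo() is declared in the class but defined out of line,
// so it is an ordinary (non-inline) function
#include <string>

class Student {
public:
    std::string foo();
};

std::string Student::foo() { return "A"; }

// b2.cc
// same class shape, different body; this second non-inline
// definition is what the linker rejects
#include <string>

class Student {
public:
    std::string foo();
};

std::string Student::foo() { return "B"; }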
When I look at dumps of the object files, I don't see anything obvious that explains to me how the linker knows that one function is permitted to have multiple definitions and others aren't, but I'm sure there's some flag or record which does just that. Unfortunately, I find that the workings of the linker and object file details aren't particularly well documented, so the precise mechanism will probably remain a mystery to me.
As for your second question:
As an extra question: if classes are defined in header files, should they always be wrapped in an unnamed namespace to avoid this situation? Are there any side effects?
You almost certainly don't want to do this: each class would be a distinct type in each translation unit, so instances of the class couldn't legitimately be passed from one translation unit to another (by pointer, reference, or copy). Also, you'd end up with multiple instances of any static members. That probably wouldn't work well.
Put them in different, named namespaces.
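A sketch of that fix applied to the files from the question (the namespace names a and b are arbitrary):
// a.cc
#include <iostream>
#include <string>

namespace a {
class Student {
public:
    std::string foo() { return "A"; }
};
} // namespace a

void foo_a()
{
    a::Student stu;
    std::cout << stu.foo() << std::endl; // prints "A"
}

// b.cc follows the same pattern inside namespace b. Now b::Student
// is a distinct type from a::Student, their member functions mangle
// to different symbols, there is no ODR violation, and foo_b()
// prints "B" as intended.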
You violated the one definition rule for class definitions, and the language specifically forbids doing this. The compiler/linker isn't required to warn or otherwise diagnose it, and such a scenario is certainly not guaranteed to work as expected.
but I don't understand why the linker doesn't even emit a warning.
Violations of the one definition rule don't require a diagnostic because, in order to issue one, the linker would have to prove that the two definitions are not, in fact, equivalent.
This is easy when the classes have different sizes, a different number of members, or different base classes. But as soon as all those factors are equal, you could still have, for instance, different member function definitions, and the linker would have to use some advanced introspection to compare them. Even if that were always possible (and I'm not sure it is, since the object files could be compiled with different options, leading to different output), it would certainly be very inefficient.
As a consequence, the linker simply trusts that whatever you throw at it doesn't violate the ODR. Not a perfect situation, but there it is.
I think your "extra" question is a clue to the main question.
If I understand your extra question, then no, I think you do not want to wrap them in an unnamed namespace: if you #include the same class into multiple .cc files, then you probably want to use just one copy of each method, even if they are defined inside the class.
This (sort of) explains why you only get one version of each function in your main example. I expect the linker is just assuming that the two functions are identical -- generated from the same #included source. It would be nice if the linker would detect when they were different and issue a warning, but I guess that is hard.
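A sketch of that usual arrangement, with the one shared definition in a hypothetical student.h:
// student.h
// the single shared definition that both .cc files include
#ifndef STUDENT_H
#define STUDENT_H

#include <string>

class Student {
public:
    std::string foo() { return "A"; } // implicitly inline
};

#endif

// a.cc and b.cc then both start with:
//   #include "student.h"
// Every translation unit sees the same sequence of tokens, so the
// linker keeping just one copy of the inline function is exactly
// the behavior you want.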
As Mark B indicates, the practical answer is "Just don't go there".
Apart from the violation of the one definition rule, you don't see the compiler complaining because of name mangling in C++.
Edit: As pointed out by Konrad Rudolph, the mangled names in this case would be the same: both translation units produce an identical symbol for Student::foo(), which is exactly why the linker treats them as duplicate definitions of one function rather than as two unrelated functions.