There is almost no overhead to having a separate DLL. Basically, the first call to a function exported from a DLL runs a tiny stub that fixes up the function addresses, so that subsequent calls are performed via a single jump through a jump table. Given how modern CPUs predict branches and cache memory, this extra indirection is practically free.
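To make the indirection concrete, here is a minimal sketch of implicit DLL linking with MSVC (mathlib, MATHLIB_API and add are illustrative names, not anything standard):

    // mathlib.h -- shared between the DLL and its consumers
    #pragma once
    #ifdef MATHLIB_EXPORTS
    #  define MATHLIB_API __declspec(dllexport)
    #else
    #  define MATHLIB_API __declspec(dllimport)
    #endif
    extern "C" MATHLIB_API int add(int a, int b);

    // mathlib.cpp -- build with: cl /LD /DMATHLIB_EXPORTS mathlib.cpp
    #include "mathlib.h"
    extern "C" MATHLIB_API int add(int a, int b) { return a + b; }

    // main.cpp -- build with: cl main.cpp mathlib.lib
    #include <cstdio>
    #include "mathlib.h"
    int main() {
        // This compiles to an indirect call through the import address
        // table (roughly "call qword ptr [__imp_add]" on x64) rather
        // than a direct call. That single extra hop is the per-call
        // cost described above.
        std::printf("%d\n", add(2, 3));
    }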
The main "overhead" is actually an opportunity cost, not an "overhead" per se. That is, modern compilers can do something called "whole program optimization", in which the entire module (.exe or .dll) is compiled and optimized at once, at link time. This means the compiler can adjust calling conventions, inline functions and so on across all the .cpp files in the whole program, rather than just within a single .cpp file.
This can result in a fairly nice performance boost for certain kinds of applications. But of course, whole program optimization cannot happen across DLL boundaries.
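As a sketch of what this enables, consider two translation units built with MSVC's /GL (compile) and /LTCG (link) flags; the file and function names are illustrative:

    // helper.cpp
    int square(int x) { return x * x; }

    // main.cpp -- build with: cl /O2 /GL helper.cpp main.cpp /link /LTCG
    int square(int x);  // declaration only; the body lives in helper.cpp

    int main() {
        // Compiled file by file, this must be a real call, because the
        // optimizer never sees square()'s body while working on main.cpp.
        // With /GL + /LTCG the optimizer runs at link time, sees both
        // translation units, and can inline square() here. A DLL is
        // linked separately, so no optimizer pass can see across it.
        return square(21);
    }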
There are two overheads to a DLL. First, as the DLL is loaded into memory, its internal addresses must be fixed up to match the actual address the DLL was loaded at, rather than the address assumed by the linker. This can be minimized by rebasing the DLLs. The second overhead comes when the program and DLL are loaded and the program's calls into the DLL get the functions' addresses filled in. These overheads are generally negligible except for very large programs and DLLs.
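If relocation cost matters, non-overlapping preferred base addresses can be chosen at link time with MSVC's /BASE option (a sketch; the file names and addresses are illustrative, and note that on modern Windows ASLR via /DYNAMICBASE randomizes the base anyway):

    rem The default base for 32-bit DLLs is 0x10000000, so several
    rem unrebased DLLs would all collide there and need relocation.
    link /DLL /BASE:0x10000000 first.obj  /OUT:first.dll
    link /DLL /BASE:0x18000000 second.obj /OUT:second.dll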
If this is a real concern you can use delay-loaded DLLs, which only get loaded when they are first called. If the DLL is never used, for example because it implements a very uncommon function, then it never gets loaded at all. The downside is that there is a short delay the first time the DLL is called.
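With MSVC, delay-loading needs no source changes, only linker options (mathlib.dll continues the illustrative name from the sketch above):

    rem delayimp.lib supplies the helper that calls LoadLibrary and
    rem GetProcAddress the first time anything from mathlib.dll is used.
    link main.obj mathlib.lib delayimp.lib /DELAYLOAD:mathlib.dll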
I like to use statically linked libraries, not to decrease the overhead but to avoid the hassle of having to ship the DLL alongside the program.
Imported functions have no more overhead than virtual functions.
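Both boil down to a single indirect call through a pointer held in memory. A rough stand-in sketch (g_slot is an illustrative substitute for an import address table entry, not a real Windows mechanism):

    #include <cstdio>

    struct Base {
        // A virtual call is one indirect call: load the vtable pointer,
        // load the slot, call through it.
        virtual int f(int x) { return x + 1; }
        virtual ~Base() = default;
    };

    // A call to an imported function has the same shape: the loader
    // writes the target's address into a table slot (the IAT), and
    // every call goes through that slot.
    static int real_g(int x) { return x * 2; }
    static int (*g_slot)(int) = &real_g;  // stands in for the IAT entry

    int use(Base* b) {
        return b->f(1) + g_slot(2);  // one memory-indirect call each
    }

    int main() {
        Base b;
        std::printf("%d\n", use(&b));  // prints 6
        return 0;
    }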