When you write a C++ program, you use a few low-level libraries to interface with the machine. The C++ Standard Library is one example. Consider new, for instance. When you call new in your program, you’re invoking a piece of code that implements that functionality. Where is that actual code?
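To make that indirection visible, here is a minimal sketch: the allocation function behind new (operator new) is an ordinary, replaceable function that normally comes from the C++ runtime library, and you can swap in your own version to watch the call happen. The toy implementation below is just an illustration, not how any real runtime actually allocates memory.

```cpp
#include <cstdio>
#include <cstdlib>
#include <new>

// By default these two functions live in the C++ runtime library
// (e.g. an MSVCP*.DLL on Windows, libstdc++ on Linux). Replacing them
// here just makes the call visible; this is a toy, not the real thing.
void* operator new(std::size_t size) {
    void* p = std::malloc(size);
    if (!p) throw std::bad_alloc{};
    std::printf("operator new(%zu) -> %p\n", size, p);
    return p;
}

void operator delete(void* p) noexcept {
    std::printf("operator delete(%p)\n", p);
    std::free(p);
}

int main() {
    int* n = new int(42);  // calls the replacement above instead of the library version
    delete n;
}
```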
It’s in a library. That library is deployed in a few different ways. One way is dynamic linking, where the library takes the form of a DLL that has to be present on the machine where you run your program. That’s what MSVCP110.DLL is: one of the library files your program was compiled against. Another way is static linking, where the code from that library is compiled directly into your program. This results in a significant increase in the size of your application, but the other side of that coin is that you don’t need those library files on your target machine. You also need to make sure that any other libraries your program uses are built against the same static library. And if your program shares data with other programs, you may further need to ensure that those programs use the same static libraries.
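If you want to see the difference on a running Windows program, a quick sketch is to ask whether the C++ library DLL is actually loaded into the process. The DLL name below assumes the VS2012 library (msvcp110.dll) mentioned above; substitute whatever version your toolset uses. With MSVC, /MD builds against the DLL runtime and /MT links the runtime in statically.

```cpp
#include <windows.h>
#include <cstdio>

int main() {
    // msvcp110.dll is the VS2012 C++ library DLL; adjust the name for your toolset.
    HMODULE h = GetModuleHandleW(L"msvcp110.dll");
    if (h != nullptr)
        std::printf("The C++ library DLL is loaded, so this build links it dynamically.\n");
    else
        std::printf("Not loaded here; the runtime may be statically linked (e.g. built with /MT).\n");
}
```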
Microsoft and Windows aren’t unique in this. The same thing happens under Linux, though the libraries have different names.
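As a rough way to see which C++ library your own build is talking to, the configuration macros defined by the library headers give an answer. The macro names in this sketch assume GNU libstdc++, LLVM libc++, and the MSVC toolset; other implementations define their own.

```cpp
#include <cstdio>  // any standard header pulls in the library's configuration macros

int main() {
#if defined(__GLIBCXX__)
    std::printf("Built against GNU libstdc++ (libstdc++.so when dynamically linked on Linux).\n");
#elif defined(_LIBCPP_VERSION)
    std::printf("Built against LLVM libc++ (libc++.so / libc++.dylib when dynamically linked).\n");
#elif defined(_MSC_VER)
    std::printf("Built with the MSVC toolset; its C++ library ships as MSVCP*.DLL unless linked statically.\n");
#else
    std::printf("Some other C++ runtime/library.\n");
#endif
}
```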
There are pros and cons to using either shared libraries (i.e. dynamic linking) or static libraries. It’s simple and catchy to say "gahrrr I hate shared libraries," but unless you understand which one is appropriate in which situation, you stand to deploy a poorly designed program.