We're developing a C++ library that currently has over 500 individual .cpp files. These are each compiled and archived into a static library. Even with a parallel build, this takes several minutes, and I'd like to reduce that compilation time.
Each file is on average 110 lines, with a function or two inside. However, for each .cpp file there is a corresponding .h header, and these headers are often included by many of the .cpp files. For example, A.h might be included by A.cpp, B.cpp, C.cpp, and so on.
I'd first like to profile the compilation process. Is there a way to find out how much time is spent doing what? I'm worried that a lot of time is wasted opening header files only to check the include guards and ignore the file.
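From what I've read (assuming a Clang or GCC toolchain, which I haven't verified against ours), the compilers can report this themselves: Clang's -ftime-trace emits a per-translation-unit timeline, and -ftime-report gives a coarser per-phase summary on both compilers.

```
# Clang 9+: writes A.json next to A.o; open it in chrome://tracing
clang++ -ftime-trace -c A.cpp -o A.o

# GCC and Clang: per-phase timing summary printed to stderr
g++ -ftime-report -c A.cpp -o A.o
```

The -ftime-trace output in particular breaks out time spent parsing each included header, which should show directly whether re-opening headers dominates the build.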
If that sort of thing is the culprit, what are best practices for reducing compilation time?
I'm willing to add new grouping headers, but probably not willing to change this many-file layout since this allows our library to also function as an as-needed header-only library.
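To be concrete, a grouping header of the kind I mean would just aggregate the existing per-function headers (names here are hypothetical):

```cpp
// Group.h -- hypothetical umbrella header; header-only users can
// still include A.h, B.h, ... individually as before.
#ifndef MYLIB_GROUP_H
#define MYLIB_GROUP_H

#include "A.h"
#include "B.h"
#include "C.h"

#endif // MYLIB_GROUP_H
```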
Structure your code to use the PIMPL paradigm. The two primary benefits are:

1. The public header no longer needs to include the headers the implementation depends on; it only forward-declares an opaque implementation class, so it stays small and cheap to parse.
2. Clients are insulated from implementation changes: editing the implementation recompiles one .cpp file instead of every file that includes the header.
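A minimal sketch of the pattern (Widget is a placeholder name, not from the library above):

```cpp
// Widget.h -- the public header stays lightweight: it needs <memory>
// but none of the implementation's dependencies.
#pragma once
#include <memory>

class Widget {
public:
    Widget();
    ~Widget();   // must be defined in Widget.cpp, where Impl is complete
    void doWork();
private:
    struct Impl;                  // forward declaration only
    std::unique_ptr<Impl> pImpl;  // opaque pointer to the implementation
};
```

```cpp
// Widget.cpp -- heavy includes live here; changing them or Impl
// recompiles only this translation unit, not every client of Widget.h.
#include "Widget.h"
#include <vector>   // a dependency now hidden from every client

struct Widget::Impl {
    std::vector<int> data;
};

Widget::Widget() : pImpl(std::make_unique<Impl>()) {}
Widget::~Widget() = default;

void Widget::doWork() { pImpl->data.push_back(42); }
```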
For a good overview, see here.