We're developing a C++ library that currently has over 500 individual .cpp files. These are each compiled and archived into a static library. Even with a parallel build, this takes several minutes. I'd like to reduce this compilation time.
Each file averages about 110 lines, with a function or two inside. However, each .cpp file has a corresponding .h header, and these headers are often included by many of the .cpp files. For example, A.h might be included by A.cpp, B.cpp, C.cpp, and so on.
I'd first like to profile the compilation process. Is there a way to find out how much time is spent doing what? I'm worried that a lot of time is wasted opening header files only to check the include guards and ignore the file.
If that sort of thing is the culprit, what are best practices for reducing compilation time?
I'm willing to add new grouping headers, but probably not willing to change this many-file layout since this allows our library to also function as an as-needed header-only library.
If your pre-processor supports the `#pragma once` directive, use it. That will make sure that a .h file is not read more than once.
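For example (a minimal sketch; the declaration is just a placeholder):

```cpp
// A.h
#pragma once

void a();  // declarations as usual; no guard macro needed
```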
If not, use `#include` guards in .cpp files.

Say you have `A.h`:
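```cpp
// A.h -- sketch: the guard macro name and declaration are illustrative
#ifndef A_H
#define A_H

void a();

#endif  // A_H
```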
You can use the following method in A.cpp:
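This test duplicates the guard inside `A.h`, so when `A_H` is already defined the preprocessor can skip opening the file altogether (a sketch, reusing the guard name assumed above):

```cpp
// A.cpp -- external include guard around the #include
#ifndef A_H
#include "A.h"
#endif

void a()
{
    // function definition
}
```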
You would need to repeat that pattern for every .h file a .cpp file includes. For example, if B.cpp pulls in `B.h` and `C.h` (guard names assumed to follow the same convention):
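```cpp
// B.cpp -- the same external-guard pattern, once per header
#ifndef B_H
#include "B.h"
#endif

#ifndef C_H
#include "C.h"
#endif
```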
You can read more about the use of `#include` guards in .cpp files at What is the function of include guard in .cpp (not in .h)?.