How does one handle third party libraries with completely different build systems?


The C++ standard (and the C standard, though it matters less there) requires that every entity have the same definition across all translation units in a program (the one-definition rule); in practice, that extends to compiler switches that affect those definitions. For instance, on MSVC++, one must link against the same version of the C runtime library (/MT versus /MD versus /MTd versus /MDd) in all translation units.
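As a concrete sketch of the hazard (the Widget type and DEBUG_FIELDS macro are made up for illustration): if a switch changes a type's layout, two translation units built with different switches see two different definitions of the "same" type, silently violating the one-definition rule.

    #include <iostream>

    // Hypothetical header shared by two translation units. If one TU is
    // compiled with DEBUG_FIELDS defined (say, implied by a debug-CRT
    // switch) and another is not, the two TUs disagree about Widget's
    // layout -- an ODR violation, and undefined behavior once they
    // exchange Widget objects.
    struct Widget {
    #ifdef DEBUG_FIELDS
        int debug_refcount;  // present only in the "debug" configuration
    #endif
        int value;
    };

    int main() {
        // In a real project the mismatch spans two object files; printing
        // the size here just shows that the switch changes the layout.
        std::cout << "sizeof(Widget) = " << sizeof(Widget) << '\n';
        return 0;
    }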

However, there are a couple of third party dependencies we'd like to use, and two things complicate matters:

  • They all use different build systems (there's an autoconf one, there's a CMake one, and there's one which seems to have its own hand-rolled thing...)
  • The build systems don't all expose these kinds of switches in their configuration, and the ones that are hard-coded are set differently in different libraries. (E.g. one library forces /MD and /MDd, while another forces /MT and /MTd.)

We aren't sure what the best way to handle these kinds of things is. We have discussed the following options:

  • Build our own build system around each third party dependency.
    • PRO: We know things will match
    • PRO: We know that we can do cross platform support the right way
    • CON: We don't know exactly how each of the third party build systems work
    • CON: Lots and lots of work
    • CON: Breaks if the third party dependency changes
  • Try to use the third party build systems, and try to modify them to do what we need.
    • PRO: Seems to be less work
    • CON: We might break the third party system
    • CON: Breaks if the third party dependency changes
    • CON: Forces our own build to be really complicated

We don't know what to do, though, and we can't believe that we are alone in having these kinds of issues. Should we go with one of the options above, or is there some third alternative we've not thought of?

There are 4 answers below.

---

I assume that you are intentionally not mentioning any specific libs?

Anyway, you should ask yourself whether you really need this 3rd party code in your build system.

The 3rd party libs we use are compiled once (with their respective build scripts) and checked for the right VC switches, and then the DLL or LIB file is checked into the source control of the app that uses the lib.

So the compilation of a 3rd party lib is something we only do once per 3rd party release and we don't burden our build system with the intricacies of building the 3rd party libs.

I guess there are valid arguments for either approach; maybe you can add some details to the question about why you need/want to have the 3rd party libs inside your build system.

---

You are right - you are not alone in having these kinds of issues!

In our experience, a loose coupling of dependencies (a fancy way of saying manually copying files, I guess) combined with modifying the 3rd party build system is the most effective approach - especially when building for Windows.

A few things we have found useful:

Document the changes you had to apply for the particular version (we use a wiki page), including any steps or dependencies required (a Perl interpreter is required for building OpenSSL, for example), and run any/all included tests before using the build.

We found that renaming the output libs so that they are consistently marked according to the ABI is really helpful here, rather than using the names generated by the 3rd party build.

So 3rd party C++ dependency X ends up in our directory structure (committed to SVN) as:

X/[version_number]/include        (header files needed to use the lib)
X/[version_number]/lib/Windows    (manually built, tested, and renamed libs)

e.g.

X-vc100.lib
X-vc100-gd.lib

etc.

(We actually copied our naming from the Boost library names, http://www.boost.org/doc/libs/1_49_0/more/getting_started/windows.html#library-naming, since they seemed totally sensible.)

Then the particular 3rd party dependency can be selected by your build system. (For VS we use an inherited property sheet with all the dependencies named as user macros, so we can just have $(x_inc) added to the include directories for a project and $(x_lib) added to the libs; these macros then select the version and ABI required for that particular project.)

---

You don't, strictly speaking, need all your libraries to link against the same runtime. Assuming they're DLLs, it is only a problem if they pass CRT data structures across the DLL boundary. Creating a FILE* in a DLL using one runtime and using it from a DLL linked against another runtime is a recipe for disaster. Calling malloc or new in a DLL using one runtime and free/delete in another will cause lots of fun problems.

But as long as all the CRT-related stuff is kept internal to the DLL, you can safely link to a DLL which uses another CRT. That also means that your debug build can use a library linked against the release CRT. Again, as long as you don't try to mix CRT data structures across libraries.
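To make that concrete, here is a rough sketch of a DLL interface that keeps the CRT internal (all the names are hypothetical, not from any particular library): the DLL both allocates and frees its own objects, so the caller's runtime never touches them.

    // widget_api.h -- hypothetical exported interface. No CRT objects
    // (FILE*, malloc'd buffers, STL containers) cross the DLL boundary;
    // only an opaque pointer and built-in types do.
    extern "C" {
        typedef struct Widget Widget;   // opaque to the caller

        __declspec(dllexport) Widget* widget_create(void);
        __declspec(dllexport) int     widget_get_value(const Widget* w);
        __declspec(dllexport) void    widget_set_value(Widget* w, int v);
        __declspec(dllexport) void    widget_destroy(Widget* w);
    }

    // widget_api.cpp -- inside the DLL. Allocation and deallocation both
    // happen here, with the same CRT, so it does not matter which runtime
    // the caller links against.
    struct Widget { int value; };

    Widget* widget_create(void)                { return new Widget{0}; }
    int     widget_get_value(const Widget* w)  { return w->value; }
    void    widget_set_value(Widget* w, int v) { w->value = v; }
    void    widget_destroy(Widget* w)          { delete w; }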

Note also that most compiler flags do not affect the ABI, and so they can safely be different between libraries (or files). The ones that do change the ABI are generally obvious, like if you force packing or stack alignment.
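For example, forcing 1-byte packing (MSVC /Zp1, or #pragma pack(1)) changes struct layout, so a library built that way disagrees with one built with default packing about the very same declaration. A small illustration (the type names are invented):

    #include <iostream>

    // The "same" struct under two packing settings. Two libraries built
    // with different packing would disagree about field offsets like this.
    #pragma pack(push, 1)
    struct PackedMsg  { char tag; int payload; };  // 5 bytes, payload at offset 1
    #pragma pack(pop)

    struct DefaultMsg { char tag; int payload; };  // typically 8 bytes, payload at offset 4

    int main() {
        std::cout << "packed: "  << sizeof(PackedMsg)
                  << " bytes, default: " << sizeof(DefaultMsg) << " bytes\n";
        return 0;
    }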

So what we do is basically this:

  • wherever possible, we build the library ourselves. For the most part, this means we can, through fairly simple means, control which runtime it should link against. That might require a very small amount of tweaking of the build system, but usually it's just a matter of specifying whether to build a debug or release build, which most build systems have options for. (And if the library uses Visual Studio for its build system, we try to upgrade it to 2010 where possible.)
  • we use the library's own build system. Anything else is just a recipe for misery. You want to be able to trust that the build system is actually kept in sync with the library source code, and the simplest way to ensure that is to use the build system that ships with the source code.
  • if it's not practical to build the library, we just use it as is, and then we just have to distribute whatever runtime it is linked against. We try to avoid this, but it's safe enough to do when no other option is available.
  • when we build a library, we document exactly how it was done on our internal wiki. This is a huge help if we have to upgrade to a newer version of the library, or rebuild it.

We currently depend on three different VS runtimes (2005, 2008 and 2010), which is a pain to deal with, but it works. And for one or two of them, we always use the release version, even in debug builds of our own code.

It's a bit messy to maintain, but it works. And I can't really see a better way to do it.

Of course, you should minimize how many third-party dependencies you have, and when you choose a third-party library, its build system should definitely be a factor to consider. Some are a lot more painful to work with than others.

But in the end, you'll probably end up having to use a few libraries that just don't have well-behaved build systems, or which would be so painful to build that it's just not worth the effort, or where you can't create a debug version of the library. And then just take what you can get. Let it use whatever runtime it likes to use.

---

There is no definitive answer; it depends on how the third party code's interface is designed. If the interface is tightly coupled (for example, it shares non-opaque data types), it is better to rebuild the library with your own build and options. You have to analyze the interface and determine how to integrate it. On the other hand, if the interface is simple and can be easily decoupled, the library can be built as a DLL and called on demand. Of course, you will then have different versions of the C libraries loaded into the application, each with its own instance of I/O buffers, memory management, etc.
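As a hedged sketch of that distinction (the function names are invented for illustration): an interface that passes runtime-owned types across the boundary couples the two sides tightly, while a plain-C-style interface decouples them.

    #include <string>

    // Tightly coupled: std::string's internal layout depends on the
    // runtime and on debug/release settings, so the caller and the DLL
    // must be built with matching options -- rebuild such a library
    // with your own switches.
    __declspec(dllexport) void log_message(const std::string& msg);

    // Loosely coupled: only built-in types cross the boundary, so the
    // DLL can keep whatever runtime it was built with.
    extern "C" __declspec(dllexport) void log_message_c(const char* msg);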

If you have the source code available to you, the better choice is to invest the time to integrate it into your own build.