How do you manage per-environment data in Docker-based microservices?

In a microservice architecture, I'm having a hard time grasping how one can manage environment-specific config (e.g. IP address and credentials for database or message broker).

Let's say you have three microservices ("A", "B", and "C"), each owned and maintained by a different team. Each team needs its own integration environment, where it works with the latest snapshot of its microservice along with stable versions of all the dependency microservices. Of course, you'll also need QA, staging, and production environments. A simplified view of the big picture would look like this:

"Microservice A" Team Environment

  • Microservice A (SNAPSHOT)
  • Microservice B (STABLE)
  • Microservice C (STABLE)

"Microservice B" Team Environment

  • Microservice A (STABLE)
  • Microservice B (SNAPSHOT)
  • Microservice C (STABLE)

"Microservice C" Team Environment

  • Microservice A (STABLE)
  • Microservice B (STABLE)
  • Microservice C (SNAPSHOT)

QA / Staging / Production

  • Microservice A (STABLE, RELEASE, etc)
  • Microservice B (STABLE, RELEASE, etc)
  • Microservice C (STABLE, RELEASE, etc)

That's a lot of deployments, but that problem can be solved by a continuous integration server and perhaps something like Chef/Puppet/etc. The really hard part is that each microservice would need some environment data particular to each place in which it's deployed.

For example, in the "A" Team Environment, "A" needs one address and set of credentials to interact with "B". However, over in the "B" Team Environment, that deployment of "A" needs a different address and credentials to interact with that deployment of "B".

Also, as you get closer to production, environmental config info like this probably needs security restrictions (i.e. only certain people are able to modify or even view it).

So, with a microservice architecture, how do you maintain environment-specific config info and make it available to the apps? A few approaches come to mind, although they all seem problematic:

  • Have the build server bake them into the application at build-time - I suppose you could create a repo of per-environment properties files or scripts, and have the build process for each microservice reach out and pull in the appropriate script (you could also have a separate, limited-access repo for the production stuff). You would need a ton of scripts, though. Basically a separate one for every microservice in every place that microservice can be deployed.
  • Bake them into base Docker images for each environment - If the build server is putting your microservice applications into Docker containers as the last step of the build process, then you could create custom base images for each environment. The base image would contain a shell script that sets all of the environment variables you need. Your Dockerfile would be set to invoke this script prior to starting your application. This has similar challenges to the previous bullet-point, in that now you're managing a ton of Docker images.
  • Pull in the environment info at runtime from some sort of registry - Lastly, you could store your per-environment config inside something like Apache ZooKeeper (or even just a plain ol' database), and have your application code pull it in at runtime when it starts up. Each microservice application would need a way of telling which environment it's in (e.g. a startup parameter), so that it knows which set of variables to grab from the registry (see the sketch after this list). The advantage of this approach is that you can use the exact same build artifact (i.e. application or Docker container) all the way from the team environment up to production. On the other hand, you'd now have another runtime dependency, and you'd still have to manage all of that data in your registry anyway.
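
To make the third option concrete, here is a minimal sketch of a Docker entrypoint script for the runtime-registry approach. The config service URL (config.internal), its path layout, and the KEY=VALUE response format are assumptions for illustration; a real setup using ZooKeeper, Consul, or a database would use that system's own client calls instead of curl.

#!/bin/sh
# entrypoint.sh - sketch: fetch per-environment config at startup, then launch the app
set -e

# The environment name is the startup parameter mentioned above,
# e.g.: docker run -e ENVIRONMENT=team-a my-service
: "${ENVIRONMENT:?ENVIRONMENT must be set (e.g. team-a, qa, production)}"

# Hypothetical HTTP config service; ZooKeeper, Consul, or a database would fill the same role.
CONFIG_URL="http://config.internal/config/${ENVIRONMENT}/my-service"

# Expect plain KEY=VALUE lines and export them into the process environment.
eval "$(curl -fsS "$CONFIG_URL" | sed 's/^/export /')"

exec "$@"  # hand off to the real service process (the image's CMD)

The same image then runs everywhere; only the ENVIRONMENT value changes between deployments.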

How do people commonly address this issue in a microservice architecture? It seems like this would be a common thing to hear about.

Docker Compose supports extending compose files, which is very useful for overriding specific parts of your configuration.

This works well for development environments and may also work for small deployments.

The idea is to have a shared base compose file that you override for different teams or environments.

You can combine that with environment variables to supply different settings.

Environment variables are good for substituting simple values; if you need to make more complex changes, use an override file.

For instance, you can have a base compose file like this:

# docker-compose.yml
version: '3.3'
services:
  service-a:
    image: "image-name-a"
    ports:
     - "${PORT_A}"
  service-b:
    image: "image-name-b"
    ports:
     - "${PORT_B}"
  service-c:
    image: "image-name-c"
    ports:
     - "${PORT_C}"

If you want to change the ports, you can just pass different values for the PORT_A/PORT_B/PORT_C variables.
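
Compose substitutes those variables from the shell environment or from a .env file in the project directory, so each environment can keep its own values (the port mappings below are just placeholder examples):

# .env (one per environment or per team checkout)
PORT_A=8081:8080
PORT_B=8082:8080
PORT_C=8083:8080

# or pass them inline for a one-off run:
PORT_A=9090:8080 PORT_B=9091:8080 PORT_C=9092:8080 docker-compose up -d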

For more complex changes, you can have separate files that override specific parts of the compose file; any parameter of any service can be overridden.

For instance, you can have an override file for service A that uses a different image and adds a volume for development:

# docker-compose.override.yml
services:
  service-a:
    image: "image-alternative-a"
    volumes:
      - /my-dev-data:/var/lib/service-a/data

Docker Compose picks up docker-compose.yml and docker-compose.override.yml by default. If you have more files, or files with different names, you need to specify them in order with -f:

docker-compose -f docker-compose.yml -f docker-compose.dev.yml -f docker-compose.dev-service-a.yml up -d
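
Later files take precedence over earlier ones. If you want to see what the merged configuration looks like before starting anything, you can ask Compose to print it:

docker-compose -f docker-compose.yml -f docker-compose.dev.yml -f docker-compose.dev-service-a.yml config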

For more complex environments, the solution will depend on what you use. I know this is a Docker question, but nowadays it's hard to find pure Docker systems, as most people use Kubernetes. In any case, you will usually have some sort of secret management provided by the platform and managed externally; on the Docker side you then just reference variables that the environment supplies.
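
As one sketch of keeping the image identical and injecting everything from outside, a service can load its settings from a per-environment env file; the file names and the DEPLOY_ENV variable here are just illustrative:

# docker-compose.yml
version: '3.3'
services:
  service-a:
    image: "image-name-a"
    env_file:
      - ./config/${DEPLOY_ENV:-development}.env  # e.g. development.env, qa.env, production.env

The production env file can then live in a restricted location or be written out by the CI/CD system at deploy time, which addresses the access-control concern from the question.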