LetsEncrypt in a Docker (docker-compose) app container not working

I'm using docker-compose for a Rails app so that I have an app container and a db container. In order to test some app functionality I need SSL, so I'm going with Let's Encrypt rather than a self-signed certificate.

The app uses nginx, and the server is Ubuntu 14.04 LTS, with the Phusion Passenger Docker image (lightweight Debian) as the base image.

Normally with Let's Encrypt, I run the usual:

./certbot-auto certonly --webroot -w /path/to/app/public -d www.example.com

My server runs nginx (proxy-passing requests to the app container), so I hopped into the container and ran the certbot command there without issue.

However, when I try to go to https://test-app.example.com it doesn't work. I can't figure out why.

Error on site (Chrome):

This site can’t be reached

The connection was reset.

Curl gives a bit better error:

curl: (35) Unknown SSL protocol error in connection to test-app.example.com

Server nginx app.conf

upstream test_app { server localhost:4200; }
server {
  listen 80;
  listen 443 default ssl;
  server_name test-app.example.com;

  # for SSL
  ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
  ssl_dhparam /etc/ssl/dhparam.pem;
  ssl_prefer_server_ciphers on;
  ssl_ciphers 'ECDHE-RSA-blahblahblah-SHA';

  location / {
    proxy_set_header Host $http_host;
    proxy_pass http://test_app;
  }
}

Container's nginx app.conf

server {
  server_name _;
  root /home/app/test/public;

  ssl_certificate /etc/letsencrypt/live/test-app.example.com/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/test-app.example.com/privkey.pem;

  ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
  ssl_dhparam /etc/ssl/dhparam.pem;
  ssl_prefer_server_ciphers on;
  ssl_ciphers 'ECDHE-RSA-blahblah-SHA';

  passenger_enabled on;
  passenger_user app;
  passenger_ruby /usr/bin/ruby2.3;
  passenger_app_env staging;

  location /app_test/assets/ {
    passenger_enabled off;
    alias /home/app/test/public/assets/;

    gzip_static on;
    expires +7d;
    add_header Cache-Control public;
    break;
  }
}

In my Dockerfile, I have:

# expose port
EXPOSE 80
EXPOSE 443

In my docker-compose.yml file I have:

test_app_app:
  build: "."
  env_file: config/test_app-application.env
  links:
  - test_app_db:postgres
  environment:
    app_url: https://test-app.example.com
  ports:
  - 4200:80

And with docker ps it shows up as:

Up About an hour    443/tcp, 0.0.0.0:4200->80/tcp 

I now suspect it's because the server's nginx - the "front-facing" server - doesn't have the certs, but I can't run the Let's Encrypt command there without an app location to point it at.

I tried running the manual Let's Encrypt command on the server, but presumably because port 80 is already taken, I get this:

socket.error: [Errno 98] Address already in use

Did I miss something here?

What do I do?

There are 2 answers below.

BEST ANSWER

I knew I was missing one small thing. As stated in the question, the nginx on the server is the 'front-facing' nginx and the container's nginx serves only the app, so the server's nginx needed to know about the SSL certs too.

The answer was super simple. Copy the certs over! (Kudos to my client's ops lead)

I cat'ed the fullchain.pem and privkey.pem inside the Docker container and created the corresponding files in /etc/ssl on the server.
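
A rough sketch of that copy step, assuming the container is named test_app_app_1 (check yours with docker ps) and using the cert paths from the question:

# read the cert material out of the container and write it to the host
docker exec test_app_app_1 cat /etc/letsencrypt/live/test-app.example.com/fullchain.pem > /etc/ssl/test-app-fullchain.pem
docker exec test_app_app_1 cat /etc/letsencrypt/live/test-app.example.com/privkey.pem > /etc/ssl/test-app-privkey.pem

# the private key should not be world-readable
chmod 600 /etc/ssl/test-app-privkey.pem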

On the server's /etc/nginx/sites-enabled/app.conf I added:

  ssl_certificate /etc/ssl/test-app-fullchain.pem;
  ssl_certificate_key /etc/ssl/test-app-privkey.pem;

Checked configuration and restarted nginx. Boom! Worked like a charm. :)
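
For reference, the check-and-restart on Ubuntu 14.04 is something like:

# test the config first so a typo doesn't take the site down
sudo nginx -t && sudo service nginx restart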

ANOTHER ANSWER

Fun one.

I would tend to agree that it's likely due to not getting the certs.

First and foremost, read my disclaimer at the end. I would try to use DNS authentication; IMHO it's a better method for something like Docker. A few ideas come to mind. The easiest one that answers your question would be a Docker entrypoint script that gets the certs first and then starts nginx:

#!/bin/bash
set -e

# get the cert before starting the web server
./certbot-auto certonly --webroot -w /path/to/app/public -d www.example.com

# start nginx in the foreground so the container stays up
exec nginx -g 'daemon off;'

This is "okay" solution, IMHO, but is not really "automated" (which is part of the lets encrypt goals). It doesn't really address renewing the certificate down the road. If that's not a concern of yours, then there you go.

You could get really involved and create an entrypoint script that detects when the cert is close to expiring, reruns the command to renew it, and then reloads nginx; a rough sketch follows.
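
Something like this, as a sketch rather than a drop-in script (certbot renew only replaces certs that are close to expiry, so looping it is cheap; the paths and interval are assumptions):

#!/bin/bash
set -e

# initial issuance, as in the entrypoint above
./certbot-auto certonly --webroot -w /path/to/app/public -d www.example.com

# start nginx in the background so this script can keep looping
nginx

set +e  # a failed renewal attempt should not kill the container

# check for renewal twice a day and reload nginx so it picks up a new cert
while true; do
  sleep 12h
  ./certbot-auto renew --quiet && nginx -s reload
done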

A much more complicated (but also more scalable) solution would be to create a Docker image whose sole purpose in life is to handle Let's Encrypt certificates and renewals, and then provide a way of distributing those certificates to other containers, e.g. NFS (or shared Docker volumes if you are really careful); a sketch of the shared-volume idea follows.
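
Purely as an illustration of the shared-volume approach (the volume name, certbot image, and app image name here are hypothetical, not from the question):

# a named volume holds the certs; only the cert container writes to it
docker volume create letsencrypt_certs

# obtain/renew certs in a throwaway certbot container (standalone mode needs port 80 free,
# so in a setup like yours you would use the webroot plugin instead)
docker run --rm -p 80:80 \
  -v letsencrypt_certs:/etc/letsencrypt \
  certbot/certbot certonly --standalone -d test-app.example.com

# the app container mounts the same volume read-only
docker run -d -v letsencrypt_certs:/etc/letsencrypt:ro my_app_image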

For anyone reading this in the future: this was written before compose hooks were an available feature, which would be by far the best way of handling something like this.

Please read this disclaimer:

Docker is not really the best solution for this, IMHO. Docker images should be static data. Because Let's Encrypt certificates expire after three months, your container would have a shelf life of three months or less (or, like I said above, it has to account for renewing). "That's fine!" I hear you say. But that would also mean you are getting a new certificate issued every time you start the container (with the entrypoint method). At the very least, that means the previous certificate gets abandoned every time. I don't know exactly what the ramifications of that are with Let's Encrypt; they may only let you issue so many duplicate certificates before they think something fishy is going on.

What I tend to do most often is actually use configuration management and run nginx as the "front" on the host system, or rely on some other mechanism to handle SSL termination. But that doesn't answer your question of how to get Let's Encrypt to work with Docker. :-)

I hope that helps or points you in a better direction. :-)