Mattermost Error in Self Hosted using Docker + Nginx + Cloudflare


I'm using the Mattermost Docker container as the chat system for my web app. My web application is hosted on an EC2 server, and this is my docker-compose file:

version: '3.9'

services:
  br-db:
    container_name: br-db
    platform: linux/amd64
    image: postgres:15.2-alpine
    ports:
      - "54322:5432"
    restart: always
    env_file: './pgsql/local/.env.local'
    volumes:
      - br-pgsql-data:/var/lib/postgresql/data
    networks:
      - br-net

  br-app:
    container_name: br-app
    build:
      context: ./be
      args:
        - DEV=true
    tty: true
    restart: unless-stopped
    working_dir: /app
    env_file: './be/.env.local'
    ports:
      - "8001:8000"
    environment:
      CONTAINER_ROLE: app
      CONTAINER_ENV: local
    volumes:
      - ./be/source:/app
    depends_on:
      - br-db
    networks:
      - br-net

  br-redis:
    image: redis:6.2-alpine
    container_name: br-redis
    ports:
      - "6379:6379"
    restart: always
    volumes:
      - redisdata:/data
    networks:
      - br-net

  br-celery:
    container_name: br-celery
    build:
      context: ./be
      args:
        - DEV=true
    command: celery -A app worker -l info
    tty: true
    restart: unless-stopped
    working_dir: /app
    env_file: './be/.env.local'
    volumes:
      - ./be/source:/app
    depends_on:
      - br-db
      - br-redis
    networks:
      - br-net

  mattermost:
    platform: linux/amd64
    image: mattermost/mattermost-team-edition:latest
    container_name: mattermost
    ports:
      - "8065:8065"
      - "8067:8067"
    volumes:
      - ./mattermost/local/config:/mattermost/config:rw
    restart: unless-stopped
    depends_on:
      - br-db
      - br-app
    env_file: './mattermost/local/.app.env'
    networks:
      - br-net

  mattermost-db:
    container_name: mattermost-db
    platform: linux/amd64
    image: postgres:15.2-alpine
    ports:
      - "54323:5432"
    restart: always
    env_file: './mattermost/local/.db.env'
    volumes:
      - mattermost-db-data:/var/lib/postgresql/data
    networks:
      - br-net

volumes:
  br-pgsql-data:
  redisdata:
  mattermost-db-data:

networks:
  br-net:
    driver: bridge
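As a quick sanity check (just a sketch, not part of the files above), Mattermost's health endpoint can be hit directly on the published port, which takes nginx and Cloudflare out of the picture:

# Bring the stack up, then query Mattermost's ping endpoint directly on the
# published port 8065, bypassing the reverse proxy and Cloudflare entirely.
docker compose up -d
curl -sS http://localhost:8065/api/v4/system/ping
# a healthy instance answers with JSON containing "status":"OK"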

And this is my nginx configuration, in a file named default.conf.tpl:

server {
    listen 80;

    location / {
        uwsgi_pass br-app:9000;
        include /etc/nginx/uwsgi_params;
        client_max_body_size 50M;
        proxy_read_timeout 300s;
    }

    location ~ /api/v4/websocket$ {
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://mattermost:8065;
    }
}
server {
    listen 443;

    location / {
        uwsgi_pass br-app:9000;
        include /etc/nginx/uwsgi_params;
        client_max_body_size 50M;
        proxy_read_timeout 300s;
    }

    location ~ /api/v4/websocket$ {
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://mattermost:8065;
    }
}

Let's assume my domain is domain.com. I'm trying to connect from the client app to wss://domain.com/api/v4/websocket. When I check the nginx logs, I see a 400 error coming back from Mattermost, and when I check the Mattermost container logs, this is what I see:

{"timestamp":"2023-12-21 15:48:07.082 Z","level":"info","msg":"Starting websocket hubs","caller":"platform/web_hub.go:95","number_of_hubs":4}
{"timestamp":"2023-12-21 15:53:36.287 Z","level":"debug","msg":"websocket.NextReader: closing websocket","caller":"platform/web_conn.go:849","user_id":"","conn_id":"ugc36pt8fir6tr3fphzdxb7u3h","error":"websocket: close 1001 (going away)"}
{"timestamp":"2023-12-21 15:53:36.287 Z","level":"debug","msg":"Received HTTP request","caller":"web/handlers.go:173","method":"GET","url":"/api/v4/websocket","request_id":"jgsx4asum7dqxyb996twxnp7ko","user_id":""}

{"timestamp":"2023-12-21 16:09:06.191 Z","level":"debug","msg":"websocket.NextReader: client side closed socket","caller":"platform/web_conn.go:845","user_id":"dunbpz13m3bd7yi3pa3hbuj3hh","conn_id":"rwuerwmspfyo3e7e7pwkk83dew"}
{"timestamp":"2023-12-21 16:09:06.191 Z","level":"debug","msg":"Received HTTP request","caller":"web/handlers.go:173","method":"GET","url":"/api/v4/websocket","request_id":"jymam8uchtgxzjn4pht8r7zknw","user_id":"dunbpz13m3bd7yi3pa3hbuj3hh"}
{"timestamp":"2023-12-21 16:09:18.097 Z","level":"debug","msg":"websocket.NextReader: client side closed socket","caller":"platform/web_conn.go:845","user_id":"dunbpz13m3bd7yi3pa3hbuj3hh","conn_id":"rwuerwmspfyo3e7e7pwkk83dew"}
{"timestamp":"2023-12-21 16:09:18.097 Z","level":"debug","msg":"Received HTTP request","caller":"web/handlers.go:173","method":"GET","url":"/api/v4/websocket","request_id":"6er9kbyjdty1dxqstu116sob9y","user_id":"dunbpz13m3bd7yi3pa3hbuj3hh"}

What do you think the problem is? I'm a bit confused.

I should also mention that the domain itself is on Cloudflare, and I'm using my own certificate for the domain.
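For reference, the handshake can be checked by hand with curl (a sketch; domain.com is the placeholder above, and the Sec-WebSocket-Key is the sample value from RFC 6455). A 101 response means the Upgrade headers make it through the proxy chain; a 400 usually means they were dropped along the way:

# Simulate the WebSocket handshake with plain curl and inspect the response
# status; -i prints the response headers, -N disables output buffering.
curl -i -N https://domain.com/api/v4/websocket \
  -H "Connection: Upgrade" \
  -H "Upgrade: websocket" \
  -H "Sec-WebSocket-Version: 13" \
  -H "Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ=="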

There is 1 answer below

Answer by Mojtaba Michael:

After struggling with it, I've found the problem.

I had an nginx configuration like this:

upstream chatsystem {
    server mattermost:8065;
}

proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=mattermost_cache:10m max_size=3g inactive=120m use_temp_path=off;

server {
    listen 80;
    server_name domain.com;

    location / {
        uwsgi_pass br-app:9000;
        include /etc/nginx/uwsgi_params;
        client_max_body_size 50M;
        proxy_read_timeout 300s;
    }

 location ~ /mattermost/api/v[0-9]+/(users/)?websocket$ {
       proxy_set_header Upgrade '$http_upgrade';
       proxy_set_header Connection "upgrade";
       client_max_body_size 50M;
       proxy_set_header Host '$http_host';
       proxy_set_header X-Real-IP '$remote_addr';
       proxy_set_header X-Forwarded-For '$proxy_add_x_forwarded_for';
       proxy_set_header X-Forwarded-Proto '$scheme';
       proxy_set_header X-Frame-Options SAMEORIGIN;
       proxy_buffers 256 16k;
       proxy_buffer_size 16k;
       client_body_timeout 60;
       send_timeout 300;
       lingering_timeout 5;
       proxy_connect_timeout 90;
       proxy_send_timeout 300;
       proxy_read_timeout 90s;
       proxy_http_version 1.1;
       proxy_pass http://chatsystem;
   }

   location /mattermost {
       client_max_body_size 50M;
       proxy_set_header Connection "";
       proxy_set_header Host '$http_host';
       proxy_set_header X-Real-IP '$remote_addr';
       proxy_set_header X-Forwarded-For '$proxy_add_x_forwarded_for';
       proxy_set_header X-Forwarded-Proto '$scheme';
       proxy_set_header X-Frame-Options SAMEORIGIN;
       proxy_buffers 256 16k;
       proxy_buffer_size 16k;
       proxy_read_timeout 600s;
       proxy_cache mattermost_cache;
       proxy_cache_revalidate on;
       proxy_cache_min_uses 2;
       proxy_cache_use_stale timeout;
       proxy_cache_lock on;
       proxy_http_version 1.1;
       proxy_pass http://chatsystem;
   }
}

This configuration lived in a .tpl file named default.conf.tpl, and it was rendered and copied into the container by this Docker entrypoint:

#!/bin/sh

set -e

envsubst < /etc/nginx/default.conf.tpl > /etc/nginx/conf.d/default.conf
nginx -g 'daemon off;'

When envsubst parsed default.conf.tpl, it treated nginx variables like $http_host as environment variables; since they are not set in the environment, it replaced them with empty strings, silently stripping them from the generated config.
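The effect is easy to demonstrate (just an illustration, not part of my setup):

# envsubst substitutes every $NAME it encounters; nginx runtime variables are
# not environment variables, so they get replaced with nothing.
printf 'proxy_set_header Host $http_host;\n' | envsubst
# output: proxy_set_header Host ;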

I renamed my nginx configuration file from default.conf.tpl to default.conf, dropped the envsubst step from the entrypoint, and now it is working fine.
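If you do want to keep the .tpl/envsubst approach, envsubst also accepts an explicit list of variables to substitute, which leaves nginx's own $variables untouched. A sketch, using a hypothetical LISTEN_PORT variable:

# Only ${LISTEN_PORT} is substituted; $http_host, $http_upgrade and the other
# nginx variables pass through to the rendered config unchanged.
export LISTEN_PORT=80
envsubst '${LISTEN_PORT}' < /etc/nginx/default.conf.tpl > /etc/nginx/conf.d/default.conf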

Here is the final nginx configuration file:

upstream chatsystem {
    server mattermost:8065;
}

proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=mattermost_cache:10m max_size=3g inactive=120m use_temp_path=off;

server {
    listen 80;
    server_name domain.com;

    location / {
        uwsgi_pass br-app:9000;
        include /etc/nginx/uwsgi_params;
        client_max_body_size 50M;
        proxy_read_timeout 300s;
    }

 location ~ /mattermost/api/v[0-9]+/(users/)?websocket$ {
       proxy_set_header Upgrade $http_upgrade;
       proxy_set_header Connection "upgrade";
       client_max_body_size 50M;
       proxy_set_header Host $http_host;
       proxy_set_header X-Real-IP $remote_addr;
       proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
       proxy_set_header X-Forwarded-Proto $scheme;
       proxy_set_header X-Frame-Options SAMEORIGIN;
       proxy_buffers 256 16k;
       proxy_buffer_size 16k;
       client_body_timeout 60;
       send_timeout 300;
       lingering_timeout 5;
       proxy_connect_timeout 90;
       proxy_send_timeout 300;
       proxy_read_timeout 90s;
       proxy_http_version 1.1;
       proxy_pass http://chatsystem;
   }

   location /mattermost {
       client_max_body_size 50M;
       proxy_set_header Connection "";
       proxy_set_header Host $http_host;
       proxy_set_header X-Real-IP $remote_addr;
       proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
       proxy_set_header X-Forwarded-Proto $scheme;
       proxy_set_header X-Frame-Options SAMEORIGIN;
       proxy_buffers 256 16k;
       proxy_buffer_size 16k;
       proxy_read_timeout 600s;
       proxy_cache mattermost_cache;
       proxy_cache_revalidate on;
       proxy_cache_min_uses 2;
       proxy_cache_use_stale timeout;
       proxy_cache_lock on;
       proxy_http_version 1.1;
       proxy_pass http://chatsystem;
   }
}
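Before restarting anything, the configuration can be syntax-checked from inside the running proxy container, so the mattermost and br-app upstream hostnames resolve (a sketch; br-proxy is a hypothetical service name, since the nginx service is not shown in the compose file above):

# Validate the nginx configuration from inside the compose network.
docker compose exec br-proxy nginx -t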

And this is the final Dockerfile:

FROM nginxinc/nginx-unprivileged:1.25.3-alpine
LABEL maintainer="[email protected]"

COPY default.conf /etc/nginx/conf.d

COPY ./uwsgi_params /etc/nginx/uwsgi_params

COPY run.sh /run.sh

USER root

RUN chown nginx:nginx /etc/nginx/conf.d/default.conf

RUN chmod +x /run.sh

USER nginx

CMD ["/run.sh"]

And this is the entrypoint:

#!/bin/sh

set -e
nginx -g 'daemon off;'
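After changing the Dockerfile and entrypoint, the proxy image has to be rebuilt and the container recreated; something along these lines (again assuming the nginx service is called br-proxy):

# Rebuild only the proxy image, recreate its container, and tail the logs to
# confirm nginx starts cleanly with the new config.
docker compose build br-proxy
docker compose up -d br-proxy
docker compose logs -f br-proxy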

Note: while it was misconfigured I was getting several different kinds of errors, such as "malformed host header", "missing host header", and assorted websocket errors.