To expand on and clarify the title a bit: my questions concern managing the Kiwi TCMS data that needs to be persistent. After reading the documentation, I did not find any examples or configuration steps for making Kiwi TCMS work with remote databases and storage servers, so can you help point me in the right direction on the questions below:

  1. Is it possible to have Kiwi TCMS use a remote database (for example, a MariaDB instance on AWS RDS)?
  2. Is it possible to have Kiwi TCMS use a remote uploads folder for attachments, pictures, etc. on a remote server or storage system (for example, any remote storage server or a simple AWS S3 bucket)?
  3. If either of questions "1" or "2" is possible, can they be configured via the "docker-compose.yml" file provided with the repo, or would they need to be configured using a different method?
  4. If either of questions "1" or "2" is possible (especially the remote-DB question), would this setup play well when running migrations (... /Kiwi/manage.py migrate), or would special steps need to be taken since the DB is running remotely?

Note: the main reason for my questions is that having a standalone remote DB and/or uploads folder would make it easier to back up/update/restore/restart/reset any server or Kubernetes pod running the Kiwi TCMS tool, without having to worry about the data that needs to be persistent.

Note that both the DB and file storage volumes are persistent in the default configuration. That is on purpose, so that they survive a docker-compose down and persist between upgrades! So your question is really how to put them on a different machine.

For the database, everything is controlled via environment variables: https://kiwitcms.readthedocs.io/en/latest/configuration.html lists all config settings and https://kiwitcms.readthedocs.io/en/latest/installing_docker.html#customization tells you how to override them.

import os

# Database settings
DATABASES = {
    "default": {
        "ENGINE": os.environ.get("KIWI_DB_ENGINE", "django.db.backends.mysql"),
        "NAME": os.environ.get("KIWI_DB_NAME", "kiwi"),
        "USER": os.environ.get("KIWI_DB_USER", "kiwi"),
        "PASSWORD": os.environ.get("KIWI_DB_PASSWORD", "kiwi"),
        "HOST": os.environ.get("KIWI_DB_HOST", ""),
        "PORT": os.environ.get("KIWI_DB_PORT", ""),
        "OPTIONS": {},
    },
}

Since these are environment variables, you can also configure them directly in your docker-compose.yml file, as shown in the upstream file itself:

environment:
    KIWI_DB_HOST: db
    KIWI_DB_PORT: 3306
    KIWI_DB_NAME: kiwi
    KIWI_DB_USER: kiwi
    KIWI_DB_PASSWORD: kiwi

So there's nothing stopping you from pointing the DB connection to a separate host, presumably your database cluster which you use for other apps as well.
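As a sketch of that idea (the hostname and credentials below are placeholder values, not a real endpoint), pointing the environment variables at a hypothetical AWS RDS MariaDB instance would look like this in docker-compose.yml:

```yaml
# Sketch only: host, port and credentials are placeholders.
# Substitute your actual RDS endpoint and credentials.
environment:
    KIWI_DB_ENGINE: django.db.backends.mysql
    KIWI_DB_HOST: my-kiwi-db.abc123.eu-west-1.rds.amazonaws.com
    KIWI_DB_PORT: 3306
    KIWI_DB_NAME: kiwi
    KIWI_DB_USER: kiwi
    KIWI_DB_PASSWORD: use-a-strong-password-here
```

With the database hosted remotely, you could presumably also drop the local db service and its volume from docker-compose.yml, since nothing would use them anymore.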

From the point of view of the Kiwi TCMS application the database is remote anyway: it is accessed via TCP, and it doesn't matter whether it is another container running alongside the app or a completely different host in a completely different network.

For the volume storing uploaded files the situation is a bit different. The volume needs to be mounted inside the app container, which is done via these lines:

volumes:
    - uploads:/Kiwi/uploads:Z

That maps/mounts a persistent volume from the docker host into the running container. Docker supports various setups for volumes, see https://docs.docker.com/storage/volumes/, but probably one of the simplest is the following:

  • Mount your NFS (or other) volume under /mnt/nfs/kiwitcms_uploads on the docker host
  • Then mount /mnt/nfs/kiwitcms_uploads to /Kiwi/uploads inside the running container.
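Alternatively (a sketch, assuming an NFS setup; the server address and export path below are placeholders), docker-compose can declare the uploads named volume itself as an NFS mount via the local driver's options, so the docker host mounts the remote storage for you:

```yaml
# Sketch only: addr and device are placeholders for your NFS server/export.
volumes:
    uploads:
        driver: local
        driver_opts:
            type: "nfs"
            o: "addr=nfs.example.com,rw"
            device: ":/exports/kiwitcms_uploads"
```

The service definition keeps mounting the volume as uploads:/Kiwi/uploads; only the volume declaration changes.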

I don't know what the implications of this are or how stable it is. Refer to the Docker documentation and your DevOps admin for best practices here.

However, at the end of the day, if you can make a network storage/block device available to the docker host, then you can mount it inside the running container and the Kiwi TCMS application will treat it as a regular filesystem.