Ory Kratos can't start on Kubernetes with Helm


Whenever I try to install Ory Kratos on Kubernetes with Helm, the installation fails.

Here is my values.yaml file:

kratos:
  config:
    dsn: postgres://admin:[email protected]:5432/postgres_db
    secrets:
      cookie:
        - randomsecret
      cipher:
        - randomsecret
      default:
        - randomsecret
      
    identity:
      default_schema_id: default
      schemas:
        - id: default
          url: file:///etc/config/identity.default.schema.json
    courier:
      smtp:
        connection_uri: smtps://username:[email protected]
        
    selfservice:
      default_browser_return_url: http://127.0.0.1:4455/
  automigration:
    enabled: true
  identitySchemas:
    'identity.default.schema.json': |
      {
        "$id": "https://schemas.ory.sh/presets/kratos/identity.email.schema.json",
        "$schema": "http://json-schema.org/draft-07/schema#",
        "title": "Person",
        "type": "object",
        "properties": {
          "traits": {
            "type": "object",
            "properties": {
              "email": {
                "type": "string",
                "format": "email",
                "title": "E-Mail",
                "ory.sh/kratos": {
                  "credentials": {
                    "password": {
                      "identifier": true
                    }
                  },
                  "recovery": {
                    "via": "email"
                  },
                  "verification": {
                    "via": "email"
                  }
                }
              }
            },
            "required": [
              "email"
            ],
            "additionalProperties": false
          }
        }
      }

I run the command helm install kratos -f values.yaml ory/kratos. It pauses for a while and then fails with Error: INSTALLATION FAILED: failed pre-install: timed out waiting for the condition
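Helm itself prints very little about why a pre-install hook timed out. Re-running with the standard --debug flag, or listing recent cluster events, usually surfaces more detail (both are stock Helm/kubectl options):

# Re-run the install with verbose output from Helm
helm install kratos -f values.yaml ory/kratos --debug

# List recent cluster events in chronological order
kubectl get events --sort-by=.metadata.creationTimestamp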

Helm also creates a job that repeatedly spawns kratos-automigrate pods; each pod crashes within a couple of minutes with status "Error", and a new one is created in its place.
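To watch this loop directly, the usual kubectl commands work; the job and pod names here come from the behaviour described above, and the exact pod suffixes will vary:

# Show the pre-install migration job Helm created
kubectl get jobs

# Watch the pods it spawns crash and get replaced
kubectl get pods --watch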


1 Answer


I had the same issue when deploying Ory Kratos to a Minikube instance and discovered it was caused by the Kratos migration job being unable to reach the database defined in kratos.config.dsn.

time=2023-03-03T21:53:26Z level=warning msg=Unable to ping database, retrying. audience=application error=map[message:failed to connect to `host=127.0.0.1 user=foo database=db`: dial error (dial tcp 127.0.0.1:5432: connect: connection refused)

Since 127.0.0.1 inside the pod resolves to the container itself rather than to the host machine, the migration job could not reach the database and never completed this prerequisite step. After changing the DSN to my machine's resolvable hostname, it was able to connect to the database (PostgreSQL).

kratos:
  config:
    dsn: postgres://foo:bar@resolvable-address-here:5432/db
...
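If the database also runs inside the cluster, an alternative is to point the DSN at the database's Kubernetes Service DNS name, which avoids depending on the host machine entirely. A sketch, assuming a Service named postgres in the default namespace (both names are placeholders for your setup):

kratos:
  config:
    # <service>.<namespace>.svc.cluster.local resolves inside the cluster
    dsn: postgres://foo:bar@postgres.default.svc.cluster.local:5432/db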

I discovered this after mdaniel's suggestion to use kubectl logs to inspect the pod's output.
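For reference, the logs can be pulled straight from the job rather than from an individual pod; this assumes the job is named kratos-automigrate, as the pod names above suggest:

# Print logs from one of the job's pods
kubectl logs job/kratos-automigrate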