Issue with OpenSearch index pattern, alias, and fluentd index name


I have an OpenSearch cluster that receives logs from fluentd. Now I want to apply an ISM policy to roll over my indices once they reach a certain threshold.

I am following this document to create the ISM policy: https://opensearch.org/docs/latest/im-plugin/ism/policies/#sample-policy-with-ism-template-for-auto-rollover.

The current configuration is something like this:

  1. Fluentd sends all logs to an index named after the pattern "mylogs-k8s-namespace", so if there are 10 namespaces, 10 indices get created:
logstash_format false
index_name mylogs-${record['kubernetes']['namespace_name']}
  2. Next, I created an ISM policy that includes the rollover condition:
PUT _plugins/_ism/policies/rollover_policy
{
  "policy": {
    "description": "Example rollover policy.",
    "default_state": "rollover",
    "states": [
      {
        "name": "rollover",
        "actions": [
          {
            "rollover": {
              "min_size": "10mb"
            }
          }
        ],
        "transitions": []
      }
    ],
    "ism_template": {
      "index_patterns": ["mylogs-kube-system*"],
      "priority": 100
    }
  }
}
  3. Then I created a template that applies this policy to all new indices:
PUT _index_template/ism_rollover
{
  "index_patterns": ["mylogs-kube-system*"],
  "template": {
   "settings": {
    "plugins.index_state_management.rollover_alias": "mylogs-kube-system"
   }
 }
}
  4. As per the documentation, the next step is to create an index with the above alias:
PUT mylogs-kube-system-000001
{
  "aliases": {
    "mylogs-kube-system": {
      "is_write_index": true
    }
  }
}
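When steps 1 to 4 are applied before fluentd starts writing, the resulting state can be checked from Dev Tools (a sketch; the paths below assume the example names used in the steps above):

GET _cat/aliases/mylogs-kube-system?v
GET _plugins/_ism/explain/mylogs-kube-system-000001

The first call should list mylogs-kube-system-000001 with is_write_index set to true; the second shows whether ISM has attached rollover_policy to the index.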

Now here comes the problem:

If fluentd has already started pushing logs to my index "mylogs-kube-system", then step 4 above does not work. It fails with an error saying that an index with the same name already exists.

This makes sense: fluentd has already created the index, and we cannot have an alias, index, or data stream with the same name.

To overcome this, I have to stop fluentd, delete the index ("mylogs-kube-system" in this case), apply the policy and alias first (steps 1 to 4), and then start fluentd again. This way it works fine and the rollover happens.
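That workaround boils down to the following sequence (a sketch; fluentd must be stopped before running it, and the names are the examples from above):

DELETE mylogs-kube-system
# re-run steps 2 and 3 (policy and template), then bootstrap the write index:
PUT mylogs-kube-system-000001
{
  "aliases": {
    "mylogs-kube-system": { "is_write_index": true }
  }
}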

However, as I understand it, this is not a good solution; we cannot keep stopping fluentd every time a new namespace is added. I am looking for a concrete solution to make this work.

I have tried the following things with no luck:

  1. Changing the index name in fluentd (step 1) to logstash_prefix with a date. The logs keep getting added to a new index per day (e.g. mylogs-kube-system-27052022), but the rollover does not happen.

  2. Changing the index name in fluentd to mylogs-k8s-namespace-000001, but then fluentd sends logs only to this one index forever.

The conclusion I can draw here is that the index name and the alias name have to be different; but when they are, fluentd stops sending logs to the correct alias and the rollover breaks.
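For context on why the alias matters: once an alias with a designated write index exists, indexing through the alias name routes documents to the current write index, so the backing index names (mylogs-kube-system-000001, -000002, ...) never need to appear in the client configuration. A sketch using the example names from above:

POST mylogs-kube-system/_doc
{
  "message": "stored in whichever backing index is currently the write index"
}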

1 Answer

Try using a data stream instead of a plain index template:

PUT _index_template/log
{
  "index_patterns": [
    "log-*"
  ],
  "data_stream": {}
}
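With this template in place, a data stream can be created explicitly (or implicitly by the first write). Every document must carry a @timestamp field, and rollover is performed against the stream name. A sketch, where log-kube-system is an assumed stream name matching the pattern above:

PUT _data_stream/log-kube-system

POST log-kube-system/_doc
{
  "@timestamp": "2022-05-27T12:00:00Z",
  "message": "example log line"
}

POST log-kube-system/_rollover

OpenSearch manages the hidden backing indices (".ds-log-kube-system-000001"-style names) itself, so the index-versus-alias naming conflict from the question goes away.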