Knative logging: logstash - Kibana unable to fetch mapping


For Knative logging, following the instructions here - https://github.com/knative/docs/blob/master/serving/installing-logging-metrics-traces.md#elasticsearch-kibana-prometheus--grafana-setup, I tried to visualise the logs using the Kibana UI (the visualization tool for Elasticsearch) but got stuck with the following error while configuring an index pattern for logstash-*: "Unable to fetch mapping. Do you have indices matching the pattern?"

Is there any workaround or fix for this?

Update: here is what I see when I make a cURL GET request against Elasticsearch, as suggested in the comments (screenshot of the response).
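For reference, a request along these lines should work once Elasticsearch is reachable locally. This is only a sketch: the elasticsearch-logging service name and the knative-monitoring namespace are the defaults from the Knative monitoring bundle and are assumed here, so adjust them to your setup.

# Forward the Elasticsearch HTTP port to localhost (run in its own terminal)
kubectl port-forward --namespace knative-monitoring service/elasticsearch-logging 9200:9200

# List the indices; a working setup should show logstash-* entries here
curl http://localhost:9200/_cat/indices?v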


There are 2 answers below

BEST ANSWER

Here are some additional steps I had to perform to make this work completely. Posting them here so they may help someone facing the same issue and looking for an answer.

First, run the command below to apply a patched release, which fixes the issue of the fluentd-ds pods not being created:

kubectl apply -f https://raw.githubusercontent.com/gevou/knative-blueprint/master/knative-serving-release-0.2.2-patched.yaml

Verify that each of your nodes has the beta.kubernetes.io/fluentd-ds-ready=true label:

kubectl get nodes --selector beta.kubernetes.io/fluentd-ds-ready=true

If you receive the "No resources found" response, run the following command to ensure that the Fluentd DaemonSet runs on all your nodes:

kubectl label nodes --all beta.kubernetes.io/fluentd-ds-ready="true"

Run the following command to ensure that the fluentd-ds daemonset is ready on at least one node:

kubectl get daemonset fluentd-ds --namespace knative-monitoring
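To double-check that the DaemonSet actually scheduled pods, you can also list the pods directly; this assumes the default fluentd-ds naming from the step above:

# Pods created by the DaemonSet are named fluentd-ds-<suffix>; one should be Running per labelled node
kubectl get pods --namespace knative-monitoring | grep fluentd-ds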


Wait for a while, then run this command:

kubectl proxy

Navigate to the Kibana UI. It might take a couple of minutes for the proxy to work.
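With the proxy running, Kibana is usually reachable through the API server proxy path below. The kibana-logging service name is the default from the Knative monitoring bundle and is assumed here; adjust it if your Kibana service is named differently.

http://localhost:8001/api/v1/namespaces/knative-monitoring/services/kibana-logging/proxy/app/kibana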

  • On the "Configure an index pattern" page, enter logstash-* as the Index pattern, select @timestamp as the Time Filter field name, and click the Create button.

  • To create the second index pattern, click the Create Index Pattern button at the top left of the page. Enter zipkin* as the Index pattern, select timestamp_millis as the Time Filter field name, and click the Create button.

If the issue still exists, follow the suggestion from the comments above and check which indices actually exist in Elasticsearch:

GET _cat/indices?v


Added the end-to-end findings here

SECOND ANSWER

There is a bug in recent versions of Knative, documented in this issue: https://github.com/knative/serving/issues/2218. There is already an approved but not yet merged PR for it, which you can see here: https://github.com/knative/serving/pull/2560.

In short, the problem is that the fluentd pods use the system-node-critical priority class, which is no longer supported outside of the kube-system namespace.

As a result, the fluentd pods do not get created, no logs are sent to Elasticsearch, and consequently no logstash indices show up in Kibana.
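If you want to confirm that this is what is happening on your cluster, describing the DaemonSet and checking the namespace events should surface the failed pod creations; the exact message depends on your Kubernetes version:

# Failed pod creations show up under Events in the describe output (e.g. FailedCreate)
kubectl describe daemonset fluentd-ds --namespace knative-monitoring

# The namespace events usually include the rejection reason for the fluentd-ds pods
kubectl get events --namespace knative-monitoring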

As a workaround for Knative v0.2.2, you can download the release file from https://github.com/knative/serving/releases/download/v0.2.2/release.yaml and delete line 1909.
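As a sketch, one way to do that from the command line (assuming curl and GNU sed; on BSD/macOS sed the -i flag needs an argument):

# Download the v0.2.2 release file
curl -L -o release.yaml https://github.com/knative/serving/releases/download/v0.2.2/release.yaml

# Delete line 1909 (the unsupported priorityClassName entry) in place
sed -i '1909d' release.yaml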

You can then install the patched version: kubectl apply -f release.yaml

If you don't want to download and edit the file yourself, you can get an already patched version of release 0.2.2 here, which you can install with:

kubectl apply -f https://raw.githubusercontent.com/gevou/knative-blueprint/master/knative-serving-release-0.2.2-patched.yaml

You can do something similar for previous versions of course.