Thanos Receiver - Error on series with out-of-order labels

Sorry, I am a beginner and very new to this tech stack. I installed Thanos using Helm, and my receiver pod is running, but when I try to write to the exposed service I get the following in my pod logs:

level=warn ts=2022-12-19T06:22:59.025976065Z caller=writer.go:164 component=receive component=receive-writer tenant=default-tenant msg="Error on series with out-of-order labels" numDropped=42

level=warn ts=2022-12-19T06:27:59.039168689Z caller=writer.go:164 component=receive component=receive-writer tenant=default-tenant msg="Error on series with out-of-order labels" numDropped=10
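From what I read, the remote-write protocol expects the labels of each series to be sorted by label name, so I tried to reproduce the check with a small sketch (my own illustration, not Thanos code):

```python
# Illustrative sketch (not actual Thanos code): Prometheus/Thanos expects
# each series' labels to be sorted lexicographically by label name; a
# series like `bad` below would be dropped as "out-of-order labels".
def labels_out_of_order(labels):
    """Return True if the label names are not sorted lexicographically."""
    names = [name for name, _ in labels]
    return names != sorted(names)

ok = [("__name__", "up"), ("instance", "edge-1"), ("job", "node")]
bad = [("job", "node"), ("__name__", "up"), ("instance", "edge-1")]

print(labels_out_of_order(ok))   # False
print(labels_out_of_order(bad))  # True
```

Is this the kind of ordering the receiver is complaining about, and if so, where in my setup would the labels end up unsorted?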

My pods:

NAMESPACE           NAME                                                     READY   STATUS    RESTARTS       AGE
cert-manager        cert-manager-58696f7bc5-khljs                            1/1     Running   0              17h
cert-manager        cert-manager-cainjector-7dbdb54686-4l66v                 1/1     Running   0              17h
cert-manager        cert-manager-webhook-5ccf955c77-5jzvj                    1/1     Running   0              17h
default             alertmanager-prometheus-kube-prometheus-alertmanager-0   2/2     Running   1 (16h ago)    17h
default             prometheus-grafana-779d4dc5dd-9lnhg                      3/3     Running   0              17h
default             prometheus-kube-prometheus-operator-f598bc7df-df4xd      1/1     Running   0              17h
default             prometheus-kube-state-metrics-6bdd65d76-4mlf4            1/1     Running   0              17h
default             prometheus-prometheus-kube-prometheus-prometheus-0       2/2     Running   0              17h
default             prometheus-prometheus-node-exporter-kxg27                1/1     Running   0              17h
default             thanos-exporter-query-8569bfcddd-8l6gv                   1/1     Running   0              12h
default             thanos-exporter-query-frontend-f9f49cb6f-dvbjb           1/1     Running   0              12h
default             thanos-exporter-receive-0                                1/1     Running   0              12h
flotta              flotta-controller-manager-7c8b5d99b5-kjkn7               2/2     Running   0              17h
flotta              flotta-edge-api-65bc4cbd87-wsmsx                         2/2     Running   0              17h
kube-system         cilium-k4rww                                             1/1     Running   0              17h
kube-system         cilium-operator-6d6b956798-zqjnx                         1/1     Running   0              16h
kube-system         coredns-787d4945fb-2dzpw                                 1/1     Running   1 (17h ago)    19h
kube-system         coredns-787d4945fb-hnwrn                                 1/1     Running   1 (17h ago)    19h
kube-system         etcd-daringmouse                                         1/1     Running   10 (17h ago)   19h
kube-system         kube-apiserver-daringmouse                               1/1     Running   10 (17h ago)   19h
kube-system         kube-controller-manager-daringmouse                      1/1     Running   12 (17h ago)   19h
kube-system         kube-proxy-xz49g                                         1/1     Running   12 (17h ago)   19h
kube-system         kube-scheduler-daringmouse                               1/1     Running   11 (17h ago)   19h
openshift-ingress   ingress-router-89679f477-bxm99                           1/1     Running   1 (17h ago)    17h

Can anyone help with this issue? Why does this happen, and how can I overcome it?
