How to configure the Istio Helm chart to use an external kube-prometheus-stack?


I have deployed the Istio service mesh on a GKE cluster in the istio-system namespace using the base and istiod Helm charts, following this documentation.

I have deployed Prometheus, Grafana, and Alertmanager using the kube-prometheus-stack Helm chart.

All the pods for these workloads are running fine and I don't see any errors. However, no Istio-related metrics show up in the Prometheus UI, and because of that the Kiali dashboard doesn't show any network graph.

Can anyone help me resolve this issue?


There are 4 answers below.

BEST ANSWER

You need to add additionalScrapeConfigs for Istio in the kube-prometheus-stack Helm chart values.yaml:

prometheus:
  prometheusSpec:
    additionalScrapeConfigs:
      - {{ add your scrape config for Istio }}
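
For example, here is a minimal sketch of such scrape configs, modelled on the scrape jobs in Istio's bundled Prometheus sample (the job names and the istio-system namespace are assumptions; adjust them to your setup):

prometheus:
  prometheusSpec:
    additionalScrapeConfigs:
      # Control plane: scrape istiod's http-monitoring port
      - job_name: istiod
        kubernetes_sd_configs:
          - role: endpoints
            namespaces:
              names:
                - istio-system
        relabel_configs:
          - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
            action: keep
            regex: istiod;http-monitoring
      # Data plane: scrape Envoy sidecar metrics from pods exposing a *-envoy-prom port
      - job_name: envoy-stats
        metrics_path: /stats/prometheus
        kubernetes_sd_configs:
          - role: pod
        relabel_configs:
          - source_labels: [__meta_kubernetes_pod_container_port_name]
            action: keep
            regex: '.*-envoy-prom'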
ANSWER

I was able to resolve the above-mentioned issue by creating ServiceMonitors for the data plane and the control plane. See the link below for more details.

https://tetrate.io/blog/how-to-configure-prometheus-operator-scrape-metrics-from-istio-1-6/
ANSWER

With Prometheus deployed via the kube-prometheus-stack Helm chart, you need to add ServiceMonitors for Istio metrics, because the Prometheus Operator does not honour the prometheus.io/scrape Kubernetes annotations. Create these ServiceMonitors to enable scraping of Istio metrics (the release: prometheus label must match your kube-prometheus-stack release name so that the Operator selects them):

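# Control plane: scrapes the Istio control-plane components on their http-monitoring ports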
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: prometheus-oper-istio-controlplane
  labels:
    release: prometheus
spec:
  jobLabel: istio
  selector:
    matchExpressions:
      - {key: istio, operator: In, values: [mixer,pilot,galley,citadel,sidecar-injector]}
  namespaceSelector:
    any: true
  endpoints:
  - port: http-monitoring
    interval: 15s
  - port: http-policy-monitoring
    interval: 15s
---
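# Data plane: scrapes Envoy sidecar metrics from every pod that exposes an *-envoy-prom port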
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: prometheus-oper-istio-dataplane
  labels:
    monitoring: istio-dataplane
    release: prometheus
spec:
  selector:
    matchExpressions:
      - {key: istio-prometheus-ignore, operator: DoesNotExist}
  namespaceSelector:
    any: true
  jobLabel: envoy-stats
  endpoints:
  - path: /stats/prometheus
    targetPort: http-envoy-prom
    interval: 15s
    relabelings:
    - sourceLabels: [__meta_kubernetes_pod_container_port_name]
      action: keep
      regex: '.*-envoy-prom'
    - action: labelmap
      regex: "__meta_kubernetes_pod_label_(.+)"
    - sourceLabels: [__meta_kubernetes_namespace]
      action: replace
      targetLabel: namespace
    - sourceLabels: [__meta_kubernetes_pod_name]
      action: replace
      targetLabel: pod_name

Once these ServiceMonitors are created, metrics should start appearing within a few minutes. You can check the status in the Prometheus UI under Status -> Targets, where you should see entries like:
serviceMonitor/yournamespace/prometheus-oper-istio-controlplane/0 (1/1 up) and serviceMonitor/yournamespace/prometheus-oper-istio-dataplane/0 (1/1 up)

ANSWER

Istio expects Prometheus to discover which pods are exposing metrics through the use of the Kubernetes annotations prometheus.io/scrape, prometheus.io/port, and prometheus.io/path.

The Prometheus community has decided that those annotations, while popular, are insufficiently useful to be enabled by default. Because of this, the kube-prometheus-stack Helm chart does not discover pods using those annotations.

To get your installation of Prometheus to scrape your Istio metrics, you need to either configure Istio to expose metrics in a way your Prometheus installation expects (check your Prometheus configuration for that; it varies by setup), or add a Prometheus scrape job that performs discovery using the above annotations. An example of the latter is sketched below.
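
For illustration, a scrape job that honours those annotations usually looks like the widely used kubernetes-pods job below (a sketch only; the job name is arbitrary, and with kube-prometheus-stack it would go under additionalScrapeConfigs in the chart values):

- job_name: kubernetes-pods
  kubernetes_sd_configs:
    - role: pod
  relabel_configs:
    # Keep only pods annotated with prometheus.io/scrape: "true"
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
      action: keep
      regex: "true"
    # Use a custom metrics path from prometheus.io/path, if set
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
      action: replace
      target_label: __metrics_path__
      regex: (.+)
    # Use a custom port from prometheus.io/port, if set
    - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
      action: replace
      regex: ([^:]+)(?::\d+)?;(\d+)
      replacement: $1:$2
      target_label: __address__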

Details about how to integrate Prometheus with Istio are available here and an example Prometheus configuration file is available here.