Not able to get traces in Tempo from Grafana Loki and OTel Collector


I have been stuck on this issue for more than two weeks. I have Fluent Bit set up to send the logs like this (Fluent Bit -> Loki, and OTel Collector -> Tempo). I have made all the changes, but I am not able to get traces in Tempo; it gives the error "failed to get trace with id: 154717fffa19312ec6fea8533b64400f Status: 404 Not Found Body: trace not found". Can someone please help here? (I deployed Tempo, Loki, and the OTel Collector via Helm.)
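
For context, the Grafana side of the Loki -> Tempo link is driven by a derived field on the Loki datasource that extracts the trace_id and queries the Tempo datasource with it; the 404 "trace not found" is Tempo's reply to exactly that query. A minimal provisioning sketch of that link; the datasource names, the tempo UID, the service URLs/ports, and the matcherRegex are all assumptions to adapt to your setup:

apiVersion: 1
datasources:
  - name: Tempo
    type: tempo
    uid: tempo                                                  # referenced by the derived field below
    url: http://tempo.monitoring-loki.svc.cluster.local:3200    # query port depends on your Tempo chart/version
  - name: Loki
    type: loki
    url: http://loki-svc.monitoring-loki.svc.cluster.local:3100
    jsonData:
      derivedFields:
        - datasourceUid: tempo                     # must match the Tempo datasource uid
          name: TraceID
          matcherRegex: '"trace_id":"(\w+)"'       # assumes trace_id appears as a JSON field in the log line
          url: '$${__value.raw}'                   # the captured id becomes the Tempo query ($$ escapes env interpolation)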

Fluent Bit config

apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-bit-config
  namespace: monitoring-loki
data:
  fluent-bit.conf: |

   [SERVICE]
     Flush 1
     Log_Level info
     Daemon off 
     Parsers_File parsers.conf

   [INPUT]
     Name tail
     Path /var/log/*.log
     Parser docker
     Tag kube.* 
     Refresh_Interval 5
     Mem_Buf_Limit 5MB
     Skip_Long_Lines On

   [FILTER]
     Name kubernetes
     Match kube.*
     Kube_URL https://kubernetes.default.svc:443
     Kube_Tag_Prefix kube.var.log.containers.
     Merge_Log On
     Merge_Log_Key log_processed
     K8S-Logging.Parser On
     K8S-Logging.Exclude Off

   [OUTPUT] 
     Name loki
     Match kube.*
     Host "loki-svc.monitoring-loki"
     tenant_id ""
     Port "3100"
     label_keys $trace_id
     auto_kubernetes_labels on
   [OUTPUT]
     Name opentelemetry
     Match kube.*
     Host "otel-collector-svc.monitoring-loki"
     Port "55680" 
     Traces_uri /v1/traces
     Logs_uri /v1/logs
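
One mismatch worth noting in the configs as posted: the opentelemetry output above sends OTLP over HTTP (that is what Traces_uri/Logs_uri apply to), but port 55680 is not opened by the collector values below, which expose OTLP/HTTP on 4318 and gRPC on 14250. A sketch of the output block, assuming the collector keeps its HTTP receiver on 4318:

[OUTPUT]
    Name opentelemetry
    Match kube.*
    # service name as used above; the port must match the collector's OTLP/HTTP receiver
    Host otel-collector-svc.monitoring-loki
    Port 4318
    Traces_uri /v1/traces
    Logs_uri /v1/logs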

otel-collector values (daemonset)

mode: daemonset

presets:
  # enables the k8sattributesprocessor and adds it to the traces, metrics, and logs pipelines
  kubernetesAttributes:
    enabled: true
  # enables the kubeletstatsreceiver and adds it to the metrics pipelines
  kubeletMetrics:
    enabled: true
  # Enables the filelogreceiver and adds it to the logs pipelines
  logsCollection:
    enabled: true
## The chart only includes the loggingexporter by default
## If you want to send your data somewhere you need to
## configure an exporter, such as the otlpexporter
config:
  exporters:
    logging:
      loglevel: info
    otlp:
      endpoint: "tempo-monitoring-loki.svc.cluster.local:4317"
  receivers:
    otlp:
      protocols:
        grpc: 
          endpoint: "0.0.0.0:14250"
        http:
          endpoint: "0.0.0.0:4318"
  service:
    pipelines:
      traces:
        exporters: 
          - logging
          - otlp
        receivers:
          - otlp
        processors:
          - memory_limiter
          - batch    
      # metrics:
      #   exporters: [ otlp ]
      # logs:
      #   exporters: [ otlp ]
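
Two things in these values are worth checking against the Tempo values below: the otlp exporter endpoint "tempo-monitoring-loki.svc.cluster.local:4317" does not follow the usual <service>.<namespace>.svc.cluster.local pattern, and it targets port 4317 while the Tempo distributor below is told to listen on 14250. A sketch of the exporter block, assuming the Tempo service is literally named "tempo" in the monitoring-loki namespace and keeps the default OTLP gRPC port (verify the actual service name in your cluster):

config:
  exporters:
    otlp:
      # assumed service name and default OTLP gRPC port
      endpoint: "tempo.monitoring-loki.svc.cluster.local:4317"
      tls:
        insecure: true   # assuming Tempo is reached without TLS inside the cluster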

tempo.yml

storage:
  trace:
    backend: local 
    local:
      volume:
        persistentVolumeClaim:
          claimName: storage-tempo-0

minio:
  enabled: false

distributor:
  config:
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: "0.0.0.0:14250"
          http:
            endpoint: "0.0.0.0:4318"
traces:
  otlp:
    grpc:
      enabled: true
    http: 
      enabled: true
  zipkin:
    enabled: false
  jaeger: 
    thriftHttp:
      enabled: false
  opencensus:
    enabled: false
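
If the collector keeps exporting to port 4317, the distributor has to listen there as well; with the override above, the receiver only listens on 14250 (normally the Jaeger gRPC port), so an export to 4317 cannot connect. A sketch of the distributor receivers using the default OTLP ports, assuming the collector side stays on 4317:

distributor:
  config:
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: "0.0.0.0:4317"   # matches the collector's otlp exporter
          http:
            endpoint: "0.0.0.0:4318"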

There is 1 answer below.


Did you check the server time? (Run the "date" command on the server.) In my case, I set the server time to the current date and that fixed it.

Alternatively, check the logs of the Tempo app (including the WAL) and the data flow within Tempo. If you run the app in Kubernetes, you should also check the ServiceAccount (i.e., the security settings) attached to the containers.

In some cases, they cannot call each other because of missing permissions.
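
If you want to pin down which ServiceAccount the collector or Tempo pods run as, a minimal sketch of a dedicated ServiceAccount and how a pod template would reference it (the name is hypothetical; the Helm charts can usually create and wire this for you):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: otel-collector-sa        # hypothetical name
  namespace: monitoring-loki
---
# in the pod template of the workload:
# spec:
#   serviceAccountName: otel-collector-sa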