I am following this guide: monitoring_opencensus_metrics_quickstart-go. I also tried the approach from this answer.
Code here:
package main

import (
    "context"
    "fmt"
    "log"
    "os"
    "path"
    "time"

    "contrib.go.opencensus.io/exporter/stackdriver"
    "go.opencensus.io/stats"
    "go.opencensus.io/stats/view"
    "golang.org/x/exp/rand"
    "google.golang.org/api/option"
)

var (
    // The task latency in milliseconds.
    latencyMs = stats.Float64("task_latency", "The task latency in milliseconds", "ms")
)

func main() {
    ctx := context.Background()

    v := &view.View{
        Name:        "task_latency_distribution",
        Measure:     latencyMs,
        Description: "The distribution of the task latencies",
        Aggregation: view.Distribution(0, 100, 200, 400, 1000, 2000, 4000),
    }
    if err := view.Register(v); err != nil {
        log.Fatalf("Failed to register the view: %v", err)
    }

    exporter, err := stackdriver.NewExporter(stackdriver.Options{
        ProjectID: os.Getenv("GOOGLE_CLOUD_PROJECT"),
        MonitoringClientOptions: []option.ClientOption{
            option.WithCredentialsFile(path.Join("./.gcp/stackdriver-monitor-admin.json")),
        },
    })
    if err != nil {
        log.Fatal(err)
    }

    view.RegisterExporter(exporter)
    view.SetReportingPeriod(60 * time.Second)

    // Flush must be called before main() exits to ensure metrics are recorded.
    defer exporter.Flush()

    if err := exporter.StartMetricsExporter(); err != nil {
        log.Fatalf("Error starting metric exporter: %v", err)
    }
    defer exporter.StopMetricsExporter()

    // Record 100 fake latency values between 0 and 5 seconds.
    for i := 0; i < 100; i++ {
        ms := float64(5*time.Second/time.Millisecond) * rand.Float64()
        fmt.Printf("Latency %d: %f\n", i, ms)
        stats.Record(ctx, latencyMs.M(ms))
        time.Sleep(1 * time.Second)
    }

    fmt.Println("Done recording metrics")
}
I run the above code locally, NOT in a GCE, GAE, or GKE environment.
In the Metrics Explorer web UI, here is the metric query condition:
- Resource Type: Consumed API
- Metric: custom.googleapis.com/opencensus/task_latency_distribution
Full query:
fetch consumed_api
| metric 'custom.googleapis.com/opencensus/task_latency_distribution'
| align delta(1m)
| every 1m
| group_by [],
    [value_task_latency_distribution_aggregate:
        aggregate(value.task_latency_distribution)]
The service account has the Monitoring Admin role.
But I get "No data is available for the selected time frame".
It works for me, but I've tended to do things slightly differently:
- Use export GOOGLE_APPLICATION_CREDENTIALS=path/to/creds instead of MonitoringClientOptions{};
- Previously (!?) I had problems with distributions that include a 0 bucket; try removing that initial (0) bucket and trying again;
- Check the resource.type shown in Metrics Explorer (and in the APIs Explorer results, if any); it should be global.
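For reference, here's a minimal sketch of the exporter setup with the first two changes applied: it relies on Application Default Credentials (GOOGLE_APPLICATION_CREDENTIALS) instead of MonitoringClientOptions, and it drops the leading 0 bucket from the distribution. The bucket boundaries and sleep duration are only illustrative:

package main

import (
    "log"
    "os"
    "time"

    "contrib.go.opencensus.io/exporter/stackdriver"
    "go.opencensus.io/stats"
    "go.opencensus.io/stats/view"
)

var latencyMs = stats.Float64("task_latency", "The task latency in milliseconds", "ms")

func main() {
    // Distribution without the initial 0 boundary.
    v := &view.View{
        Name:        "task_latency_distribution",
        Measure:     latencyMs,
        Description: "The distribution of the task latencies",
        Aggregation: view.Distribution(100, 200, 400, 1000, 2000, 4000),
    }
    if err := view.Register(v); err != nil {
        log.Fatalf("Failed to register the view: %v", err)
    }

    // No MonitoringClientOptions: the exporter picks up credentials from
    // GOOGLE_APPLICATION_CREDENTIALS (Application Default Credentials).
    exporter, err := stackdriver.NewExporter(stackdriver.Options{
        ProjectID: os.Getenv("GOOGLE_CLOUD_PROJECT"),
    })
    if err != nil {
        log.Fatal(err)
    }
    defer exporter.Flush()

    view.RegisterExporter(exporter)
    view.SetReportingPeriod(60 * time.Second)

    if err := exporter.StartMetricsExporter(); err != nil {
        log.Fatalf("Error starting metric exporter: %v", err)
    }
    defer exporter.StopMetricsExporter()

    // ... record measurements with stats.Record as in the question ...
    time.Sleep(70 * time.Second) // keep the process alive for at least one reporting period
}

Run it with something like:

export GOOGLE_APPLICATION_CREDENTIALS=./.gcp/stackdriver-monitor-admin.json
export GOOGLE_CLOUD_PROJECT=your-project-id
go run main.go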
Google APIs Explorer is an excellent way to diagnose Stackdriver API challenges. You can use it to list metrics and to list timeseries (replace your-project-id and update the two interval values):
https://cloud.google.com/monitoring/api/ref_v3/rest/v3/projects.metricDescriptors/list?apix=true&apix_params=%7B%22name%22%3A%22projects%2Fyour-project-id%22%2C%22filter%22%3A%22metric.type%3D%5C%22custom.googleapis.com%2Fopencensus%2Ftask_latency_distribution%5C%22%22%7D
https://cloud.google.com/monitoring/api/ref_v3/rest/v3/projects.timeSeries/list?apix=true&apix_params=%7B%22name%22%3A%22projects%2Fyour-project-id%22%2C%22aggregation.alignmentPeriod%22%3A%22%2B300s%22%2C%22aggregation.crossSeriesReducer%22%3A%22REDUCE_MEAN%22%2C%22aggregation.perSeriesAligner%22%3A%22ALIGN_DELTA%22%2C%22filter%22%3A%22metric.type%3D%5C%22custom.googleapis.com%2Fopencensus%2Ftask_latency_distribution%5C%22%22%2C%22interval.endTime%22%3A%222020-09-08T23%3A59%3A59Z%22%2C%22interval.startTime%22%3A%222020-09-08T00%3A00%3A00Z%22%7D
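The same check can be scripted. Below is a sketch (my choice of the Cloud Monitoring Go client rather than APIs Explorer; your-project-id is a placeholder, and depending on your client library version the monitoringpb types may live under cloud.google.com/go/monitoring/apiv3/v2/monitoringpb instead of genproto) that lists the metric descriptor so you can confirm the custom metric actually exists in the project:

package main

import (
    "context"
    "fmt"
    "log"

    monitoring "cloud.google.com/go/monitoring/apiv3/v2"
    "google.golang.org/api/iterator"
    monitoringpb "google.golang.org/genproto/googleapis/monitoring/v3"
)

func main() {
    ctx := context.Background()

    // Uses Application Default Credentials (GOOGLE_APPLICATION_CREDENTIALS).
    client, err := monitoring.NewMetricClient(ctx)
    if err != nil {
        log.Fatalf("Failed to create metric client: %v", err)
    }
    defer client.Close()

    req := &monitoringpb.ListMetricDescriptorsRequest{
        Name:   "projects/your-project-id",
        Filter: `metric.type = "custom.googleapis.com/opencensus/task_latency_distribution"`,
    }

    // Iterate over matching metric descriptors; no output means the metric
    // was never written to this project.
    it := client.ListMetricDescriptors(ctx, req)
    for {
        d, err := it.Next()
        if err == iterator.Done {
            break
        }
        if err != nil {
            log.Fatalf("ListMetricDescriptors: %v", err)
        }
        fmt.Println(d.GetType())
    }
}

The analogous ListTimeSeries call (as in the second APIs Explorer URL) follows the same iterator pattern and additionally takes the interval and aggregation parameters.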
Using Chrome's Developer Console, you can find one of Stackdriver's calls to the API to more easily reproduce it, e.g.