GPU Utilisation Percentage Prometheus query


Can I determine the GPU utilization percentage from the metrics below, which are scraped by Prometheus? I am not sure how to query for it. I also don't have a dcgm-exporter image for the ppc64le environment; could you share a link explaining how to build a dcgm-exporter Docker image for ppc64le?

# HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 0
go_gc_duration_seconds{quantile="0.25"} 0
go_gc_duration_seconds{quantile="0.5"} 0
go_gc_duration_seconds{quantile="0.75"} 0
go_gc_duration_seconds{quantile="1"} 0
go_gc_duration_seconds_sum 0
go_gc_duration_seconds_count 0
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 8
# HELP go_info Information about the Go environment.
# TYPE go_info gauge
go_info{version="go1.17"} 1
# HELP go_memstats_alloc_bytes Number of bytes allocated and still in use.
# TYPE go_memstats_alloc_bytes gauge
go_memstats_alloc_bytes 2.499048e+06
# HELP go_memstats_alloc_bytes_total Total number of bytes allocated, even if freed.
# TYPE go_memstats_alloc_bytes_total counter
go_memstats_alloc_bytes_total 2.499048e+06
# HELP go_memstats_buck_hash_sys_bytes Number of bytes used by the profiling bucket hash table.
# TYPE go_memstats_buck_hash_sys_bytes gauge
go_memstats_buck_hash_sys_bytes 4593
# HELP go_memstats_frees_total Total number of frees.
# TYPE go_memstats_frees_total counter
go_memstats_frees_total 761
# HELP go_memstats_gc_cpu_fraction The fraction of this program's available CPU time used by the GC since the program started.
# TYPE go_memstats_gc_cpu_fraction gauge
go_memstats_gc_cpu_fraction 0
# HELP go_memstats_gc_sys_bytes Number of bytes used for garbage collection system metadata.
# TYPE go_memstats_gc_sys_bytes gauge
go_memstats_gc_sys_bytes 4.368032e+06
# HELP go_memstats_heap_alloc_bytes Number of heap bytes allocated and still in use.
# TYPE go_memstats_heap_alloc_bytes gauge
go_memstats_heap_alloc_bytes 2.499048e+06
# HELP go_memstats_heap_idle_bytes Number of heap bytes waiting to be used.
# TYPE go_memstats_heap_idle_bytes gauge
go_memstats_heap_idle_bytes 4.13696e+06
# HELP go_memstats_heap_inuse_bytes Number of heap bytes that are in use.
# TYPE go_memstats_heap_inuse_bytes gauge
go_memstats_heap_inuse_bytes 3.760128e+06
# HELP go_memstats_heap_objects Number of allocated objects.
# TYPE go_memstats_heap_objects gauge
go_memstats_heap_objects 5731
# HELP go_memstats_heap_released_bytes Number of heap bytes released to OS.
# TYPE go_memstats_heap_released_bytes gauge
go_memstats_heap_released_bytes 4.13696e+06
# HELP go_memstats_heap_sys_bytes Number of heap bytes obtained from system.
# TYPE go_memstats_heap_sys_bytes gauge
go_memstats_heap_sys_bytes 7.897088e+06
# HELP go_memstats_last_gc_time_seconds Number of seconds since 1970 of last garbage collection.
# TYPE go_memstats_last_gc_time_seconds gauge
go_memstats_last_gc_time_seconds 0
# HELP go_memstats_lookups_total Total number of pointer lookups.
# TYPE go_memstats_lookups_total counter
go_memstats_lookups_total 0
# HELP go_memstats_mallocs_total Total number of mallocs.
# TYPE go_memstats_mallocs_total counter
go_memstats_mallocs_total 6492
# HELP go_memstats_mcache_inuse_bytes Number of bytes in use by mcache structures.
# TYPE go_memstats_mcache_inuse_bytes gauge
go_memstats_mcache_inuse_bytes 153600
# HELP go_memstats_mcache_sys_bytes Number of bytes used for mcache structures obtained from system.
# TYPE go_memstats_mcache_sys_bytes gauge
go_memstats_mcache_sys_bytes 163840
# HELP go_memstats_mspan_inuse_bytes Number of bytes in use by mspan structures.
# TYPE go_memstats_mspan_inuse_bytes gauge
go_memstats_mspan_inuse_bytes 58752
# HELP go_memstats_mspan_sys_bytes Number of bytes used for mspan structures obtained from system.
# TYPE go_memstats_mspan_sys_bytes gauge
go_memstats_mspan_sys_bytes 65536
# HELP go_memstats_next_gc_bytes Number of heap bytes when next garbage collection will take place.
# TYPE go_memstats_next_gc_bytes gauge
go_memstats_next_gc_bytes 4.473924e+06
# HELP go_memstats_other_sys_bytes Number of bytes used for other system allocations.
# TYPE go_memstats_other_sys_bytes gauge
go_memstats_other_sys_bytes 1.037183e+06
# HELP go_memstats_stack_inuse_bytes Number of bytes in use by the stack allocator.
# TYPE go_memstats_stack_inuse_bytes gauge
go_memstats_stack_inuse_bytes 491520
# HELP go_memstats_stack_sys_bytes Number of bytes obtained from system for stack allocator.
# TYPE go_memstats_stack_sys_bytes gauge
go_memstats_stack_sys_bytes 491520
# HELP go_memstats_sys_bytes Number of bytes obtained from system.
# TYPE go_memstats_sys_bytes gauge
go_memstats_sys_bytes 1.4027792e+07
# HELP go_threads Number of OS threads created.
# TYPE go_threads gauge
go_threads 9
# HELP nvidia_gpu_duty_cycle Percent of time over the past sample period during which one or more kernels were executing on the GPU device
# TYPE nvidia_gpu_duty_cycle gauge
nvidia_gpu_duty_cycle{minor_number="0",name="Tesla V100-SXM2-32GB",uuid="GPU-5481fdc1-1b2c-381d-90d9-2df35fc8cecf"} 0
nvidia_gpu_duty_cycle{minor_number="1",name="Tesla V100-SXM2-32GB",uuid="GPU-af66b351-1498-c103-f39e-7592b645dc80"} 0
nvidia_gpu_duty_cycle{minor_number="2",name="Tesla V100-SXM2-32GB",uuid="GPU-95887069-482a-9a95-d02a-7c6e79c47893"} 0
nvidia_gpu_duty_cycle{minor_number="3",name="Tesla V100-SXM2-32GB",uuid="GPU-a38af12e-e2f7-ee15-b064-4628cf1fc5da"} 0
# HELP nvidia_gpu_memory_total_bytes Total memory of the GPU device in bytes
# TYPE nvidia_gpu_memory_total_bytes gauge
nvidia_gpu_memory_total_bytes{minor_number="0",name="Tesla V100-SXM2-32GB",uuid="GPU-5481fdc1-1b2c-381d-90d9-2df35fc8cecf"} 3.4089730048e+10
nvidia_gpu_memory_total_bytes{minor_number="1",name="Tesla V100-SXM2-32GB",uuid="GPU-af66b351-1498-c103-f39e-7592b645dc80"} 3.4089730048e+10
nvidia_gpu_memory_total_bytes{minor_number="2",name="Tesla V100-SXM2-32GB",uuid="GPU-95887069-482a-9a95-d02a-7c6e79c47893"} 3.4089730048e+10
nvidia_gpu_memory_total_bytes{minor_number="3",name="Tesla V100-SXM2-32GB",uuid="GPU-a38af12e-e2f7-ee15-b064-4628cf1fc5da"} 3.4089730048e+10
# HELP nvidia_gpu_memory_used_bytes Memory used by the GPU device in bytes
# TYPE nvidia_gpu_memory_used_bytes gauge
nvidia_gpu_memory_used_bytes{minor_number="0",name="Tesla V100-SXM2-32GB",uuid="GPU-5481fdc1-1b2c-381d-90d9-2df35fc8cecf"} 4.470079488e+09
nvidia_gpu_memory_used_bytes{minor_number="1",name="Tesla V100-SXM2-32GB",uuid="GPU-af66b351-1498-c103-f39e-7592b645dc80"} 2.588934144e+09
nvidia_gpu_memory_used_bytes{minor_number="2",name="Tesla V100-SXM2-32GB",uuid="GPU-95887069-482a-9a95-d02a-7c6e79c47893"} 0
nvidia_gpu_memory_used_bytes{minor_number="3",name="Tesla V100-SXM2-32GB",uuid="GPU-a38af12e-e2f7-ee15-b064-4628cf1fc5da"} 5.640290304e+09
# HELP nvidia_gpu_num_devices Number of GPU devices
# TYPE nvidia_gpu_num_devices gauge
nvidia_gpu_num_devices 4
# HELP nvidia_gpu_power_usage_milliwatts Power usage of the GPU device in milliwatts
# TYPE nvidia_gpu_power_usage_milliwatts gauge
nvidia_gpu_power_usage_milliwatts{minor_number="0",name="Tesla V100-SXM2-32GB",uuid="GPU-5481fdc1-1b2c-381d-90d9-2df35fc8cecf"} 68088
nvidia_gpu_power_usage_milliwatts{minor_number="1",name="Tesla V100-SXM2-32GB",uuid="GPU-af66b351-1498-c103-f39e-7592b645dc80"} 56426
nvidia_gpu_power_usage_milliwatts{minor_number="2",name="Tesla V100-SXM2-32GB",uuid="GPU-95887069-482a-9a95-d02a-7c6e79c47893"} 38826
nvidia_gpu_power_usage_milliwatts{minor_number="3",name="Tesla V100-SXM2-32GB",uuid="GPU-a38af12e-e2f7-ee15-b064-4628cf1fc5da"} 71068
# HELP nvidia_gpu_temperature_celsius Temperature of the GPU device in celsius
# TYPE nvidia_gpu_temperature_celsius gauge
nvidia_gpu_temperature_celsius{minor_number="0",name="Tesla V100-SXM2-32GB",uuid="GPU-5481fdc1-1b2c-381d-90d9-2df35fc8cecf"} 45
nvidia_gpu_temperature_celsius{minor_number="1",name="Tesla V100-SXM2-32GB",uuid="GPU-af66b351-1498-c103-f39e-7592b645dc80"} 46
nvidia_gpu_temperature_celsius{minor_number="2",name="Tesla V100-SXM2-32GB",uuid="GPU-95887069-482a-9a95-d02a-7c6e79c47893"} 37
nvidia_gpu_temperature_celsius{minor_number="3",name="Tesla V100-SXM2-32GB",uuid="GPU-a38af12e-e2f7-ee15-b064-4628cf1fc5da"} 51
# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 0.02
# HELP process_max_fds Maximum number of open file descriptors.
# TYPE process_max_fds gauge
process_max_fds 1.048576e+06
# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds 22
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 1.6646144e+07
# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1.63059385687e+09
# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 1.264910336e+09
# HELP process_virtual_memory_max_bytes Maximum amount of virtual memory available in bytes.
# TYPE process_virtual_memory_max_bytes gauge
process_virtual_memory_max_bytes 1.8446744073709552e+19
# HELP promhttp_metric_handler_requests_in_flight Current number of scrapes being served.
# TYPE promhttp_metric_handler_requests_in_flight gauge
promhttp_metric_handler_requests_in_flight 1
# HELP promhttp_metric_handler_requests_total Total number of scrapes by HTTP status code.
# TYPE promhttp_metric_handler_requests_total counter
promhttp_metric_handler_requests_total{code="200"} 1
promhttp_metric_handler_requests_total{code="500"} 0
promhttp_metric_handler_requests_total{code="503"} 0

1 Answer


From the metrics you shared, below are the ones that will provide you with information about GPU utilization:

nvidia_gpu_duty_cycle - Percent of time over the past sample period during which one or more kernels were executing on the GPU device

nvidia_gpu_memory_total_bytes - Total memory available to the GPU device in bytes

nvidia_gpu_memory_used_bytes - Memory used by the GPU device in bytes

nvidia_gpu_num_devices - Number of GPU devices

nvidia_gpu_power_usage_milliwatts - Power usage of the GPU device in milliwatts

nvidia_gpu_temperature_celsius - Temperature of the GPU device in celsius

From the Prometheus UI, or from Grafana with Prometheus as a data source, these metric names can be used in query expressions to retrieve the associated GPU metrics. A simple query such as nvidia_gpu_memory_total_bytes, for example, returns all time series matching that metric name.
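Note that nvidia_gpu_duty_cycle is already expressed as a percentage, so it can serve directly as a GPU utilization figure, while a memory utilization percentage can be derived from the two memory metrics. A few example expressions, using the metric names from the scrape you shared:

```promql
# Per-device GPU compute utilization (already a 0-100 value):
nvidia_gpu_duty_cycle

# Average compute utilization across all GPUs on the host:
avg(nvidia_gpu_duty_cycle)

# Per-device GPU memory utilization as a percentage:
100 * nvidia_gpu_memory_used_bytes / nvidia_gpu_memory_total_bytes
```

In the last expression, Prometheus joins the two series on their matching labels (minor_number, name, uuid), so it yields one percentage per device.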

Also notice that the metrics you shared contain four entries for each of the values above, one for each available GPU device, numbered 0-3. To query metrics for a specific device only, say device 2, the query would look like this: nvidia_gpu_memory_total_bytes{minor_number="2"}. The comma-separated labels inside the {} after each metric name can be used to filter queries further; see the Prometheus querying documentation for more detail.
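For example, label filters compose with arithmetic, so the memory utilization of a single device can be computed directly (metric and label names taken from your scrape):

```promql
# Memory used on GPU 2 only:
nvidia_gpu_memory_used_bytes{minor_number="2"}

# Memory utilization of GPU 2 as a percentage of its total memory:
100 * nvidia_gpu_memory_used_bytes{minor_number="2"}
    / nvidia_gpu_memory_total_bytes{minor_number="2"}
```

Filtering by uuid instead of minor_number works the same way and is stable across reboots, since device minor numbers can change.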

For DCGM itself, you can build a Docker image specifically for ppc64le using the source from the official GitHub repo. The instructions will first have you create a separate Docker image which is used to generate a DCGM build. When generating the DCGM build, you'll need to include the --arch ppc option when executing the ./build.sh script to target ppc64le.
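The steps above can be sketched roughly as follows. This is a hedged outline based on the repo's build instructions; the repo URL is assumed from the NVIDIA GitHub organization, and exact paths and options may differ between DCGM releases, so check the README for your version:

```shell
# Clone the official DCGM sources (URL assumed; verify against NVIDIA's GitHub org)
git clone https://github.com/NVIDIA/DCGM.git
cd DCGM

# Build inside the project's build container, targeting POWER (ppc64le)
./build.sh --arch ppc
```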

For dcgm-exporter (GitHub), NVIDIA provides a number of pre-built images on its Docker Hub repository; see the official dcgm-exporter documentation for usage details.