How to calculate the L3 cache bandwidth by using the performance counters linux?


I am trying to use Linux perf to profile the L3 cache bandwidth for a Python script. I don't see any event that measures it directly, but I do know how to collect the LLC performance counters with the command below. Can anyone tell me how to calculate the L3 cache bandwidth from the perf counters, or point me to any tools that can measure L3 cache bandwidth? Thanks in advance for the help.

perf stat -e LLC-loads,LLC-load-misses,LLC-stores,LLC-prefetches python hello.py

Answer:


update: perf has changed; now you want perf stat with
-M tma_info_memory_core_l3_cache_access_bw for L3 bandwidth, or
-M tma_info_memory_core_l3_cache_fill_bw for DRAM bandwidth (L3 fill = misses, I think?)

Or better, -M tma_info_system_dram_bw_use should be more accurate, but it only works system-wide. (perf stat -a -M tma_info_system_dram_bw_use -e task-clock,page-faults,cycles,instructions)
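
For a per-process measurement like the question's Python script, an invocation along these lines should work (assuming your CPU and perf build expose these metrics; both metric groups can go in one run):

perf stat -M tma_info_memory_core_l3_cache_access_bw,tma_info_memory_core_l3_cache_fill_bw python hello.py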

It seems they measure total read+write bandwidth, and I think the "access" bandwidth might be counting reads+writes from the cores plus dirty write-back to DRAM. I tested with the code from There is a huge speed difference between reading and writing in DRAM, is this normal? (with write before read, to avoid CoW-mapping everything to the same physical page of zeros), with EPP = performance to avoid downclocking. I actually commented out the read part so the process would spend its whole time in the write test, allowing easy use of perf.

During the write test I measured 22.84 for tma_info_memory_core_l3_cache_fill_bw, while intel_gpu_top showed peaks of 14+ GB/s read plus 14+ GB/s write (the average was less, including startup). In the same run, tma_info_memory_core_l3_cache_access_bw was 37.36 (both metric groups active in the same perf run). tma_info_system_dram_bw_use was 29.11, which looks more like the sum of DRAM read+write bandwidths, so I'd trust that one. (All the numbers in this paragraph came from the same run, and run-to-run variation is fairly small, within ±0.5 GB/s.)
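
For reference, the shape of that write test is something like this (a minimal sketch, not the exact code from the linked question; the buffer size and repeat count here are arbitrary placeholders):

    // Write-only bandwidth test: the read phase is removed so perf stat on
    // this one process spends its whole time measuring write bandwidth.
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        size_t bytes = 1024ULL * 1024 * 1024;   // 1 GiB buffer, much larger than L3
        char *buf = malloc(bytes);              // untouched pages; the first write faults them in
        if (!buf) return 1;

        for (int rep = 0; rep < 20; rep++) {
            memset(buf, rep, bytes);            // the write test
            // compiler barrier so earlier memsets aren't removed as dead stores
            __asm__ __volatile__("" : : "r"(buf) : "memory");
        }

        printf("%d\n", buf[4096]);              // use the data so nothing optimizes away
        free(buf);
        return 0;
    }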

There should be negligible L3 hits during that test, and the rest of my system was close to idle, around 200 MiB/s read and 25 MiB/s write according to intel_gpu_top, which measures at the DRAM controllers.

According to perf list on my Skylake, those report average per-core data access or fill bandwidth in GB/s. (So not counting instruction fetch, and maybe only reads?) I'm not 100% sure exactly what these counters measure, but the metric groups described in my old answer below don't exist anymore. I have perf 6.5 at the moment.


Old answer:

perf stat has some named "metrics" that it knows how to calculate from other events. According to perf list on my system, those include L3_Cache_Access_BW and L3_Cache_Fill_BW.

  • L3_Cache_Access_BW [Average per-core data access bandwidth to the L3 cache [GB / sec]]
  • L3_Cache_Fill_BW [Average per-core data fill bandwidth to the L3 cache [GB / sec]]

This is from my system with a Skylake (i7-6700k). Other CPUs (especially from other vendors or architectures) might have different support for this, or might not support these metrics at all.

I tried it out on a simplistic sieve of Eratosthenes (using a bool array, not a bitmap) from a recent codereview question, since I had a benchmarkable version of that (with a repeat loop) lying around; it was roughly the shape sketched below. It measured 52 GB/s total bandwidth (read+write, I think). The n=4000000 problem size I used needs 4 MB for the array, which is larger than the 256 KiB L2 but smaller than the 8 MiB L3.
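
Roughly like this (a sketch of the kind of benchmark I mean, not the exact code from the codereview question; the repeat count is an arbitrary placeholder):

    // Bool-array sieve of Eratosthenes, repeated so the run is long enough to measure.
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <stdbool.h>

    int main(void)
    {
        size_t n;
        if (scanf("%zu", &n) != 1) return 1;   // e.g.  echo 4000000 | ./sieve

        bool *composite = malloc(n + 1);       // 1 byte per number: ~4 MB for n=4000000
        if (!composite) return 1;

        size_t count = 0;
        for (int rep = 0; rep < 2000; rep++) { // repeat loop for a benchmarkable runtime
            memset(composite, 0, n + 1);
            count = 0;
            for (size_t i = 2; i <= n; i++) {
                if (!composite[i]) {
                    count++;                   // i is prime
                    for (size_t j = 2 * i; j <= n; j += i)
                        composite[j] = true;   // cross off multiples
                }
            }
        }
        printf("%zu primes <= %zu\n", count, n);
        free(composite);
        return 0;
    }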

$ echo 4000000 | 
 taskset -c 3 perf stat --all-user  -M L3_Cache_Access_BW -etask-clock,context-switches,cpu-migrations,page-faults,cycles,instructions  ./sieve 


 Performance counter stats for './sieve-buggy':

     7,711,201,973      offcore_requests.all_requests #  816.916 M/sec                  
                                                  #    52.27 L3_Cache_Access_BW     
     9,441,504,472 ns   duration_time             #    1.000 G/sec                  
          9,439.41 msec task-clock                #    1.000 CPUs utilized          
                 0      context-switches          #    0.000 /sec                   
                 0      cpu-migrations            #    0.000 /sec                   
             1,020      page-faults               #  108.058 /sec                   
    38,736,147,765      cycles                    #    4.104 GHz                    
    53,699,139,784      instructions              #    1.39  insn per cycle         

       9.441504472 seconds time elapsed

       9.432262000 seconds user
       0.000000000 seconds sys

Or with just -M L3_Cache_Access_BW and no -e events, it shows only offcore_requests.all_requests # 54.52 L3_Cache_Access_BW and duration_time. So -M overrides the default event set and doesn't count cycles, instructions, and so on.
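
That minimal run was something like:

echo 4000000 | taskset -c 3 perf stat --all-user -M L3_Cache_Access_BW ./sieve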

I think it's just counting all off-core requests by this core, assuming (correctly) that each one involves a 64-byte transfer. Requests are counted whether they hit or miss in L3 cache. Getting mostly L3 hits will obviously enable a higher bandwidth than if the uncore bottlenecks on the DRAM controllers instead.
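
The numbers in the perf output above are consistent with that 64-bytes-per-request assumption:

    7,711,201,973 off-core requests * 64 bytes ≈ 493.5 GB
    493.5 GB / 9.4415 seconds                  ≈ 52.3 GB/s

which matches the 52.27 that perf reported for L3_Cache_Access_BW.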