I am following this document while creating a new Dataproc cluster, and I have set the Dataproc properties for Spark driver logs as follows:
spark:spark.submit.deployMode = cluster
dataproc:dataproc.logging.stackdriver.job.driver.enable = true
dataproc:dataproc.logging.stackdriver.job.yarn.container.enable = true
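For reference, this is roughly how the cluster is being created with those properties; a minimal sketch using the google-cloud-dataproc Python client, where the project ID, region, and cluster name are placeholders for my real values.

# Minimal sketch: create a Dataproc cluster with the logging properties above.
# "my-project", "us-central1", and "my-cluster" are placeholders.
from google.cloud import dataproc_v1

project_id = "my-project"
region = "us-central1"
cluster_name = "my-cluster"

cluster_client = dataproc_v1.ClusterControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

cluster = {
    "project_id": project_id,
    "cluster_name": cluster_name,
    "config": {
        "software_config": {
            "properties": {
                "spark:spark.submit.deployMode": "cluster",
                "dataproc:dataproc.logging.stackdriver.job.driver.enable": "true",
                "dataproc:dataproc.logging.stackdriver.job.yarn.container.enable": "true",
            }
        }
    },
}

# create_cluster returns a long-running operation; wait for it to finish.
operation = cluster_client.create_cluster(
    request={"project_id": project_id, "region": region, "cluster": cluster}
)
result = operation.result()
print(f"Cluster created: {result.cluster_name}")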
I am submitting a few PySpark jobs to it, and they execute successfully. However, I am not getting any Dataproc job logs, in particular the yarn.container logs. How can I get them? I need two metrics per job in a monitoring dashboard for a "secured multi-tenant" cluster: YARN memory-seconds and vcore-seconds. Any help would be highly appreciated.
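To be concrete about the numbers I mean: YARN itself reports per-application memorySeconds and vcoreSeconds, and the sketch below shows the shape of data I want in the dashboard, pulled from the YARN ResourceManager REST API. It assumes the master hostname and the default ResourceManager web port 8088 are reachable, which may well not hold on a secured multi-tenant cluster.

# Sketch: read per-application memorySeconds / vcoreSeconds from the
# YARN ResourceManager REST API on the cluster master node.
# "my-cluster-m" and port 8088 are assumptions about my cluster layout.
import requests

rm_endpoint = "http://my-cluster-m:8088"  # placeholder master hostname

resp = requests.get(f"{rm_endpoint}/ws/v1/cluster/apps", params={"states": "FINISHED"})
resp.raise_for_status()

apps = (resp.json().get("apps") or {}).get("app", [])
for app in apps:
    # memorySeconds and vcoreSeconds are the aggregate resource usage of each application
    print(app["id"], app["name"], app["memorySeconds"], app["vcoreSeconds"])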
I tried looking at the logs of sample PySpark job runs in the staging bucket created by Dataproc, but I could not find those metrics there. It would be much better if I could get the native cloud_dataproc_job resource type logs.
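For what it's worth, this is roughly how I have been checking whether any cloud_dataproc_job entries show up at all, using the Cloud Logging Python client; the project ID and job ID below are placeholders.

# Sketch: list log entries for a specific Dataproc job via Cloud Logging.
# "my-project" and "my-job-id" are placeholders for my real values.
from google.cloud import logging as cloud_logging

client = cloud_logging.Client(project="my-project")

log_filter = (
    'resource.type="cloud_dataproc_job" '
    'AND resource.labels.job_id="my-job-id"'
)

for entry in client.list_entries(filter_=log_filter, page_size=50):
    print(entry.timestamp, entry.payload)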