I'm using Grails 2.5.4 and a SparkSession instance to generate Parquet output. I recently upgraded spark-core and its related dependencies to the latest version (v3.3.0).
During SparkSession initialization via builder(), I notice that extra logs are being printed:
Using Spark's default log4j profile: org/apache/spark/log4j2-defaults.properties
22/07/13 11:58:54 WARN Utils: Your hostname, XY resolves to a loopback address: 127.0.1.1; using 1XX.1XX.0.1XX instead (on interface wlo1)
22/07/13 11:58:54 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
22/07/13 11:58:54 INFO SparkContext: Running Spark version 3.3.0
22/07/13 11:58:54 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
22/07/13 11:58:54 INFO ResourceUtils: ==============================================================
22/07/13 11:58:54 INFO ResourceUtils: No custom resources configured for spark.driver.
22/07/13 11:58:54 INFO ResourceUtils: ==============================================================
22/07/13 11:58:54 INFO SparkContext: Submitted application: ABCDE
22/07/13 11:58:54 INFO ResourceProfile: Default ResourceProfile created, executor resources: Map(cores -> name: cores, amount: 1, script: , vendor: , memory -> name: memory, amount: 1024, script: , vendor: , offHeap -> name: offHeap, amount: 0, script: , vendor: ), task resources: Map(cpus -> name: cpus, amount: 1.0)
22/07/13 11:58:54 INFO ResourceProfile: Limiting resource is cpu
22/07/13 11:58:54 INFO ResourceProfileManager: Added ResourceProfile id: 0
22/07/13 11:58:54 INFO SecurityManager: Changing view acls to: xy
22/07/13 11:58:54 INFO SecurityManager: Changing modify acls to: xy
22/07/13 11:58:54 INFO SecurityManager: Changing view acls groups to:
22/07/13 11:58:54 INFO SecurityManager: Changing modify acls groups to:
22/07/13 11:58:54 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(xy); groups with view permissions: Set(); users with modify permissions: Set(xy); groups with modify permissions: Set()
22/07/13 11:58:54 INFO Utils: Successfully started service 'sparkDriver' on port 39483.
22/07/13 11:58:54 INFO SparkEnv: Registering MapOutputTracker
22/07/13 11:58:54 INFO SparkEnv: Registering BlockManagerMaster
22/07/13 11:58:54 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
22/07/13 11:58:54 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
22/07/13 11:58:54 INFO SparkEnv: Registering BlockManagerMasterHeartbeat
22/07/13 11:58:55 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-cf39a58e-e5bc-4a26-b92a-d945a0deb8e7
22/07/13 11:58:55 INFO MemoryStore: MemoryStore started with capacity 2004.6 MiB
22/07/13 11:58:55 INFO SparkEnv: Registering OutputCommitCoordinator
22/07/13 11:58:55 INFO Utils: Successfully started service 'SparkUI' on port 4040.
22/07/13 11:58:55 INFO Executor: Starting executor ID driver on host 1XX.1XX.0.1XX
22/07/13 11:58:55 INFO Executor: Starting executor with user classpath (userClassPathFirst = false): ''
22/07/13 11:58:55 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 33993.
22/07/13 11:58:55 INFO NettyBlockTransferService: Server created on 192.168.0.135:33993
22/07/13 11:58:55 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
22/07/13 11:58:55 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 192.168.0.135, 33993, None)
22/07/13 11:58:55 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.0.135:33993 with 2004.6 MiB RAM, BlockManagerId(driver, 192.168.0.135, 33993, None)
22/07/13 11:58:55 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.0.135, 33993, None)
22/07/13 11:58:55 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 192.168.0.135, 33993, None)
Before initializing the SparkSession instance via builder(), I configure the logger levels programmatically:
import org.apache.logging.log4j.Level
import org.apache.logging.log4j.core.config.Configurator

Configurator.setLevel("org", Level.ERROR)
Configurator.setLevel("org.apache.spark", Level.ERROR)
Configurator.setLevel("akka", Level.ERROR)
Configurator.setLevel("scala", Level.ERROR)
Configurator.setLevel("java", Level.ERROR)
Configurator.setLevel("org.slf4j", Level.ERROR)
Configurator.setLevel("com", Level.ERROR)
Configurator.setLevel("javax", Level.ERROR)
Configurator.setLevel("jakarta", Level.ERROR)
Configurator.setLevel("io", Level.ERROR)
Configurator.setLevel("net", Level.ERROR)
Despite this, Spark still appears to pick up its bundled default configuration (org/apache/spark/log4j2-defaults.properties, as shown in the first log line). Is there a way to override the logging configuration with my own?
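For context, this is the kind of override I had in mind: a minimal log4j2.properties sketch (the appender name and pattern below are my own guesses, not taken from Spark's bundled file) that raises the root level to ERROR. My understanding is that log4j2 should pick this up if it is on the application classpath, or if the JVM is started with -Dlog4j2.configurationFile pointing at it:

```
# Minimal log4j2.properties sketch: root logger at ERROR,
# single console appender on stderr.
rootLogger.level = error
rootLogger.appenderRef.stdout.ref = console

appender.console.type = Console
appender.console.name = console
appender.console.target = SYSTEM_ERR
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = %d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n
```

I'm unsure whether this should take precedence over Spark's defaults in a Grails application, or whether something on the Grails side is reconfiguring log4j2 first.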