I would like to use a different logging level in development than in production. To do so, early in my program I set the minimal level for logs to be logged, as answered in: How to set a minimal logging level with loguru?
import sys
from loguru import logger as log
log.remove()  # remove the default handler; otherwise it keeps working alongside the new one added below
log.add(sys.stdout, level="INFO")
log.debug(f"Spark.Dataframe.count(): {df.count()}")  # the line in question, see remarks below
log.info("info message")
log.warning("warning message")
log.error("error message")
As long as we only log plain string messages, no expensive computation is involved in logging; it is just a write (a print to STDOUT/STDERR). All log lines are executed, and on output I see only those that meet the configured level registered; but ALL lines are executed, even those that do not meet the level, which are merely filtered out and not written.
In the example, with logger.debug(df.count()) I want to log the number of rows of a Spark dataframe, which can be an expensive computation on large datasets in production, so I don't want this line to execute every time the process runs.
How can I configure loguru so that this line executes ONLY when the logging level meets the configured one?
Is the only way to control this to scatter conditional ifs throughout the code just to guard the logger calls? Doesn't loguru's level setting offer any way to avoid evaluating the "content" of the message, so the logging can stay unobtrusive in my code?
When I set the level to "INFO", I expect to avoid the execution time of the expensive call in log.debug(expensive); yet even when the message is not printed, the line is executed and only then filtered by loguru.
You can use opt(lazy=True) to tell loguru that building the message is expensive and shouldn't always run. (Note that the "If sink…" text in the documentation's example is just a log message, not code that defines any rule.) The key is the lazy option: with it, the message arguments must be callables, and loguru invokes them only if the message actually passes the level filter. See the documentation.