I'm trying to test the code below with multiple users. Since the class is a singleton managed by Spring, only one instance is created and multiple threads access the same object.
Because of that, I'm seeing inconsistencies in the output String. I tried to resolve the issue by adding the synchronized keyword to the appLogger method (see the sketch after the code below). That works as expected, but it compromises performance.
How can I handle singleton beans in a multithreaded environment when instance state (the StringBuilder) is modified, without sacrificing performance?
Edit: I added the aggregateLogger method. It appends its string to the StringBuilder declared at class level, so the StringBuilder holds the aggregated log from both the appLogger and aggregateLogger methods.
import lombok.extern.slf4j.Slf4j;
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.springframework.stereotype.Component;

@Aspect
@Slf4j
@Component
public class A {

    // shared instance state, written by multiple threads (the source of the inconsistent output)
    private StringBuilder builder = new StringBuilder();

    public StringBuilder getSb() {
        return builder;
    }

    public void appendToSb(String s) {
        builder.append(s);
    }

    @Around("@annotation(execTimeLogger)")
    public Object appLogger(ProceedingJoinPoint pjp) throws Throwable {
        long startTime = System.currentTimeMillis();
        Object object = pjp.proceed();
        long endTime = System.currentTimeMillis();
        String str = "Hello";
        appendToSb(str);
        return object;
    }

    @Around("@annotation(logger)")
    public Object aggregateLogger(ProceedingJoinPoint pjp) throws Throwable {
        long startTime = System.currentTimeMillis();
        Object object = pjp.proceed();
        long endTime = System.currentTimeMillis();
        String str = "World";
        appendToSb(str);
        log.info(getSb().toString()); // o/p will be the aggregated log
        getSb().setLength(0);
        return object;
    }
}
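For reference, the synchronized workaround mentioned above would look roughly like this (a sketch only; it assumes the lock is taken for the whole advice, which also serialises the intercepted business method):

@Around("@annotation(execTimeLogger)")
public synchronized Object appLogger(ProceedingJoinPoint pjp) throws Throwable {
    // only one thread at a time can run this advice, so appendToSb is safe again,
    // but every intercepted call now queues behind this single monitor
    Object object = pjp.proceed();
    appendToSb("Hello");
    return object;
}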
Your design is just wrong, forgive me for being so blunt. Even within one business transaction you want to log, there can be multiple application layers doing multiple things in multiple threads. Why would all of that have to end up in a single log message? Usually, you log every message with some kind of customer and/or transaction ID, or whatever else correctly identifies the logical entity you want to trace. Most logging frameworks offer some kind of Mapped Diagnostic Context (MDC) for that. No matter whether you log to the console, a database or a log aggregator like Logstash or Graylog, you can then search, filter and aggregate by the entity or transaction you are interested in and find all corresponding log messages. There is no need to put them all into a single log message.
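For illustration, here is a minimal sketch of MDC-based correlation with SLF4J; the key name txId, the UUID used as correlation value and the TransactionService class are assumptions made up for the example:

import java.util.UUID;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class TransactionService {

    private static final Logger log = LoggerFactory.getLogger(TransactionService.class);

    public void process() {
        // tag every log line written by this thread with the same correlation id
        MDC.put("txId", UUID.randomUUID().toString());
        try {
            log.info("start");  // with a pattern containing %X{txId}, both lines carry the id
            log.info("finish"); // so they can be searched and aggregated later
        } finally {
            MDC.remove("txId"); // always clean up, threads are usually pooled
        }
    }
}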
You can, of course, leverage aspects to do the logging instead of polluting your core code with log statements; factoring out a cross-cutting concern like logging is one of the main advantages of AOP. But you should do it right: either your singleton aspect uses the MDC, or it does its own book-keeping in some kind of thread-safe map. I recommend using the tooling your log framework already offers and letting the aspect focus on what it does best, i.e. intercepting the right join points.
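Here is a sketch of how the advice from your question could look without any shared mutable state, assuming Spring AOP with SLF4J's MDC; the aspect class name and the ExecTimeLogger annotation bound by the pointcut are assumptions based on your code:

import java.util.UUID;

import lombok.extern.slf4j.Slf4j;
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.slf4j.MDC;
import org.springframework.stereotype.Component;

@Aspect
@Slf4j
@Component
public class ExecutionTimeAspect {

    @Around("@annotation(execTimeLogger)")
    public Object appLogger(ProceedingJoinPoint pjp, ExecTimeLogger execTimeLogger) throws Throwable {
        // the correlation id lives in the MDC of the current thread, not in a shared field
        MDC.put("txId", UUID.randomUUID().toString());
        long start = System.currentTimeMillis();
        try {
            Object result = pjp.proceed();
            // log immediately; the id in the MDC ties related lines together,
            // so there is nothing left to aggregate inside the aspect
            log.info("{} took {} ms", pjp.getSignature(), System.currentTimeMillis() - start);
            return result;
        } finally {
            MDC.remove("txId");
        }
    }
}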
As one of your concerns is also performance, you might want to configure your log framework to log asynchronously, which keeps your application faster and more responsive. Usually, logging to a file is also faster than logging to the console. Maybe this GitHub repository with its interesting README and JMH benchmarks provides useful information on top of what we are discussing here. If you happen to speak German, the topic the repository belongs to was discussed a few days ago in this Heise.de blog; the article basically says the same in German as the English README, I just wanted to provide my source.
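As an example of asynchronous logging, a Logback configuration could wrap the real appender in an AsyncAppender; this is only a sketch, the file name and pattern are arbitrary, and %X{txId} refers to the MDC key used in the sketches above:

<configuration>
  <appender name="FILE" class="ch.qos.logback.core.FileAppender">
    <file>application.log</file>
    <encoder>
      <!-- %X{txId} prints the MDC correlation id next to each message -->
      <pattern>%d{ISO8601} %-5level [%thread] %X{txId} %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>

  <!-- the AsyncAppender hands log events to a background thread via a queue -->
  <appender name="ASYNC" class="ch.qos.logback.classic.AsyncAppender">
    <appender-ref ref="FILE"/>
  </appender>

  <root level="INFO">
    <appender-ref ref="ASYNC"/>
  </root>
</configuration>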
Anyway, when using asynchronous logging, log aggregation IDs become even more important in a multi-threaded context.