How to release a Log4J logger - java

I'm creating a program that will run 24/7 and continually process multiple tasks. These tasks can run in parallel. Every task will have its own independent log file, so for every task I want to create its own logger like this:
Logger logger = Logger.getLogger("taskID");
How can I correctly release the logger so it is no longer in memory, after the task is done?

There is no way to "release" a Logger object. But that's OK. If it is reachable you can still use it ... and it shouldn't be "released". If it is unreachable, the GC will reclaim it.
By the way, if you are really talking about log4j, then the method you call to get hold of a named logger is Logger.getLogger(String). It is defined to return an existing instance (with the same name) if one exists, so you don't need to worry about creating lots of copies of the same logger.
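For instance, a quick check shows that repeated lookups by the same name return the same cached instance (the class name below is just for illustration):
import org.apache.log4j.Logger;

public class LoggerIdentityDemo {
    public static void main(String[] args) {
        // getLogger returns the cached instance for a given name,
        // so repeated calls do not create additional Logger objects
        Logger first = Logger.getLogger("taskID");
        Logger second = Logger.getLogger("taskID");
        System.out.println(first == second); // prints: true
    }
}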

This is not the way a Logger should be instantiated. You must always make them static and final. That way you don't have to worry about it anymore, as there will be one and only one instance of Logger per class.
Take a look at the official documentation and some of the manuals online. This book is also very good for getting started.
PS: I would also recommend using SLF4J as a façade.

Related

Performance by using one logger object per application in Java

There are a number of ways to create a logger instance:
one instance of Logger per class
one instance of Logger per thread
one instance of Logger per application
Can anyone comment on the performance of each approach?
Currently I am using one logger object per application. Does this slow down a multithreaded application's performance?
A good tracking resource is Jamon; I guess you know it. Inside an EE application there is a simple way to "hook" it into every method call in order to trace each method's execution time. That way, you can analyze the impact of your added log calls.
Back to your question: I don't think there should be performance issues, as the log output is serialized anyway, and instantiating per method, class, or even application is just a matter of memory used.

Using logback in a client-server program

I need to use logback in a client-server program. For each request that comes to the server, it creates a new service which runs in a separate thread. I need to log actions that happen during service execution, but I don't want to create a separate logger object for each service thread. I know that one solution would be to make the logger object static so it isn't instantiated every time, but is there a standard solution for this kind of problem? Below are some code snippets from my source code:
The server class, which creates a separate service thread for each request:
1: a logger specific to the server class.
2: for each incoming request to the server we create a new thread (a new instance of the service class), but we don't want to have a logger instance for each service instance (I guess that is bad practice!).
And here is the service class:
*: the logger is defined static so it isn't instantiated for each service class instance.
I know that one solution would be to make the logger object static so it isn't instantiated every time, but is there a standard solution for this kind of problem?
This is what I do in my application. It works great.
Many of my classes have this as the first line:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class SomeClass {
    private static final Logger LOG = LoggerFactory.getLogger(SomeClass.class);
    // the rest of the class
}
Also, if you want the log messages to reflect which overall request is the one doing the logging, you should use MDC:
One of the design goals of logback is to audit and debug complex distributed applications. Most real-world distributed systems need to deal with multiple clients simultaneously. In a typical multithreaded implementation of such a system, different threads will handle different clients. A possible but slightly discouraged approach to differentiate the logging output of one client from another consists of instantiating a new and separate logger for each client. This technique promotes the proliferation of loggers and may increase their management overhead.
Read the entire link, it does a better job of explaining MDC than I ever could.
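In practice, the MDC approach looks roughly like this (a minimal sketch with SLF4J/logback; the key name "requestId" and the class are illustrative). The pattern layout can then include the value via %X{requestId}:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class RequestHandler implements Runnable {

    private static final Logger LOG = LoggerFactory.getLogger(RequestHandler.class);

    private final String requestId; // whatever identifies the incoming request

    public RequestHandler(String requestId) {
        this.requestId = requestId;
    }

    @Override
    public void run() {
        // everything logged by this thread is tagged with the request id
        MDC.put("requestId", requestId);
        try {
            LOG.info("handling request");
        } finally {
            MDC.remove("requestId"); // clean up so pooled threads don't leak the value
        }
    }
}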

Log4J - One log file per thread in an environment with dynamic thread creation

Let me begin with a brief explanation of the system I am using:
We have a system that runs as a Daemon, running work-units as they come in. This daemon dynamically creates a new Thread when it is given a new work-unit. For each of these units of work, I need Log4J to create a new log file to append to - the filename will be provided at runtime, when the new log file must be created. This daemon must be able to remain alive indefinitely, which I believe causes some memory concerns, as I will explain.
My first thought was to create a new Logger for each work-unit, naming it after the thread, of course. The work-unit's thread retains a reference to that Logger. When the unit is finished, it will be garbage-collected, but the problem is that Log4J itself retains a reference to the Logger, which will never be used again. It seems likely that all these Loggers will cause the VM to run out of memory.
Another solution: subclass Filter, to filter Appenders by the thread names, and place them on the same Logger. Then, remove the Appenders as the work-units complete. Of course, this necessitates adding the code to remove the appenders. That will be a lot of code changes.
I've looked into NDC and MDC, which appear to be intended for managing interleaved output to the same file. I've considered proposing this as a solution, but I don't think it will be accepted.
It seems to me that Log4J is not intended to function this way, that is, dynamically creating new log files at runtime as they are required (or desired). So I am not sure which direction to look in next: is log4j not the solution here, or did I completely miss something? Have I not looked closely enough at NDC? Or is my concern with Log4J holding onto Loggers a non-issue for reasons I don't see?
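For reference, a minimal sketch of the add-and-remove-appender mechanics with the log4j 1.x API (the class name, logger naming scheme, and layout are illustrative; it only shows how to attach a per-unit file at runtime and detach it afterwards, not how to filter output between parallel units):
import java.io.IOException;

import org.apache.log4j.FileAppender;
import org.apache.log4j.Logger;
import org.apache.log4j.PatternLayout;

public class WorkUnitRunner implements Runnable {

    private final String logFileName; // provided at runtime for this work-unit

    public WorkUnitRunner(String logFileName) {
        this.logFileName = logFileName;
    }

    @Override
    public void run() {
        Logger logger = Logger.getLogger("worker." + Thread.currentThread().getName());
        FileAppender appender = null;
        try {
            appender = new FileAppender(new PatternLayout("%d %-5p %m%n"), logFileName);
            logger.addAppender(appender);
            logger.info("work-unit started");
            // ... do the actual work, logging through 'logger' ...
        } catch (IOException e) {
            e.printStackTrace(); // could not open the per-unit log file
        } finally {
            if (appender != null) {
                logger.removeAppender(appender); // detach so the file can be released
                appender.close();
            }
        }
    }
}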
You could just create a new log method that wraps the normal log method and appends the thread id.
Something like the following (oversimplified, but you get the idea). Log4j is already thread safe, I believe, so as long as you aren't logging a ton, you should be fine. Then you can easily grep on the thread id.
public void log(long id, String message)
{
    // prefix each message with the thread id so the output can be grepped per thread
    logger.info("ThreadId: " + id + " message: " + message);
}
ThreadLocal appears to be the solution for you.
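If you go that way, a minimal sketch of the ThreadLocal idea could look like this (the naming scheme is illustrative, and note that log4j still retains the named loggers internally, which is the memory concern raised above):
import org.apache.log4j.Logger;

public class PerThreadLogger {

    // each thread lazily gets its own named logger
    private static final ThreadLocal<Logger> THREAD_LOGGER =
            ThreadLocal.withInitial(
                    () -> Logger.getLogger("worker." + Thread.currentThread().getName()));

    public static Logger get() {
        return THREAD_LOGGER.get();
    }
}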

Log messages lost in a few specific situations

I am using java.util.logging to do all the logging of my application.
Until recently, I was using the logging facility without any specific configuration. Everything worked as expected: all the logs were visible in the console (stderr).
Now, I wanted to customize the configuration for my logs. I want the logs to be displayed on the console, but I want them to be written to a file, too. I came up with the following solution:
public static void main(String[] args) throws IOException {
    System.setProperty("java.util.logging.config.file", "log.config");
    Logger defLogger = Logger.getLogger("fr.def"); // all loggers I use begin with "fr.def"
    defLogger.setLevel(Level.ALL);
    defLogger.addHandler(new ConsoleHandler());
    defLogger.addHandler(new FileHandler()); // FileHandler() may throw IOException
    // real code here ...
Here is the content of the log.config file :
java.util.logging.FileHandler.formatter=java.util.logging.SimpleFormatter
java.util.logging.FileHandler.count=10
java.util.logging.FileHandler.pattern=logs/visiodef2.%g.log
This solution mostly works: I can see the logs in the console and in the files too. Except that, in some situations, some log messages are simply lost (for both the console and the file). Examples of situations where logs are lost:
on a shutdown hook of the JVM
on the default uncaught exception handler
on the EDT's exception handler
on the windowClosing event of the main JFrame (configured with the default close operation EXIT_ON_CLOSE)
There is no other configuration than what is described above. The log level is not involved: I can see some INFO logs, but some of the lost logs are SEVERE.
I also tried to add a shutdown hook to flush all the Handlers, but with no success.
So, the question: is it safe to configure my logging the way I do? Can you see any reason why some logs might be lost?
I found the problem. And this is weird.
Actually, my problem is not related at all to the fact that the log happens in an exception handler or in a Frame event. The problem is that the garbage collector destroys the "fr.def" logger a few seconds after it is created! Thus, the FileHandler is destroyed too. The GC can do this because the LogManager only keeps weak references to the Loggers it creates.
The javadoc of Logger.getLogger doesn't say anything about that, but the javadoc of LogManager.addLogger, which is called by the former, explicitly says:
The application should retain its own reference to the Logger object to avoid it being garbage collected. The LogManager may only retain a weak reference.
So, the workaround was to keep a reference to the object returned by Logger.getLogger("fr.def").
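In code, the workaround is just a strong reference held somewhere with application lifetime, for example a static field (a minimal sketch; the holder class is illustrative):
import java.util.logging.Logger;

public class LoggingHolder {
    // strong reference: the LogManager only keeps a weak reference,
    // so this keeps the logger (and its FileHandler) from being garbage collected
    static final Logger DEF_LOGGER = Logger.getLogger("fr.def");
}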
Edit
It seems that the choice of using weak references came from this bug report.
If you dig into the LogManager source, you'll see it installs its own shutdown hook, LogManager.Cleaner, which closes all logger handlers.
Since all shutdown hooks run concurrently, there is a race between your hook and the one registered by logging. If logging's hook finishes first, you will get no output.
There is no clean way around that. If you don't want to change your source, you could hack some sort of non-portable pre-shutdown hook like this: https://gist.github.com/735322
Alternatively use Logger.getAnonymousLogger() which is not registered with LogManager thus not closed in shutdown hook. You will have to add your own handlers and call Logger#setUseParentHandlers(false) to avoid duplicated messages.
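A minimal sketch of that alternative (the file pattern is illustrative):
import java.io.IOException;
import java.util.logging.ConsoleHandler;
import java.util.logging.FileHandler;
import java.util.logging.Level;
import java.util.logging.Logger;

public class AnonymousLoggerFactory {

    static Logger createLogger() throws IOException {
        // anonymous loggers are not registered with the LogManager,
        // so their handlers are not closed by the LogManager shutdown hook
        Logger logger = Logger.getAnonymousLogger();
        logger.setLevel(Level.ALL);
        logger.setUseParentHandlers(false); // avoid duplicated messages via the parent logger
        logger.addHandler(new ConsoleHandler());
        logger.addHandler(new FileHandler("logs/app.%g.log"));
        return logger;
    }
}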

Lock Ordering in C3p0

I am trying to log the creation and destruction of database connections in our application using c3p0's ConnectionCustomizer. In it, I have some code that looks like this:
log(C3P0Registry.getPooledDataSources())
I'm running into deadlocks. I'm discovering that c3p0 has at least a couple of objects in its library that use synchronized methods and don't seem to specify their intended lock ordering. When I log the connections, I'm holding a lock on C3P0Registry and eventually on PoolBackedDataSource (simply creating a list of the data sources accesses their hash codes, which acquires a lock).
Shutting down the connection provider (calling C3P0ConnectionProvider.close()) causes the locks to be called in the opposite order. But while the child datasources are being shut down, my logging is being triggered. The result is a deadlock.
It seems like both calls I am making into the c3p0 library are valid, expected calls:
C3P0ConnectionProvider.close()
C3P0Registry.getPooledDataSources()
It also seems like (unless explicitly stated in the documentation) it should be the library's responsibility to manage its own locking strategy. (I don't say this to blame anyone... just to confirm my understanding of best practices.)
How should I deal with this issue? Since c3p0 uses synchronized methods rather than a more modern mechanism, I can't really test the locks.
From my DataSource closing code, I could first grab the C3P0Registry lock before closing the DataSource. But I would be guessing at the correct lock order, which I'm not sure I feel comfortable with.
I don't think I could reverse the lock order for the logging call. I need the C3P0Registry to get the list of DataSources, so I couldn't lock the DataSources without first locking C3P0Registry to get references to them.
Another solution, of course, is to provide a higher-level lock above everything c3p0 does. In the case of a connection pool, that seems to defeat the point.
For now, I'm rolling back my logging. Thanks for any help.
I don't know how to fix the locking issue, but I think you should take a step back here and think about the original problem.
"I am trying to log the creation and destruction of database connections in our application ..."
I would recommend the following (a sketch follows after the list):
Create a class that implements javax.sql.DataSource.
Give it a field of the same type and delegate all methods to it.
In the getConnection() method, return your own Connection class wrapping java.sql.Connection, and so on.
Then wrap this class around your original data source.
In your classes you can now simply create a logger and log all the actions you want to see in your log.
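A minimal sketch of that wrapper (only getConnection() does anything interesting, everything else delegates; the class name is illustrative and it uses java.util.logging just to keep the example self-contained):
import java.io.PrintWriter;
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.SQLFeatureNotSupportedException;
import java.util.logging.Logger;
import javax.sql.DataSource;

public class LoggingDataSource implements DataSource {

    private static final Logger LOG = Logger.getLogger(LoggingDataSource.class.getName());

    private final DataSource delegate; // e.g. the original c3p0 DataSource

    public LoggingDataSource(DataSource delegate) {
        this.delegate = delegate;
    }

    @Override
    public Connection getConnection() throws SQLException {
        Connection c = delegate.getConnection();
        LOG.info("Connection obtained: " + c); // could also wrap c to log close() calls
        return c;
    }

    @Override
    public Connection getConnection(String username, String password) throws SQLException {
        Connection c = delegate.getConnection(username, password);
        LOG.info("Connection obtained for " + username + ": " + c);
        return c;
    }

    // the remaining DataSource methods simply delegate
    @Override public PrintWriter getLogWriter() throws SQLException { return delegate.getLogWriter(); }
    @Override public void setLogWriter(PrintWriter out) throws SQLException { delegate.setLogWriter(out); }
    @Override public void setLoginTimeout(int seconds) throws SQLException { delegate.setLoginTimeout(seconds); }
    @Override public int getLoginTimeout() throws SQLException { return delegate.getLoginTimeout(); }
    @Override public Logger getParentLogger() throws SQLFeatureNotSupportedException { return delegate.getParentLogger(); }
    @Override public <T> T unwrap(Class<T> iface) throws SQLException { return delegate.unwrap(iface); }
    @Override public boolean isWrapperFor(Class<?> iface) throws SQLException { return delegate.isWrapperFor(iface); }
}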
