Log messages lost in a few specific situations - java

I am using java.util.logging to do all the logging of my application.
Until recently, I was using the logging facility without any specific configuration, and everything worked as expected: all the logs were visible on the console (stderr).
Now I want to customize the configuration of my logs. I still want the logs to be displayed on the console, but I want them to be written to a file too. I came up with the following solution:
public static void main(String[] args) throws IOException { // FileHandler() throws IOException
    System.setProperty("java.util.logging.config.file", "log.config");
    Logger defLogger = Logger.getLogger("fr.def"); // all loggers I use begin with "fr.def"
    defLogger.setLevel(Level.ALL);
    defLogger.addHandler(new ConsoleHandler());
    defLogger.addHandler(new FileHandler());
    // real code here ...
}
Here is the content of the log.config file:
java.util.logging.FileHandler.formatter=java.util.logging.SimpleFormatter
java.util.logging.FileHandler.count=10
java.util.logging.FileHandler.pattern=logs/visiodef2.%g.log
This solution mostly works: I can see the logs in the console and in the files too. Except that, in some situations, some log messages are simply lost (from both the console and the file). Examples of situations where logs are lost:
on a shutdown hook of the JVM
on the default uncaught exception handler
on the EDT's exception handler
on the windowClosing event of the main JFrame (configured with the default close operation EXIT_ON_CLOSE)
There is no other configuration than what is described above. The log level is not involved: I can see some INFO logs, but some of the lost logs are SEVERE.
I also tried to add a shutdown hook to flush all the Handlers, but with no success.
So, the question: is it safe to configure my logging the way I do? Can you see any reason why some logs can be lost?

I found the problem. And this is weird.
Actually, my problem is not related at all to the fact that the log happens in an exception handler or in a Frame event. The problem is that the garbage collector destroys the "fr.def" logger a few seconds after it is created! As a result, the FileHandler is destroyed too. The GC can do this because the LogManager only keeps weak references to the Loggers it creates.
The javadoc of Logger.getLogger doesn't say anything about this, but the javadoc of LogManager.addLogger, which is called by the former, explicitly states:
The application should retain its own reference to the Logger object to avoid it being garbage collected. The LogManager may only retain a weak reference.
So, the workaround was to keep a reference to the object returned by Logger.getLogger("fr.def").
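In code, the fix is just a strong reference that outlives the configuration, for example a static field (the field name here is mine):
import java.util.logging.Logger;

public class Main {
    // Strong reference: the LogManager only keeps a weak reference, so
    // without this field the logger and its handlers can be GC'ed.
    private static final Logger DEF_LOGGER = Logger.getLogger("fr.def");
}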
Edit
It seems that the choice of using weak references came from this bug report.

If you dig into the LogManager source, you'll see that it installs its own shutdown hook, LogManager.Cleaner, which closes all logger handlers.
Since all shutdown hooks run concurrently, there is a race between your hook and the one registered by logging. If the logging hook finishes first, you will get no output.
There is no clean way around that. If you don't want to change your source, you could hack some sort of non-portable pre-shutdown hook like this: https://gist.github.com/735322
Alternatively, use Logger.getAnonymousLogger(), which is not registered with the LogManager and is therefore not closed by the shutdown hook. You will have to add your own handlers and call Logger#setUseParentHandlers(false) to avoid duplicated messages.
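A hedged sketch of that alternative (the handler choices follow the question's setup, and the helper name is mine):
import java.io.IOException;
import java.util.logging.ConsoleHandler;
import java.util.logging.FileHandler;
import java.util.logging.Level;
import java.util.logging.Logger;

static Logger createUnmanagedLogger() throws IOException {
    Logger logger = Logger.getAnonymousLogger();
    logger.setLevel(Level.ALL);
    logger.setUseParentHandlers(false); // avoid duplicated messages via the root logger
    logger.addHandler(new ConsoleHandler());
    logger.addHandler(new FileHandler("logs/visiodef2.%g.log"));
    // Not registered with the LogManager, so its Cleaner hook will not
    // close these handlers; remember to flush/close them yourself.
    return logger;
}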

Related

Can Java Logger output not get written if an Exception occurs immediately after Log.info call?

I am reading through some code and comparing it with what I see in the production log file, but I am concerned that maybe I am not looking at what is really in production (yes, I know ...).
I expect to see a string from a log.info() call, but the call sits immediately before a database update that may be the culprit of an SQLException.
Is it possible that the exception could mask the logger output? I.e., could execution have terminated before the log output buffer was flushed?
If that is not the case I will need to figure out some other reason for the info not being written.
No, it's not possible. When you write something using the log.info(), log.error(), log.debug(), log.warn(), log.fatal() or log.trace() methods, the content is handed to the framework to be written to the log file.
To get more detail, you can catch the exception and print its stack trace using log.error(). That makes it easy to diagnose the problem.
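For instance (the update call is a placeholder, assuming a log4j-style logger):
try {
    updateDatabase(); // hypothetical call that may throw SQLException
} catch (SQLException e) {
    log.error("Database update failed", e); // logs the message plus the full stack trace
    throw e;
}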
The java.util.logging framework uses shutdown hooks to make sure the flush() method is called on its handlers. Provided a handler properly implements flush() to write its cached logs to disk, this will work; the handlers provided with the Java API do implement it.
You can verify in the source code that shutdown hooks are used.

How to release a Log4J logger

I'm creating a program that will run 24/7 and continually process multiple tasks. These tasks can run in parallel, and every task will have its own independent log file. So for every task I want to create its own logger, like this:
Logger logger = Logger.getLogger("taskID");
How can I correctly release the logger so it is no longer in memory, after the task is done?
There is no way to "release" a Logger object. But that's OK. If it is reachable you can still use it ... and it shouldn't be "released". If it is unreachable, the GC will reclaim it.
By the way, if you are really talking about log4j, then the method you call to get hold of a named logger is Logger.getLogger(String). It is defined to return an existing instance (with the same name) if one exists, so you don't need to worry about creating lots of copies of the same logger.
This is not the way a Logger should be instantiated. You should always make them static and final; that way you don't have to worry about it anymore, as there will be one and only one instance of the Logger per class.
Take a look at the official documentation and some manuals online. This book is also very good to get started.
PS: On the other hand, I would recommend the use of SLF4J as a façade.
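For instance, the usual idiom (the class name is a placeholder):
import org.apache.log4j.Logger;

public class TaskProcessor {
    // One and only one Logger instance per class, created at class load time.
    private static final Logger LOG = Logger.getLogger(TaskProcessor.class);
}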

Log4J - One log file per thread in an environment with dynamic thread creation

Let me begin with a brief explanation of the system I am using:
We have a system that runs as a Daemon, running work-units as they come in. This daemon dynamically creates a new Thread when it is given a new work-unit. For each of these units of work, I need Log4J to create a new log file to append to - the filename will be provided at runtime, when the new log file must be created. This daemon must be able to remain alive indefinitely, which I believe causes some memory concerns, as I will explain.
My first thought was to create a new Logger for each work-unit, naming it after the thread, of course. The work-unit's thread retains a reference to that Logger, and when the unit is finished the Logger could in principle be garbage-collected. The problem is that Log4J itself retains a reference to the Logger, which will never be used again. It seems likely that all these Loggers will eventually cause the VM to run out of memory.
Another solution: subclass Filter to filter Appenders by thread name, and place them all on the same Logger, removing each Appender as its work-unit completes (a rough sketch of the attach-and-detach mechanics follows below). Of course, this necessitates adding the code to remove the appenders, which will mean a lot of code changes.
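For illustration, here is how those mechanics could look with log4j 1.2's programmatic API (the thread-name Filter is omitted, and the method name and layout are placeholders):
import java.io.IOException;

import org.apache.log4j.FileAppender;
import org.apache.log4j.Logger;
import org.apache.log4j.SimpleLayout;

// One shared logger for all work-units; each unit temporarily attaches its
// own file appender. A thread-name Filter on each appender (not shown)
// would keep the output of parallel units separated.
void runWithTaskLog(String taskId, Runnable workUnit) throws IOException {
    Logger logger = Logger.getLogger("worker");
    FileAppender appender = new FileAppender(new SimpleLayout(),
            "logs/" + taskId + ".log");
    logger.addAppender(appender);
    try {
        workUnit.run();
    } finally {
        // Detach and close so log4j holds no reference once the unit is done.
        logger.removeAppender(appender);
        appender.close();
    }
}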
I've looked into NDC and MDC, which appear to be intended for managing interleaved output to the same file. I've considered proposing this as a solution, but I don't think it will be accepted.
I want to say that Log4J appears not to be intended to work this way, that is, dynamically creating new log files at runtime as they are required (or desired). So I am not sure which direction to look in next: is log4j not the solution here, or did I completely miss something? Have I not looked closely enough at NDC? Or is my concern about Log4J holding onto Loggers a non-issue for reasons I don't see?
You could just create a new log method that wraps the normal log method and appends the thread id.
Something like the following (oversimplified, but you get the idea). Log4j is already thread-safe, I believe, so as long as you aren't logging a ton, you should be fine. Then you can easily grep on the thread id.
public void log(long id, String message) {
    logger.info("ThreadId: " + id + ", message: " + message);
}
ThreadLocal appears to be the solution for you.

Optimizing disk writes for the Java Logger

I am using java.util.logging.Logger to log the various events of my project, with a FileHandler to create the log. I see that events are written to the log file (on disk) at almost the pace at which they happen. This seems good and bad at the same time: good because event updates are written quickly, but bad because of the I/O time. Sometimes there is a lot of data that needs to be written to the logs, and in those cases my program runs slower because of the logging, which is not desirable.
It would be of great help if somebody could suggest what I should do in this case. I do not care about the rate at which events are logged; they just need to be in the log file at the end of execution.
Thanks.
A performance loss of 5-10% is expected when running full debug logging. This seems to be acceptable for our customers.
If the code that generates some of the content to log is expensive, consider using a simple test like this to avoid executing it when debug logging is turned off:
if (log.isLoggable(Level.FINEST)) {
    // code to generate the log entry
}
You can also create a java.util.logging.MemoryHandler and push the buffered records out to a file at a regular interval.
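A minimal sketch of that idea (the file and logger names are placeholders); note that MemoryHandler pushes when a record at the trigger level arrives or when push() is called explicitly, so flushing at a fixed interval needs a timer of your own calling push():
import java.io.IOException;
import java.util.logging.FileHandler;
import java.util.logging.Level;
import java.util.logging.Logger;
import java.util.logging.MemoryHandler;

static Logger createBufferedLogger() throws IOException {
    // Keep up to 1000 records in a ring buffer; write them to the file only
    // when a SEVERE record arrives or when push() is called.
    FileHandler fileHandler = new FileHandler("app.log");
    MemoryHandler buffered = new MemoryHandler(fileHandler, 1000, Level.SEVERE);
    Logger logger = Logger.getLogger("com.example");
    logger.addHandler(buffered);
    return logger;
}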
Jochen Bedersdorfer's answer is a good one, and just4log is a system that will do it automatically for you, via post-processing, so you won't have to ugly up your code with if statements around the log statements.
Pexus has recently released an open source performance logging package, PerfLog, which also includes an application logger based on the java.util.logging.* API. It includes an option for asynchronous logging using the CommonJ Work Manager that is available in all J2EE containers (1.4+).
For more information see: http://www.pexus.com/perflog
Use a more modern logging library such as log4j or slf4j, which have support for asynchronous/buffered appenders.
In log4j, you can use AsyncAppender (which provides the buffering facility) and wire up a FileAppender to it:
The AsyncAppender will collect the events sent to it and then dispatch them to all the appenders that are attached to it. You can attach multiple appenders to an AsyncAppender.
The AsyncAppender uses a separate thread to serve the events in its buffer.
This way the events are written to the disk in a controlled manner, and your threads doing actual work are not tied up with disk I/O.
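For illustration, one way to wire this up programmatically in log4j 1.2 (the file name, pattern and buffer size are placeholders; XML configuration is the more common route):
import java.io.IOException;

import org.apache.log4j.AsyncAppender;
import org.apache.log4j.FileAppender;
import org.apache.log4j.Logger;
import org.apache.log4j.PatternLayout;

static void installAsyncFileLogging() throws IOException {
    FileAppender file = new FileAppender(
            new PatternLayout("%d %-5p %c - %m%n"), "app.log");
    AsyncAppender async = new AsyncAppender();
    async.setBufferSize(512); // events queued before producing threads block
    async.addAppender(file);  // the dispatch thread drains into this appender
    Logger.getRootLogger().addAppender(async);
}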
Or as a simpler option, consider if you really need to have the full output of the logs when running this program. It's often overkill to run an application in production with logging at the DEBUG level.
I would suggest you try another logging solution, like log4j, which is widely used (often in combination with commons-logging). It offers a performant approach to logging.
If you desire even more control, however, you can implement your own appender. Assuming you want a file appender, you can override the append routine of FileAppender.
E.g.,
import java.util.LinkedList;
import java.util.List;

import org.apache.log4j.FileAppender;
import org.apache.log4j.spi.LoggingEvent;

public class BatchingFileAppender extends FileAppender {

    public static final int BATCH_SIZE = 10;

    private final List<LoggingEvent> batch = new LinkedList<LoggingEvent>();

    @Override
    protected void append(LoggingEvent event) {
        batch.add(event);
        // push every BATCH_SIZE'th message to the file
        if (batch.size() == BATCH_SIZE) {
            appendBatch();
        }
    }

    @Override
    protected void reset() {
        appendBatch();
        super.reset();
    }

    @Override
    protected void closeWriter() {
        appendBatch();
        super.closeWriter();
    }

    private void appendBatch() {
        for (LoggingEvent event : batch) {
            super.append(event);
        }
        batch.clear();
    }
}
You should check out Logback. Same authors as log4j if I'm not mistaken.
Based on our previous work on log4j, logback internals have been
re-written to perform about ten times faster on certain critical
execution paths. Not only are logback components faster, they have a
smaller memory footprint as well.

Log4J rerouting of Log Events

I would like to build an Appender (or something similar) that inspects events and, under certain conditions, creates and logs new events.
An example would be an escalating Appender that checks whether a certain number of identical events get logged and, if so, logs that event with a higher log level. So you could define something like: if you get more than 10 identical warnings on this logger, make it an error.
So my questions are:
Does something like this already exist?
Is an Appender the right class to implement this behavior?
Are there any traps you could think of I should look out for?
Clarification:
I am fine with the algorithm for gathering and analysing the events; I'll do that with a collection inside the appender. Persistence is not necessary for my purpose. My question #2 is: is an appender the right place to do this? After all, it is not normal behaviour for an appender to create logging entries.
You can create your own appender by implementing the Appender interface provided by log4j.
http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/Appender.html
That would be one approach. Another would be to use an existing appender and then write some code that monitors the log. For example, you could log to the database and then write a process that monitors the log entries in the database and creates meta-events based on what it sees.
It depends mostly on what you're comfortable with. One question you'll have to deal with is how to look back in the log to create your meta-events. Either you'll have to accumulate events in your appender or persist them somewhere that you can query to construct your meta-events. The problem with accumulating them is that if you stop and start your process, you'll either have to dump them somewhere so they get picked back up, or start over whenever the process restarts.
For example, let's say that I want to create a log entry every 10th time a NullPointerException is thrown. If I have the log entries in a database of some kind, every time an NPE is thrown I run a query to see how many NPEs have been thrown since the last time I created a log entry for them. If I just count them in memory every time one is thrown, if I restart the application after 5 are thrown, if I don't persist that number I'll lose count.
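To make the in-memory variant concrete, here is a hedged sketch for log4j 1.2 that counts identical WARN messages and re-logs at ERROR after a threshold (no persistence, so the counts are lost on restart, which is exactly the caveat above):
import java.util.HashMap;
import java.util.Map;

import org.apache.log4j.AppenderSkeleton;
import org.apache.log4j.Level;
import org.apache.log4j.Logger;
import org.apache.log4j.spi.LoggingEvent;

public class EscalatingAppender extends AppenderSkeleton {

    private static final int THRESHOLD = 10;
    private final Map<String, Integer> counts = new HashMap<String, Integer>();

    @Override
    protected synchronized void append(LoggingEvent event) {
        // Only count warnings; this check also stops the escalated ERROR
        // event below from re-entering the counting logic.
        if (!Level.WARN.equals(event.getLevel())) {
            return;
        }
        String key = event.getRenderedMessage();
        Integer seen = counts.get(key);
        int n = (seen == null) ? 1 : seen + 1;
        counts.put(key, n);
        if (n >= THRESHOLD) {
            counts.remove(key);
            Logger.getLogger(event.getLoggerName())
                  .error("Escalated after " + THRESHOLD + " identical warnings: " + key);
        }
    }

    @Override
    public void close() {
        counts.clear();
    }

    @Override
    public boolean requiresLayout() {
        return false;
    }
}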
Logback (log4j's successor) will allow you to enable logging for any event via TurboFilters. For example, assuming the same event occurs N or more times in a given timeframe, you could force the event to be accepted, regardless of its level. See also DuplicateMessageFilter, which does the inverse (denying re-occurring events).
However, even logback will not allow the level of a logging event to be incremented, and neither will log4j; neither framework is designed for this, and I would discourage you from attempting to increment the level on the fly and within the same thread. Incrementing the level during post-processing is a different matter altogether, and having your turbo-filter signal another thread to generate a new logging event with a higher level is an additional possibility.
It was not clear from your question why you wished the level to be incremented. Was the increment a goal in itself, or was it a means to an end, namely having the event logged regardless of its level? If the latter, then logback's TurboFilters are the way to go.
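As a hedged sketch of the TurboFilter approach (a bare counter with no time window; logback's built-in filters are more complete):
import java.util.HashMap;
import java.util.Map;

import org.slf4j.Marker;

import ch.qos.logback.classic.Level;
import ch.qos.logback.classic.Logger;
import ch.qos.logback.classic.turbo.TurboFilter;
import ch.qos.logback.core.spi.FilterReply;

// Force-accept any event whose message has already been seen THRESHOLD
// times in this JVM, regardless of its level; all other events are left
// to the normal level-based filtering.
public class RepeatAcceptFilter extends TurboFilter {

    private static final int THRESHOLD = 10;
    private final Map<String, Integer> counts = new HashMap<String, Integer>();

    @Override
    public synchronized FilterReply decide(Marker marker, Logger logger,
            Level level, String format, Object[] params, Throwable t) {
        if (format == null) {
            return FilterReply.NEUTRAL;
        }
        Integer seen = counts.get(format);
        int n = (seen == null) ? 1 : seen + 1;
        counts.put(format, n);
        return n >= THRESHOLD ? FilterReply.ACCEPT : FilterReply.NEUTRAL;
    }
}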
HTH,
As Rafe already pointed out, the greatest challenge is persisting the actual events in the Appender, so that you know when the time has come to trigger your event (e.g. escalate the log level).
Therefore, I propose the following strategy:
Use a custom JDBCAppender. Unlike the one bundled with Log4j, this one can log exceptions.
Set up an embedded database, like HSQLDB, with one table for event logging. This solves the persistence problem, as you can use SQL to find the types of events that occurred.
Run a separate thread that monitors the database and detects the desired event patterns.
Use LogManager to access the desired Loggers and set their levels manually (a tiny sketch follows below).
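The last step can be as small as this (the logger name and target level are illustrative):
import org.apache.log4j.Level;
import org.apache.log4j.LogManager;

// From the monitoring thread: once the pattern is detected in the
// database, manually adjust the level of the logger in question.
LogManager.getLogger("com.example.worker").setLevel(Level.ERROR);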
