I would like to build an Appender (or something similar) that inspects Events and, under certain conditions, creates and logs new Events.
An example would be an Escalating Appender that checks whether a certain number of identical Events get logged and, if so, logs the Event with a higher log level. So you could define something like: if you get more than 10 identical Warnings on this logger, make it an Error.
So my questions are:
Does something like this already exist?
Is an Appender the right class to implement this behavior?
Are there any traps you can think of that I should look out for?
Clarification:
I am fine with the algorithm of gathering and analysing the events; I'll do that with a collection inside the appender. Persistence is not necessary for my purpose. My question #2 is: is an appender the right place to do this? After all, it is not normal behaviour for an appender to create logging entries.
You can create your own appender by implementing the Appender interface provided by log4j.
http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/Appender.html
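For illustration, here is a minimal sketch of such an escalating appender built on log4j 1.2's AppenderSkeleton. The "escalation" logger name, the threshold, and the counting strategy are my assumptions, not an established pattern:

import java.util.HashMap;
import java.util.Map;
import org.apache.log4j.AppenderSkeleton;
import org.apache.log4j.Level;
import org.apache.log4j.Logger;
import org.apache.log4j.spi.LoggingEvent;

// Sketch: counts identical WARN messages and, once a threshold is reached,
// emits an ERROR through a dedicated "escalation" logger (a made-up name).
public class EscalatingAppender extends AppenderSkeleton {
    private static final Logger ESCALATION = Logger.getLogger("escalation");
    private final Map<String, Integer> warnCounts = new HashMap<>();
    private int threshold = 10;

    @Override
    protected synchronized void append(LoggingEvent event) {
        if (!Level.WARN.equals(event.getLevel())) {
            return; // only WARN events are counted; anything else passes by
        }
        String message = event.getRenderedMessage();
        int count = warnCounts.merge(message, 1, Integer::sum);
        if (count >= threshold) {
            // Logging through a separate logger sidesteps re-entering this
            // appender with another WARN; the ERROR is filtered out above anyway.
            ESCALATION.error("Escalated after " + count + " occurrences: " + message);
            warnCounts.remove(message);
        }
    }

    @Override public void close() { warnCounts.clear(); }
    @Override public boolean requiresLayout() { return false; }

    public void setThreshold(int threshold) { this.threshold = threshold; }
}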
That would be one approach. Another would be to use an existing appender and then write some code that monitors the log. For example, you could log to the database and then write a process that monitors the log entries in the database and creates meta-events based on what it sees.
It depends most on what you're comfortable with. One question you'll have to deal with is how to look back in the log to create your meta-events. Either you'll have to accumulate events in your appender or persist them somewhere that you can query to construct your meta-events. The problem with accumulating them is that if you stop and start your process, you'll either have to dump them somewhere so they get picked back up or start over whenever the process restarts.
For example, let's say that I want to create a log entry every 10th time a NullPointerException is thrown. If I have the log entries in a database of some kind, every time an NPE is thrown I can run a query to see how many NPEs have been thrown since the last time I created a log entry for them. If I just count them in memory and the application restarts after 5 are thrown, I'll lose the count unless I persist that number somewhere.
Logback (log4j's successor) will allow you to enable logging for any event via TurboFilters. For example, assuming the same event occurs N or more times in a given timeframe, you could force the event to be accepted (regardless of its level). See also DuplicateMessageFilter which does the inverse (denying re-occurring events).
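As a rough illustration, a turbo-filter along these lines could force events through once a message has repeated often enough (the class and field names are mine, and the timeframe aspect is omitted for brevity):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import ch.qos.logback.classic.Level;
import ch.qos.logback.classic.Logger;
import ch.qos.logback.classic.turbo.TurboFilter;
import ch.qos.logback.core.spi.FilterReply;
import org.slf4j.Marker;

// Sketch: once the same message format has been seen N times, accept the
// event regardless of its level; otherwise leave the decision to the
// normal level-based filtering.
public class RepeatEscalationTurboFilter extends TurboFilter {
    private final ConcurrentHashMap<String, AtomicInteger> counts = new ConcurrentHashMap<>();
    private int threshold = 10; // arbitrary default

    @Override
    public FilterReply decide(Marker marker, Logger logger, Level level,
                              String format, Object[] params, Throwable t) {
        if (format == null) {
            return FilterReply.NEUTRAL;
        }
        int seen = counts.computeIfAbsent(format, k -> new AtomicInteger()).incrementAndGet();
        return seen >= threshold ? FilterReply.ACCEPT : FilterReply.NEUTRAL;
    }

    public void setThreshold(int threshold) { this.threshold = threshold; }
}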
However, even logback will not allow the level of the logging event to be incremented, and neither will log4j. Neither framework is designed for this, and I would discourage you from attempting to increment the level on the fly within the same thread. On the other hand, incrementing the level during post-processing is a different matter altogether. An additional possibility is to have your turbo-filter signal another thread, which then generates a new logging event with a higher level.
It was not clear from your question why you wished the level to be incremented. Was incrementing the level a goal in itself, or was it a means to an end, namely having the event logged regardless of its level? If the latter, then logback's TurboFilters are the way to go.
HTH,
As Rafe already pointed out, the greatest challenge would be persisting the actual events in the Appender, so that you'll know the time has come to trigger your event (e.g. escalate log level).
Therefore, I propose the following strategy:
Use a custom JDBCAppender. Unlike the one bundled with Log4j, this one can log exceptions.
Set up an embedded database, like HSQLDB, with one table for event logging. This solves the persistence problem, as you can use SQL to find the types of events that occurred.
Run a separate thread that monitors the database, and detects desired event patterns.
Use a LogManager to access desired Loggers and set their level manually.
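A rough sketch of steps 3 and 4, assuming a hypothetical events table with logger, level, and message columns (the schema, query, and threshold are all made up):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import org.apache.log4j.Level;
import org.apache.log4j.LogManager;

// Sketch: polls the embedded database and, per step 4, manually adjusts the
// level of any logger that has recorded more than 10 identical warnings.
public class EscalationMonitor implements Runnable {
    private final String jdbcUrl; // e.g. "jdbc:hsqldb:mem:logs" (assumption)

    public EscalationMonitor(String jdbcUrl) {
        this.jdbcUrl = jdbcUrl;
    }

    @Override
    public void run() {
        String sql = "SELECT logger FROM events WHERE level = 'WARN' "
                   + "GROUP BY logger, message HAVING COUNT(*) > 10";
        try (Connection con = DriverManager.getConnection(jdbcUrl);
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(sql)) {
            while (rs.next()) {
                // Step 4: set the affected logger's level manually.
                LogManager.getLogger(rs.getString(1)).setLevel(Level.ERROR);
            }
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }
}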
This question has been bugging me for a while: how do popular logging frameworks like Log4j, which allow concurrent, asynchronous logging, guarantee log order without performance bottlenecks? That is, if log statement L1 was invoked before log statement L2, L1 is guaranteed to be in the log file before L2.
I know Log4j2 uses a ring buffer and sequence numbers, but it still isn't intuitive how this solves the problem.
Could anyone give an intuitive explanation or point me to a resource doing the same?
This all depends on what you mean by "logging order". When talking about a single thread the logging order is preserved because each logging call results in a write.
When logging asynchronously each log event is added to a queue in the order it was received and is processed in First-in/First-out order, regardless of how it got there. This isn't really very challenging because the writer is single-threaded.
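A toy model of that mechanism, using plain JDK classes rather than Log4j2's actual ring buffer:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicLong;

// Producers enqueue events in call order; a single consumer drains the queue
// FIFO, so whatever order the queue saw is exactly the order written out.
public class OrderedAsyncLog {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
    private final AtomicLong seq = new AtomicLong(); // mimics Log4j2's sequence numbers

    public OrderedAsyncLog() {
        Thread writer = new Thread(() -> {
            try {
                while (true) {
                    System.out.println(queue.take()); // the only thread that writes
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        writer.setDaemon(true);
        writer.start();
    }

    public void log(String message) {
        // Enqueueing is the synchronization point: position in the queue,
        // not the timestamp, decides the final order in the output.
        queue.offer(seq.incrementAndGet() + " " + message);
    }
}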
However, if you are talking about logging order across threads, that is never guaranteed - even when logging synchronously - because it can't be. Thread 1 could start to log before Thread 2 but thread 2 could get to the synchronization point in the write ahead of thread 1. Likewise, the same could occur when adding events to the queue. Locking the logging call in the logging method would preserve order, but for little to no benefit and with disastrous performance consequences.
In a multi-threaded environment it is entirely possible that you might see logging events where the timestamp is out of order, because Thread 1 resolved the timestamp, was interrupted by Thread 2, which then resolved its own timestamp and logged its event first. However, if you write your logs to something like ElasticSearch you would never notice, since it orders them by timestamp.
Is there any logging solution with an exception-grouping feature? What I want to achieve is that when some exception is logged, say, 100 times in 10 seconds, I don't want to log 100 stack traces. I want to log something like "RuntimeException was thrown 100 times: single stack trace here". It'd be perfect to have something integrated with log4j.
Of course there is the option to create a logging facade with an exception queue inside, but maybe there is something already implemented.
Please take a look at this log handler implementation that groups logs and then sends them to an email address.
The solution is basically a log handler that uses a CyclicBuffer to keep logs in memory. When a threshold is reached or the system closes, the handler flushes the buffer.
The solution is JUL-based (java.util.logging), but with a few modifications it may serve as a basis for building your own log4j solution. It worked nicely for me. Hope it helps.
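As an aside, if you would rather roll your own with plain java.util.logging, a sketch of the grouping idea might look like this (the summary logger name and flush policy are assumptions):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

// Counts records sharing the same throwable class and emits one summary
// record per flush instead of one stack trace per occurrence.
public class GroupingHandler extends Handler {
    private final ConcurrentHashMap<String, AtomicInteger> counts = new ConcurrentHashMap<>();
    private final Logger summary = Logger.getLogger("grouped"); // made-up name

    @Override
    public void publish(LogRecord record) {
        Throwable t = record.getThrown();
        if (t == null) {
            return; // records without an exception are ignored by this handler
        }
        counts.computeIfAbsent(t.getClass().getName(), k -> new AtomicInteger())
              .incrementAndGet();
    }

    @Override
    public void flush() {
        counts.forEach((cls, n) ->
            summary.log(Level.SEVERE, cls + " was thrown " + n.getAndSet(0) + " times"));
    }

    @Override
    public void close() { flush(); }
}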
I am working on a service in which I have to perform some events, log them, and return the results. The user should not have to wait for logging to complete; they should get the results immediately while logging continues in the background. Any suggestions?
a();
b();
.
.
.
g();       // all these functions compute the results the user wants
logging(); // this takes time
return results;
If logging is an overhead for you, and you want that to be an asynchronous process, then there are definitely ways to achieve this:
You can create your own handlers to do this, i.e., create a FIFO queue to which you submit all your log strings, and have a separate thread read and write out those messages. In your original flow you only add the log message to the queue and move on. Of course this involves reinventing the wheel, but you get the freedom to do exactly what you need for your project.
You may want to look at this answer
You can leverage an existing framework, like log4j, which provides several options for async logging using its dedicated async appenders.
You can find details about it here
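For instance, with log4j 1.2 you can wrap any blocking appender in an AsyncAppender, so the calling thread only enqueues the event while a background thread does the actual write (the file name and pattern here are arbitrary):

import org.apache.log4j.AsyncAppender;
import org.apache.log4j.FileAppender;
import org.apache.log4j.Logger;
import org.apache.log4j.PatternLayout;

// Sketch: the logger hands events to AsyncAppender's queue and returns;
// the wrapped FileAppender writes them on a separate thread.
public class AsyncSetup {
    public static void main(String[] args) throws Exception {
        FileAppender file = new FileAppender(
                new PatternLayout("%d %-5p %c - %m%n"), "app.log"); // hypothetical file
        AsyncAppender async = new AsyncAppender();
        async.addAppender(file);

        Logger logger = Logger.getLogger(AsyncSetup.class);
        logger.addAppender(async);
        logger.info("control returns to the caller before this hits disk");
    }
}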
Let me begin with a brief explanation of the system I am using:
We have a system that runs as a Daemon, running work-units as they come in. This daemon dynamically creates a new Thread when it is given a new work-unit. For each of these units of work, I need Log4J to create a new log file to append to - the filename will be provided at runtime, when the new log file must be created. This daemon must be able to remain alive indefinitely, which I believe causes some memory concerns, as I will explain.
My first thought was to create a new Logger for each work-unit, naming it after the thread, of course. The work-unit's thread retains a reference to that Logger, so once the unit is finished the Logger could be garbage-collected. The problem is that Log4J itself retains a reference to the Logger, which will never be used again. It seems likely that all these Loggers will eventually cause the VM to run out of memory.
Another solution: subclass Filter, to filter Appenders by the thread names, and place them on the same Logger. Then, remove the Appenders as the work-units complete. Of course, this necessitates adding the code to remove the appenders. That will be a lot of code changes.
I've looked into NDC and MDC, which appear to be intended for managing interleaved output to the same file. I've considered proposing this as a solution, but I don't think it will be accepted.
I want to say that Log4J does not appear to be intended to function this way, that is, dynamically creating new log files at runtime as they are required (or desired). So I am not sure which direction to look in next. Is log4j not the solution here, or did I completely miss something? Have I not looked closely enough at NDC? Or is my concern about Log4J holding onto Loggers a non-issue for reasons I don't see?
You could just create a new log method that wraps the normal log method and appends the thread id.
Something like the following (oversimplified, but you get the idea). Log4j is already thread-safe, I believe, so as long as you aren't logging a ton, you should be fine. Then you can easily grep on the thread id.
public void log(long id, String message)
{
    logger.info("ThreadId: " + id + ", message: " + message);
}
ThreadLocal appears to be the solution for you.
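To make that concrete, here is one possible sketch combining a ThreadLocal with the Filter idea from the question: one shared Logger, one FileAppender per work-unit thread, each accepting only its own thread's events, detached and closed when the unit finishes (all names are illustrative):

import org.apache.log4j.FileAppender;
import org.apache.log4j.Logger;
import org.apache.log4j.SimpleLayout;
import org.apache.log4j.spi.Filter;
import org.apache.log4j.spi.LoggingEvent;

public class PerThreadLogFiles {
    private static final Logger LOG = Logger.getLogger("work-units");
    private static final ThreadLocal<FileAppender> CURRENT = new ThreadLocal<>();

    // Call at the start of a work-unit, with the runtime-provided file name.
    public static void start(String logFile) throws java.io.IOException {
        final String owner = Thread.currentThread().getName();
        FileAppender appender = new FileAppender(new SimpleLayout(), logFile);
        appender.addFilter(new Filter() {
            @Override
            public int decide(LoggingEvent event) {
                // Only accept events logged from the owning thread.
                return owner.equals(event.getThreadName()) ? ACCEPT : DENY;
            }
        });
        LOG.addAppender(appender);
        CURRENT.set(appender);
    }

    // Call when the work-unit ends, so nothing accumulates inside Log4j.
    public static void finish() {
        FileAppender appender = CURRENT.get();
        if (appender != null) {
            LOG.removeAppender(appender);
            appender.close();
            CURRENT.remove();
        }
    }
}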
I am using java.util.logging to do all the logging of my application.
Until recently, I was using the logging facility without any specific configuration, and everything worked as expected: all the logs were visible in the console (stderr).
Now I want to customize the configuration of my logs. I want the logs to be displayed on the console, but I also want them written to a file. I came up with the following solution:
public static void main(String[] args) throws IOException {
    System.setProperty("java.util.logging.config.file", "log.config");
    Logger defLogger = Logger.getLogger("fr.def"); // all loggers I use begin with "fr.def"
    defLogger.setLevel(Level.ALL);
    defLogger.addHandler(new ConsoleHandler());
    defLogger.addHandler(new FileHandler()); // FileHandler() throws IOException
    // real code here ...
}
Here is the content of the log.config file :
java.util.logging.FileHandler.formatter=java.util.logging.SimpleFormatter
java.util.logging.FileHandler.count=10
java.util.logging.FileHandler.pattern=logs/visiodef2.%g.log
This solution mostly works: I can see the logs in the console and in the files too. Except that, in some situations, some log messages are simply lost (from both the console and the file). Examples of situations where logs are lost:
on a shutdown hook of the JVM
on the default uncaught exception handler
on the EDT's exception handler
on the windowClosing event of the main JFrame (configured with the default close operation EXIT_ON_CLOSE)
There is no other configuration than what is described above. The log level is not involved: I can see some INFO logs, but some of the lost logs are SEVERE.
I also tried to add a shutdown hook to flush all the Handlers, but with no success.
So, the question: is it safe to configure my logging the way I do? Can you see any reason why some logs might be lost?
I found the problem. And this is weird.
Actually, my problem is not related at all to the fact that the log happens in an exception handler or in a Frame event. The problem is that the garbage collector collects the "fr.def" logger a few seconds after it is created! Thus, the FileHandler is destroyed too. The GC can do this because the LogManager only keeps weak references to the Loggers it creates.
The javadoc of Logger.getLogger doesn't say anything about that, but the javadoc of LogManager.addLogger, which is called by the former, explicitly says:
The application should retain its own reference to the Logger object to avoid it being garbage collected. The LogManager may only retain a weak reference.
So, the workaround was to keep a reference to the object returned by Logger.getLogger("fr.def").
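For example (the holder class is arbitrary; what matters is the strong reference):

import java.util.logging.Logger;

public class App {
    // Strong reference: prevents the LogManager's weak reference from letting
    // the logger (and its attached handlers) be garbage collected.
    private static final Logger DEF_LOGGER = Logger.getLogger("fr.def");
}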
Edit
It seems that the choice of using weak references came from this bug report.
If you dig into the LogManager source, you'll see it installs its own shutdown hook, LogManager.Cleaner, which closes all logger handlers.
Since all shutdown hooks run concurrently, there is a race between your hook and the one registered by the logging framework. If logging finishes first, you will get no output.
There is no clean way around that. If you don't want to change your source, you could hack some sort of non-portable pre-shutdown hook like this: https://gist.github.com/735322
Alternatively, use Logger.getAnonymousLogger(), which is not registered with the LogManager and thus not closed by its shutdown hook. You will have to add your own handlers and call Logger#setUseParentHandlers(false) to avoid duplicated messages.
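A minimal sketch of that approach (the handler choice and message are mine):

import java.util.logging.ConsoleHandler;
import java.util.logging.Logger;

public class ShutdownSafeLogging {
    // Anonymous loggers are not registered with LogManager, so its Cleaner
    // shutdown hook never closes their handlers; keep a strong reference anyway.
    private static final Logger LOG = Logger.getAnonymousLogger();

    public static void main(String[] args) {
        LOG.setUseParentHandlers(false);      // avoid duplicated console output
        LOG.addHandler(new ConsoleHandler()); // writes straight to stderr
        Runtime.getRuntime().addShutdownHook(new Thread(() ->
                LOG.severe("this message is still written during shutdown")));
    }
}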