Is there any logging solution with an exception-grouping feature? What I want to achieve is: when some exception is logged, say, 100 times in 10 seconds, I don't want to log 100 stack traces. I want to log something like "RuntimeException was thrown 100 times: single stack trace here". It'd be perfect to have something integrated with log4j.
Of course there is the option of creating a logging facade with an exception queue inside, but maybe something like this is already implemented.
Please take a look at this log handler implementation that groups logs before sending them to an email address.
The solution is basically a log handler that uses a CyclicBuffer to keep logs in memory. When a threshold is reached or the system closes, the handler flushes the buffer.
The solution is JUL-based (java.util.logging), but with a few modifications it may serve as a basis for your own log4j solution. It worked nicely for me. Hope it helps.
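A minimal sketch of that idea in plain java.util.logging: a handler that buffers records in memory and emits them as a single summary once a threshold is reached or the handler is closed. Class and method names here are illustrative, not from any library, and the StringBuilder stands in for the real target (email, file, etc.).

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

// Buffers records and flushes them as one summary block.
class BufferingHandler extends Handler {
    private final Deque<LogRecord> buffer = new ArrayDeque<>();
    private final int threshold;
    private final StringBuilder out = new StringBuilder(); // stand-in for email/file

    BufferingHandler(int threshold) {
        this.threshold = threshold;
    }

    @Override
    public synchronized void publish(LogRecord record) {
        if (!isLoggable(record)) return;
        buffer.add(record);
        if (buffer.size() >= threshold) flush(); // threshold reached
    }

    @Override
    public synchronized void flush() {
        if (buffer.isEmpty()) return;
        out.append(buffer.size()).append(" buffered records:\n");
        for (LogRecord r : buffer) {
            out.append(r.getLevel()).append(": ").append(r.getMessage()).append('\n');
        }
        buffer.clear();
    }

    @Override
    public void close() {
        flush(); // system closing: emit whatever is left
    }

    String drained() {
        return out.toString();
    }
}

class Demo {
    public static void main(String[] args) {
        Logger logger = Logger.getLogger("demo");
        logger.setUseParentHandlers(false);
        BufferingHandler handler = new BufferingHandler(3);
        logger.addHandler(handler);
        for (int i = 0; i < 3; i++) logger.log(Level.WARNING, "boom " + i);
        System.out.print(handler.drained()); // one summary instead of 3 entries
    }
}
```

A real implementation would also collapse identical stack traces into one entry with a count, as the question asks, but the buffering-and-flushing skeleton is the same.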
This question has been bugging me for a while: how do popular logging frameworks like Log4j, which allow concurrent, asynchronous logging, guarantee log order without performance bottlenecks? That is, if log statement L1 was invoked before log statement L2, L1 is guaranteed to appear in the log file before L2.
I know Log4j2 uses a ring buffer and sequence numbers, but it still isn't intuitive how this solves the problem.
Could anyone give an intuitive explanation or point me to a resource doing the same?
This all depends on what you mean by "logging order". When talking about a single thread the logging order is preserved because each logging call results in a write.
When logging asynchronously each log event is added to a queue in the order it was received and is processed in First-in/First-out order, regardless of how it got there. This isn't really very challenging because the writer is single-threaded.
However, if you are talking about logging order across threads, that is never guaranteed - even when logging synchronously - because it can't be. Thread 1 could start to log before Thread 2, but Thread 2 could reach the synchronization point in the writer ahead of Thread 1. Likewise, the same could occur when adding events to the queue. Locking the whole logging method would preserve order, but for little to no benefit and with disastrous performance consequences.
In a multi-threaded environment it is entirely possible that you might see logging events where the timestamp is out of order, because Thread 1 resolved the timestamp, was interrupted by Thread 2, which then resolved its own timestamp and logged its event first. However, if you write your logs to something like Elasticsearch you would never notice, since it orders them by timestamp.
I am reading through some code and comparing it with what I see in a production log file, but I am concerned that maybe I am not looking at what is really running in production (yes, I know ...).
I expect to see a string from a log.info() call; it sits immediately before a database update that may be the culprit of an SQLException.
Is it possible that the exception could mask the logger output? I.e., could execution have terminated before the log output buffer was flushed?
If that is not the case I will need to figure out some other reason for the info not being written.
No, it's not possible. When you write something using the log.info(), log.error(), log.debug(), log.warn(), log.fatal(), or log.trace() methods, the framework takes that content and writes it to the log file.
To get more detail, you can catch the exception and print its stack trace using log.error(). That makes it easy to diagnose the problem.
The java.util.logging framework uses shutdown hooks to make sure the flush() method is called on its handlers. Provided the handler properly implements flush() to write the cached logs to disk, it will work. The handlers provided with the Java API do implement this.
It can be verified in the source code that it uses shutdown hooks.
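The mechanism can be sketched in a few lines: register a shutdown hook that flushes every handler attached to the root logger before the JVM exits. This is an illustration of the idea, not the framework's actual code (java.util.logging's LogManager registers an equivalent hook internally, and its cleanup closes handlers, which implies a flush).

```java
import java.util.logging.Handler;
import java.util.logging.Logger;

class FlushOnShutdown {
    // Register a hook that flushes all root-logger handlers at JVM exit.
    static Thread registerFlushHook() {
        Thread hook = new Thread(() -> {
            for (Handler h : Logger.getLogger("").getHandlers()) {
                h.flush(); // push any cached records to disk
            }
        });
        Runtime.getRuntime().addShutdownHook(hook);
        return hook;
    }

    public static void main(String[] args) {
        registerFlushHook();
        Logger.getLogger("").info("handlers will be flushed even if we exit now");
    }
}
```

Note that a hook like this only runs on an orderly shutdown; a hard kill (kill -9, power loss) bypasses shutdown hooks entirely, so buffered output can still be lost in that case.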
I am working on a service in which I have to perform some events, log them, and return the results. I don't want the user to wait for logging to complete; they should get results immediately while logging continues in the background. Any suggestions?
a()
b()
...
g()  // all these functions compute the things the user wants
logging()  // this takes time
return results
If logging is an overhead for you and you want it to be asynchronous, there are definitely ways to achieve this:
You can create your own handlers: set up a FIFO queue to which you submit all your log strings, and a separate thread that reads the queue and writes the messages out. In your original flow you only add the log message to the queue and move ahead. Of course, this involves reinventing the wheel, but you have the freedom to do exactly what your project needs.
You may want to look at this answer
You can leverage existing framework, like log4j, which provides many options to achieve async logging using specific async appenders.
You can find details about it here
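Option 1 above can be sketched in a few lines. This is an illustrative skeleton, not production code: callers enqueue a message and return at once, while one daemon thread drains the queue in FIFO order and does the slow work. A real setup would bound the queue and drain it on shutdown, which is exactly what log4j's async appenders handle for you.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

class AsyncLog {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
    private final StringBuffer sink = new StringBuffer(); // stand-in for the slow target (file, db, ...)

    AsyncLog() {
        Thread writer = new Thread(() -> {
            try {
                while (true) {
                    // take() blocks until a message arrives; queue order is preserved
                    sink.append(queue.take()).append('\n');
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        writer.setDaemon(true); // don't keep the JVM alive just for the logger
        writer.start();
    }

    void log(String msg) {
        queue.offer(msg); // returns immediately; the caller never waits on I/O
    }

    String written() {
        return sink.toString();
    }
}
```

With this in place, the flow from the question becomes: run a() through g(), call log(...) (which returns instantly), and return the results while the writer thread catches up in the background.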
I'm working with threads, but after a while most of them stop doing their job. The first thing I suspected was a deadlock, but all of them are in the RUNNING state.
I suppose there is an error in my logic, or some behaviour I haven't realized I must handle (it's a web crawler).
Is it possible to get the current executing method or operation? I want this to see where my threads are trapped.
EDIT: I think it is something I need to handle, or an error in my logic, because this happens after some time executing, not immediately after the start.
A debugger is the way to go. This is what they are designed for.
Java debuggers with threading support are built into both the Eclipse and Netbeans IDEs.
Make the VM dump its threads (Ctrl-Break on Windows, or kill -3 / jstack <pid> on Unix). Find your threads in the list and look at the topmost method in each stack trace. Done.
You can get the current stack trace in Java. You will get an array of StackTraceElement elements.
Note that with Thread.currentThread().getStackTrace(), element 0 is the getStackTrace() call itself, so the currently executing method is the next element in the array.
See the following question for how to get the stack trace:
Get current stack trace in Java
Code might look like:
// trace[0] is the Thread.getStackTrace() call itself,
// so trace[1] is the method currently executing.
StackTraceElement[] trace = Thread.currentThread().getStackTrace();
StackTraceElement yourMethod = trace[1];
System.out.println(yourMethod.getMethodName());
You have two options:
Use a debugger to understand what was executed and what was not.
Use a lot of log messages (you can also include stack traces in those messages).
Thread dumps are the right solution for the problem. If you want to do it programmatically within the process (some kind of monitoring logic), then java.lang.management.ThreadMXBean provides access to all threads along with their current stacks at the time.
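A programmatic dump via ThreadMXBean looks like this: each ThreadInfo carries the thread's name, its state, and its current stack trace, so monitoring logic can spot where a thread is stuck.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

class DumpThreads {
    public static void main(String[] args) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        // false, false: skip locked-monitor and locked-synchronizer details
        for (ThreadInfo info : mx.dumpAllThreads(false, false)) {
            System.out.println(info.getThreadName() + " " + info.getThreadState());
            for (StackTraceElement frame : info.getStackTrace()) {
                System.out.println("    at " + frame);
            }
        }
    }
}
```

Running this from a watchdog thread periodically (or on demand) gives you the same information as an external jstack, without leaving the process.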
It is possible: throw an exception, catch it immediately, and save the stack trace. This is about as performant as asking an elephant to fly overseas, but it works, since it effectively extracts the current call stack into something you can work with.
However, are you sure you haven't run into a livelock?
Perhaps your web crawler is stuck in a loop processing the same URLs. Add some high-level logging so each thread writes what it's processing.
I would like to build an Appender (or something similar) that inspects events and, under certain conditions, creates and logs new events.
An example would be an escalating Appender that counts identical events and, once a threshold is reached, logs the event at a higher log level. So you could define something like: if you get more than 10 identical warnings on this logger, make it an error.
So my questions are:
Does something like this already exist?
Is an Appender the right class to implement this behavior?
Are there any traps you could think of I should look out for?
Clarification:
I am fine with the algorithm of gathering and analysing the events; I'll do that with a collection inside the appender. Persistence is not necessary for my purpose. My question #2 is: is an appender the right place to do this? After all, it is not normal behaviour for an appender to create logging entries.
You can create your own appender by implementing the Appender interface provided by log4j.
http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/Appender.html
That would be one approach. Another would be to use an existing appender and then write some code that monitors the log. For example, you could log to the database and then write a process that monitors the log entries in the database and creates meta-events based on what it sees.
It depends mostly on what you're comfortable with. One question you'll have to deal with is how to look back in the log to create your meta-events. Either you'll have to accumulate events in your appender or persist them somewhere that you can query to construct your meta-events. The problem with accumulating them is that if you stop and start your process, you'll either have to dump them somewhere so they get picked back up, or start over whenever the process restarts.
For example, let's say I want to create a log entry every 10th time a NullPointerException is thrown. If the log entries are in a database of some kind, then every time an NPE is thrown I can run a query to see how many NPEs have occurred since the last summary entry. If I instead count them in memory and the application restarts after 5 have been thrown, I lose the count unless I persist that number.
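The in-memory version of that counting logic can be sketched like this (class and method names are hypothetical, chosen for illustration). Counts are lost on restart, which is exactly the trade-off discussed above.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Tracks occurrences per exception type; every Nth occurrence yields
// one summary line instead of N individual entries.
class OccurrenceCounter {
    private final ConcurrentHashMap<String, AtomicInteger> counts = new ConcurrentHashMap<>();
    private final int every;

    OccurrenceCounter(int every) {
        this.every = every;
    }

    /** Returns a summary line on every Nth occurrence, otherwise null. */
    String record(Throwable t) {
        String key = t.getClass().getName();
        int n = counts.computeIfAbsent(key, k -> new AtomicInteger()).incrementAndGet();
        return (n % every == 0) ? key + " thrown " + n + " times" : null;
    }
}
```

An appender could hold one of these and, whenever record(...) returns non-null, emit the summary through a separate logger (to avoid re-entering itself).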
Logback (log4j's successor) will allow you to enable logging for any event via TurboFilters. For example, assuming the same event occurs N or more times in a given timeframe, you could force the event to be accepted (regardless of its level). See also DuplicateMessageFilter which does the inverse (denying re-occurring events).
However, even logback will not allow the level of a logging event to be incremented, and neither will log4j. Neither framework is designed for this, and I would discourage you from attempting to increment the level on the fly within the same thread. On the other hand, incrementing the level during post-processing is a different matter altogether, as is having your turbo-filter signal another thread to generate a new logging event with a higher level.
It was not clear from your question why you want the level incremented. Is the increment a goal in itself, or a means to an end, namely having the event logged regardless of its level? If the latter, then logback's TurboFilters are the way to go.
HTH,
As Rafe already pointed out, the greatest challenge would be persisting the actual events in the Appender, so that you'll know the time has come to trigger your event (e.g. escalate log level).
Therefore, I propose the following strategy:
Use a custom JDBCAppender. Unlike the one bundled with Log4j, this one can log exceptions.
Set up an embedded database, like HSQLDB, with one table for event logging. This solves the persistence problem, as you can use SQL to find the types of events that occurred.
Run a separate thread that monitors the database, and detects desired event patterns.
Use a LogManager to access desired Loggers and set their level manually.
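The last step can be sketched with java.util.logging for brevity (with log4j you would call org.apache.log4j.LogManager.getLogger(name).setLevel(...) the same way). The logger name and method name here are hypothetical.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

class Escalator {
    // Hypothetical reaction once the monitoring thread detects a pattern:
    // raise the logger's threshold so only SEVERE messages get through.
    static void escalate(String loggerName) {
        Logger.getLogger(loggerName).setLevel(Level.SEVERE);
    }
}
```

The monitoring thread from step 3 would call escalate(...) when its SQL query detects the pattern you care about; lowering the level again later works the same way with setLevel.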