How does the java.util.logging.Handler lifecycle work?

I'm trying to implement a custom handler that logs parsed LogRecord objects to a file (basically what FileHandler or StreamHandler does). My current implementation is shown below:
import static java.util.logging.Level.SEVERE;
import static java.util.logging.Level.WARNING;

import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;
import java.util.logging.Formatter;
import java.util.logging.Handler;
import java.util.logging.LogRecord;

public final class ErrorHandler extends Handler {

    private static final String OUTPUT_FILE = ".output";
    private final Formatter formatter = new CustomFormatter();
    private BufferedWriter writer;

    @Override
    public void publish(LogRecord record) {
        if (record.getLevel() == SEVERE || record.getLevel() == WARNING) {
            writeToOutput(record);
        }
    }

    void writeToOutput(LogRecord log) {
        try {
            if (writer == null) {
                writer = new BufferedWriter(new FileWriter(OUTPUT_FILE, true));
            }
            writer.write(formatter.format(log));
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    @Override
    public void flush() {
        // intentionally left empty -- see the question below
    }

    @Override
    public void close() {
        try {
            writer.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
P.S.: I know that we can achieve the same as the code above just by setting a filter and formatter on a FileHandler or StreamHandler; however, I'll need the hook points later on.
My problem is: if I leave flush() with no implementation, the output file gets created, but no log is written to it. If I call writer.flush() inside flush(), the log is duplicated. Any thoughts on why this might be happening?

OK, after two days fighting against this, I came to realize that the process was running as a daemon; therefore, the handler's close() was only called when the daemon was killed. I believe this was leading to multiple calls to flush() at almost the same time. Running the process without the daemon solved the issue.

My problem is: if I leave flush() with no implementation, the output file gets created, but no log is written to it.
This is because the bytes are cached in the BufferedWriter. Flushing sends those bytes on to the wrapped FileWriter. If you collect enough bytes the buffer will spill to the target file on its own, but you risk losing that information if you have some sort of process crash or disk issue.
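For illustration, a minimal flush() for the handler above (assuming the writer field from the question) would simply delegate to the buffered writer:
@Override
public void flush() {
    try {
        if (writer != null) {
            writer.flush(); // push the buffered bytes down to the FileWriter
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
}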
If I call writer.flush() inside flush(), the log is duplicated. Any thoughts on why this might be happening?
Perhaps you have added two instances of this handler to the logger, and both are appending to the same file. Logger.addHandler works like a List, not like a Set. Add code to print the logger tree, which will tell you how many handler instances are installed.
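A minimal sketch of such a diagnostic, walking every registered logger and printing its handlers (the output format is illustrative):
import java.util.Enumeration;
import java.util.logging.Handler;
import java.util.logging.LogManager;
import java.util.logging.Logger;

public class LoggerTreePrinter {
    public static void main(String[] args) {
        LogManager manager = LogManager.getLogManager();
        Enumeration<String> names = manager.getLoggerNames();
        while (names.hasMoreElements()) {
            String name = names.nextElement();
            Logger logger = manager.getLogger(name);
            if (logger == null) {
                continue; // the logger may have been garbage-collected
            }
            for (Handler h : logger.getHandlers()) {
                System.out.println("'" + name + "' -> " + h);
            }
        }
    }
}
If ErrorHandler shows up twice for the same logger, that would explain the duplicated output.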
I'm sure there is no process crash or disk issue, and I believe that close calls flush. Yet I don't see why nothing is being logged, and it happens only when the file has not yet been created.
Close is only implicitly called when the Java virtual machine shuts down and the handler is visible from the LogManager. If the shutdown is not clean, as described in the documentation, then the contents of the buffered writer are not flushed.
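If you cannot guarantee a clean shutdown, one defensive option is to flush from a shutdown hook yourself. A sketch, assuming you hold a reference (here errorHandler) to the installed handler instance:
Runtime.getRuntime().addShutdownHook(new Thread(new Runnable() {
    public void run() {
        errorHandler.flush(); // best-effort flush before the JVM exits
    }
}));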

Related

flush System.out with own logger

I created my own little logger to use instead of System.out.println
LogManager.getLogManager().reset();
logger = Logger.getLogger(this.toString());
logger.setLevel(loglevel);
Formatter formatter = new Formatter() {
    public String format(LogRecord record) {
        return new Date().toString().substring(11, 20) + record.getLevel() + " "
                + formatMessage(record) + System.getProperty("line.separator");
    }
};
logger.addHandler(new StreamHandler(System.out, formatter));
LogManager.getLogManager().addLogger(logger);
At the moment the messages don't get flushed, so they only appear once the application terminates. Is there a way to flush a message after printing it, without creating a new class or adding many lines of code? I want to keep this code as short as possible...
The problem is that StreamHandler.setOutputStream wraps the given stream in an OutputStreamWriter, which according to the javadocs:
The resulting bytes are accumulated in a buffer before being written to the underlying output stream. The size of this buffer may be specified, but by default it is large enough for most purposes.
So there is no way around calling StreamHandler.flush to force those bytes to the console.
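In the simplest case, keeping a reference to the handler lets you flush on demand, e.g.:
StreamHandler handler = new StreamHandler(System.out, formatter);
logger.addHandler(handler);
logger.info("hello");
handler.flush(); // force the buffered bytes to the console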
Since you don't want to create a new class, you can use the ConsoleHandler, which does flush to the console but by default writes to the error stream. You can get around that, provided you haven't started any other threads, by doing the following:
// Global resource, so ensure a single thread.
final PrintStream err = System.err;
System.setErr(System.out);
try {
    ConsoleHandler ch = new ConsoleHandler();
    ch.setFormatter(formatter);
    logger.addHandler(ch);
} finally {
    System.setErr(err);
}
A subclass really is your best bet, because you can end up accidentally closing System.out by calling Handler.close() through LogManager.getLogManager().reset(), or by calling StreamHandler.setOutputStream, which means you won't see any output at all.
public class OutConsoleHandler extends StreamHandler {

    public OutConsoleHandler(OutputStream out, Formatter f) {
        super(out, f);
    }

    @Override
    public synchronized void publish(LogRecord record) {
        super.publish(record);
        flush(); // flush after every record so output appears immediately
    }

    @Override
    public void close() throws SecurityException {
        flush(); // flush but don't close, so System.out stays usable
    }
}
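Usage is then a one-line change from the code in the question:
logger.addHandler(new OutConsoleHandler(System.out, formatter));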

Retrieve contents of all stack traces being printed to the console?

I want to individually log every unique error I have, as searching through a dozen log files, each 10k+ lines in length, is time-wasting and tedious.
I catch all exceptions I possibly can, but oftentimes other threads or libraries will shoot off their own errors without any way to process them myself.
Is there any workaround for this?
(e.g. an event for when printStackTrace() is called.)
Is there any workaround for this?
(e.g. an event for when printStackTrace() is called.)
Remap System.err to intercept throwables. If you look at the source code for Throwable.printStackTrace() you'll see that it indirectly calls System.err.println(this);
For example:
import java.io.PrintStream;

public class SpyPrintStream extends PrintStream {

    public static void main(String[] args) {
        System.setErr(new SpyPrintStream(System.err));
        System.setOut(new SpyPrintStream(System.out));
        new Exception().printStackTrace();
    }

    public SpyPrintStream(PrintStream src) {
        super(src);
    }

    @Override
    public void println(Object x) {
        if (x instanceof Throwable) {
            super.println("Our spies detected " + x.getClass().getName());
        }
        super.println(x);
    }
}
Keep in mind there are all kinds of issues with using this code, and it is not going to work in cases where printStackTrace is called with a stream that is not a standard stream.
You could always do a deep dive into java.lang.instrument if you really want to trap all exceptions.
I catch all exceptions I possibly can, but oftentimes other threads or libraries will shoot off their own errors without any way to process them myself.
Most libraries either throw exceptions back to the caller or use a logging framework. Capture the exception or configure the logging framework.
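For exceptions that escape other threads entirely (rather than being printed by library code), a different standard JDK hook, not covered by the System.err remapping above, is Thread.setDefaultUncaughtExceptionHandler. A minimal sketch:
Thread.setDefaultUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler() {
    public void uncaughtException(Thread t, Throwable e) {
        // Route the uncaught exception into your own logging instead of System.err.
        System.err.println("Uncaught in " + t.getName() + ": " + e);
    }
});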
I want to individually log every unique error I have, as searching through a dozen log files, each 10k+ lines in length, is time-wasting and tedious.
Logging frameworks include options to deal with this; logback's DuplicateMessageFilter is an example.
Food for thought:
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

public class DemoClass {

    private Map<String, Exception> myExceptions = new HashMap<>();

    public void demoMethod() {
        try {
            // throwing an exception for illustration
            throw new IOException("some message");
        } catch (IOException e) {
            myExceptions.putIfAbsent(e.getLocalizedMessage(), e);
            // actually handle the exception
            // ...
        }
    }

    public void finished() {
        for (Exception e : myExceptions.values()) {
            e.printStackTrace();
        }
    }
}
You could store any exception you haven't seen yet. If your specific scenario allows for a better way to ensure you save an exception only once, you should prefer that over mapping by Exception.getLocalizedMessage().

Single program instance

I need to make a program that can only run as a single instance. I tried creating a temporary file and deleting it before exiting the program.
public static boolean isLocked() {
    File f = new File("lock.txt");
    return f.exists();
}

public static void lock() {
    String fname = "lock.txt";
    File f = new File(fname);
    try {
        f.createNewFile();
    } catch (IOException e) {
        e.printStackTrace();
    }
}

public static void unlock() {
    File f = new File("lock.txt");
    f.delete();
}
In the frame:
private void initialize() {
    lock();
}

private void setFrameHandler() {
    frame.addWindowListener(new java.awt.event.WindowAdapter() {
        @Override
        public void windowClosing(java.awt.event.WindowEvent windowEvent) {
            unlock();
        }
    });
}
The problem occurs if the program terminates abnormally (e.g. a power cut). The file is not removed, and running a new instance becomes impossible.
How can I make a reliable single-instance check?
You could check for another instance of the program at startup using the GetProcesses method as described here,
but that only works depending on the scenario you have (it might not see all processes of other users).
Another thing you could do is simply check whether a specific file is locked, via File.Open:
File.Open("path.lock", FileMode.OpenOrCreate, FileAccess.ReadWrite);
As long as you keep the resulting FileStream open in your program, no other program can open the file in that mode either. This is basically how Unix lock files work, too. Of course you have to catch an IOException (hinting at a locked file).
Disclaimer: I did not try that code out so please check if I gave you the right parameters.
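The snippet above is the .NET API; in Java the same idea can be sketched with java.nio.channels.FileLock (the lock-file name is illustrative):
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;

public class SingleInstance {
    public static void main(String[] args) throws IOException {
        // Keep these references alive for the lifetime of the program.
        RandomAccessFile file = new RandomAccessFile("app.lock", "rw");
        FileChannel channel = file.getChannel();
        FileLock lock = channel.tryLock();
        if (lock == null) {
            System.err.println("Another instance is already running.");
            System.exit(1);
        }
        // The OS releases the lock automatically when the process dies,
        // so a power cut cannot leave a stale lock behind.
    }
}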
Edit: You could also check out this Code Project article on how to do it with the Win32 API.
Another attempt, using Windows messaging, has been done here.
A simple approach to this on a single machine is to write a 'PID file', which is literally a file containing the operating system's ID of the process currently running. You create this when you start your "critical" work, and remove it on successful completion.
Since it is unlikely that the process would be started again with the same PID, you can simply check whether the PID file already exists, and if so, whether that process is still running.
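A minimal sketch of the PID-file idea, assuming Java 9+ for ProcessHandle (the file name is illustrative):
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class PidFile {
    private static final Path PID_FILE = Paths.get("app.pid");

    public static boolean lock() throws IOException {
        if (Files.exists(PID_FILE)) {
            long oldPid = Long.parseLong(new String(Files.readAllBytes(PID_FILE)).trim());
            // If the recorded process is still alive, another instance is running.
            if (ProcessHandle.of(oldPid).map(ProcessHandle::isAlive).orElse(false)) {
                return false;
            }
            // Otherwise the PID file is stale and can be overwritten.
        }
        Files.write(PID_FILE, Long.toString(ProcessHandle.current().pid()).getBytes());
        return true;
    }

    public static void unlock() throws IOException {
        Files.deleteIfExists(PID_FILE);
    }
}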

Workaround for Java bug which causes crash dump

A program that I've developed is crashing the JVM occasionally due to this bug: http://bugs.java.com/bugdatabase/view_bug.do?bug_id=8029516. Unfortunately the bug has not been resolved by Oracle and the bug report says that there are no known workarounds.
I've tried to modify the example code from the bug report by calling .register(sWatchService, eventKinds) in the KeyWatcher thread instead, adding all pending register requests to a list that I loop through in the KeyWatcher thread, but it's still crashing. I'm guessing this just had the same effect as synchronizing on sWatchService (like the submitter of the bug report tried).
Can you think of any way to get around this?
From comments:
It appears that we have an issue with I/O cancellation when there is a pending ReadDirectoryChangesW outstanding.
The statement and example code indicate that the bug is triggered when:
There is a pending event that has not been consumed (it may or may not be visible to WatchService.poll() or WatchService.take())
WatchKey.cancel() is called on the key
This is a nasty bug with no universal workaround. The approach depends on the specifics of your application. Consider pooling watches in a single place so you don't need to call WatchKey.cancel(). If at some point the pool becomes too large, close the entire WatchService and start over. Something similar to:
import java.io.IOException;
import java.nio.file.ClosedWatchServiceException;
import java.nio.file.FileSystems;
import java.nio.file.Paths;
import java.nio.file.StandardWatchEventKinds;
import java.nio.file.WatchEvent.Kind;
import java.nio.file.WatchKey;
import java.nio.file.WatchService;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class FileWatcherService {

    static Kind<?>[] allEvents = new Kind<?>[] {
        StandardWatchEventKinds.ENTRY_CREATE,
        StandardWatchEventKinds.ENTRY_DELETE,
        StandardWatchEventKinds.ENTRY_MODIFY
    };

    WatchService ws;
    // Keep track of paths and registered listeners
    Map<String, List<FileChangeListener>> listeners = new ConcurrentHashMap<String, List<FileChangeListener>>();
    Map<WatchKey, String> keys = new ConcurrentHashMap<WatchKey, String>();
    volatile boolean toStop = false;

    public interface FileChangeListener {
        void onChange();
    }

    public void addFileChangeListener(String path, FileChangeListener l) throws IOException {
        if (!listeners.containsKey(path)) {
            listeners.put(path, new ArrayList<FileChangeListener>());
            keys.put(Paths.get(path).register(ws, allEvents), path);
        }
        listeners.get(path).add(l);
    }

    public void removeFileChangeListener(String path, FileChangeListener l) {
        if (listeners.containsKey(path))
            listeners.get(path).remove(l);
    }

    public void start() throws IOException {
        ws = FileSystems.getDefault().newWatchService();
        new Thread(new Runnable() {
            public void run() {
                while (!toStop) {
                    try {
                        WatchKey key = ws.take();
                        for (FileChangeListener l : listeners.get(keys.get(key)))
                            l.onChange();
                        key.reset(); // re-arm the key so further events are delivered
                    } catch (InterruptedException | ClosedWatchServiceException e) {
                        return; // stopped
                    }
                }
            }
        }).start();
    }

    public void stop() {
        toStop = true;
        try {
            ws.close();
        } catch (IOException e) {
            // ignored on shutdown
        }
    }
}
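Hypothetical usage (exception handling omitted; the path is illustrative):
FileWatcherService watcher = new FileWatcherService();
watcher.start();
watcher.addFileChangeListener("/some/dir", new FileWatcherService.FileChangeListener() {
    public void onChange() {
        System.out.println("Directory content changed");
    }
});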
I've managed to create a workaround though it's somewhat ugly.
The bug is in the JDK method WindowsWatchKey.invalidate(), which releases the native buffer while subsequent calls may still access it. This one-liner fixes the problem by delaying buffer clean-up until GC.
Here is a compiled patch to the JDK. To apply it, add the following Java command-line flag:
-Xbootclasspath/p:jdk-8029516-patch.jar
If patching the JDK is not an option in your case, there is still a workaround at the application level. It relies on knowledge of the Windows WatchService internal implementation.
import java.lang.reflect.Field;
import java.nio.file.WatchKey;

import sun.misc.Cleaner;

public class JDK_8029516 {

    private static final Field bufferField = getField("sun.nio.fs.WindowsWatchService$WindowsWatchKey", "buffer");
    private static final Field cleanerField = getField("sun.nio.fs.NativeBuffer", "cleaner");
    private static final Cleaner dummyCleaner = Cleaner.create(Thread.class, new Thread());

    private static Field getField(String className, String fieldName) {
        try {
            Field f = Class.forName(className).getDeclaredField(fieldName);
            f.setAccessible(true);
            return f;
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    public static void patch(WatchKey key) {
        try {
            // Replace the native buffer's cleaner so cancel() cannot free it early.
            cleanerField.set(bufferField.get(key), dummyCleaner);
        } catch (IllegalAccessException e) {
            throw new IllegalStateException(e);
        }
    }
}
Call JDK_8029516.patch(watchKey) right after the key is registered, and it will prevent watchKey.cancel() from releasing the native buffer prematurely.
You might not be able to work around the problem itself, but you could deal with the error and handle it. I don't know your specific situation, but I imagine the biggest issue is the crash of the whole JVM. Putting everything in a try block does not work, because you cannot catch a JVM crash.
Not knowing more about your project makes it difficult to suggest a good/acceptable solution, but maybe this could be an option: do all the file-watching work in a separate JVM process. From your main process, start a new JVM (e.g. using ProcessBuilder.start()). When the process terminates (i.e. the newly started JVM crashes), restart it. Obviously you need to be able to recover, i.e. you need to keep track of which files to watch, and you need to keep this data in your main process too.
Now the biggest remaining part is to implement some communication between the main process and the file-watching process. This could be done using the standard input/output of the file-watching process, a Socket/ServerSocket, or some other mechanism.
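A minimal sketch of the supervising side, restarting the watcher JVM whenever it exits ("watcher.jar" is a hypothetical jar containing the WatchService code):
import java.io.IOException;

public class WatcherSupervisor {
    public static void main(String[] args) throws IOException, InterruptedException {
        while (true) {
            Process p = new ProcessBuilder("java", "-jar", "watcher.jar")
                    .inheritIO() // share stdin/stdout with the child for simple communication
                    .start();
            int exit = p.waitFor();
            System.err.println("Watcher exited with code " + exit + ", restarting...");
        }
    }
}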

How do you flush a buffered log4j FileAppender?

In log4j, when using a FileAppender with the BufferedIO=true and BufferSize=xxx properties (i.e. buffering is enabled), I want to be able to flush the log during the normal shutdown procedure. Any ideas on how to do this?
When shutting down the LogManager:
LogManager.shutdown();
all buffered logs get flushed.
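To make this automatic, a JVM shutdown hook can call it (a minimal sketch, assuming log4j 1.x):
Runtime.getRuntime().addShutdownHook(new Thread() {
    @Override
    public void run() {
        org.apache.log4j.LogManager.shutdown(); // flushes and closes all appenders
    }
});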
public static void flushAllLogs()
{
    try
    {
        Set<FileAppender> flushedFileAppenders = new HashSet<FileAppender>();
        Enumeration currentLoggers = LogManager.getLoggerRepository().getCurrentLoggers();
        while (currentLoggers.hasMoreElements())
        {
            Object nextLogger = currentLoggers.nextElement();
            if (nextLogger instanceof Logger)
            {
                Logger currentLogger = (Logger) nextLogger;
                Enumeration allAppenders = currentLogger.getAllAppenders();
                while (allAppenders.hasMoreElements())
                {
                    Object nextElement = allAppenders.nextElement();
                    if (nextElement instanceof FileAppender)
                    {
                        FileAppender fileAppender = (FileAppender) nextElement;
                        if (!flushedFileAppenders.contains(fileAppender) && !fileAppender.getImmediateFlush())
                        {
                            flushedFileAppenders.add(fileAppender);
                            //log.info("Appender "+fileAppender.getName()+" is not doing immediateFlush ");
                            // Toggle immediateFlush around a dummy log entry to force a flush.
                            fileAppender.setImmediateFlush(true);
                            currentLogger.info("FLUSH");
                            fileAppender.setImmediateFlush(false);
                        }
                        else
                        {
                            //log.info("fileAppender"+fileAppender.getName()+" is doing immediateFlush");
                        }
                    }
                }
            }
        }
    }
    catch (RuntimeException e)
    {
        log.error("Failed flushing logs", e);
    }
}
With Log4j 2, the same can be done by iterating over the appenders and flushing every AbstractOutputStreamAppender:
public static void flushAll() {
    final LoggerContext logCtx = (LoggerContext) LogManager.getContext();
    for (final org.apache.logging.log4j.core.Logger logger : logCtx.getLoggers()) {
        for (final Appender appender : logger.getAppenders().values()) {
            if (appender instanceof AbstractOutputStreamAppender) {
                ((AbstractOutputStreamAppender<?>) appender).getManager().flush();
            }
        }
    }
}
Maybe you could override WriterAppender#shouldFlush(LoggingEvent) so that it returns true for a special logging category, like log4j.flush.now, and then call:
LoggerFactory.getLogger("log4j.flush.now").info("Flush");
http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/WriterAppender.html#shouldFlush%28org.apache.log4j.spi.LoggingEvent%29
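A sketch of such an override (the class name is hypothetical):
import org.apache.log4j.FileAppender;
import org.apache.log4j.spi.LoggingEvent;

public class FlushingFileAppender extends FileAppender {
    @Override
    protected boolean shouldFlush(LoggingEvent event) {
        // Flush on the special category in addition to the normal rules.
        return "log4j.flush.now".equals(event.getLoggerName())
                || super.shouldFlush(event);
    }
}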
Sharing my experience with using "Andrey Kurilov"'s code example, or at least a similar one.
What I actually wanted to achieve was to implement asynchronous log writing with manual flushing (immediateFlush = false), to ensure that an idle buffer's content is flushed before the bufferSize is reached.
The initial performance results were actually comparable with the ones achieved with the AsyncAppender, so I think it is a good alternative to it.
The AsyncAppender uses a separate thread (and an additional dependency on the disruptor jar), which makes it more performant, but at the cost of more CPU and even more disk flushing (even under high load, flushes are made in batches).
So if you want to save disk I/O operations and CPU load, but still want to ensure your buffers will be flushed asynchronously at some point, that is the way to go.
Try:
LogFactory.releaseAll();
I have written an appender that fixes this; see GitHub or use name.wramner.log4j:FlushAppender in Maven. It can be configured to flush on events with high severity, and it can make the appenders unbuffered when it receives a specific message, for example "Shutting down". Check the unit tests for configuration examples. It is free, of course.
The only solution that worked for me is waiting for a while:
private void flushAppender(Appender appender) {
    // this flush seems to be useless
    ((AbstractOutputStreamAppender<?>) appender).getManager().flush();
    try {
        Thread.sleep(500); // wait for log4j to flush the logs
    } catch (InterruptedException ignore) {
    }
}
