How do you flush a buffered log4j FileAppender? - java

In log4j, when using a FileAppender with BufferedIO=true and BufferSize=xxx properties (i.e. buffering is enabled), I want to be able to flush the log during normal shutdown procedure. Any ideas on how to do this?

When shutting down the LogManager, all buffered logs get flushed:
LogManager.shutdown();
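If the application cannot rely on reaching that call on every exit path, a minimal sketch (assuming log4j 1.x's org.apache.log4j.LogManager) is to register a JVM shutdown hook:

// Sketch: flush and close all appenders when the JVM exits normally.
Runtime.getRuntime().addShutdownHook(new Thread(new Runnable() {
    public void run() {
        org.apache.log4j.LogManager.shutdown(); // closes appenders, flushing buffered output
    }
}));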

Another option for log4j 1.x is to force a flush on every buffered FileAppender manually:

public static void flushAllLogs()
{
    try
    {
        Set<FileAppender> flushedFileAppenders = new HashSet<FileAppender>();
        Enumeration currentLoggers = LogManager.getLoggerRepository().getCurrentLoggers();
        while(currentLoggers.hasMoreElements())
        {
            Object nextLogger = currentLoggers.nextElement();
            if(nextLogger instanceof Logger)
            {
                Logger currentLogger = (Logger) nextLogger;
                Enumeration allAppenders = currentLogger.getAllAppenders();
                while(allAppenders.hasMoreElements())
                {
                    Object nextElement = allAppenders.nextElement();
                    if(nextElement instanceof FileAppender)
                    {
                        FileAppender fileAppender = (FileAppender) nextElement;
                        if(!flushedFileAppenders.contains(fileAppender) && !fileAppender.getImmediateFlush())
                        {
                            flushedFileAppenders.add(fileAppender);
                            // Temporarily enable immediateFlush so the next event forces the buffer to disk.
                            fileAppender.setImmediateFlush(true);
                            currentLogger.info("FLUSH");
                            fileAppender.setImmediateFlush(false);
                        }
                    }
                }
            }
        }
    }
    catch(RuntimeException e)
    {
        log.error("Failed flushing logs", e);
    }
}

For Log4j 2, the equivalent is to flush the OutputStreamManager behind every AbstractOutputStreamAppender:

public static void flushAll() {
    final LoggerContext logCtx = (LoggerContext) LogManager.getContext();
    for (final org.apache.logging.log4j.core.Logger logger : logCtx.getLoggers()) {
        for (final Appender appender : logger.getAppenders().values()) {
            if (appender instanceof AbstractOutputStreamAppender) {
                ((AbstractOutputStreamAppender<?>) appender).getManager().flush();
            }
        }
    }
}

Maybe you could override WriterAppender#shouldFlush(LoggingEvent) so that it returns true for a special logging category, like log4j.flush.now, and then call:
LoggerFactory.getLogger("log4j.flush.now").info("Flush");
http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/WriterAppender.html#shouldFlush%28org.apache.log4j.spi.LoggingEvent%29
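A minimal sketch of that idea, assuming a log4j 1.x FileAppender subclass (the class name is made up):

import org.apache.log4j.FileAppender;
import org.apache.log4j.spi.LoggingEvent;

public class FlushOnDemandFileAppender extends FileAppender {
    @Override
    protected boolean shouldFlush(LoggingEvent event) {
        // Flush immediately for the dedicated flush category; otherwise keep the configured buffering behaviour.
        return "log4j.flush.now".equals(event.getLoggerName()) || super.shouldFlush(event);
    }
}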

Sharing my experience with using "Andrey Kurilov"'s code example, or at least something similar.
What I actually wanted to achieve was asynchronous log entries with manual flush (immediateFlush = false), to ensure that an idle buffer's content is flushed before the bufferSize is reached.
The initial performance results were actually comparable with those achieved with the AsyncAppender, so I think it is a good alternative to it.
The AsyncAppender uses a separate thread (and an additional dependency on the disruptor jar), which makes it more performant, but at the cost of more CPU and even more disk flushing (even under high load, flushes are made in batches).
So if you want to save disk IO operations and CPU load, but still want to ensure your buffers will be flushed asynchronously at some point, that is the way to go.
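For illustration, a minimal sketch of how such a periodic flush could be wired up, reusing the flushAllLogs() helper shown above (the 30-second interval is an arbitrary assumption):

// Sketch: flush idle buffers periodically so entries do not sit unwritten until bufferSize is reached.
static void startPeriodicFlush() {
    ScheduledExecutorService flusher = Executors.newSingleThreadScheduledExecutor();
    flusher.scheduleWithFixedDelay(new Runnable() {
        public void run() {
            flushAllLogs(); // helper from the earlier answer
        }
    }, 30, 30, TimeUnit.SECONDS);
}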

Try:
LogFactory.releaseAll();

I have written an appender that fixes this, see GitHub or use name.wramner.log4j:FlushAppender in Maven. It can be configured to flush on events with high severity and it can make the appenders unbuffered when it receives a specific message, for example "Shutting down". Check the unit tests for configuration examples. It is free, of course.

The only solution that worked for me is waiting for a while:
private void flushAppender(Appender appender) {
    // this flush seems to be useless
    ((AbstractOutputStreamAppender<?>) appender).getManager().flush();
    try {
        Thread.sleep(500); // wait for log4j to flush logs
    } catch (InterruptedException ignore) {
    }
}

Related

How does the java.util.logging.Handler lifecycle work?

I'm trying to implement a custom handler that logs parsed LogRecord objects into a file (basically what FileHandler or StreamHandler does). My current implementation is shown below:
public final class ErrorHandler extends Handler {
    private static final String OUTPUT_FILE = ".output";
    private final Formatter formatter = new CustomFormatter();
    private BufferedWriter writter;

    @Override
    public void publish(LogRecord record) {
        if (record.getLevel() == SEVERE || record.getLevel() == WARNING) {
            writeToOutput(record);
        }
    }

    void writeToOutput(LogRecord log) {
        try {
            if (writter == null) {
                writter = new BufferedWriter(new FileWriter(OUTPUT_FILE, true));
            }
            writter.write(formatter.format(log));
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    @Override
    public void flush() {
    }

    @Override
    public void close() {
        try {
            writter.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
P.S.: I know that we can achieve the same as the code above just by setting a filter and formatter on a FileHandler or StreamHandler, but I'll need the hook points later in the future.
My problem is: if I leave flush() with no implementation, although the output file gets created, no log is written there. If I call writter.flush() inside flush(), the log is duplicated. Any thoughts on why this might be happening?
Ok, after two days fighting against this I came to realize that the process was running as a daemon, therefore the handler's close() was only called when the daemon was killed. I believe that this was leading to multiple calls to flush() at almost the same time. Running the process without the daemon solved the issue.
My problem is: if I leave flush() with no implementation, although the output file gets created, no log is written there.
This is because the bytes are cached in the BufferedWriter. Flush sends those bytes to the wrapped FileWriter. If you collect enough bytes it will flush to the target file, but you risk losing that information if you have some sort of process crash or disk issue.
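A hedged sketch of what flush() could look like instead of being left empty, using the same writter field as the handler above:

@Override
public void flush() {
    try {
        if (writter != null) {
            writter.flush(); // push the buffered bytes down to the underlying FileWriter
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
}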
If I call writter.flush() inside flush(), the log is duplicated. Any thoughts on why this might be happening?
Perhaps you have added two instances of this handler to the logger and both are appending to the same file. Logger.addHandler works like a List and not like a Set. Add code to print the logger tree, which will tell you how many handler instances are installed.
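A minimal sketch of such a dump, assuming plain java.util.logging (the class name is made up; fully qualified names are used to avoid clashing with the log4j classes elsewhere in this page):

public class LoggerTreeDump {
    public static void dump() {
        java.util.Enumeration<String> names = java.util.logging.LogManager.getLogManager().getLoggerNames();
        while (names.hasMoreElements()) {
            java.util.logging.Logger logger = java.util.logging.LogManager.getLogManager().getLogger(names.nextElement());
            if (logger == null) {
                continue; // the logger may already have been garbage collected
            }
            for (java.util.logging.Handler h : logger.getHandlers()) {
                System.out.println("'" + logger.getName() + "' -> " + h);
            }
        }
    }
}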
I'm sure I have no process crash nor disk issue, and I believe that close calls flush. Yet I don't see why nothing is being logged - and it happens only when the file has not been created yet.
Close is only implicitly called when the Java virtual machine shuts down and the handler is visible from the LogManager. If the shutdown is not clean, as described in the documentation, then the contents of the buffered writer are not flushed.

Java file logger being hijacked and redirected into a file belonging to another logger (from Gigaspaces API) after creation

I'm seeing a rather odd issue.
I created some standard Java loggers (using Logger.getLogger(), a FileHandler, and a SimpleFormatter).
Those work fine, and output the log file as expected.
Then, I used some classes from the Gigaspaces API (com.gigaspaces.gs-openspaces - included via a Maven dependency), which includes its own logging.
After that, all of the output of my loggers ended up inside the Gigaspaces log file (e.g. ~/.m2/repository/com/gigaspaces/logs/2017-03-27~12.46-gigaspaces-service-135.60.146.142-23534.log) instead of in the appropriate log files that they are supposed to be using.
If I then create more loggers after I've initialised Gigaspaces, these new loggers work as expected. Only loggers created before initialising gigaspaces are affected.
I tried poking around in the code for Gigaspaces a little bit; there's a lot of code in there, and I didn't see anything immediately obvious.
Am I doing something wrong with setting up my loggers? It doesn't seem right that a library can steal the output from pre-existing loggers that are unrelated to its classes.
The below short test program demonstrates the problem:
Logger testLog = Logger.getLogger("testlog");
try {
    FileHandler fh = new FileHandler("testlog.log");
    fh.setFormatter(new SimpleFormatter());
    testLog.addHandler(fh);
}
catch (Exception e) {
    // Not important
    e.printStackTrace();
}
testLog.log(Level.INFO, "This appears in the main log file");

// Spin up gigaspaces, even by trying to connect to a space that doesn't exist
UrlSpaceConfigurer testSpaceConfigurer = new UrlSpaceConfigurer("jini://*/*/testSpace?locators=127.0.01").lookupTimeout(1);
try {
    GigaSpace g = new GigaSpaceConfigurer(testSpaceConfigurer).gigaSpace();
}
catch (Exception e) {
    // This will throw an exception, just ignore it.
}
testSpaceConfigurer.close();

testLog.log(Level.INFO, "This appears in the (wrong) gigaspaces log file");
You have to pin the "testlog" logger or you risk losing all of the changes you apply to it.
Modifying loggers requires permissions. One option might be to use a custom security manager that doesn't let GigaSpaces redirect your logging.
If GigaSpaces is calling LogManager.reset(), then one hacky, smelly, dirty way to get around removing your handler is to extend FileHandler and override equals.
public class GigaSpaces {
    // Pin the logger so it cannot be garbage collected and re-created without our handler.
    private static final Logger testLog = Logger.getLogger("testlog");

    static {
        try {
            FileHandler fh = new FileHandler("testlog.log") {
                public boolean equals(Object o) {
                    return false; // Pure Evil.
                }
            };
            fh.setFormatter(new SimpleFormatter());
            testLog.addHandler(fh);
        }
        catch (Exception e) {
            // Not important
            e.printStackTrace();
        }
    }

    public void foo() throws Throwable {
        testLog.log(Level.INFO, "This appears in the main log file");
        // Spin up gigaspaces, even by trying to connect to a space that doesn't exist
        UrlSpaceConfigurer testSpaceConfigurer = new UrlSpaceConfigurer("jini://*/*/testSpace?locators=127.0.01").lookupTimeout(1);
        try {
            GigaSpace g = new GigaSpaceConfigurer(testSpaceConfigurer).gigaSpace();
        }
        catch (Exception e) {
            // This will throw an exception, just ignore it.
        } finally {
            testSpaceConfigurer.close();
        }
        testLog.log(Level.INFO, "This appears in the (wrong) gigaspaces log file");
    }
}
Overriding the security manager, as suggested by jmehrens, seems to be the way to go.
I was able to stop Gigaspaces from stealing the logging by denying it permission to run the reset() methods on the LogManager, thusly:
// Silly hack to keep gigaspaces from STEALING ALL OUR LOGS
static {
    System.setSecurityManager(new SecurityManager() {
        @Override
        public void checkPermission(Permission p) {
            if (p instanceof LoggingPermission) {
                for (StackTraceElement stackTraceElement : new Exception().getStackTrace()) {
                    if (stackTraceElement.getMethodName().equalsIgnoreCase("reset")
                            && stackTraceElement.getClassName().equalsIgnoreCase("java.util.logging.LogManager")) {
                        throw new SecurityException("No resetting the logger! It is forbidden.");
                    }
                }
            }
        }
    });
}
In this case, I just added the override in a static block of the class that creates my gigaspace instances, but anywhere before initialising them should do.

Retrieve contents of all stack traces being printed to the console?

I want to individually log every unique error I have, as searching through a dozen log files, each 10k+ lines long, is time-wasting and tedious.
I catch all exceptions I possibly can, but oftentimes other threads or libraries will shoot off their own errors without any way to process them myself.
Is there any workaround for this?
(e.g. an event for when printStackTrace() is called.)
Is there any workaround for this?
(e.g. an event for when printStackTrace() is called.)
Remap System.err to intercept throwables. If you look at the source code for Throwable.printStackTrace() you'll see that it indirectly calls System.err.println(this);
For example:
import java.io.PrintStream;
public class SpyPrintStream extends PrintStream {

    public static void main(String[] args) {
        System.setErr(new SpyPrintStream(System.err));
        System.setOut(new SpyPrintStream(System.out));
        new Exception().printStackTrace();
    }

    public SpyPrintStream(PrintStream src) {
        super(src);
    }

    @Override
    public void println(Object x) {
        if (x instanceof Throwable) {
            super.println("Our spies detected " + x.getClass().getName());
        }
        super.println(x);
    }
}
Keep in mind there are all kinds of issues with using this code, and it is not going to work in cases where printStackTrace is called with a stream that is not a standard stream.
You could always do a deep dive into java.lang.instrument if you really want to trap all exceptions.
I catch all exceptions I possibly can, but oftentimes other threads or libraries will shoot off their own errors without any way to process them myself.
Most libraries either throw exceptions back to the caller or use a logging framework. Capture the exception or configure the logging framework.
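For exceptions that escape other threads entirely rather than going through a logger, one standard hook is Thread.setDefaultUncaughtExceptionHandler; a minimal sketch:

// Sketch: capture exceptions that escape any thread that has no explicit handler.
Thread.setDefaultUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler() {
    @Override
    public void uncaughtException(Thread t, Throwable e) {
        System.err.println("Uncaught exception in thread " + t.getName());
        e.printStackTrace(); // or hand the throwable to your logging framework
    }
});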
I want to individually log every unique error I have, as searching through a dozen log files, each 10k+ lines long, is time-wasting and tedious.
Logging frameworks include options to deal with this. DuplicateMessageFilter is an example.
Food for thought:
public class DemoClass {
    private Map<String, Exception> myExceptions = new HashMap<>();

    public void demoMethod() {
        try {
            // throwing an exception for illustration
            throw new IOException("some message");
        } catch (IOException e) {
            myExceptions.putIfAbsent(e.getLocalizedMessage(), e);
            // actually handle the exception
            // ...
        }
    }

    public void finished() {
        for (Exception e : myExceptions.values()) {
            e.printStackTrace();
        }
    }
}
You could store any exception you haven't seen yet. If your specific scenario allows for a better way to ensure you only save each exception once, prefer that over keying on Exception.getLocalizedMessage().

Lock or wait cache load

We need to lock a method responsible for loading database data into a HashMap-based cache.
A possible situation is that a second thread tries to access the method while the first method is still loading cache.
We consider the second thread's effort in this case to be superfluous. We would therefore like to have that second thread wait until the first thread is finished, and then return (without loading the cache again).
What I have works, but it seems quite inelegant. Are there better solutions?
private static final ReentrantLock cacheLock = new ReentrantLock();

private void loadCachemap() {
    if (cacheLock.tryLock()) {
        try {
            this.cachemap = retrieveParamCacheMap();
        } finally {
            cacheLock.unlock();
        }
    } else {
        try {
            cacheLock.lock(); // wait until thread doing the load is finished
        } finally {
            try {
                cacheLock.unlock();
            } catch (IllegalMonitorStateException e) {
                logger.error("loadCachemap() finally {}", e);
            }
        }
    }
}
I prefer a more resilient approach using read locks AND write locks. Something like:
private static final ReadWriteLock cacheLock = new ReentrantReadWriteLock();
private static final Lock cacheReadLock = cacheLock.readLock();
private static final Lock cacheWriteLock = cacheLock.writeLock();

private void loadCache() throws Exception {
    // Expiry.
    while (storeCache.expired(CachePill)) {
        /**
         * Allow only one in - all others will wait for 5 seconds before checking again.
         *
         * Eventually the one that got in will finish loading, refresh the Cache pill and let all the waiting ones out.
         *
         * Also waits until all read locks have been released - not sure if that might cause problems under busy conditions.
         */
        if (cacheWriteLock.tryLock(5, TimeUnit.SECONDS)) {
            try {
                // Got a lock! Start the rebuild if still out of date.
                if (storeCache.expired(CachePill)) {
                    rebuildCache();
                }
            } finally {
                cacheWriteLock.unlock();
            }
        }
    }
}
Note that the storeCache.expired(CachePill) detects a stale cache which may be more than you are wanting but the concept here is the same, establish a write lock before updating the cache which will deny all read attempts until the rebuild is done. Also, manage multiple attempts at write in a loop of some sort or just drop out and let the read lock wait for access.
A read from the cache now looks like this:
public Object load(String storeId) throws Exception {
    Store store = null;
    // Make sure cache is fresh.
    loadCache();
    try {
        // Establish a read lock so we do not attempt a read while the cache is being updated.
        cacheReadLock.lock();
        store = storeCache.get(storeId);
    } finally {
        // Make sure the lock is cleared.
        cacheReadLock.unlock();
    }
    return store;
}
The primary benefit of this form is that read access does not block other read access but everything stops cleanly during a rebuild - even other rebuilds.
You didn't say how complicated your structure is and how much concurrency / congestion you need. There are many ways to address your need.
If your data is simple, use a ConcurrentHashMap or similar to hold your data and let the threads read and write it directly (a sketch follows below).
Another alternative is to use the actor model and put reads and writes on the same queue.
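A minimal sketch of the ConcurrentHashMap route, assuming the cache can be loaded per key (retrieveParam is a hypothetical loader):

// Sketch: ConcurrentHashMap does the locking; only one thread computes a missing value,
// and other threads asking for the same key block until it is available.
private final ConcurrentHashMap<String, Object> cache = new ConcurrentHashMap<>();

public Object get(String key) {
    return cache.computeIfAbsent(key, k -> retrieveParam(k)); // retrieveParam is a hypothetical loader
}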
If all you need is to fill a read-only map which is initialized from database once requested, you could use any form of double-check locking which may be implemented in a number of ways. The easiest variant would be the following:
private volatile Map<T, V> cacheMap;

public void loadCacheMap() {
    if (cacheMap == null) {
        synchronized (this) {
            if (cacheMap == null) {
                cacheMap = retrieveParamCacheMap();
            }
        }
    }
}
But I would personally prefer to avoid any form of synchronization here and just make sure that the initialization is done before any other thread can access it (for example in a form of init method in a DI container). In this case you would even avoid overhead of volatile.
EDIT: The answer above works only when an initial load is expected. In case of multiple updates, you could try to replace the tryLock with some other form of test and test-and-set, for example something like this:
private final AtomicReference<CountDownLatch> sync =
        new AtomicReference<>(new CountDownLatch(0));

private void loadCacheMap() throws InterruptedException {
    CountDownLatch oldSync = sync.get();
    if (oldSync.getCount() == 0) { // if nobody is updating right now
        CountDownLatch newSync = new CountDownLatch(1);
        if (sync.compareAndSet(oldSync, newSync)) {
            cacheMap = retrieveParamCacheMap();
            newSync.countDown();
            return;
        }
    }
    sync.get().await(); // wait for whichever thread won the race to finish loading
}

Workaround for Java bug which causes crash dump

A program that I've developed is crashing the JVM occasionally due to this bug: http://bugs.java.com/bugdatabase/view_bug.do?bug_id=8029516. Unfortunately the bug has not been resolved by Oracle and the bug report says that there are no known workarounds.
I've tried to modify the example code from the bug report by calling .register(sWatchService, eventKinds) in the KeyWatcher thread instead, adding all pending register requests to a list that I loop through in the KeyWatcher thread, but it's still crashing. I'm guessing this just had the same effect as synchronizing on sWatchService (like the submitter of the bug report tried).
Can you think of any way to get around this?
From comments:
It appears that we have an issue with I/O cancellation when there is a pending ReadDirectoryChangesW outstanding.
The statement and example code indicate that the bug is triggered when:
There is a pending event that has not been consumed (it may or may not be visible to WatchService.poll() or WatchService.take())
WatchKey.cancel() is called on the key
This is a nasty bug with no universal workaround. The approach depends on the specifics of your application. Consider pooling watches in a single place so you don't need to call WatchKey.cancel(). If at some point the pool becomes too large, close the entire WatchService and start over. Something similar to:
public class FileWatcherService {
    static Kind<?>[] allEvents = new Kind<?>[] {
        StandardWatchEventKinds.ENTRY_CREATE,
        StandardWatchEventKinds.ENTRY_DELETE,
        StandardWatchEventKinds.ENTRY_MODIFY
    };
    WatchService ws;
    // Keep track of paths and registered listeners
    Map<String, List<FileChangeListener>> listeners = new ConcurrentHashMap<String, List<FileChangeListener>>();
    Map<WatchKey, String> keys = new ConcurrentHashMap<WatchKey, String>();
    volatile boolean toStop = false;

    public interface FileChangeListener {
        void onChange();
    }

    public void addFileChangeListener(String path, FileChangeListener l) throws IOException {
        if (!listeners.containsKey(path)) {
            listeners.put(path, new ArrayList<FileChangeListener>());
            keys.put(Paths.get(path).register(ws, allEvents), path);
        }
        listeners.get(path).add(l);
    }

    public void removeFileChangeListener(String path, FileChangeListener l) {
        if (listeners.containsKey(path))
            listeners.get(path).remove(l);
    }

    public void start() throws IOException {
        ws = FileSystems.getDefault().newWatchService();
        new Thread(new Runnable() {
            public void run() {
                while (!toStop) {
                    try {
                        WatchKey key = ws.take();
                        for (FileChangeListener l : listeners.get(keys.get(key)))
                            l.onChange();
                        key.reset(); // re-arm the key so further events are delivered
                    } catch (InterruptedException | ClosedWatchServiceException e) {
                        return; // stop() closed the service or we were interrupted
                    }
                }
            }
        }).start();
    }

    public void stop() throws IOException {
        toStop = true;
        ws.close();
    }
}
I've managed to create a workaround though it's somewhat ugly.
The bug is in the JDK method WindowsWatchKey.invalidate(), which releases the native buffer while subsequent calls may still access it. This one-liner fixes the problem by delaying buffer clean-up until GC.
Here is a compiled patch to JDK. In order to apply it add the following Java command-line flag:
-Xbootclasspath/p:jdk-8029516-patch.jar
If patching JDK is not an option in your case, there is still a workaround on the application level. It relies on the knowledge of Windows WatchService internal implementation.
public class JDK_8029516 {
    private static final Field bufferField = getField("sun.nio.fs.WindowsWatchService$WindowsWatchKey", "buffer");
    private static final Field cleanerField = getField("sun.nio.fs.NativeBuffer", "cleaner");
    private static final Cleaner dummyCleaner = Cleaner.create(Thread.class, new Thread());

    private static Field getField(String className, String fieldName) {
        try {
            Field f = Class.forName(className).getDeclaredField(fieldName);
            f.setAccessible(true);
            return f;
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    public static void patch(WatchKey key) {
        try {
            cleanerField.set(bufferField.get(key), dummyCleaner);
        } catch (IllegalAccessException e) {
            throw new IllegalStateException(e);
        }
    }
}
Call JDK_8029516.patch(watchKey) right after the key is registered, and it will prevent watchKey.cancel() from releasing the native buffer prematurely.
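For example (a sketch; dir and watcher are whatever Path and WatchService your application already uses):

// Sketch: apply the patch immediately after registering a directory with the WatchService.
WatchKey key = dir.register(watcher, StandardWatchEventKinds.ENTRY_CREATE,
        StandardWatchEventKinds.ENTRY_MODIFY, StandardWatchEventKinds.ENTRY_DELETE);
JDK_8029516.patch(key); // keeps a later cancel() from freeing the native buffer too early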
You might not be able to work around the problem itself, but you could deal with the error and handle it. I don't know your specific situation, but I imagine the biggest issue is the crash of the whole JVM. Putting everything in a try block does not work because you cannot catch a JVM crash.
Not knowing more about your project makes it difficult to suggest a good/acceptable solution, but maybe this could be an option: Do all the file watching stuff in a separate JVM process. From your main process start a new JVM (e.g. using ProcessBuilder.start()). When the process terminates (i.e. the newly started JVM crashes), restart it. Obviously you need to be able to recover, i.e. you need to keep track of what files to watch and you need to keep this data in your main process too.
Now the biggest remaining part is to implement some communication between the main process and the file watching process. This could be done using standard input/output of the file watching process or using a Socket/ServerSocket or some other mechanism.
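A minimal sketch of the supervising side, assuming a hypothetical com.example.FileWatcherMain entry point for the child process:

// Sketch: restart the watcher JVM whenever it exits (e.g. after the native crash).
void superviseWatcherProcess() throws IOException, InterruptedException {
    ProcessBuilder pb = new ProcessBuilder(
            System.getProperty("java.home") + "/bin/java",
            "-cp", System.getProperty("java.class.path"),
            "com.example.FileWatcherMain"); // hypothetical entry point of the watcher process
    pb.inheritIO();                         // or wire up stdin/stdout for your own protocol
    while (true) {
        Process watcher = pb.start();
        watcher.waitFor();                  // returns when the child JVM exits or crashes
        // re-send the list of watched files to the freshly started process here
    }
}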
