Write log file to HTML per thread - Java

I have an application that writes a log file (.log).
Now I have made an HTML logging jar to plug into the application (one HTML log per request).
The problem is that when two or more threads run at the same time, the HTML logs get mixed up.
Example:
aaa.log and bbb.log
aaa.log contains bbb.log's content, and vice versa.
How can I make each request write to a separate log file with only its own content?
ctx.htmllogger = new HTMLLogger(
        ctx.control.getCodeValue(),
        ctx.AvailRequest.getTrip().getSegmentProductType()
                .getCodeValue(), ctx.OPT_TYPE);
String htmllogdir = System.getProperty("user.dir");
// Note: a lone "\" is an invalid Java string literal; use File.separator
// (or "\\") when building the path.
htmllogdir = htmllogdir + File.separator
        + ctx.htmllogger.getCurrentTS("ddMMyyyy")
        + File.separator + ctx.OPT_TYPE.toLowerCase();
ctx.htmllogger.MakeDirectories(htmllogdir);
try {
    ctx.htmllogger.initLogger(DlgKuoni.class.getCanonicalName(), htmllogdir);
} catch (IOException e) {
    ctx.htmllogger = null;
    e.printStackTrace();
}
if (ctx.htmllogger != null) { // the catch block above may have nulled it
    ctx.htmllogger.startHTMLLog();
}
Any help is appreciated.

You should take a look at log4j (and maybe slf4j). There is really no need to handle these things on your own.
That can all be configured with log4j, including an HTML formatter etc.
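For reference, a minimal log4j 1.x configuration sketch using the built-in HTMLLayout; the file path and title here are placeholders:

```properties
# Route everything to one HTML-formatted log file (path is a placeholder)
log4j.rootLogger=INFO, HTML

log4j.appender.HTML=org.apache.log4j.FileAppender
log4j.appender.HTML.File=logs/app.html
log4j.appender.HTML.layout=org.apache.log4j.HTMLLayout
log4j.appender.HTML.layout.Title=Request log
```

Note that log4j 1.x on its own does not split output per request; one file per request still needs an appender per request (or something like logback's SiftingAppender).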

This is happening because you have a bug in your program: most likely a global/shared variable that both threads use to access the log.
I suggest you give each request a logging resource that is visible to only one thread at a time. This removes any chance of the output interleaving.
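One common fix, sketched here without knowing the HTMLLogger internals (the class and file-naming scheme below are made up), is to give each thread its own writer via ThreadLocal so outputs cannot interleave:

```java
import java.io.FileWriter;
import java.io.IOException;
import java.io.UncheckedIOException;

public class PerThreadLog {
    // Each thread lazily gets its own FileWriter, so two concurrent
    // requests can never write into the same file.
    private static final ThreadLocal<FileWriter> LOG =
            ThreadLocal.withInitial(() -> {
                try {
                    // Hypothetical naming scheme: one file per thread id.
                    return new FileWriter(
                            "request-" + Thread.currentThread().getId() + ".html");
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            });

    public static void log(String line) throws IOException {
        LOG.get().write(line + System.lineSeparator());
        LOG.get().flush();
    }
}
```

In a thread-pool server, key the file name by a request id rather than the thread id, since pooled threads are reused across requests.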

Related

JavaLogger randomly writes to a second file

I have a jar that is called about every minute from another script. In this jar I have created a JavaLogger that logs what happens while the Jar runs. The log file that JavaLogger writes to is called myLog.0. I have the following code to allow it to go to .1/.2/.3/.4.
try {
    FileHandler fileHandler = new FileHandler(filePath, 5242880, 5, true);
    fileHandler.setFormatter(new java.util.logging.Formatter() {
        @Override
        public String format(LogRecord logRecord) {
            return "[" + logRecord.getLevel() + " " + createDateTimeLog() + "] "
                    + logRecord.getMessage() + "\r\n";
        }
    });
    logger.addHandler(fileHandler);
} catch (IOException e) {
    // Swallowed in the original; at minimum, report it.
    e.printStackTrace();
}
So I expect the logs to grow. However, every once in a while the log is written to myLog.0.1 instead. I would guess that this is because the file is locked, but this never happens mid-run of my jar: it logs to .0.1 the entire time the jar runs. Could the file still be locked from my previous run?
If so I have even tried to close the handler before the Jar exits. There is only one exit point from the jar and I put the following code right before it:
MyLogger.logger.getHandlers()[0].close();
I have run this through the debugger and there is only ever one handler (the FileHandler that I add).
As I said, this only happens randomly: the first 3 runs of the jar could go to .0 and then the fourth to .0.1, and then the next 10 could be correct again. It's hard to say. However, it does happen fairly often (it writes to .0.1 on roughly one run in eight).
Any ideas / suggestions would be great. Thanks ahead of time.
Could the file still be locked from my previous run?
Could be that two JVMs are running your jar at the same time. Add code to grab the RuntimeMXBean and then add a single log statement to record the runtime name and the start time. The runtime name usually maps to a process id and a host name.
The FileHandler does everything it can to prevent two concurrently running JVMs from writing to the same log file. If this behavior was allowed the log file would be almost impossible to read and understand.
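A minimal sketch of the RuntimeMXBean suggestion above (the logger name is arbitrary):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.RuntimeMXBean;
import java.util.logging.Logger;

public class JvmIdentity {
    private static final Logger LOGGER =
            Logger.getLogger(JvmIdentity.class.getName());

    public static void main(String[] args) {
        RuntimeMXBean rt = ManagementFactory.getRuntimeMXBean();
        // The runtime name usually maps to "pid@hostname"; the start time
        // is epoch milliseconds. One such line per run makes overlapping
        // JVMs easy to spot in the log.
        LOGGER.info("JVM " + rt.getName() + " started at " + rt.getStartTime());
    }
}
```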
If you really want to write everything to one log file then you have to do one of the following:
Prevent concurrent JVM processes from starting by changing how it is launched.
Have your code detect if another JVM is running your code and exit before creating a FileHandler.
Have each JVM write to a distinct log file and create code to safely merge the files into one.
Create a proxy Handler that creates and closes a FileHandler for each log record. The proxy handler would use a predefined file name (different from the log file) and a FileLock to serialize access to the log file from different JVMs.
Use a dedicated process to write to the log file and have all the JVMs send log messages to that process.

Monitoring directory for changes from web service

Don't know if it is clear from the title; I'll explain it in more depth.
First of all, a limitation: Java 1.5, IBM JVM.
This is the situation:
I have a Spring web service that receives a request with a pdf document in it. I need to put this pdf into an input directory that an AFP application (not important here) monitors. The AFP application takes the pdf, does something with it and writes the result to an output directory that I need to monitor. Monitoring the output directory could take some time, probably up to 30 seconds. I also know the exact file name that I expect to appear in the output directory. If nothing appears within 30 seconds, I return a fault response.
Because of my poor knowledge of web services and multithreading, I don't know which problems I might run into.
Also, searching the internet I realized that most people recommend WatchService for directory monitoring, but that was introduced in Java 7.
Any suggestion, link or idea would be helpful.
So, the scenario is simple. In a main method, the following actions are done in order:
call the AFP service;
poll the directory for the output file;
deal with the output file.
We suppose here that outputFile is a File containing the absolute path to the generated file; this method returns void, adapt as needed:
// We poll every second, so...
private static final int SAMPLES = 30;

public void dealWithAFP(whatever, arguments, are, there)
    throws WhateverIsNecessary
{
    callAfpService(here);
    int i = 0;
    try {
        while (i < SAMPLES) {
            TimeUnit.SECONDS.sleep(1);
            if (outputFile.exists())
                break;
            i++;
        }
        if (i == SAMPLES)   // timed out: no file after 30 samples
            throw new WhateverIsNecessary();
    } catch (InterruptedException e) {
        // Rethrow it if the method declares it; otherwise the minimum is to:
        Thread.currentThread().interrupt();
        throw new WhateverIsNecessary();
    }
    dealWithOutputFile(outputFile);
}

Why do I get a FolderNotFoundException after successfully creating a folder?

I am trying to create a folder if it does not exist and then copy a message from another folder to the destination folder. I am seeing some strange behaviour that I cannot understand. Given the following excerpt:
// messages is an array of Message instances.
// source is the source Folder.
// destination is a String naming the destination folder.
Folder dest = null;
try {
    dest = store.getFolder(destination);
    if (!dest.exists()) {
        dest.create(Folder.HOLDS_MESSAGES | Folder.HOLDS_FOLDERS);
        // Since Folders are not meant to be cached, I thought I'd fetch it
        // again, though this does not work either:
        //dest.close(false);
        //dest = store.getFolder(destination);
    }
    dest.open(Folder.READ_WRITE);
    // Fails here
    source.copyMessages(messages, dest);
    source.setFlags(messages, new Flags(Flags.Flag.DELETED), true);
} catch (MessagingException ex) {
    throw new MailProcessorException(ex.getMessage(), ex);
} finally {
    if (dest != null) {
        try {
            dest.close(false);
        } catch (MessagingException ex) {
            System.err.println("Couldn't close destination folder.");
        }
    }
}
The following behaviour is examined:
If the folder does not exist:
The folder gets created
An exception is thrown at source.copyMessages.
If the folder does exist:
The messages are copied as expected.
Messages are marked for deletion.
I am using JavaMail 1.4.6, also tried with 1.6.5.
This is really strange. Looking at your code and reading the docs, there should be no way this is happening...
Could it be a problem with the mail server? Some systems use weak consistency models (see http://en.wikipedia.org/wiki/Eventual_consistency for example) that don't always act the way you'd naively expect. Is there a chance you can try your code on a different mail server? Or try putting a really long (30 seconds?) Thread.sleep(...) before your copyMessages(...) call and see if that fixes it.
If it does, what is happening is that your server creates the folder in one request, but this creation takes a while to propagate to the part of the server code that handles the message copying. Unfortunately, in that case there isn't much you can do other than retry when the copying fails, or keep the artificial delay (which is ugly).
Aside: the docs seem to say that you can skip the dest.open(Folder.READ_WRITE); if you like.
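The retry suggested above can be wrapped in a small generic helper; this is a sketch, not JavaMail API, and the attempt count and delay are arbitrary:

```java
import java.util.concurrent.Callable;

public class Retry {
    // Runs the action up to maxAttempts times, sleeping delayMillis
    // between failed attempts, and rethrows the last failure.
    public static <T> T withRetry(Callable<T> action, int maxAttempts,
                                  long delayMillis) throws Exception {
        Exception last = null;
        for (int i = 0; i < maxAttempts; i++) {
            try {
                return action.call();
            } catch (Exception e) {
                last = e;
                Thread.sleep(delayMillis);
            }
        }
        throw last;
    }
}
```

With JavaMail this would wrap the copy, e.g. `Retry.withRetry(() -> { source.copyMessages(messages, dest); return null; }, 3, 2000);`.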

JNotify and File Reader conflicting each other

I implemented JNotify to determine when a new file arrives in a particular directory, and, when a file arrives, to send the filename over to another function, as follows:
public class FileDetector {

    MessageProcessor mp;

    class Listener implements JNotifyListener {
        public void fileCreated(int wd, String rootPath, String name) {
            print("created " + rootPath + " : " + name);
            mp.processMessage(rootPath + "\\" + name);
        }
    }
}
The function mp.processMessage tries to open the file, but I keep getting an error that the file is in use by another process. However, as the file has just been created, the only other process which might be using it is JNotify.
I put a couple of print statements, and it appears that the function mp.processMessage is being called before the listener's print function. Does anyone have a suggestion for how I might resolve this, beyond putting the entire message processing inside the listener class?
@Eile What I think is happening: as soon as one process starts copying the file, you try to read it. A 100 ms delay lets the copy complete first, and then you can read the file without conflict.
Here's what I've done so far: I added a 100 millisecond delay into mp.processMessage() before trying to open the file, and have had no issues with it. However, I am still puzzled as to why that would be necessary, and whether there is a better solution to this issue.
I have tried this and found that an arbitrary delay didn't work well for me. What I did was create a DelayQueue. I added each observed new file to the queue with a 100 ms delay. When the delay expired, I checked whether the file was readable/writable; if it was, I popped it from the queue, and if not, I re-added it with another 100 ms delay. To check readability I attempt to open a FileInputStream on the file; if no exception is thrown, I close the stream and pop the file.
I am hoping that NIO.2 (JSR 203) does not have this same issue. If you can use Java 7, you might want to give it a try.
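A sketch of the DelayQueue approach described above; the 100 ms delay and the class names are arbitrary:

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

public class FileReadyQueue {
    // Wraps a file with a "do not check before" timestamp.
    static class PendingFile implements Delayed {
        final File file;
        final long readyAt;

        PendingFile(File file, long delayMillis) {
            this.file = file;
            this.readyAt = System.currentTimeMillis() + delayMillis;
        }

        @Override
        public long getDelay(TimeUnit unit) {
            return unit.convert(readyAt - System.currentTimeMillis(),
                                TimeUnit.MILLISECONDS);
        }

        @Override
        public int compareTo(Delayed other) {
            return Long.compare(getDelay(TimeUnit.MILLISECONDS),
                                other.getDelay(TimeUnit.MILLISECONDS));
        }
    }

    private final DelayQueue<PendingFile> queue = new DelayQueue<>();

    public void offer(File f) {
        queue.put(new PendingFile(f, 100));
    }

    // Blocks until a file's delay expires AND the file can actually be opened.
    public File takeReadable() throws InterruptedException {
        while (true) {
            PendingFile p = queue.take();   // waits out the 100 ms delay
            try (FileInputStream in = new FileInputStream(p.file)) {
                return p.file;              // opened cleanly: the copy is done
            } catch (IOException e) {
                queue.put(new PendingFile(p.file, 100)); // still busy: re-queue
            }
        }
    }
}
```

The JNotify listener would call offer(...) from fileCreated, and a separate consumer thread would loop on takeReadable() and hand finished files to the message processor.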

JUnit tests fail when creating new Files

We have several JUnit tests that rely on creating new files and reading them. However there are issues with the files not being created properly. But this fault comes and goes.
This is the code:
@Test
public void test_3() throws Exception {
    // Deletes files in tmp test dir
    File tempDir = new File(TEST_ROOT, "tmp.dir");
    if (tempDir.exists()) {
        for (File f : tempDir.listFiles()) {
            f.delete();
        }
    } else {
        tempDir.mkdir();
    }
    File file_1 = new File(tempDir, "file1");
    FileWriter out_1 = new FileWriter(file_1);
    out_1.append("# File 1");
    out_1.close();
    File file_2 = new File(tempDir, "file2");
    FileWriter out_2 = new FileWriter(file_2);
    out_2.append("# File 2");
    out_2.close();
    File file_3 = new File(tempDir, "fileXXX");
    FileWriter out_3 = new FileWriter(file_3);
    out_3.append("# File 3");
    out_3.close();
    ....
The failure is that the second file, file_2, sometimes never gets created; when we then try to write to it, a FileNotFoundException is thrown.
If we run only this test case, everything works fine.
If we run this test file with its ~40 test cases, it can both fail and work, depending on the current lunar cycle.
If we run the entire test suite, consisting of some 10*40 test cases, it always fails.
We have tried:
adding sleeps (5 sec) after new File, which changed nothing;
adding a while loop until file_2.exists() is true, but the loop never terminated;
catching SecurityException, IOException and even Throwable around the new File(...), but nothing was caught.
At one point we got all files created, but file_2 was created before file_1 and a test that checked creation times failed.
We've also tried adding file_1.createNewFile(), and it always returns true.
So what is going on? How can we write tests that depend on actual files and be sure they always exist?
This has been tested on both Java 1.5 and 1.6, on Windows 7 and Linux. The only observable difference is that sometimes a similar earlier test case fails instead, and sometimes it is file_1 that isn't created.
Update
We tried a new variation:
File file_2 = new File(tempDir, "file2");
while (!file_2.canRead()) {
    Thread.sleep(500);
    try {
        file_2.createNewFile();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
This results in a lot of exceptions of the type:
java.io.IOException: Access is denied
at java.io.WinNTFileSystem.createFileExclusively(Native Method)
at java.io.File.createNewFile(File.java:883)
... but eventually it works, the file is created.
Are there multiple instances of your program running at once?
Check for any extra instances of javaw.exe running. If multiple programs have handles to the same file at once, things can get very wonky very quickly.
Do you have antivirus software or anything else running that could be getting in the way of file creation/deletion, by handle?
Don't hardcode your file names; use random names. It's the only way to insulate yourself from the various external situations that can occur (multiple accesses to the same file, permissions, file system errors, locking problems, etc.).
One thing is for sure: using sleep() or retrying is guaranteed to cause weird errors at some point in the future, so avoid doing that.
I did some googling, and this Lucene bug and this board question seem to indicate that there could be an issue with file locking and other processes using the file.
Since we are running this on ClearCase, it seems plausible that ClearCase does some indexing or something similar while the files are being created. Adding loops that repeat until the file is readable solved the issue, so we are going with that. Very ugly solution, though.
Try File#createTempFile, this at least guarantees you that there are no other files by the same name that would still hold a lock.
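A sketch of the File.createTempFile approach; the prefix and suffix here are arbitrary:

```java
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;

public class TempFileDemo {
    public static void main(String[] args) throws IOException {
        // createTempFile guarantees a freshly created file with a unique
        // name, so no earlier run (or concurrent test) already holds it.
        File f = File.createTempFile("test-", ".txt");
        f.deleteOnExit();
        FileWriter out = new FileWriter(f);
        out.append("# File 1");
        out.close();
        System.out.println(f.getName() + " exists: " + f.exists());
    }
}
```

Tests that must compare file contents can record the random name returned by createTempFile instead of assuming a fixed "file1"/"file2" naming scheme.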
