Apache Commons IO Tailer delivers old log messages - java

My code is given below.
public static void main(String[] args) {
    File pcounter_log = new File("c:\\development\\temp\\test.log");
    try {
        Tailer tailer = new Tailer(pcounter_log,
                new FileListener("c:\\development\\temp\\test.log", getLogPattern()), 5000, true);
        Thread thread = new Thread(tailer);
        thread.start();
    } catch (Exception e) {
        System.out.println(e);
    }
}
public class FileListener extends TailerListenerAdapter {
    private static final Logger logger = Logger.getLogger(FileListener.class); // log4j-style logger
    private final List<String> pattern; // keywords to match in each tailed line

    public FileListener(String fileName, List<String> pattern) {
        this.pattern = pattern; // fileName kept only to match the call in main()
    }

    @Override
    public void handle(String line) {
        for (String logPattern : pattern) {
            if (line.contains(logPattern)) {
                logger.info(line);
            }
        }
    }
}
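For completeness, getLogPattern() simply returns the keyword list (simplified sketch; uses java.util.List, ArrayList and Arrays):

// Simplified sketch: returns the keywords that handle() filters on.
private static List<String> getLogPattern() {
    return new ArrayList<String>(Arrays.asList("info", "error", "abc.catch", "warning"));
}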
Here getLogPattern() returns an ArrayList containing values like [info, error, abc.catch, warning]. When I run this code, I get old log messages followed by new ones, i.e. the output looks like this:
20 May 2011 07:06:02,305 INFO FileListener:? - 20 May 2011 07:06:01,230 DEBUG - exiting readScriptErrorStream()
20 May 2011 07:06:55,052 INFO FileListener:? - 20 May 2011 07:06:55,016 DEBUG - readScriptErrorStream()
20 May 2011 07:06:56,056 INFO FileListener:? - 20 May 2011 07:06:55,040 DEBUG - exiting readScriptErrorStream()
20 May 2011 07:07:01,241 INFO FileListener:? - 20 May 2011 07:07:01,219 DEBUG - readScriptErrorStream()
20 May 2011 07:07:02,245 INFO FileListener:? - 20 May 2011 07:07:01,230 DEBUG - exiting readScriptErrorStream()
20 May 2011 07:07:55,020 INFO FileListener:? - 20 May 2011 07:07:55,016 DEBUG - readScriptErrorStream()
20 May 2011 07:07:56,024 INFO FileListener:? - 20 May 2011 07:07:55,030 DEBUG - exiting readScriptErrorStream()
20 May 2011 07:08:01,269 INFO FileListener:? - 20 May 2011 07:08:01,227 DEBUG - readScriptErrorStream()
20 May 2011 07:08:02,273 INFO FileListener:? - 20 May 2011 07:08:01,230 DEBUG - exiting readScriptErrorStream()
20 May 2011 07:08:21,234 INFO FileListener:? - 20 May 2011 06:40:02,461 DEBUG - readScriptErrorStream()
20 May 2011 07:08:22,237 INFO FileListener:? - 20 May 2011 06:40:02,468 DEBUG - exiting readScriptErrorStream()
20 May 2011 07:08:23,242 INFO FileListener:? - 20 May 2011 06:41:01,224 DEBUG - readScriptErrorStream()
20 May 2011 07:08:24,250 INFO FileListener:? - 20 May 2011 06:41:01,232 DEBUG - exiting readScriptErrorStream()
20 May 2011 07:08:25,261 INFO FileListener:? - 20 May 2011 06:42:01,218 DEBUG - readScriptErrorStream()
20 May 2011 07:08:26,265 INFO FileListener:? - 20 May 2011 06:42:01,230 DEBUG - exiting readScriptErrorStream()
20 May 2011 07:08:27,272 INFO FileListener:? - 20 May 2011 06:43:01,223 DEBUG - readScriptErrorStream()
20 May 2011 07:08:28,275 INFO FileListener:? - 20 May 2011 06:43:01,231 DEBUG - exiting readScriptErrorStream()
How can I avoid getting old log messages from the log file like this?

Oh boy, I have wasted an entire day thinking it was my dodgy threading, but I now see others have shared my pain. Oh well, at least I won't waste another day looking at it.
But I did look at the source code. I am sure the error is occurring here in the Tailer.java file:
boolean newer = FileUtils.isFileNewer(file, last); // IO-279, must be done first
...
...
else if (newer) {
    /*
     * This can happen if the file is truncated or overwritten with the
     * exact same length of information. In cases like this, the file
     * position needs to be reset
     */
    position = 0;
    reader.seek(position);
    ...
It seems it's possible for the file modification date to change before the data is actually written. I'm no expert on why this would be. I am getting my log files over the network, so perhaps all sorts of caching is going on, which means you are not guaranteed that a newer file will contain any more data.
I have updated the source and removed this section. For me, the chances of a file getting truncated/recreated with exactly the same number of bytes are minimal; I'm dealing with 10MB rolling log files.
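In other words, the local patch just drops that reset (sketch based on the excerpt above, not an official fix):

boolean newer = FileUtils.isFileNewer(file, last); // IO-279, must be done first
...
// 'else if (newer)' branch removed locally, so a newer modification date alone
// no longer resets the read position to 0 and re-delivers old lines.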
I see that this is a known issue (IO-279, linked below). However, it's marked as resolved, and that's clearly not the case. I'll contact the developers to see if there's something in the pipeline. It would seem they're of the same opinion as me about the fix.

https://issues.apache.org/jira/browse/IO-279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
What version of Commons IO are (or were) you using?
I experienced this error with 2.0.1. I updated to 2.3, and it seems to be working properly (so far).

I know that this is a very old thread, but I just ran into a similar issue with Tailer. It turned out that Tailer had two threads reading the file concurrently.
I traced it back to how I had created the Tailer instance. Rather than using just one of the three recommended approaches (static helper method, Executor, or Thread), I had created the instance with the static helper method and then also fed that instance into a Thread, which resulted in two threads reading the file.
Once I corrected this (by removing the call to the static helper method and just using one of the overloaded Tailer constructors and a Thread) the issue went away.
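Roughly, the corrected setup looks like this (a sketch, where logFile and listener stand for your File and TailerListenerAdapter; use either the constructor plus your own Thread, or the static helper, but never both for the same instance):

// Option 1: construct the Tailer yourself and run it on exactly one Thread.
Tailer tailer = new Tailer(logFile, listener, 5000, true);
Thread thread = new Thread(tailer);
thread.setDaemon(true);
thread.start();

// Option 2: let the static helper create AND start the thread for you.
// Do not also wrap this instance in another Thread.
Tailer tailer2 = Tailer.create(logFile, listener, 5000, true);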
Hope this helps someone.

Related

ParallelStream queue task in CommonPool rather than the custom pool

I wanted to use a custom thread pool for a parallelStream, because I wanted the MDCContext to be available in the task. This is the code I wrote to use the custom pool:
final ExecutorService mdcPool = MDCExecutors.newCachedThreadPool();
mdcPool.submit(() -> ruleset.getOperationList().parallelStream().forEach(operation -> {
    log.info("Sample log line");
}));
When the MDC context was not getting copied to the task, I looked at the logs. These are the logs I found. The first log line is executed on "pool-16-thread-1", but the other tasks are executed on "ForkJoinPool.commonPool-worker" threads. The first log line also has the MdcContextID. Since I am submitting the task to a custom thread pool, I expected all tasks to execute in that pool.
16 Oct 2018 12:46:58,298 [INFO] 8fcfa6ee-d141-11e8-b84a-7da6cd73aa0b (pool-16-thread-1) com.amazon.rss.activity.business.VariablesEvaluator: Sample log line
16 Oct 2018 12:46:58,298 [INFO] (ForkJoinPool.commonPool-worker-11) com.amazon.rss.activity.business.VariablesEvaluator: Sample log line
16 Oct 2018 12:46:58,298 [INFO] (ForkJoinPool.commonPool-worker-4) com.amazon.rss.activity.business.VariablesEvaluator: Sample log line
16 Oct 2018 12:46:58,298 [INFO] (ForkJoinPool.commonPool-worker-13) com.amazon.rss.activity.business.VariablesEvaluator: Sample log line
16 Oct 2018 12:46:58,298 [INFO] (ForkJoinPool.commonPool-worker-9) com.amazon.rss.activity.business.VariablesEvaluator: Sample log line
16 Oct 2018 12:46:58,299 [INFO] (ForkJoinPool.commonPool-worker-2) com.amazon.rss.activity.business.VariablesEvaluator: Sample log line
16 Oct 2018 12:46:58,299 [INFO] (ForkJoinPool.commonPool-worker-15) com.amazon.rss.activity.business.VariablesEvaluator: Sample log line
Is this supposed to happen or am I missing something?
There is no support for running a parallel stream in a custom thread pool. A parallel stream does happen to be executed in a different Fork/Join pool when the terminal operation is initiated from a worker thread of that pool, but that does not seem to be a planned feature, as the Stream implementation will still use artifacts of the common pool internally for some operations.
In your case, it seems that the ExecutorService returned by MDCExecutors.newCachedThreadPool() is not a Fork/Join pool, so it does not exhibit this undocumented behavior at all.
There is a feature request, JDK-8032512, regarding more thread control. It’s open and, as far as I can see, without much activity.
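For completeness, the undocumented behavior mentioned above is usually exploited like this (a sketch only; it relies on unspecified implementation details and does nothing for MDC propagation by itself):

// Run the terminal operation from inside a custom ForkJoinPool so the
// parallel stream's tasks are (currently) executed by that pool's workers.
ForkJoinPool customPool = new ForkJoinPool(4);
try {
    customPool.submit(() ->
        ruleset.getOperationList().parallelStream()
               .forEach(operation -> log.info("Sample log line"))
    ).join(); // wait for the stream to finish
} finally {
    customPool.shutdown();
}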

getting 500 internal server error on rest call

I'm getting this error after Hibernate outputs the data. Any idea why this would be happening? Please help!
Sep 07, 2016 12:07:00 PM org.hibernate.engine.jdbc.connections.internal.DriverManagerConnectionProviderImpl stop
INFO: HHH000030: Cleaning up connection pool [jdbc:postgresql://localhost:5432/bendb]
Sep 07, 2016 12:07:00 PM org.glassfish.jersey.filter.LoggingFilter log
INFO: 3 * Server responded with a response on thread http-nio-8080-exec-5
3 < 200
3 < Access-Control-Allow-Methods: GET, POST, DELETE, PUT
3 < Access-Control-Allow-Origin: *
3 < Allow: OPTIONS
3 < Content-Type: application/json
Sep 07, 2016 12:07:00 PM org.glassfish.jersey.filter.LoggingFilter log
INFO: 4 * Server responded with a response on thread http-nio-8080-exec-5
4 < 500
Sorry, I found a bug in my code. Apparently the code change I made was trying to map all the junction tables (collections) in that get-user REST call, so Jersey just breaks while attempting to do that. Un-commenting that code and just passing the normal data sets solved the issue.

Why is java util logger printing certain messages in German?

I have used java.util.logging for my application. I noticed that certain things, such as the month name and even certain log levels, are getting printed in German.
Dez 10, 2015 8:50:26 AM com.kube.common.HidCommunication readFromHidDevice
SCHWERWIEGEND: Time - Barcode Message read from the Device: 2015/12/10 08:50:26:992
Though I have specified the level as Level.SEVERE, why is it printed as "SCHWERWIEGEND" in German? Please advise.
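The usual cause is the JVM's default locale: java.util.logging localizes level names and dates with it. A minimal sketch of forcing English output, assuming you can set the locale before any logging happens:

public static void main(String[] args) {
    // Force English resource bundles before any logging is configured
    // (alternatively start the JVM with -Duser.language=en -Duser.country=US).
    java.util.Locale.setDefault(java.util.Locale.ENGLISH);
    // ... set up java.util.logging and the rest of the application here
}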

Apache Cayenne: NullPointerException when commitChanges

I'm calling commitChanges, but I get a java.lang.NullPointerException. Log:
...
INFO: --- transaction started.
Aug 04, 2015 12:33:59 PM org.apache.cayenne.access.dbsync.CreateIfNoSchemaStrategy processSchemaUpdate
INFO: Full or partial schema detected, skipping tables creation
Aug 04, 2015 12:33:59 PM org.apache.cayenne.log.CommonsJdbcEventLogger logQuery
INFO: SELECT NEXT_ID FROM AUTO_PK_SUPPORT WHERE TABLE_NAME = 'ARTIST'
Aug 04, 2015 12:33:59 PM org.apache.cayenne.log.CommonsJdbcEventLogger logSelectCount
INFO: === returned 1 row. - took 16 ms.
Aug 04, 2015 12:33:59 PM org.apache.cayenne.log.CommonsJdbcEventLogger logQueryError
INFO: *** error.
java.lang.NullPointerException
at com.relx.jdbc.jdbc2.LinterStatementImpl.getUpdateCount(LinterStatementImpl.java:419)
at org.apache.cayenne.access.jdbc.SQLTemplateAction.execute(SQLTemplateAction.java:190)
at org.apache.cayenne.access.jdbc.SQLTemplateAction.performAction(SQLTemplateAction.java:124)
at org.apache.cayenne.access.DataNodeQueryAction.runQuery(DataNodeQueryAction.java:87)
at org.apache.cayenne.access.DataNode.performQueries(DataNode.java:280)
at org.apache.cayenne.dba.JdbcPkGenerator.longPkFromDatabase(JdbcPkGenerator.java:310)
at org.apache.cayenne.dba.JdbcPkGenerator.generatePk(JdbcPkGenerator.java:268)
at org.apache.cayenne.access.DataDomainInsertBucket.createPermIds(DataDomainInsertBucket.java:171)
at org.apache.cayenne.access.DataDomainInsertBucket.appendQueriesInternal(DataDomainInsertBucket.java:76)
at org.apache.cayenne.access.DataDomainSyncBucket.appendQueries(DataDomainSyncBucket.java:78)
at org.apache.cayenne.access.DataDomainFlushAction.preprocess(DataDomainFlushAction.java:188)
at org.apache.cayenne.access.DataDomainFlushAction.flush(DataDomainFlushAction.java:144)
at org.apache.cayenne.access.DataDomain.onSyncFlush(DataDomain.java:853)
at org.apache.cayenne.access.DataDomain$2.transform(DataDomain.java:817)
at org.apache.cayenne.access.DataDomain.runInTransaction(DataDomain.java:877)
at org.apache.cayenne.access.DataDomain.onSyncNoFilters(DataDomain.java:814)
at org.apache.cayenne.access.DataDomain$DataDomainSyncFilterChain.onSync(DataDomain.java:1031)
at org.apache.cayenne.access.DataDomain.onSync(DataDomain.java:785)
at org.apache.cayenne.access.DataContext.flushToParent(DataContext.java:817)
at org.apache.cayenne.access.DataContext.commitChanges(DataContext.java:756)
at CayenneTest2.main(CayenneTest2.java:61)
The AUTO_PK_SUPPORT table was created and populated by Apache Cayenne.
Why is this exception thrown?
From the stack trace, you are working with Cayenne v3.1. The code in question is here. Cayenne's SQLTemplateAction checks whether the result of the query is a ResultSet and, when the answer is "no", assumes the result is an update count. So it tries to read the update count on line 190:
int updateCount = statement.getUpdateCount();
Somehow the underlying statement object (LinterStatementImpl) is not happy about that. I don't have access to the source code of the Linter DB driver, so I can't say what exactly is wrong, but the driver is not behaving the way Cayenne expects it to.
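To illustrate the JDBC contract involved (a plain-JDBC sketch, not the actual Cayenne code):

// Statement.execute() returns false when the result is not a ResultSet,
// so the caller is expected to ask for an update count next.
boolean isResultSet = statement.execute(sql);
if (isResultSet) {
    ResultSet rs = statement.getResultSet();
    // ... read the rows
} else {
    int updateCount = statement.getUpdateCount(); // LinterStatementImpl throws the NPE here
}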
Perhaps Linter is special enough to warrant its own Cayenne DbAdapter? Feel free to join the Cayenne dev mailing list to discuss what it takes to write one.

Jersey - Massive Latency Calling Service Method From Resource

I have a Jersey REST service with a resource class that calls methods on a service class. During testing we've noticed latency between the resource method's "Entering" log statement and the service's. This latency can be as much as 5 minutes, although normally it's in the 2 minute range. Once in a while, the latency is minimal (milliseconds).
Here's what our classes look like:
Resource
@Stateless
@Path("/provision")
public class ProvisionResource
{
    private final Logger logger = LoggerFactory.getLogger(ProvisionResource.class);

    @EJB
    private ProvisionService provisionService;

    @GET
    @Produces(MediaType.APPLICATION_XML)
    @Path("/subscriber")
    public SubscriberAccount querySubscriberAccount(
            @QueryParam("accountNum") String accountNum)
    {
        logger.debug("Entering querySubscriberAccount()");
        final SubscriberAccount account;
        try
        {
            account = provisionService.querySubscriber(accountNum);
        }
        catch (IllegalArgumentException ex)
        {
            logger.error("Illegal argument while executing query for subscriber account",
                    ex);
            throw new WebApplicationException(Response.Status.BAD_REQUEST);
        }
        catch (Exception ex)
        {
            logger.error("Unexpected exception while executing query for subscriber account",
                    ex);
            throw new WebApplicationException(Response.Status.INTERNAL_SERVER_ERROR);
        }
        logger.debug("Exiting querySubscriberAccount()");
        return account;
    }
}
Service:
@Singleton
public class ProvisionService
{
    private final Logger logger = LoggerFactory.getLogger(ProvisionService.class);

    public SubscriberAccount querySubscriber(final String accountNum) throws IllegalArgumentException, Exception
    {
        logger.debug("Entering querySubscriber()");
        if (null == accountNum)
        {
            throw new IllegalArgumentException("The argument {accountNum} must not be NULL");
        }
        SubscriberAccount subscriberAccount = null;
        try
        {
            // do stuff to get subscriber account
        }
        catch (Exception ex)
        {
            throw new Exception("Caught exception querying {accountNum}=["
                    + accountNum + "]", ex);
        }
        finally
        {
            logger.debug("Exiting querySubscriber()");
        }
        return subscriberAccount;
    }
}
Here are some samples from our logs showing the timestamps of when we enter the methods.
2012 Feb 07 15:31:06,303 MST [http-thread-pool-80(1)] DEBUG my.package.ProvisionResource - Entering querySubscriberAccount()
2012 Feb 07 15:31:06,304 MST [http-thread-pool-80(1)] DEBUG my.package.ProvisionService - Entering querySubscriber()
2012 Feb 07 15:35:06,359 MST [http-thread-pool-80(1)] DEBUG my.package.ProvisionResource - Entering querySubscriberAccount()
2012 Feb 07 15:40:33,395 MST [http-thread-pool-80(1)] DEBUG my.package.ProvisionService - Entering querySubscriber()
2012 Feb 07 15:34:06,345 MST [http-thread-pool-80(2)] DEBUG my.package.ProvisionResource - Entering querySubscriberAccount()
2012 Feb 07 15:37:24,372 MST [http-thread-pool-80(2)] DEBUG my.package.ProvisionService - Entering querySubscriber()
2012 Feb 07 15:33:06,332 MST [http-thread-pool-80(4)] DEBUG my.package.ProvisionResource - Entering querySubscriberAccount()
2012 Feb 07 15:34:15,349 MST [http-thread-pool-80(4)] DEBUG my.package.ProvisionService - Entering querySubscriber()
2012 Feb 07 15:37:24,371 MST [http-thread-pool-80(4)] DEBUG my.package.ProvisionResource - Entering querySubscriberAccount()
2012 Feb 07 15:40:36,004 MST [http-thread-pool-80(4)] DEBUG my.package.ProvisionService - Entering querySubscriber()
2012 Feb 07 15:32:06,317 MST [http-thread-pool-80(5)] DEBUG my.package.ProvisionResource - Entering querySubscriberAccount()
2012 Feb 07 15:34:15,325 MST [http-thread-pool-80(5)] DEBUG my.package.ProvisionService - Entering querySubscriber()
2012 Feb 07 15:36:06,373 MST [http-thread-pool-80(5)] DEBUG my.package.ProvisionResource - Entering querySubscriberAccount()
2012 Feb 07 15:40:34,956 MST [http-thread-pool-80(5)] DEBUG my.package.ProvisionService - Entering querySubscriber()
As you can see, the first call reached the querySubscriber method in the service almost immediately after entering the resource's querySubscriberAccount. However, subsequent calls to the webservice take ~1 to 5 minutes to do so. There's really nothing happening in the resource that would hold up processing/calling the service.
The webservice is deployed on a Linux server in Glassfish 3.1.1.
Has anyone seen anything like this before?? Any suggestions on what's going on?
EDIT
Just a little more information...
The domain into which the web service war is deployed has 4 applications deployed in it:
the Jersey webservice war which is having issues
an ear which uses a client to the Jersey webservice
a servlet war used to test connections etc. used by the ear (including the Jersey webservice)
another servlet war which does not use the webservice
When we disabled the ear and the "other" war file (only the Jersey war and the test servlet were enabled), the latency issue went away. We re-enabled the war and ear, and things still continued to respond in a timely manner. When we redeployed the Jersey webservice war (after making some logging changes), the latency problem immediately came back.
A thread dump can be used to find out what code (including stack traces) is running inside the Java process at the current moment. The jps tool will help in getting the PID of the required JVM instance.
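If you can attach tools, jstack <pid> prints the dump for that PID; if not, a rough in-process equivalent looks like this (a sketch only):

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;

public class ThreadDumper {
    // Print a stack trace for every live thread from inside the JVM.
    public static void dump() {
        for (ThreadInfo info :
                ManagementFactory.getThreadMXBean().dumpAllThreads(true, true)) {
            System.out.print(info); // ThreadInfo.toString() includes the stack trace
        }
    }
}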
