Log files are being overwritten on tomcat shutdown - java

I am facing a weird issue. The first time I shut down Tomcat on a given day, it overwrites the log file contents. On the second or any subsequent restart, the problem does not occur.
I see the following in the log on Tomcat shutdown:
23:08:03,390 [] [] INFO XmlWebApplicationContext:873 - Closing Root WebApplicationContext: startup date [Wed Apr 29 23:47:05 BST 2015]; root of context hierarchy
23:08:03,397 [] [] INFO ThreadPoolTaskExecutor:203 - Shutting down ExecutorService 'org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor@1d7b51e8'
23:11:33,880 [] [] [] INFO PropertiesFactoryBean:172 - Loading properties file from class path resource [apppname/application.properties]
23:11:41,413 [] [] [] INFO Reflections:238 - Reflections took 5894 ms to scan 112 urls, producing 5518 keys and 32092 values
23:11:42,242 [] [] [] INFO ThreadPoolTaskExecutor:165 - Initializing ExecutorService 'org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor@28a50da4'
23:11:42,596 [] [] [] INFO ContextLoader:325 - Root WebApplicationContext: initialization completed in 11465 ms
23:11:48,525 [] [] [] INFO PropertiesFactoryBean:172 - Loading properties file from class path resource [apppname/application.properties]
23:11:55,130 [] [] [] INFO Reflections:238 - Reflections took 5765 ms to scan 112 urls, producing 5518 keys and 32092 values
23:11:55,807 [] [] [] INFO ThreadPoolTaskExecutor:165 - Initializing ExecutorService 'org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor@1a46a171'
23:11:56,081 [] [] [] INFO ContextLoader:325 - Root WebApplicationContext: initialization completed in 9491 ms
23:12:01,469 [] [] [] INFO PropertiesFactoryBean:172 - Loading properties file from class path resource [apppname/application.properties]
23:12:08,106 [] [] [] INFO Reflections:238 - Reflections took 5757 ms to scan 112 urls, producing 5518 keys and 32092 values
23:12:08,793 [] [] [] INFO ThreadPoolTaskExecutor:165 - Initializing ExecutorService 'org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor@7213bc54'
23:12:09,062 [] [] [] INFO ContextLoader:325 - Root WebApplicationContext: initialization completed in 9260 ms
Log configuration
log4j.rootLogger=INFO, file
log4j.appender.file=org.apache.log4j.DailyRollingFileAppender
log4j.appender.file.File=/logs/logfilename.log
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{ABSOLUTE} %5p %c{1}:%L - %m%n
What could be the reason?
I have the same log4j configuration in other applications, and they work perfectly fine. It looks like Tomcat is somehow writing its logs to the application log instead of the catalina log.
It happens only on the first restart of the day, and only when the log level is set to INFO or DEBUG, not ERROR.

Use the log4j Append property. By default it should be true, though:
log4j.appender.file.Append=true
I also see you are using a rolling appender, but it is not referenced in your root logger:
log4j.rootLogger=INFO, file, RollingAppender
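Putting the suggestion together with the configuration from the question, the appender section might look like this (a sketch: the Append flag is the addition, and the DatePattern shown is simply DailyRollingFileAppender's default daily pattern written out explicitly):

```properties
log4j.rootLogger=INFO, file

log4j.appender.file=org.apache.log4j.DailyRollingFileAppender
log4j.appender.file.File=/logs/logfilename.log
# Append to the file instead of truncating it when the appender is (re)opened
log4j.appender.file.Append=true
# Roll the file once per day (this is the appender's default pattern)
log4j.appender.file.DatePattern='.'yyyy-MM-dd
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{ABSOLUTE} %5p %c{1}:%L - %m%n
```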


Flink Taskmanager crashed while accessing S3 FileSource (on Kubernetes/OpenShift)

I have a job receiving events with an S3 link. It attempts to load the resource using the following snippet:
// value of s3 source is "s3://bucket_id/path/to/object.json"
List<String> collect = ExecutionEnvironment.getExecutionEnvironment().readTextFile(s3_source.toString()).collect();
Flink is configured accordingly in flink-conf.yaml:
s3.access-key: XXX
s3.secret-key: XXX
s3.endpoint: s3.openshift-storage.svc
s3.path.style.access: true
The library flink-s3-fs-hadoop-1.16.0.jar is in the path /opt/flink/plugins/flink-s3-fs-hadoop. I had some issues setting up the self-signed certificates (see this Gist for my config), but it seems to be working.
When starting the job through the JobManager's WebUI, I get the following logs:
Job is starting
2023-01-26 10:33:09,891 INFO org.apache.flink.runtime.taskexecutor.TaskExecutor [] - Receive slot request f9f291f0e4b74471e59a74602212060b for job c62504dec97d185a3e86fc390256e3f9 from resource manager with leader id 00000000000000000000000000000000.
2023-01-26 10:33:09,894 DEBUG org.apache.flink.runtime.memory.MemoryManager [] - Initialized MemoryManager with total memory size 178956973 and page size 32768.
2023-01-26 10:33:09,895 INFO org.apache.flink.runtime.taskexecutor.TaskExecutor [] - Allocated slot for f9f291f0e4b74471e59a74602212060b.
2023-01-26 10:33:09,896 INFO org.apache.flink.runtime.taskexecutor.DefaultJobLeaderService [] - Add job c62504dec97d185a3e86fc390256e3f9 for job leader monitoring.
2023-01-26 10:33:09,897 DEBUG org.apache.flink.runtime.taskexecutor.DefaultJobLeaderService [] - New leader information for job c62504dec97d185a3e86fc390256e3f9. Address: akka.tcp://flink@flink-jobmanager:6123/user/rpc/jobmanager_10, leader id: 00000000000000000000000000000000.
2023-01-26 10:33:09,897 INFO org.apache.flink.runtime.taskexecutor.DefaultJobLeaderService [] - Try to register at job manager akka.tcp://flink@flink-jobmanager:6123/user/rpc/jobmanager_10 with leader id 00000000-0000-0000-0000-000000000000.
2023-01-26 10:33:09,898 DEBUG org.apache.flink.runtime.rpc.akka.AkkaRpcService [] - Try to connect to remote RPC endpoint with address akka.tcp://flink@flink-jobmanager:6123/user/rpc/jobmanager_10. Returning a org.apache.flink.runtime.jobmaster.JobMasterGateway gateway.
2023-01-26 10:33:09,910 INFO org.apache.flink.runtime.taskexecutor.DefaultJobLeaderService [] - Resolved JobManager address, beginning registration
2023-01-26 10:33:09,910 DEBUG org.apache.flink.runtime.taskexecutor.DefaultJobLeaderService [] - Registration at JobManager attempt 1 (timeout=100ms)
2023-01-26 10:33:09,991 DEBUG org.apache.flink.runtime.taskexecutor.DefaultJobLeaderService [] - Registration with JobManager at akka.tcp://flink@flink-jobmanager:6123/user/rpc/jobmanager_10 was successful.
2023-01-26 10:33:09,993 INFO org.apache.flink.runtime.taskexecutor.DefaultJobLeaderService [] - Successful registration at job manager akka.tcp://flink@flink-jobmanager:6123/user/rpc/jobmanager_10 for job c62504dec97d185a3e86fc390256e3f9.
2023-01-26 10:33:09,993 INFO org.apache.flink.runtime.taskexecutor.TaskExecutor [] - Establish JobManager connection for job c62504dec97d185a3e86fc390256e3f9.
2023-01-26 10:33:09,995 INFO org.apache.flink.runtime.taskexecutor.TaskExecutor [] - Offer reserved slots to the leader of job c62504dec97d185a3e86fc390256e3f9.
2023-01-26 10:33:10,011 INFO org.apache.flink.runtime.taskexecutor.slot.TaskSlotTableImpl [] - Activate slot f9f291f0e4b74471e59a74602212060b.
Event is received from RabbitMQ Stream
2023-01-26 10:33:11,193 INFO io.av360.maverick.insights.rmqstreams.RMQStreamSource [] - Running event consumer as stream source
2023-01-26 10:33:11,194 INFO io.av360.maverick.insights.rmqstreams.config.RMQStreamsConfig [] - Creating consumer for stream 'events'
2023-01-26 10:33:11,195 INFO io.av360.maverick.insights.rmqstreams.config.RMQStreamsConfig [] - Creating environment required to connect to a RabbitMQ Stream.
2023-01-26 10:33:11,195 DEBUG io.av360.maverick.insights.rmqstreams.config.StreamsClientFactory [] - Building environment
2023-01-26 10:33:11,195 INFO io.av360.maverick.insights.rmqstreams.config.StreamsClientFactory [] - Valid configuration for host 'rabbitmq'
....
2023-01-26 10:33:14,907 INFO XXX [] - Event of type 'crawled.source.channel' with source 's3://bucket-d9c5a56e-c4a9-4b48-82dc-04241cb2b72c/scraped/source/channel/channel_UCucD43ut3DEx6QDK2JOEI1w.json'
Attempting to read from s3
2023-01-26 10:33:15,303 DEBUG org.apache.flink.fs.s3.common.AbstractS3FileSystemFactory [] - Creating S3 file system backed by Hadoop s3a file system
2023-01-26 10:33:15,303 DEBUG org.apache.flink.fs.s3.common.AbstractS3FileSystemFactory [] - Loading Hadoop configuration for Hadoop s3a file system
2023-01-26 10:33:15,500 DEBUG org.apache.flink.fs.s3hadoop.common.HadoopConfigLoader [] - Adding Flink config entry for s3.secret-key as fs.s3a.secret-key to Hadoop config
2023-01-26 10:33:15,500 DEBUG org.apache.flink.fs.s3hadoop.common.HadoopConfigLoader [] - Adding Flink config entry for s3.endpoint as fs.s3a.endpoint to Hadoop config
2023-01-26 10:33:15,500 DEBUG org.apache.flink.fs.s3hadoop.common.HadoopConfigLoader [] - Adding Flink config entry for s3.access-key as fs.s3a.access-key to Hadoop config
2023-01-26 10:33:15,500 DEBUG org.apache.flink.fs.s3hadoop.common.HadoopConfigLoader [] - Adding Flink config entry for s3.path.style.access as fs.s3a.path.style.access to Hadoop config
2023-01-26 10:33:15,705 DEBUG org.apache.flink.fs.s3hadoop.S3FileSystemFactory [] - Using scheme s3://bucket-d9c5a56e-c4a9-4b48-82dc-04241cb2b72c/scraped/source/channel/channel_UCucD43ut3DEx6QDK2JOEI1w.json for s3a file system backing the S3 File System
a few hadoop exceptions (relevant?)
2023-01-26 10:33:15,800 DEBUG org.apache.hadoop.util.Shell [] - Failed to detect a valid hadoop home directory
java.io.FileNotFoundException: HADOOP_HOME and hadoop.home.dir are unset.
...
2023-01-26 10:33:16,108 DEBUG org.apache.hadoop.metrics2.impl.MetricsConfig [] - Could not locate file hadoop-metrics2-s3a-file-system.properties
org.apache.commons.configuration2.ex.ConfigurationException: Could not locate: org.apache.commons.configuration2.io.FileLocator@4ac69a5d[fileName=hadoop-metrics2-s3a-file-system.properties,basePath=<null>,sourceURL=,encoding=<null>,fileSystem=<null>,locationStrategy=<null>]
...
2023-01-26 10:33:16,111 DEBUG org.apache.hadoop.metrics2.impl.MetricsConfig [] - Could not locate file hadoop-metrics2.properties
org.apache.commons.configuration2.ex.ConfigurationException: Could not locate: org.apache.commons.configuration2.io.FileLocator@4faa9611[fileName=hadoop-metrics2.properties,basePath=<null>,sourceURL=,encoding=<null>,fileSystem=<null>,locationStrategy=<null>]
...
2023-01-26 10:33:16,508 DEBUG org.apache.hadoop.util.NativeCodeLoader [] - Failed to load native-hadoop with error: java.lang.UnsatisfiedLinkError: no hadoop in java.library.path: [/usr/java/packages/lib, /usr/lib64, /lib64, /lib, /usr/lib]
2023-01-26 10:33:16,508 DEBUG org.apache.hadoop.util.NativeCodeLoader [] - java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib
2023-01-26 10:33:16,508 WARN org.apache.hadoop.util.NativeCodeLoader [] - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2023-01-26 10:33:16,508 DEBUG org.apache.hadoop.util.PerformanceAdvisory [] - Falling back to shell based
...
2023-01-26 10:33:17,119 DEBUG org.apache.hadoop.security.ssl.DelegatingSSLSocketFactory [] - Initializing SSL Context to channel mode Default_JSSE
2023-01-26 10:33:17,710 DEBUG org.apache.hadoop.fs.s3a.impl.NetworkBinding [] - Unable to create class org.apache.hadoop.fs.s3a.impl.ConfigureShadedAWSSocketFactory, value of fs.s3a.ssl.channel.mode will be ignored
java.lang.NoClassDefFoundError: com/amazonaws/thirdparty/apache/http/conn/socket/ConnectionSocketFactory
Connection attempt
2023-01-26 10:33:17,997 DEBUG org.apache.hadoop.fs.s3a.DefaultS3ClientFactory [] - Creating endpoint configuration for "s3.openshift-storage.svc"
2023-01-26 10:33:17,998 DEBUG org.apache.hadoop.fs.s3a.DefaultS3ClientFactory [] - Endpoint URI = https://s3.openshift-storage.svc
2023-01-26 10:33:18,096 DEBUG org.apache.hadoop.fs.s3a.DefaultS3ClientFactory [] - Endpoint https://s3.openshift-storage.svc is not the default; parsing
2023-01-26 10:33:18,097 DEBUG org.apache.hadoop.fs.s3a.DefaultS3ClientFactory [] - Region for endpoint s3.openshift-storage.svc, URI https://s3.openshift-storage.svc is determined as openshift-storage
...
2023-01-26 10:33:18,806 DEBUG com.amazonaws.request [] - Sending Request: HEAD https://s3.openshift-storage.svc /bucket-d9c5a56e-c4a9-4b48-82dc-04241cb2b72c/scraped/source/channel/channel_UCucD43ut3DEx6QDK2JOEI1w.json Headers: (amz-sdk-invocation-id: xxx, Content-Type: application/octet-stream, User-Agent: Hadoop 3.3.2, aws-sdk-java/1.11.951 Linux/4.18.0-372.26.1.el8_6.x86_64 OpenJDK_64-Bit_Server_VM/11.0.17+8 java/11.0.17 vendor/Eclipse_Adoptium, )
...
2023-01-26 10:33:20,108 DEBUG org.apache.hadoop.fs.s3a.Invoker [] - Starting: open s3a://bucket-d9c5a56e-c4a9-4b48-82dc-04241cb2b72c/scraped/source/channel/channel_UCucD43ut3DEx6QDK2JOEI1w.json at 0
...
2023-01-26 10:33:19,598 DEBUG com.amazonaws.request [] - Received successful response: 200, AWS Request ID: ldcyiwlo-6w4j96-om3
Everything looks fine for now, last logs are
2023-01-26 10:33:24,299 INFO org.apache.flink.runtime.taskexecutor.TaskManagerServices [] - Temporary file directory '/tmp': total 119 GB, usable 48 GB (40.34% usable)
2023-01-26 10:33:24,299 DEBUG org.apache.flink.runtime.io.disk.FileChannelManagerImpl [] - FileChannelManager uses directory /tmp/flink-io-15f45aea-fa25-4f90-be7e-ad49e8722980 for spill files.
2023-01-26 10:33:24,299 INFO org.apache.flink.runtime.io.disk.iomanager.IOManager [] - Created a new FileChannelManager for spilling of task related data to disk (joins, sorting, ...). Used directories:
/tmp/flink-io-15f45aea-fa25-4f90-be7e-ad49e8722980
2023-01-26 10:33:24,301 DEBUG org.apache.flink.runtime.io.disk.FileChannelManagerImpl [] - FileChannelManager uses directory /tmp/flink-netty-shuffle-ff78f4af-d02b-412b-b305-414b570917a8 for spill files.
2023-01-26 10:33:24,301 INFO org.apache.flink.runtime.io.network.NettyShuffleServiceFactory [] - Created a new FileChannelManager for storing result partitions of BLOCKING shuffles. Used directories:
/tmp/flink-netty-shuffle-ff78f4af-d02b-412b-b305-414b570917a8
These are the last logs. From this point on, the job fails and the task manager is gone (and restarted through K8S).
My two questions are:
Can I tune the log levels to find out more (root and hadoop are on trace)?
What am I missing here?

How to override/increase Java logging level for a package?

I'd like to override the logging level for a specific package.
It works when the level is more restrictive, but it does not work when the level is less restrictive.
Here is an example:
package com.ice.foo;

public class Main {

    private static final java.util.logging.Logger JDK_LOGGER =
            java.util.logging.Logger.getLogger(Main.class.getName());

    public static void main(String[] args) {
        JDK_LOGGER.fine("Hello fine (jdk)...");
        JDK_LOGGER.info("Hello info (jdk)...");
        JDK_LOGGER.severe("Hello severe (jdk)...");
    }
}
If the specified package has a more restrictive level, it works:
handlers = java.util.logging.ConsoleHandler
.level = FINE
java.util.logging.ConsoleHandler.level = FINE
java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter
com.ice.level = SEVERE
It prints:
août 19, 2019 10:44:25 PM com.ice.foo.Main main
GRAVE: Hello severe (jdk)...
But if the specified package has a less restrictive level, it does not work as I was expecting:
handlers = java.util.logging.ConsoleHandler
.level = INFO
java.util.logging.ConsoleHandler.level = INFO
java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter
com.ice.level = FINE
It prints:
août 19, 2019 10:55:50 PM com.ice.foo.Main main
INFOS: Hello info (jdk)...
août 19, 2019 10:55:50 PM com.ice.foo.Main main
GRAVE: Hello severe (jdk)...
But because FINE is less restrictive than INFO, which in turn is less restrictive than SEVERE, I was expecting to see all 3 logs (fine, info and severe).
Where is my mistake?
Thx.
Your mistake is in not realizing that the logger (package) filter and the handler filter are applied independently, i.e. as if they were AND'ed.
The purpose of the ConsoleHandler.level filter is to reduce the console output while still allowing e.g. a file handler to log everything. It is mainly useful when you are logging to both the console and a file at the same time.
As such, the handler level cannot be overridden by a logger level.
The reason console output is usually filtered more aggressively than file output is that console output is relatively expensive, performance-wise. You don't want a large volume of output going to the console, so finer-grained log messages are usually filtered out.
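The AND'ing can be made concrete with a small self-contained sketch (the logger name com.example.demo, the class name, and the record-collecting handler are made up for illustration): the logger is set to FINE, the handler to INFO, and the handler still drops the FINE record even though the logger let it through.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

public class LevelDemo {

    // Runs the demo and returns the level names the handler actually accepted.
    static List<String> run() {
        Logger logger = Logger.getLogger("com.example.demo"); // hypothetical name
        logger.setUseParentHandlers(false);
        logger.setLevel(Level.FINE);   // logger filter: FINE and above pass

        List<String> captured = new ArrayList<>();
        Handler handler = new Handler() {
            @Override public void publish(LogRecord r) {
                // isLoggable applies the handler's own level filter
                if (isLoggable(r)) captured.add(r.getLevel().getName());
            }
            @Override public void flush() {}
            @Override public void close() {}
        };
        handler.setLevel(Level.INFO);  // handler filter: INFO and above pass
        logger.addHandler(handler);

        logger.fine("fine");     // passes the logger, filtered out by the handler
        logger.info("info");     // passes both filters
        logger.severe("severe"); // passes both filters
        return captured;
    }

    public static void main(String[] args) {
        System.out.println(run()); // prints [INFO, SEVERE]
    }
}
```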

How to fix logger randomly logging on the same line, even though it is set to start each log with a new line

I am using log4j to log events in Java code. I have it set to start each log entry on a new line, with the timestamp, thread, log level and the class where the log statement runs. So the configuration looks like this:
LoggerContext loggerContext = (LoggerContext) LoggerFactory.getILoggerFactory();
logger = loggerContext.getLogger("com.asdf");
logger.setAdditive(true);
PatternLayoutEncoder encoder = new PatternLayoutEncoder();
encoder.setContext(loggerContext);
encoder.setPattern("%-5level %d [%thread:%M:%caller{1}]: %message%n");
encoder.start();
cucumberAppender = new CucumberAppender();
cucumberAppender.setName("cucumber-appender");
cucumberAppender.setContext(loggerContext);
cucumberAppender.setScenario(scenario);
cucumberAppender.setEncoder(encoder);
cucumberAppender.start();
logger.addAppender(cucumberAppender);
loggerContext.start();
logger().info("*********************************************");
logger().info("* Starting Scenario - {}", scenario.getName());
logger().info("*********************************************\n");
}
@After
public void showScenarioResult(Scenario scenario) throws InterruptedException {
logger().info("**************************************************************");
logger().info("* {} Scenario - {} ", scenario.getStatus(), scenario.getName());
logger().info("**************************************************************\n");
cucumberAppender.writeToScenario();
cucumberAppender.stop();
logger.detachAppender(cucumberAppender);
logger.detachAndStopAllAppenders();
}
which most of the time outputs the log correctly, like so:
15:59:25.448 [main] INFO com.asdf.runner.steps.StepHooks -
********************************************* 15:59:25.449 [main] INFO com.asdf.runner.steps.StepHooks - * Starting Scenario - Check Cache 15:59:25.450 [main] INFO com.asdf.runner.steps.StepHooks -
********************************************* 15:59:25.558 [main] DEBUG org.cache2k.core.util.Log - New instance, using SLF4J logging 15:59:25.575 [main] INFO org.cache2k.core.Cache2kCoreProviderImpl - cache2k starting. version=1.0.1.Final, build=undefined, defaultImplementation=HeapCache 15:59:25.629 [main] DEBUG org.cache2k.CacheManager:default - open name=default, id=wvl973, classloaderId=6us14y
However, sometimes the next line of the logger is written on the above one, without using the new line, like below:
15:59:27.353 [main] INFO com.asdf.cache.CacheService - Creating a cache for [Kafka] service with specific types.15:59:27.354 [main] INFO com.asdf.runner.steps.StepHooks - **************************************************************
15:59:27.354 [main] INFO com.asdf.runner.steps.StepHooks - * PASSED Scenario - Check Cache
15:59:27.354 [main] INFO com.asdf.runner.steps.StepHooks - **************************************************************
As you can see, the first StepHooks line continues on the same line as the CacheService entry, which is unaesthetic.
What can I change so that the log always starts each entry on a new line, without exceptions like this?

Java Mail api, copy message sucessfully but email deleted from exchange server

Using the Java Mail API, we monitor the Inbox folder and process emails. If an error occurs while processing an email, we move it to an error folder.
If that is successful, we delete the email from the Inbox folder. Following is a snippet of mail debugging. It shows the copy as successful, but the email is never found in the error directory, and it is also deleted from the Inbox.
Why would this happen? And why would the Java Mail API report success even though the mail was not copied?
2013-10-04 14:25:20,886 [] [] [] INFO [monitorScheduler-1] monitor.EmailMonitor monitor.EmailMonitor (EmailMonitor.java:393) - Copy error message to error folder
2013-10-04 14:25:20,889 [] [] [] INFO [monitorScheduler-1] STDOUT util.LoggerStream (LoggerStream.java:156) - A10 COPY 1 Inbox/error
2013-10-04 14:25:20,896 [] [] [] INFO [monitorScheduler-1] STDOUT util.LoggerStream (LoggerStream.java:156) - A10 OK COPY completed.
2013-10-04 14:25:20,897 [] [] [] INFO [monitorScheduler-1] monitor.EmailMonitor monitor.EmailMonitor (EmailMonitor.java:400) - Mark message as deleted from monitored folder
2013-10-04 14:25:20,897 [] [] [] INFO [monitorScheduler-1] STDOUT util.LoggerStream (LoggerStream.java:156) - A11 STORE 1 +FLAGS (\Deleted)
2013-10-04 14:25:20,907 [] [] [] INFO [monitorScheduler-1] STDOUT util.LoggerStream (LoggerStream.java:156) - * 1 FETCH (FLAGS (\Seen \Deleted \Recent))
A11 OK STORE completed.
2013-10-04 14:25:20,907 [] [] [] INFO [monitorScheduler-1] monitor.EmailMonitor monitor.EmailMonitor (EmailMonitor.java:404) - Expunge the monitored folder
2013-10-04 14:25:20,908 [] [] [] INFO [monitorScheduler-1] STDOUT util.LoggerStream (LoggerStream.java:156) - A12 EXPUNGE
2013-10-04 14:25:20,922 [] [] [] INFO [monitorScheduler-1] STDOUT util.LoggerStream (LoggerStream.java:156) - * 1 EXPUNGE
* 0 EXISTS
A12 OK EXPUNGE completed.
It's your server that's reporting success.
Try using a different name for the error folder, not something nested under Inbox, in case that helps.

ServletContextListener.contextInitialized doesnt get called when context is initialised

I am creating a war file (progressReporter.war) and deploying it on Jetty 7.2.2.v20101205. I have a System.out.println in the contextInitialized method which I should see when Jetty starts up. I am starting Jetty using:
java -jar start.jar
The Java version is 1.6.
The same thing runs absolutely fine on Tomcat. For Jetty I have included:
jetty-client-7.2.2.v20101205.jar
jetty-continuation-7.2.2.v20101205.jar
jetty-http-7.2.2.v20101205.jar
jetty-io-7.2.2.v20101205.jar
jetty-servlets-7.2.2.v20101205.jar
jetty-util-7.2.2.v20101205.jar
Following is what I have in contextInitialized:
@Override
public void contextInitialized(ServletContextEvent servletContextEvent) {
ApplicationContext applicationContext = new ClassPathXmlApplicationContext(new String[] {"spring-http-config.xml", "test-spring-http-config.xml", "spring-ibatis.xml" });
System.out.println("setting attribute now ............... " + servletContextEvent.getServletContext());
}
Following is what I have in web.xml:
<listener>
<listener-class>org.springframework.web.context.ContextLoaderListener
</listener-class>
<listener-class>com.client.BatchProgressorContextListener
</listener-class>
</listener>
Can you please help me figure out what's going wrong here?
Following is the output when I start Jetty:
C:\bkup\trialLearning\jetty>java -jar start.jar
2011-01-01 20:04:10.510:INFO::jetty-7.2.2.v20101205
2011-01-01 20:04:10.525:INFO::Deployment monitor C:\bkup\trialLearning\jetty\webapps at interval 1
2011-01-01 20:04:10.525:INFO::Deployable added: C:\bkup\trialLearning\jetty\webapps\progressReporter.war
2011-01-01 20:04:10.666:INFO::Copying WEB-INF/lib jar:file:/C:/bkup/trialLearning/jetty/webapps/progressReporter.war!/WEB-INF/lib/ to C:\Documents and Settings\i143628\Local Settings\Temp\jetty-0.0.0.0-8080-progressReporter.war-_progressReporter-any-\webinf\WEB-INF\lib
2011-01-01 20:04:12.213:INFO:progressReporter:Initializing Spring root WebApplicationContext
0 [main] INFO org.springframework.web.context.ContextLoader - Root WebApplicationContext: initialization started
31 [main] INFO org.springframework.web.context.support.XmlWebApplicationContext - Refreshing Root WebApplicationContext: startup date [Sat Jan 01 20:04:12 IST 2011]; root of context hierarchy
93 [main] INFO org.springframework.beans.factory.xml.XmlBeanDefinitionReader - Loading XML bean definitions from class path resource [spring-http-config.xml]
203 [main] INFO org.springframework.beans.factory.xml.XmlBeanDefinitionReader - Loading XML bean definitions from class path resource [test-spring-http-config.xml]
218 [main] INFO org.springframework.beans.factory.xml.XmlBeanDefinitionReader - Loading XML bean definitions from class path resource [spring-ibatis.xml]
390 [main] INFO org.springframework.beans.factory.config.PropertyPlaceholderConfigurer - Loading properties file from class path resource [qpr-config.properties]
406 [main] INFO org.springframework.beans.factory.support.DefaultListableBeanFactory - Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@e66f56: defining beans [org.springframework.context.annotation.internalConfigurationAnnotationProcessor,org.springframework.context.annotation.internalAutowiredAnnotationProcessor,org.springframework.context.annotation.internalRequiredAnnotationProcessor,org.springframework.context.annotation.internalCommonAnnotationProcessor,batchProgressUpdater,batchProgressMetrics,progressReporterResultsQueue,cbbEventProcessorThread,timerEventProcessorThread,eventThreadManager,eventListener,metricsAggregator,southbeachDummyClient,cbbPricer,dummyCbb,processorThread,counterGenerater,imntInfoDao,imntStatsInfoDao,org.springframework.beans.factory.config.PropertyPlaceholderConfigurer@0,dataSource,sqlMapClient]; root of factory hierarchy
593 [main] INFO org.springframework.web.context.ContextLoader - Root WebApplicationContext: initialization completed in 593 ms
2011-01-01 20:04:12.947:INFO::Deployment monitor C:\bkup\trialLearning\jetty\contexts at interval 1
2011-01-01 20:04:12.947:INFO::Deployable added: C:\bkup\trialLearning\jetty\contexts\test.xml
2011-01-01 20:04:12.978:INFO::Extract jar:file:/C:/bkup/trialLearning/jetty/webapps/test.war!/ to C:\Documents and Settings\i143628\Local Settings\Temp\jetty-0.0.0.0-8080-test.war-_-any-\webapp
2011-01-01 20:04:13.572:INFO:org.eclipse.jetty.servlets.TransparentProxy:TransparentProxy @ /javadoc to http://download.eclipse.org/jetty/stable-7/apidocs
2011-01-01 20:04:13.572:INFO::Deployable added: C:\bkup\trialLearning\jetty\contexts\javadoc.xml
2011-01-01 20:04:13.588:INFO::Started SelectChannelConnector@0.0.0.0:8080
2011-01-01 20:12:59.369:INFO::Graceful shutdown SelectChannelConnector@0.0.0.0:8080
2011-01-01 20:12:59.447:INFO::Graceful shutdown o.e.j.w.WebAppContext{/progressReporter,[file:/C:/Documents%20and%20Settings/i143628/Local%20Settings/Temp/jetty-0.0.0.0-8080-progressReporter.war-_progressReporter-any-/webinf/, jar:file:/C:/bkup/trialLearning/jetty/webapps/progressReporter.war!/]},C:\bkup\trialLearning\jetty\webapps\progressReporter.war
2011-01-01 20:12:59.447:INFO::Graceful shutdown o.e.j.w.WebAppContext{/,file:/C:/Documents%20and%20Settings/i143628/Local%20Settings/Temp/jetty-0.0.0.0-8080-test.war-_-any-/webapp/},C:\bkup\trialLearning\jetty/webapps/test.war
2011-01-01 20:12:59.478:INFO::Graceful shutdown o.e.j.s.h.ContextHandler{/javadoc,file:/C:/bkup/trialLearning/jetty/javadoc}
2011-01-01 20:13:00.666:INFO:progressReporter:Closing Spring root WebApplicationContext
528453 [Thread-1] INFO org.springframework.web.context.support.XmlWebApplicationContext - Closing Root WebApplicationContext: startup date [Sat Jan 01 20:04:12 IST 2011]; root of context hierarchy
528453 [Thread-1] INFO org.springframework.beans.factory.support.DefaultListableBeanFactory - Destroying singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@e66f56: defining beans [org.springframework.context.annotation.internalConfigurationAnnotationProcessor,org.springframework.context.annotation.internalAutowiredAnnotationProcessor,org.springframework.context.annotation.internalRequiredAnnotationProcessor,org.springframework.context.annotation.internalCommonAnnotationProcessor,batchProgressUpdater,batchProgressMetrics,progressReporterResultsQueue,cbbEventProcessorThread,timerEventProcessorThread,eventThreadManager,eventListener,metricsAggregator,southbeachDummyClient,cbbPricer,dummyCbb,processorThread,counterGenerater,imntInfoDao,imntStatsInfoDao,org.springframework.beans.factory.config.PropertyPlaceholderConfigurer@0,dataSource,sqlMapClient]; root of factory hierarchy
Can you please tell me what I am doing wrong and what needs to be done to run this on Jetty?
Please let me know if you need any other details.
Thanks in advance.
A few things:
I can see in the log that the context is coming up:
93 [main] INFO org.springframework.beans.factory.xml.XmlBeanDefinitionReader - Loading XML bean definitions from class path resource [spring-http-config.xml]
203 [main] INFO org.springframework.beans.factory.xml.XmlBeanDefinitionReader - Loading XML bean definitions from class path resource [test-spring-http-config.xml]
218 [main] INFO org.springframework.beans.factory.xml.XmlBeanDefinitionReader - Loading XML bean definitions from class path resource [spring-ibatis.xml]
So your code is working.
You are writing to the console instead of writing to the log file; probably this is why you don't see it. Try using commons-logging to log your message.
Why do you have both ContextLoaderListener and BatchProgressorContextListener in web.xml? (I assume BatchProgressorContextListener is the one with the code you gave at the beginning of the post, right?)
You probably need only one context listener to load your context, and ContextLoaderListener will be sufficient.
Remove BatchProgressorContextListener from web.xml and instead add:
<context-param>
    <param-name>contextConfigLocation</param-name>
    <param-value>
        classpath:spring-http-config.xml
        classpath:test-spring-http-config.xml
        classpath:spring-ibatis.xml
    </param-value>
</context-param>
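For completeness, the resulting web.xml section would then keep a single ContextLoaderListener alongside that context-param, something like this (a sketch assembled from the question and the suggestion above; the Spring config file names are taken from the question):

```xml
<listener>
    <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
</listener>

<context-param>
    <param-name>contextConfigLocation</param-name>
    <param-value>
        classpath:spring-http-config.xml
        classpath:test-spring-http-config.xml
        classpath:spring-ibatis.xml
    </param-value>
</context-param>
```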
