I have a job that receives events containing an S3 link. It attempts to load the referenced resource using the following snippet:
// value of s3_source is "s3://bucket_id/path/to/object.json"
List<String> collect = ExecutionEnvironment.getExecutionEnvironment()
        .readTextFile(s3_source.toString())
        .collect();
Flink is configured accordingly in flink-conf.yaml:
s3.access-key: XXX
s3.secret-key: XXX
s3.endpoint: s3.openshift-storage.svc
s3.path.style.access: true
The library flink-s3-fs-hadoop-1.16.0.jar is in the path /opt/flink/plugins/flink-s3-fs-hadoop. I had some issues setting up the self-signed certificates (see this Gist for my config), but that part seems to be working.
When starting the job through the JobManager's web UI, I get the following logs.
Job is starting
2023-01-26 10:33:09,891 INFO org.apache.flink.runtime.taskexecutor.TaskExecutor [] - Receive slot request f9f291f0e4b74471e59a74602212060b for job c62504dec97d185a3e86fc390256e3f9 from resource manager with leader id 00000000000000000000000000000000.
2023-01-26 10:33:09,894 DEBUG org.apache.flink.runtime.memory.MemoryManager [] - Initialized MemoryManager with total memory size 178956973 and page size 32768.
2023-01-26 10:33:09,895 INFO org.apache.flink.runtime.taskexecutor.TaskExecutor [] - Allocated slot for f9f291f0e4b74471e59a74602212060b.
2023-01-26 10:33:09,896 INFO org.apache.flink.runtime.taskexecutor.DefaultJobLeaderService [] - Add job c62504dec97d185a3e86fc390256e3f9 for job leader monitoring.
2023-01-26 10:33:09,897 DEBUG org.apache.flink.runtime.taskexecutor.DefaultJobLeaderService [] - New leader information for job c62504dec97d185a3e86fc390256e3f9. Address: akka.tcp://flink@flink-jobmanager:6123/user/rpc/jobmanager_10, leader id: 00000000000000000000000000000000.
2023-01-26 10:33:09,897 INFO org.apache.flink.runtime.taskexecutor.DefaultJobLeaderService [] - Try to register at job manager akka.tcp://flink@flink-jobmanager:6123/user/rpc/jobmanager_10 with leader id 00000000-0000-0000-0000-000000000000.
2023-01-26 10:33:09,898 DEBUG org.apache.flink.runtime.rpc.akka.AkkaRpcService [] - Try to connect to remote RPC endpoint with address akka.tcp://flink@flink-jobmanager:6123/user/rpc/jobmanager_10. Returning a org.apache.flink.runtime.jobmaster.JobMasterGateway gateway.
2023-01-26 10:33:09,910 INFO org.apache.flink.runtime.taskexecutor.DefaultJobLeaderService [] - Resolved JobManager address, beginning registration
2023-01-26 10:33:09,910 DEBUG org.apache.flink.runtime.taskexecutor.DefaultJobLeaderService [] - Registration at JobManager attempt 1 (timeout=100ms)
2023-01-26 10:33:09,991 DEBUG org.apache.flink.runtime.taskexecutor.DefaultJobLeaderService [] - Registration with JobManager at akka.tcp://flink@flink-jobmanager:6123/user/rpc/jobmanager_10 was successful.
2023-01-26 10:33:09,993 INFO org.apache.flink.runtime.taskexecutor.DefaultJobLeaderService [] - Successful registration at job manager akka.tcp://flink@flink-jobmanager:6123/user/rpc/jobmanager_10 for job c62504dec97d185a3e86fc390256e3f9.
2023-01-26 10:33:09,993 INFO org.apache.flink.runtime.taskexecutor.TaskExecutor [] - Establish JobManager connection for job c62504dec97d185a3e86fc390256e3f9.
2023-01-26 10:33:09,995 INFO org.apache.flink.runtime.taskexecutor.TaskExecutor [] - Offer reserved slots to the leader of job c62504dec97d185a3e86fc390256e3f9.
2023-01-26 10:33:10,011 INFO org.apache.flink.runtime.taskexecutor.slot.TaskSlotTableImpl [] - Activate slot f9f291f0e4b74471e59a74602212060b.
Event is received from RabbitMQ Stream
2023-01-26 10:33:11,193 INFO io.av360.maverick.insights.rmqstreams.RMQStreamSource [] - Running event consumer as stream source
2023-01-26 10:33:11,194 INFO io.av360.maverick.insights.rmqstreams.config.RMQStreamsConfig [] - Creating consumer for stream 'events'
2023-01-26 10:33:11,195 INFO io.av360.maverick.insights.rmqstreams.config.RMQStreamsConfig [] - Creating environment required to connect to a RabbitMQ Stream.
2023-01-26 10:33:11,195 DEBUG io.av360.maverick.insights.rmqstreams.config.StreamsClientFactory [] - Building environment
2023-01-26 10:33:11,195 INFO io.av360.maverick.insights.rmqstreams.config.StreamsClientFactory [] - Valid configuration for host 'rabbitmq'
....
2023-01-26 10:33:14,907 INFO XXX [] - Event of type 'crawled.source.channel' with source 's3://bucket-d9c5a56e-c4a9-4b48-82dc-04241cb2b72c/scraped/source/channel/channel_UCucD43ut3DEx6QDK2JOEI1w.json'
Attempting to read from S3
2023-01-26 10:33:15,303 DEBUG org.apache.flink.fs.s3.common.AbstractS3FileSystemFactory [] - Creating S3 file system backed by Hadoop s3a file system
2023-01-26 10:33:15,303 DEBUG org.apache.flink.fs.s3.common.AbstractS3FileSystemFactory [] - Loading Hadoop configuration for Hadoop s3a file system
2023-01-26 10:33:15,500 DEBUG org.apache.flink.fs.s3hadoop.common.HadoopConfigLoader [] - Adding Flink config entry for s3.secret-key as fs.s3a.secret-key to Hadoop config
2023-01-26 10:33:15,500 DEBUG org.apache.flink.fs.s3hadoop.common.HadoopConfigLoader [] - Adding Flink config entry for s3.endpoint as fs.s3a.endpoint to Hadoop config
2023-01-26 10:33:15,500 DEBUG org.apache.flink.fs.s3hadoop.common.HadoopConfigLoader [] - Adding Flink config entry for s3.access-key as fs.s3a.access-key to Hadoop config
2023-01-26 10:33:15,500 DEBUG org.apache.flink.fs.s3hadoop.common.HadoopConfigLoader [] - Adding Flink config entry for s3.path.style.access as fs.s3a.path.style.access to Hadoop config
2023-01-26 10:33:15,705 DEBUG org.apache.flink.fs.s3hadoop.S3FileSystemFactory [] - Using scheme s3://bucket-d9c5a56e-c4a9-4b48-82dc-04241cb2b72c/scraped/source/channel/channel_UCucD43ut3DEx6QDK2JOEI1w.json for s3a file system backing the S3 File System
A few Hadoop exceptions (relevant?)
2023-01-26 10:33:15,800 DEBUG org.apache.hadoop.util.Shell [] - Failed to detect a valid hadoop home directory
java.io.FileNotFoundException: HADOOP_HOME and hadoop.home.dir are unset.
...
2023-01-26 10:33:16,108 DEBUG org.apache.hadoop.metrics2.impl.MetricsConfig [] - Could not locate file hadoop-metrics2-s3a-file-system.properties
org.apache.commons.configuration2.ex.ConfigurationException: Could not locate: org.apache.commons.configuration2.io.FileLocator@4ac69a5d[fileName=hadoop-metrics2-s3a-file-system.properties,basePath=<null>,sourceURL=,encoding=<null>,fileSystem=<null>,locationStrategy=<null>]
...
2023-01-26 10:33:16,111 DEBUG org.apache.hadoop.metrics2.impl.MetricsConfig [] - Could not locate file hadoop-metrics2.properties
org.apache.commons.configuration2.ex.ConfigurationException: Could not locate: org.apache.commons.configuration2.io.FileLocator@4faa9611[fileName=hadoop-metrics2.properties,basePath=<null>,sourceURL=,encoding=<null>,fileSystem=<null>,locationStrategy=<null>]
...
2023-01-26 10:33:16,508 DEBUG org.apache.hadoop.util.NativeCodeLoader [] - Failed to load native-hadoop with error: java.lang.UnsatisfiedLinkError: no hadoop in java.library.path: [/usr/java/packages/lib, /usr/lib64, /lib64, /lib, /usr/lib]
2023-01-26 10:33:16,508 DEBUG org.apache.hadoop.util.NativeCodeLoader [] - java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib
2023-01-26 10:33:16,508 WARN org.apache.hadoop.util.NativeCodeLoader [] - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2023-01-26 10:33:16,508 DEBUG org.apache.hadoop.util.PerformanceAdvisory [] - Falling back to shell based
...
2023-01-26 10:33:17,119 DEBUG org.apache.hadoop.security.ssl.DelegatingSSLSocketFactory [] - Initializing SSL Context to channel mode Default_JSSE
2023-01-26 10:33:17,710 DEBUG org.apache.hadoop.fs.s3a.impl.NetworkBinding [] - Unable to create class org.apache.hadoop.fs.s3a.impl.ConfigureShadedAWSSocketFactory, value of fs.s3a.ssl.channel.mode will be ignored
java.lang.NoClassDefFoundError: com/amazonaws/thirdparty/apache/http/conn/socket/ConnectionSocketFactory
Connection attempt
2023-01-26 10:33:17,997 DEBUG org.apache.hadoop.fs.s3a.DefaultS3ClientFactory [] - Creating endpoint configuration for "s3.openshift-storage.svc"
2023-01-26 10:33:17,998 DEBUG org.apache.hadoop.fs.s3a.DefaultS3ClientFactory [] - Endpoint URI = https://s3.openshift-storage.svc
2023-01-26 10:33:18,096 DEBUG org.apache.hadoop.fs.s3a.DefaultS3ClientFactory [] - Endpoint https://s3.openshift-storage.svc is not the default; parsing
2023-01-26 10:33:18,097 DEBUG org.apache.hadoop.fs.s3a.DefaultS3ClientFactory [] - Region for endpoint s3.openshift-storage.svc, URI https://s3.openshift-storage.svc is determined as openshift-storage
...
2023-01-26 10:33:18,806 DEBUG com.amazonaws.request [] - Sending Request: HEAD https://s3.openshift-storage.svc /bucket-d9c5a56e-c4a9-4b48-82dc-04241cb2b72c/scraped/source/channel/channel_UCucD43ut3DEx6QDK2JOEI1w.json Headers: (amz-sdk-invocation-id: xxx, Content-Type: application/octet-stream, User-Agent: Hadoop 3.3.2, aws-sdk-java/1.11.951 Linux/4.18.0-372.26.1.el8_6.x86_64 OpenJDK_64-Bit_Server_VM/11.0.17+8 java/11.0.17 vendor/Eclipse_Adoptium, )
...
2023-01-26 10:33:20,108 DEBUG org.apache.hadoop.fs.s3a.Invoker [] - Starting: open s3a://bucket-d9c5a56e-c4a9-4b48-82dc-04241cb2b72c/scraped/source/channel/channel_UCucD43ut3DEx6QDK2JOEI1w.json at 0
...
2023-01-26 10:33:19,598 DEBUG com.amazonaws.request [] - Received successful response: 200, AWS Request ID: ldcyiwlo-6w4j96-om3
Everything looks fine so far; the last logs are:
2023-01-26 10:33:24,299 INFO org.apache.flink.runtime.taskexecutor.TaskManagerServices [] - Temporary file directory '/tmp': total 119 GB, usable 48 GB (40.34% usable)
2023-01-26 10:33:24,299 DEBUG org.apache.flink.runtime.io.disk.FileChannelManagerImpl [] - FileChannelManager uses directory /tmp/flink-io-15f45aea-fa25-4f90-be7e-ad49e8722980 for spill files.
2023-01-26 10:33:24,299 INFO org.apache.flink.runtime.io.disk.iomanager.IOManager [] - Created a new FileChannelManager for spilling of task related data to disk (joins, sorting, ...). Used directories:
/tmp/flink-io-15f45aea-fa25-4f90-be7e-ad49e8722980
2023-01-26 10:33:24,301 DEBUG org.apache.flink.runtime.io.disk.FileChannelManagerImpl [] - FileChannelManager uses directory /tmp/flink-netty-shuffle-ff78f4af-d02b-412b-b305-414b570917a8 for spill files.
2023-01-26 10:33:24,301 INFO org.apache.flink.runtime.io.network.NettyShuffleServiceFactory [] - Created a new FileChannelManager for storing result partitions of BLOCKING shuffles. Used directories:
/tmp/flink-netty-shuffle-ff78f4af-d02b-412b-b305-414b570917a8
These are the last logs. From this point on, the job fails and the TaskManager is gone (and is restarted through Kubernetes).
My two questions are:
Can I tune the log levels to find out more (root and Hadoop are already on TRACE; a sketch of those settings follows below)?
What am I missing here?
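For reference, this is roughly how those levels are set in conf/log4j.properties (a sketch assuming Flink 1.16's default Log4j 2 properties format; the logger names are illustrative, not copied verbatim from my config):
rootLogger.level = TRACE
logger.hadoop.name = org.apache.hadoop
logger.hadoop.level = TRACE
# the AWS SDK request/response lines shown above come from com.amazonaws
logger.aws.name = com.amazonaws
logger.aws.level = DEBUG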
Related
The route loadfile starts automatically when I start the main class. On an exception, when the process should finish, it starts loadfile again and again.
loadfile should only be started from the timer, which then calls the loadfile route, but loadfile is starting both independently and from the timer.
CamelContext context = new DefaultCamelContext(sr);
try {
    context.addRoutes(new RouteBuilder() {
        @Override
        public void configure() throws Exception {
            onException(Exception.class)
                    .log(LoggingLevel.INFO, "Extype:${exception.message}")
                    .stop();
            from("timer://alertstrigtimer?period=60s&repeatCount=1")
                    .startupOrder(1)
                    .log(LoggingLevel.INFO, "*******************************Job-Alert-System: Started: alertstrigtimer******************************")
                    .to("direct:loadFile").stop();
            from("direct:loadFile").routeId("loadfile")
                    .log(LoggingLevel.INFO, "*******************************Job-Alert-System: Started: direct:loadFile******************************")
                    .from(getTriggerFileURI(getWorkFilePath(), getWorkFileName())).choice()
                    // ... (rest of the route elided in the original)
                    ;
        }
    });
    context.start();
    Thread.sleep(40000);
} catch (Exception e) {
    e.printStackTrace();
}
Following is the log:
[main] INFO org.apache.camel.impl.DefaultCamelContext - Apache Camel 2.21.1 (CamelContext: camel-1) is starting
[main] INFO org.apache.camel.management.ManagedManagementStrategy - JMX is enabled
[main] INFO org.apache.camel.impl.converter.DefaultTypeConverter - Type converters loaded (core: 194, classpath: 14)
[main] INFO org.apache.camel.impl.DefaultCamelContext - StreamCaching is not in use. If using streams then its recommended to enable stream caching. See more details at http://camel.apache.org/stream-caching.html
[main] INFO org.apache.camel.impl.DefaultCamelContext - Route: route1 started and consuming from: timer://alertstrigtimer?period=60s&repeatCount=1
[main] INFO org.apache.camel.impl.DefaultCamelContext - Skipping starting of route loadfile as its configured with autoStartup=false
[main] INFO org.apache.camel.impl.DefaultCamelContext - Route: loadDataAndAlerts started and consuming from: direct://loadDataAndAlerts
[main] INFO org.apache.camel.impl.DefaultCamelContext - Total 4 routes, of which 2 are started
[main] INFO org.apache.camel.impl.DefaultCamelContext - Apache Camel 2.21.1 (CamelContext: camel-1) started in 0.761 seconds
[Camel (camel-1) thread #1 - timer://alertstrigtimer] INFO route1 - *******************************Job-Alert-System: Started: alertstrigtimer******************************
[Camel (camel-1) thread #2 - timer://alertstrigtimer] INFO loadfile - *******************************Job-Alert-System: Started: direct:loadFile******************************
[Camel (camel-1) thread #1 - file://null] INFO loadfile - *******************************Job-Alert-System: Started: direct:loadFile******************************
The problem could be caused by the line .from(getTriggerFileURI(getWorkFilePath(), getWorkFileName())) in the loadfile route. A route with multiple from endpoints is known as Multiple Inputs, and this pattern was removed in Camel 3.x.
From the Red Hat documentation:
from("URI1").from("URI2").from("URI3").to("DestinationUri");
..., exchanges from each of the input endpoints,
URI1, URI2, and URI3, are processed independently of each other and in
separate threads. In fact, you can think of the preceding route as
being equivalent to the following three separate routes:
from("URI1").to("DestinationUri");
from("URI2").to("DestinationUri");
from("URI3").to("DestinationUri");
Rather than using multiple from endpoints (an extra independent input), try the content enricher pattern (pollEnrich for the file component), as sketched below.
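A minimal sketch of that change, keeping the question's route structure and helper methods (the 10-second poll timeout is an assumption, not taken from the original code):
from("direct:loadFile").routeId("loadfile")
        .log(LoggingLevel.INFO, "Job-Alert-System: Started: direct:loadFile")
        // pollEnrich consumes a single file from the endpoint instead of
        // registering a second, independently started consumer on the route
        .pollEnrich(getTriggerFileURI(getWorkFilePath(), getWorkFileName()), 10000)
        .choice()
        // ... same branches as in the original route
        .end();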
I am trying to test a REST API using the Spring 5 WebClient with the Jetty connector. I am getting data from the API call, but the program continues to run even after main completes execution. How do I resolve this? What configuration is needed so that the Jetty connector stops after main completes its execution?
My connector initialization code:
SslContextFactory.Client sslContextFactory = new SslContextFactory.Client();
HttpClient httpClient = new HttpClient(sslContextFactory);
httpClient.setIdleTimeout(DefaultIdleTimeout);
ClientHttpConnector clientConnector = new JettyClientHttpConnector(httpClient, jettyResourceFactory);
webClient = WebClient.builder()
        .baseUrl(getBaseUrl())
        .clientConnector(clientConnector)
        .build();
Using the WebClient in the main class:
webClient.get().uri("/getUri")
        .exchange()
        .flatMap(response -> response.bodyToMono(String.class))
        .subscribe(di -> {
            System.out.println(di);
        }, error -> {
            error.printStackTrace();
        }, () -> {
            System.out.println("Execution complete");
        });
I get the following in the log:
13:39:11.225 [HttpClient@1165b38-scheduler-1] DEBUG org.eclipse.jetty.io.AbstractConnection - HttpConnectionOverHTTP@4eb423a5::DecryptedEndPoint@4345fb54{<hostname>/<targetIp>:443<->/<sourceIp>:65106,CLOSED,fill=-,flush=-,to=15016/15000} onFillInterestedFailed {}
13:39:11.225 [HttpClient@1165b38-scheduler-1] DEBUG org.eclipse.jetty.io.ManagedSelector - Wakeup ManagedSelector@75ed9710{STARTED} id=1 keys=0 selected=0 updates=0
13:39:11.226 [HttpClient@1165b38-24] DEBUG org.eclipse.jetty.io.ManagedSelector - Selector sun.nio.ch.WindowsSelectorImpl@3d003adb woken with none selected
13:39:11.226 [HttpClient@1165b38-scheduler-1] DEBUG org.eclipse.jetty.util.thread.QueuedThreadPool - queue org.eclipse.jetty.io.ManagedSelector$DestroyEndPoint@74c9b13c startThread=0
13:39:11.226 [HttpClient@1165b38-24] DEBUG org.eclipse.jetty.io.ManagedSelector - Selector sun.nio.ch.WindowsSelectorImpl@3d003adb woken up from select, 0/0/0 selected
13:39:11.226 [HttpClient@1165b38-scheduler-1] DEBUG org.eclipse.jetty.io.FillInterest - onClose FillInterest@52ac96eb{null}
13:39:11.226 [HttpClient@1165b38-21] DEBUG org.eclipse.jetty.util.thread.QueuedThreadPool - run org.eclipse.jetty.io.ManagedSelector$DestroyEndPoint@74c9b13c in QueuedThreadPool[HttpClient@1165b38]@45efc20d{STARTED,8<=8<=200,i=1,r=8,q=0}[ReservedThreadExecutor@4bef0fe3{s=2/8,p=0}]
13:39:11.226 [HttpClient@1165b38-scheduler-1] DEBUG org.eclipse.jetty.client.http.HttpConnectionOverHTTP - Closed HttpConnectionOverHTTP@4eb423a5::DecryptedEndPoint@4345fb54{<hostname>/<targetIp>:443<->/<sourceIp>:65106,CLOSED,fill=-,flush=-,to=15017/15000}
13:39:11.226 [HttpClient@1165b38-24] DEBUG org.eclipse.jetty.io.ManagedSelector - Selector sun.nio.ch.WindowsSelectorImpl@3d003adb processing 0 keys, 0 updates
13:39:11.226 [HttpClient@1165b38-24] DEBUG org.eclipse.jetty.io.ManagedSelector - updateable 0
13:39:11.226 [HttpClient@1165b38-24] DEBUG org.eclipse.jetty.io.ManagedSelector - updates 0
13:39:11.226 [HttpClient@1165b38-24] DEBUG org.eclipse.jetty.io.ManagedSelector - Selector sun.nio.ch.WindowsSelectorImpl@3d003adb waiting with 0 keys
13:39:11.226 [HttpClient@1165b38-21] DEBUG org.eclipse.jetty.io.ManagedSelector - Destroyed SocketChannelEndPoint@7a9db40c{<hostname>/<targetIp>:443<->/<sourceIp>:65106,CLOSED,fill=-,flush=-,to=0/15000}{io=0/0,kio=-1,kro=-1}->SslConnection@53046985{NEED_UNWRAP,eio=-1/-1,di=-1,fill=IDLE,flush=IDLE}~>DecryptedEndPoint@4345fb54{<hostname>/<targetIp>:443<->/<sourceIp>:65106,CLOSED,fill=-,flush=-,to=15017/15000}=>HttpConnectionOverHTTP@4eb423a5(l:/<sourceIp>:65106 <-> r:<hostname>/<targetIp>:443,closed=true)=>HttpChannelOverHTTP@7dafb76e(exchange=null)[send=HttpSenderOverHTTP@4491419(req=QUEUED,snd=COMPLETED,failure=null)[HttpGenerator@22805291{s=START}],recv=HttpReceiverOverHTTP@52385f05(rsp=IDLE,failure=null)[HttpParser{s=START,0 of -1}]]
13:39:11.227 [HttpClient@1165b38-21] DEBUG org.eclipse.jetty.io.AbstractConnection - onClose HttpConnectionOverHTTP@4eb423a5::DecryptedEndPoint@4345fb54{<hostname>/<targetIp>:443<->/<sourceIp>:65106,CLOSED,fill=-,flush=-,to=15018/15000}
13:39:11.227 [HttpClient@1165b38-21] DEBUG org.eclipse.jetty.io.AbstractConnection - onClose SslConnection@53046985::SocketChannelEndPoint@7a9db40c{<hostname>/<targetIp>:443<->/<sourceIp>:65106,CLOSED,fill=-,flush=-,to=0/15000}{io=0/0,kio=-1,kro=-1}->SslConnection@53046985{NEED_UNWRAP,eio=-1/-1,di=-1,fill=IDLE,flush=IDLE}~>DecryptedEndPoint@4345fb54{<hostname>/<targetIp>:443<->/<sourceIp>:65106,CLOSED,fill=-,flush=-,to=15018/15000}=>HttpConnectionOverHTTP@4eb423a5(l:/<sourceIp>:65106 <-> r:<hostname>/<targetIp>:443,closed=true)=>HttpChannelOverHTTP@7dafb76e(exchange=null)[send=HttpSenderOverHTTP@4491419(req=QUEUED,snd=COMPLETED,failure=null)[HttpGenerator@22805291{s=START}],recv=HttpReceiverOverHTTP@52385f05(rsp=IDLE,failure=null)[HttpParser{s=START,0 of -1}]]
13:39:11.227 [HttpClient@1165b38-21] DEBUG org.eclipse.jetty.util.thread.QueuedThreadPool - ran org.eclipse.jetty.io.ManagedSelector$DestroyEndPoint@74c9b13c in QueuedThreadPool[HttpClient@1165b38]@45efc20d{STARTED,8<=8<=200,i=1,r=8,q=0}[ReservedThreadExecutor@4bef0fe3{s=2/8,p=0}]
and I continue to get log output in the console while the program continues to run.
Taken from the official docs:
https://docs.spring.io/spring-framework/docs/current/spring-framework-reference/web-reactive.html#webflux-client-builder-jetty
You can share resources between multiple instances of the Jetty client (and server) and ensure that the resources are shut down when the Spring ApplicationContext is closed by declaring a Spring-managed bean of type JettyResourceFactory
Your JettyClientHttpConnector should be instantiated with a jettyResourceFactory bean, whose lifecycle then ends when the test's application context is closed.
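Following the pattern from those docs, a minimal sketch of that wiring (the class name WebClientConfig and the bean names are illustrative assumptions, not from the question):
import org.eclipse.jetty.client.HttpClient;
import org.eclipse.jetty.util.ssl.SslContextFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.http.client.reactive.ClientHttpConnector;
import org.springframework.http.client.reactive.JettyClientHttpConnector;
import org.springframework.http.client.reactive.JettyResourceFactory;
import org.springframework.web.reactive.function.client.WebClient;

@Configuration
public class WebClientConfig {

    // Spring manages this bean's lifecycle: Jetty's executor, scheduler and
    // byte buffer pool are stopped when the ApplicationContext closes
    @Bean
    public JettyResourceFactory jettyResourceFactory() {
        return new JettyResourceFactory();
    }

    @Bean
    public WebClient webClient(JettyResourceFactory resourceFactory) {
        HttpClient httpClient = new HttpClient(new SslContextFactory.Client());
        ClientHttpConnector connector = new JettyClientHttpConnector(httpClient, resourceFactory);
        return WebClient.builder().clientConnector(connector).build();
    }
}
Outside a Spring context (for example in a plain main method), the equivalent cleanup is to stop the Jetty HttpClient yourself once you are done with it, so its threads do not keep the JVM alive.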
I am facing a weird issue. When I shut down Tomcat for the first time on a given day, it overwrites the log file contents. On the second or any subsequent restart I don't face that issue.
I am seeing the following entries in the log on Tomcat shutdown:
23:08:03,390 [] [] INFO XmlWebApplicationContext:873 - Closing Root WebApplicationContext: startup date [Wed Apr 29 23:47:05 BST 2015]; root of context hierarchy
23:08:03,397 [] [] INFO ThreadPoolTaskExecutor:203 - Shutting down ExecutorService 'org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor@1d7b51e8'
23:11:33,880 [] [] [] INFO PropertiesFactoryBean:172 - Loading properties file from class path resource [apppname/application.properties]
23:11:41,413 [] [] [] INFO Reflections:238 - Reflections took 5894 ms to scan 112 urls, producing 5518 keys and 32092 values
23:11:42,242 [] [] [] INFO ThreadPoolTaskExecutor:165 - Initializing ExecutorService 'org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor@28a50da4'
23:11:42,596 [] [] [] INFO ContextLoader:325 - Root WebApplicationContext: initialization completed in 11465 ms
23:11:48,525 [] [] [] INFO PropertiesFactoryBean:172 - Loading properties file from class path resource [apppname/application.properties]
23:11:55,130 [] [] [] INFO Reflections:238 - Reflections took 5765 ms to scan 112 urls, producing 5518 keys and 32092 values
23:11:55,807 [] [] [] INFO ThreadPoolTaskExecutor:165 - Initializing ExecutorService 'org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor@1a46a171'
23:11:56,081 [] [] [] INFO ContextLoader:325 - Root WebApplicationContext: initialization completed in 9491 ms
23:12:01,469 [] [] [] INFO PropertiesFactoryBean:172 - Loading properties file from class path resource [apppname/application.properties]
23:12:08,106 [] [] [] INFO Reflections:238 - Reflections took 5757 ms to scan 112 urls, producing 5518 keys and 32092 values
23:12:08,793 [] [] [] INFO ThreadPoolTaskExecutor:165 - Initializing ExecutorService 'org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor@7213bc54'
23:12:09,062 [] [] [] INFO ContextLoader:325 - Root WebApplicationContext: initialization completed in 9260 ms
Log configuration
log4j.rootLogger=INFO, file
log4j.appender.file=org.apache.log4j.DailyRollingFileAppender
log4j.appender.file.File=/logs/logfilename.log
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{ABSOLUTE} %5p %c{1}:%L - %m%n
What can be the possible reason?
I have the same log4j configuration for other applications, and they work perfectly fine. It looks like Tomcat is somehow writing logs to the application log instead of the catalina log.
It happens only on the first restart of the day, and only when the log level is set to INFO or DEBUG, not ERROR.
Use the log4j Append property; by default it should be true, though:
log4j.appender.file.Append=true
I also see you are using a rolling appender, but it's not in your root logger:
log4j.rootLogger=INFO, file, RollingAppender
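For reference, a sketch of the question's configuration with Append set explicitly (the appender name file is taken from the question; Append defaults to true for FileAppender and its subclasses, so this mostly makes the default visible):
log4j.rootLogger=INFO, file
log4j.appender.file=org.apache.log4j.DailyRollingFileAppender
log4j.appender.file.File=/logs/logfilename.log
log4j.appender.file.Append=true
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{ABSOLUTE} %5p %c{1}:%L - %m%n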
I'm using QuickFIX/J version 1.5. I set the logging level to INFO in my log4j.properties file like this:
log4j.logger.quickfixj.msg.incoming=INFO
log4j.logger.quickfixj.msg.outgoing=INFO
log4j.logger.quickfixj.event=INFO
But in the application logs I still see DEBUG logs from QuickFIX/J:
27-11-2014 10:20:34.172 8372 [SocketConnectorIoProcessor-0.0] DEBUG (FIXMessageDecoder.java:157) detected header: pos=0,lim=1695,rem=1695,offset=0,state=1
27-11-2014 10:20:34.172 8372 [SocketConnectorIoProcessor-0.0] DEBUG (FIXMessageDecoder.java:176) body length = 1671: pos=0,lim=1695,rem=1695,offset=17,state=3
27-11-2014 10:20:34.172 8372 [SocketConnectorIoProcessor-0.0] DEBUG (FIXMessageDecoder.java:200) message body found: pos=0,lim=1695,rem=1695,offset=1688,state=4
27-11-2014 10:20:34.172 8372 [SocketConnectorIoProcessor-0.0] DEBUG (FIXMessageDecoder.java:207) found checksum: pos=0,lim=1695,rem=1695,offset=1688,state=4
27-11-2014 10:20:34.173 8373 [SocketConnectorIoProcessor-0.0] DEBUG (FIXMessageDecoder.java:226) parsed message: pos=1695,lim=1695,rem=0,offset=1695,state=4
Why does it ignore the level settings in log4j.properties?
Thanks
Using the JavaMail API, we monitor the Inbox folder and process emails. If an error occurs while processing an email, we move it to an error folder.
If that is successful, we delete the email from the Inbox folder. Following is a snippet of the mail debug output. It shows the COPY as successful, but the email is never found in the error folder, and it is also deleted from the Inbox.
Why would this happen? Also, why would the JavaMail API report success even though the mail is not copied?
2013-10-04 14:25:20,886 [] [] [] INFO [monitorScheduler-1] monitor.EmailMonitor monitor.EmailMonitor (EmailMonitor.java:393) - Copy error message to error folder
2013-10-04 14:25:20,889 [] [] [] INFO [monitorScheduler-1] STDOUT util.LoggerStream (LoggerStream.java:156) - A10 COPY 1 Inbox/error
2013-10-04 14:25:20,896 [] [] [] INFO [monitorScheduler-1] STDOUT util.LoggerStream (LoggerStream.java:156) - A10 OK COPY completed.
2013-10-04 14:25:20,897 [] [] [] INFO [monitorScheduler-1] monitor.EmailMonitor monitor.EmailMonitor (EmailMonitor.java:400) - Mark message as deleted from monitored folder
2013-10-04 14:25:20,897 [] [] [] INFO [monitorScheduler-1] STDOUT util.LoggerStream (LoggerStream.java:156) - A11 STORE 1 +FLAGS (\Deleted)
2013-10-04 14:25:20,907 [] [] [] INFO [monitorScheduler-1] STDOUT util.LoggerStream (LoggerStream.java:156) - * 1 FETCH (FLAGS (\Seen \Deleted \Recent))
A11 OK STORE completed.
2013-10-04 14:25:20,907 [] [] [] INFO [monitorScheduler-1] monitor.EmailMonitor monitor.EmailMonitor (EmailMonitor.java:404) - Expunge the monitored folder
2013-10-04 14:25:20,908 [] [] [] INFO [monitorScheduler-1] STDOUT util.LoggerStream (LoggerStream.java:156) - A12 EXPUNGE
2013-10-04 14:25:20,922 [] [] [] INFO [monitorScheduler-1] STDOUT util.LoggerStream (LoggerStream.java:156) - * 1 EXPUNGE
* 0 EXISTS
A12 OK EXPUNGE completed.
It's your server that's reporting success.
Try using a different name for the error folder, not something nested under Inbox, in case that helps.
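As an illustration only (not the original poster's code; the folder name "errors" and the already-connected IMAP Store are assumptions), a defensive version of the move would create the error folder at the top level and verify the copy before expunging:
// inside a method, with javax.mail.* imported and an already-connected IMAP Store 'store'
Folder inbox = store.getFolder("INBOX");
inbox.open(Folder.READ_WRITE);

Folder errorFolder = store.getFolder("errors"); // top level, not under Inbox
if (!errorFolder.exists()) {
    errorFolder.create(Folder.HOLDS_MESSAGES);
}

Message msg = inbox.getMessage(1);
inbox.copyMessages(new Message[] { msg }, errorFolder);

// verify the copy actually landed before deleting the original
errorFolder.open(Folder.READ_ONLY);
if (errorFolder.getMessageCount() > 0) {
    msg.setFlag(Flags.Flag.DELETED, true);
    inbox.expunge();
}
errorFolder.close(false);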