P6Spy Spring Boot starter decorator produces empty output - java

I configured a Spring Boot starter P6Spy decorator as per the instructions on their site:
### p6spy ###
# Register P6LogFactory to log JDBC events
decorator.datasource.p6spy.enable-logging=true
decorator.datasource.datasource-proxy.query.log-level=debug
decorator.datasource.datasource-proxy.slow-query.enable-logging=true
decorator.datasource.datasource-proxy.slow-query.log-level=warn
decorator.datasource.datasource-proxy.slow-query.logger-name=
# Use com.p6spy.engine.spy.appender.MultiLineFormat instead of com.p6spy.engine.spy.appender.SingleLineFormat
decorator.datasource.p6spy.multiline=true
# Use logging for default listeners [slf4j, sysout, file]
decorator.datasource.p6spy.logging=slf4j
# Log file to use (only with logging=file)
decorator.datasource.p6spy.log-file=spy.log
# Custom log format, if specified com.p6spy.engine.spy.appender.CustomLineFormat will be used with this log format
decorator.datasource.p6spy.log-format=
<dependency>
    <groupId>com.github.gavlyukovskiy</groupId>
    <artifactId>p6spy-spring-boot-starter</artifactId>
    <version>1.5.8</version>
</dependency>
but I am getting only empty output from p6spy:
2019-12-24 16:22:13.103 DEBUG 11672 --- [ntainer#0-0-C-1] o.s.jdbc.core.JdbcTemplate : Executing prepared SQL query
2019-12-24 16:22:13.103 DEBUG 11672 --- [ntainer#0-0-C-1] o.s.jdbc.core.JdbcTemplate : Executing prepared SQL statement [SELECT COMIT_ID FROM DPL_PARTY.USR_DTL WHERE INDVDL_ID = (SELECT INDVDL_ID FROM DPL_PARTY.INDVDL_TLCMMNCTN WHERE EML_VAL = ?)]
2019-12-24 16:22:13.181 INFO 11672 --- [ntainer#0-0-C-1] p6spy :
2019-12-24 16:22:13.182 DEBUG 11672 --- [ntainer#0-0-C-1] c.s.a.repository.UserRepository : Obtained comitId=XBBKRHL for email of niren.sinha@bnymellon.com from the DB.
2019-12-24 16:22:13.286 INFO 11672 --- [ntainer#0-0-C-1] p6spy :
The queries themselves execute fine. What am I missing here? Thanks.

Try to leave out the empty configuration for the log format:
# Custom log format, if specified com.p6spy.engine.spy.appender.CustomLineFormat will be used with this log format
# decorator.datasource.p6spy.log-format=
I think if you have the property without a value, it will be passed as an empty string.
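For illustration, here is a minimal variant of the configuration from the question with the log-format key removed entirely, so the MultiLineFormat appender is actually used (a sketch based only on the properties shown above, untested):

### p6spy ###
# Register P6LogFactory to log JDBC events
decorator.datasource.p6spy.enable-logging=true
# Use com.p6spy.engine.spy.appender.MultiLineFormat
decorator.datasource.p6spy.multiline=true
# Log through slf4j
decorator.datasource.p6spy.logging=slf4j
# Note: no decorator.datasource.p6spy.log-format key at all, so
# CustomLineFormat is never activated with an empty format string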

Related

Spring Batch - My Batch seems executing two steps at the same time?

I can't really understand what's going on. I'm studying Spring Batch and I'd like to execute two steps for some reasons, one after the other.
Now please don't mind what the steps are currently doing, just keep in mind that I would like to perform two steps sequentially.
This is the code:
import java.util.ArrayList;
import java.util.List;

import com.fasterxml.jackson.databind.ObjectMapper;

import org.springframework.batch.core.Job;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing;
import org.springframework.batch.core.configuration.annotation.JobBuilderFactory;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.core.launch.support.RunIdIncrementer;
import org.springframework.batch.core.repository.JobRepository;
import org.springframework.batch.core.step.builder.SimpleStepBuilder;
import org.springframework.batch.core.step.builder.StepBuilder;
import org.springframework.batch.item.ItemProcessor;
import org.springframework.batch.item.ItemWriter;
import org.springframework.batch.item.json.JacksonJsonObjectReader;
import org.springframework.batch.item.json.JsonItemReader;
import org.springframework.batch.item.json.builder.JsonItemReaderBuilder;
import org.springframework.batch.item.support.ListItemReader;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.io.ClassPathResource;
import org.springframework.transaction.PlatformTransactionManager;

@Configuration
@EnableBatchProcessing
public class JobConfiguration {

    @Autowired
    private JobBuilderFactory jobBuilderFactory;

    @Autowired
    private StepBuilderFactory stepBuilderFactory;

    private List<Employee> employeesToSave = new ArrayList<Employee>();

    @Bean
    public JsonItemReader<Employee> jsonReader() {
        System.out.println("Try to read JSON");
        final ObjectMapper mapper = new ObjectMapper();
        final JacksonJsonObjectReader<Employee> jsonObjectReader = new JacksonJsonObjectReader<>(Employee.class);
        jsonObjectReader.setMapper(mapper);
        return new JsonItemReaderBuilder<Employee>().jsonObjectReader(jsonObjectReader)
                .resource(new ClassPathResource("input.json"))
                .name("myReader")
                .build();
    }

    @Bean
    public ListItemReader<Employee> listReader() {
        System.out.println("Read from list");
        return new ListItemReader<Employee>(employeesToSave);
    }

    @Bean
    public ItemProcessor<Employee, Employee> filterProcessor() {
        return employee -> {
            System.out.println("Processing JSON");
            return employee;
        };
    }

    @Bean
    public ItemWriter<Employee> filterWriter() {
        return listEmployee -> {
            employeesToSave.addAll(listEmployee);
            System.out.println("Save on list " + listEmployee.toString());
        };
    }

    @Bean
    public ItemWriter<Employee> insertToDBWriter() {
        System.out.println("Try to save on DB");
        return listEmployee -> {
            System.out.println("Save on DB " + listEmployee.toString());
        };
    }

    @Bean
    public Step filterStep() {
        StepBuilder stepBuilder = stepBuilderFactory.get("filterStep");
        SimpleStepBuilder<Employee, Employee> simpleStepBuilder = stepBuilder.chunk(5);
        return simpleStepBuilder.reader(jsonReader()).processor(filterProcessor()).writer(filterWriter()).build();
    }

    @Bean
    public Step insertToDBStep() {
        StepBuilder stepBuilder = stepBuilderFactory.get("insertToDBStep");
        SimpleStepBuilder<Employee, Employee> simpleStepBuilder = stepBuilder.chunk(5);
        return simpleStepBuilder.reader(listReader()).writer(insertToDBWriter()).build();
    }

    @Bean
    public Job myJob(JobRepository jobRepository, PlatformTransactionManager platformTransactionManager) {
        return jobBuilderFactory.get("myJob").incrementer(new RunIdIncrementer())
                .start(filterStep())
                .next(insertToDBStep())
                .build();
    }
}
Why doesn't the insertToDBStep start at the end of the filterStep? It actually looks like the steps are running at the same time. And why does it look like the job starts before the initialization of the Root WebApplicationContext has finished?
This is the output.
2022-05-23 15:40:49.418 INFO 14008 --- [ restartedMain] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 1024 ms
Try to read JSON
Read from list
Try to save on DB
2022-05-23 15:40:49.882 INFO 14008 --- [ restartedMain] o.s.b.d.a.OptionalLiveReloadServer : LiveReload server is running on port 35729
2022-05-23 15:40:49.917 INFO 14008 --- [ restartedMain] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path ''
2022-05-23 15:40:49.926 INFO 14008 --- [ restartedMain] c.marco.firstbatch.TestBatchApplication : Started TestBatchApplication in 1.985 seconds (JVM running for 2.789)
2022-05-23 15:40:49.927 INFO 14008 --- [ restartedMain] o.s.b.a.b.JobLauncherApplicationRunner : Running default command line with: []
2022-05-23 15:40:49.928 WARN 14008 --- [ restartedMain] o.s.b.c.c.a.DefaultBatchConfigurer : No datasource was provided...using a Map based JobRepository
2022-05-23 15:40:49.928 WARN 14008 --- [ restartedMain] o.s.b.c.c.a.DefaultBatchConfigurer : No transaction manager was provided, using a ResourcelessTransactionManager
2022-05-23 15:40:49.943 INFO 14008 --- [ restartedMain] o.s.b.c.l.support.SimpleJobLauncher : No TaskExecutor has been set, defaulting to synchronous executor.
2022-05-23 15:40:49.972 INFO 14008 --- [ restartedMain] o.s.b.c.l.support.SimpleJobLauncher : Job: [SimpleJob: [name=myJob]] launched with the following parameters: [{run.id=1}]
2022-05-23 15:40:50.003 INFO 14008 --- [ restartedMain] o.s.batch.core.job.SimpleStepHandler : Executing step: [filterStep]
Processing JSON
Processing JSON
Processing JSON
Processing JSON
Processing JSON
Save on list [com.marco.firstbatch.Employee@958d6e7, com.marco.firstbatch.Employee@464d17f8, com.marco.firstbatch.Employee@705520ac, com.marco.firstbatch.Employee@1a9f8e93, com.marco.firstbatch.Employee@55bf8cc9]
Processing JSON
Processing JSON
Save on list [com.marco.firstbatch.Employee@55d706c0, com.marco.firstbatch.Employee@1bc46dd4]
2022-05-23 15:40:50.074 INFO 14008 --- [ restartedMain] o.s.batch.core.step.AbstractStep : Step: [filterStep] executed in 70ms
2022-05-23 15:40:50.081 INFO 14008 --- [ restartedMain] o.s.batch.core.job.SimpleStepHandler : Executing step: [insertToDBStep]
2022-05-23 15:40:50.084 INFO 14008 --- [ restartedMain] o.s.batch.core.step.AbstractStep : Step: [insertToDBStep] executed in 3ms
2022-05-23 15:40:50.088 INFO 14008 --- [ restartedMain] o.s.b.c.l.support.SimpleJobLauncher : Job: [SimpleJob: [name=myJob]] completed with the following parameters: [{run.id=1}] and the following status: [COMPLETED] in 96ms
Thanks in advance.
The steps are executed correctly in sequence. You are putting System.out.println statements in two "kinds" of places:
In the bean definition methods executed by Spring Framework when configuring the application context
In the code of batch artefacts (item processor, item writer) which are called by Spring Batch when running your job
In your case, Spring Framework will call the following bean definition methods in order to define the first step, filterStep():
jsonReader(): prints Try to read JSON. The file is not read at this time, only the json reader bean is defined. A more accurate log message would be: json reader bean created.
listReader(): prints Read from list. Same here, the reading has not started yet. A more accurate log message would be: list reader bean created.
filterProcessor(): prints nothing. The log statement is in the ItemProcessor#process method. This will be called by Spring Batch at runtime, not at this point, which is configuration time.
filterWriter(): same here, the print statement is in the write method, called at runtime and not at configuration time.
This results in the following output for filterStep():
Try to read JSON
Read from list
Now Spring Framework moves to defining the next step, insertToDBStep(). For this, it will call the following methods in order, according to your step definition:
listReader(): this bean has already been defined; Spring will reuse the same instance (by default, Spring beans are singletons). Hence, there is no output from this method.
insertToDBWriter(): prints Try to save on DB. Same here, there is no actual save to DB here. A more accurate log message would be insertToDBWriter bean created (or even more accurate, attempting to create insertToDBWriter bean, in case the code that follows throws an exception).
You now have the following cumulative output:
Try to read JSON
Read from list
Try to save on DB
At this point, Spring Framework has finished its job of configuring the application context, Spring Batch takes over and starts the job. The actual processing of filterStep() begins:
The reader (JsonItemReader) does not have any output in the read method.
The processor prints Processing JSON
The writer prints Save on list ...
You seem to have two chunks (the first with 5 items and the second with 2 items), which leads to the following output:
2022-05-23 15:40:50.003 INFO 14008 --- [ restartedMain] o.s.batch.core.job.SimpleStepHandler : Executing step: [filterStep]
Processing JSON
Processing JSON
Processing JSON
Processing JSON
Processing JSON
Save on list [com.marco.firstbatch.Employee@958d6e7, com.marco.firstbatch.Employee@464d17f8, com.marco.firstbatch.Employee@705520ac, com.marco.firstbatch.Employee@1a9f8e93, com.marco.firstbatch.Employee@55bf8cc9]
Processing JSON
Processing JSON
Save on list [com.marco.firstbatch.Employee@55d706c0, com.marco.firstbatch.Employee@1bc46dd4]
2022-05-23 15:40:50.074 INFO 14008 --- [ restartedMain] o.s.batch.core.step.AbstractStep : Step: [filterStep] executed in 70ms
Then, the next step starts its execution and you get the following output:
2022-05-23 15:40:50.081 INFO 14008 --- [ restartedMain] o.s.batch.core.job.SimpleStepHandler : Executing step: [insertToDBStep]
2022-05-23 15:40:50.084 INFO 14008 --- [ restartedMain] o.s.batch.core.step.AbstractStep : Step: [insertToDBStep] executed in 3ms
Here you might be asking why there are no items written by insertToDBWriter() (i.e. why there are no Save on DB .. logs). This is because listReader() is a singleton bean and you are using it in both steps, so when the second step calls its read method, it still returns null: the same instance is used, and it has already exhausted the list of items in step 1. Hence, this step ends immediately since there are no items to process. If you want to re-read the items from the list in the second step, you can annotate the reader method with @StepScope, as shown in the sketch below. This will create a distinct instance of the reader for each step.
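For illustration, a minimal sketch of that change, assuming the JobConfiguration class and the employeesToSave field from the question (untested):

import org.springframework.batch.core.configuration.annotation.StepScope;

@Bean
@StepScope
public ListItemReader<Employee> listReader() {
    // With @StepScope, a fresh reader instance is created for each step
    // execution, so insertToDBStep re-reads the list that filterStep
    // populated instead of reusing the already-exhausted singleton.
    System.out.println("Read from list");
    return new ListItemReader<>(employeesToSave);
}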

Zipkin (in a Spring Boot application) is not storing all the trace ids when I send 10 requests from one application to another

I am using Zipkin for distributed tracing. The problem is that when I test Zipkin by sending 10 requests at a time from one service to another in a loop and check the UI, I get logs for only 2 of them, i.e. the first and the last; I don't receive logs for the remaining requests. Can you help me figure out what the problem is? Trace ids and span ids are generated for all the requests, but I am unable to see the logs for them. Logs that are received:
2020-03-04 17:38:57.379 INFO [,7c8075c14691f988,43521ecc69b84d84,true] 10576 --- [nio-8081-exec-7] c.i.f.service.ProducerServiceImpl : Received Message ='ServiceInvocation [communicationID=COMM_0121,
2020-03-04 17:38:57.438 INFO [,7552e8c3d87d013a,89769451aafec094,false] 10576 --- [nio-8081-exec-8] c.i.f.service.ProducerServiceImpl : Received Message ='ServiceInvocation [communicationID=COMM_0122,
2020-03-04 17:38:57.519 INFO [,79f38c25211dfab8,49ea12575eab0bcf,false] 10576 --- [nio-8081-exec-2] c.i.f.service.ProducerServiceImpl : Received Message ='ServiceInvocation [communicationID=COMM_0123,
2020-03-04 17:38:57.626 INFO [,294da34664fac032,ad98ed1fbce485df,false] 10576 --- [io-8081-exec-10] c.i.f.service.ProducerServiceImpl : Received Message ='ServiceInvocation [communicationID=COMM_0124,
2020-03-04 17:38:57.879 INFO [,8763a2ca3d6dfc44,9871d046cd7eacf1,false] 10576 --- [nio-8081-exec-1] c.i.f.service.ProducerServiceImpl : Received Message ='ServiceInvocation [communicationID=COMM_0125,
2020-03-04 17:38:57.923 INFO [,be1e3a490e114e92,2435ee34d215459c,false] 10576 --- [nio-8081-exec-6] c.i.f.service.ProducerServiceImpl : Received Message ='ServiceInvocation [communicationID=COMM_0126,
2020-03-04 17:38:57.980 INFO [,21855ca20670de31,6213a3fdc0a23189,false] 10576 --- [nio-8081-exec-3] c.i.f.service.ProducerServiceImpl : Received Message ='ServiceInvocation [communicationID=COMM_0127,
2020-03-04 17:38:58.043 INFO [,4d9795e7d2dbf50c,21f83b3384381833,false] 10576 --- [nio-8081-exec-4] c.i.f.service.ProducerServiceImpl : Receive
Log format is: [application name, traceId, spanId, export]
So the last value (true/false) is actually the export flag, which means:
Export – This property is a boolean that indicates whether or not this log was exported to an aggregator like Zipkin. Zipkin is beyond the scope of this article but plays an important role in analyzing logs created by Sleuth.
As the export values are false, Zipkin is not receiving those spans. That is expected behavior.
Root cause: the reason some of the spans are exported and some are not is the sampler rate; not all spans are meant to be sent. If you want all of them to be sampled, try adding this property:
spring.sleuth.sampler.probability=1.0
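If you prefer Java configuration over the property, here is a minimal sketch (assuming a Brave-based Spring Cloud Sleuth version, where brave.sampler.Sampler is on the classpath):

import brave.sampler.Sampler;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class SamplerConfiguration {

    // Equivalent to spring.sleuth.sampler.probability=1.0:
    // sample, and therefore export, every trace.
    @Bean
    public Sampler defaultSampler() {
        return Sampler.ALWAYS_SAMPLE;
    }
}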
Reference: sleuth-not-sending-trace-information-to-zipkin

Spring Cloud Data Flow input source Kafka

I have a problem with a common task and I can't find any solution or help (maybe there are some properties I need to pass for this to work?).
I use the local server 1.3.0.M2 and created a simple stream:
dataflow:>stream create --name test --definition ":bosstds > log" --deploy
In the log I got this:
2017-09-28 12:31:00.644 INFO 5156 --- [ -C-1] o.a.k.c.c.internals.AbstractCoordinator : Successfully joined group test with generation 1
2017-09-28 12:31:00.646 INFO 5156 --- [ -C-1] o.a.k.c.c.internals.ConsumerCoordinator : Setting newly assigned partitions [bosstds-0] for group test
2017-09-28 12:31:00.671 INFO 5156 --- [ -C-1] o.s.c.s.b.k.KafkaMessageChannelBinder$3 : partitions assigned:[bosstds-0]
2017-09-28 12:37:08.898 ERROR 5156 --- [ -L-1] o.s.c.s.b.k.KafkaMessageChannelBinder : Could not convert message: 74657374
java.lang.StringIndexOutOfBoundsException: String index out of range: 103
    at java.lang.String.checkBounds(String.java:385) ~[na:1.8.0_144]
    at java.lang.String.<init>(String.java:425) ~[na:1.8.0_144]
    at org.springframework.cloud.stream.binder.EmbeddedHeaderUtils.oldExtractHeaders(EmbeddedHeaderUtils.java:154) ~[spring-cloud-stream-1.3.0.M2.jar!/:1.3.0.M2]
    at org.springframework.cloud.stream.binder.EmbeddedHeaderUtils.extractHeaders(EmbeddedHeaderUtils.java:115) ~[spring-cloud-stream-1.3.0.M2.jar!/:1.3.0.M2]
The message is produced with kafka-console-producer.sh --broker-list localhost:9092 --topic bosstds, simply sending the line "test".
Any suggestions?
SCS embeds headers in the Kafka message payload, which is why the raw bytes 74657374 ("test" in hex) from the console producer cannot be parsed. To get this working, set the header mode to raw. You need to do that whenever you interface with external apps that do not use SCS.
Thanks for the help. I fixed this with:
--spring.cloud.stream.bindings.input.content-type=text/plain
--spring.cloud.stream.bindings.input.consumer.headerMode=raw
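The same properties can also be passed inline at stream creation time; a sketch of what that could look like for the stream above (assuming the usual SCDF inline app-property syntax, untested):

dataflow:>stream create --name test --definition ":bosstds > log --spring.cloud.stream.bindings.input.content-type=text/plain --spring.cloud.stream.bindings.input.consumer.headerMode=raw" --deploy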

Spring Boot + H2 + spring.jpa.hibernate.ddl-auto + create or update

I'm working with Spring Boot and I have this configuration in my properties file in order to persist data in H2:
spring.datasource.url = jdbc:h2:file:./db/testdb;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE
spring.jpa.hibernate.ddl-auto: update
spring.h2.console.enabled = true
spring.datasource.driverClassName=org.h2.Driver
spring.jpa.database-platform=org.hibernate.dialect.H2Dialect
spring.datasource.username=userName
spring.datasource.password=
spring.jpa.database: H2
spring.jpa.show-sql: true
And everything is working well: the data is persisted each time I shut down the service and start it again. But I see an info message on the console which I would like to fix, and I have no idea how; I've already searched a lot. This is the message:
2016-02-16 18:36:05.042 INFO 20793 --- [ost-startStop-1] java.sql.DatabaseMetaData : HHH000262: Table not found: Employ
2016-02-16 18:36:05.044 INFO 20793 --- [ost-startStop-1] java.sql.DatabaseMetaData : HHH000262: Table not found: User
2016-02-16 18:36:05.045 INFO 20793 --- [ost-startStop-1] java.sql.DatabaseMetaData : HHH000262: Table not found: Employ
2016-02-16 18:36:05.047 INFO 20793 --- [ost-startStop-1] java.sql.DatabaseMetaData : HHH000262: Table not found: User
2016-02-16 18:36:05.048 INFO 20793 --- [ost-startStop-1] java.sql.DatabaseMetaData : HHH000262: Table not found: Employ
This happens only the first time, because the file and tables haven't been created yet. Does anyone know if there is a way to have the file and tables created from the first run using configuration properties? I mean, instead of this line:
spring.jpa.hibernate.ddl-auto: update
do something like this, or maybe there is a trick:
spring.jpa.hibernate.ddl-auto: create-update
Any help would be greatly appreciated. Thanks in advance =)
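For what it's worth, create-update is not a valid value: spring.jpa.hibernate.ddl-auto accepts none, validate, update, create and create-drop. The HHH000262 lines are only Hibernate checking for the tables before creating them on the first run. If the goal is just to hide them, one option (an untested sketch) is to raise the level of the exact logger shown in the output:

# Silence Hibernate's "Table not found" metadata checks at startup
logging.level.java.sql.DatabaseMetaData=WARN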

How to remove JDBC debug logs [duplicate]

This question already has an answer here:
Java tomcat - how to remove JDBC debug logs
(1 answer)
Closed 8 years ago.
The following log output is constantly written to the console:
09:36:53.456 [CloseConnectionsTimer] DEBUG o.s.jdbc.datasource.DataSourceUtils - Fetching JDBC Connection from DataSource
09:36:53.456 [CloseConnectionsTimer] DEBUG o.s.jdbc.core.StatementCreatorUtils - Setting SQL statement parameter value: column index 1, parameter value [0], value class [java.lang.Long], SQL type -5
09:36:53.456 [CloseConnectionsTimer] DEBUG o.s.jdbc.datasource.DataSourceUtils - Returning JDBC Connection to DataSource
09:36:53.472 [CloseConnectionsTimer] DEBUG o.s.jdbc.core.JdbcTemplate - Executing prepared SQL query
09:36:53.472 [CloseConnectionsTimer] DEBUG o.s.jdbc.core.JdbcTemplate - Executing prepared SQL statement..
09:36:53.472 [CloseConnectionsTimer] DEBUG o.s.jdbc.datasource.DataSourceUtils - Fetching JDBC Connection from DataSource
09:36:53.472 [CloseConnectionsTimer] DEBUG o.s.jdbc.core.StatementCreatorUtils - Setting SQL statement parameter value: column index 1, parameter value [0], value class [java.lang.Integer], SQL type 2
09:36:53.472 [CloseConnectionsTimer] DEBUG o.s.jdbc.datasource.DataSourceUtils - Returning JDBC Connection to DataSource
How can I stop these logs / change the logging level to INFO or ERROR?
Setting the logging level to anything above DEBUG (e.g. INFO, WARN, or ERROR) will make those logs disappear.
First of all, those are not JDBC logs, those are Spring logs.
Anyway, this should do the job in your Logback or Log4j configuration:
<logger name="org.springframework.jdbc" level="OFF"/>
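On Spring Boot, the same effect is available without touching the Logback/Log4j XML; a sketch using the standard logging.level properties (raising the level instead of switching the logger off entirely):

# Show only INFO and above for Spring's JDBC support classes
logging.level.org.springframework.jdbc=INFO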
