I have a single step springbatch application. The job is as follows:
@Bean
public Job databaseCursorJob(@Qualifier("databaseCursorStep") Step exampleJobStep,
                             JobBuilderFactory jobBuilderFactory) {
    return jobBuilderFactory.get("databaseCursorJob")
            .incrementer(new RunIdIncrementer())
            .flow(exampleJobStep)
            .end()
            .build();
}
I start the job from a springboot application. This afternoon, I attempted to add a second step to the job. Essentially as follows:
@Bean
public Job databaseCursorJob(@Qualifier("databaseCursorStep") Step exampleJobStep,
                             JobBuilderFactory jobBuilderFactory) {
    return jobBuilderFactory.get("databaseCursorJob")
            .incrementer(new RunIdIncrementer())
            .flow(exampleJobStep)
            .next(partitionStep())
            .end()
            .build();
}
In other words, I just added the ".next(partitionStep())". However, ever since I did this, the job finishes without executing any step (see shell output below). In fact, even after removing the second step and going back to the original job, it refuses to execute the step. Before attempting to add the second step, I never once encountered this problem. I have gone so far as restarting my VM and it still skips the step. I am rather dead in the water until I resolve this. Grateful for any insights. Thanks.
2020-09-01 14:49:00.260 INFO 6913 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8087 (http) with context path ''
2020-09-01 14:49:00.263 INFO 6913 --- [ main] f.p.r.Application : Started Application in 7.752 seconds (JVM running for 9.092)
2020-09-01 14:49:00.268 INFO 6913 --- [ main] o.s.b.a.b.JobLauncherCommandLineRunner : Running default command line with: []
2020-09-01 14:49:00.579 INFO 6913 --- [ main] o.s.b.c.l.support.SimpleJobLauncher : Job: [FlowJob: [name=databaseCursorJob]] launched with the following parameters: [{}]
2020-09-01 14:49:00.698 INFO 6913 --- [ main] o.s.batch.core.job.SimpleStepHandler : Step already complete or not restartable, so no action to execute: StepExecution: id=120, version=4, name=databaseCursorStep, status=COMPLETED, exitStatus=COMPLETED, readCount=1, filterCount=0, writeCount=1 readSkipCount=0, writeSkipCount=0, processSkipCount=0, commitCount=2, rollbackCount=0, exitDescription=
2020-09-01 14:49:00.730 INFO 6913 --- [ main] o.s.b.c.l.support.SimpleJobLauncher : Job: [FlowJob: [name=databaseCursorJob]] completed with the following parameters: [{}] and the following status: [COMPLETED]
My issue was that my job had no way to recover if there was an error or if it got stuck in an unknown state. The step was not "already complete"; it never completed. Its status was still "STARTED" and its exit code "UNKNOWN", because it never exited. Anyway, my job repository is not in memory, but persisted to a local DB, which is why the problem never resolved itself even after restarting the VM (shame on me for not remembering this). So I was able to fix it by wiping out the job instance history, but that was a band-aid. I still have to fix my code to prevent it from happening again.
I also learned I could diagnose by examining the job repository in the database (it's all there).
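For reference, the same diagnosis can also be done programmatically through Spring Batch's JobExplorer rather than raw SQL; a minimal sketch (a hypothetical helper class, assuming a JobExplorer bean is available, e.g. from @EnableBatchProcessing):
import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.JobInstance;
import org.springframework.batch.core.StepExecution;
import org.springframework.batch.core.explore.JobExplorer;

public class JobRepositoryInspector {

    private final JobExplorer jobExplorer;

    public JobRepositoryInspector(JobExplorer jobExplorer) {
        this.jobExplorer = jobExplorer;
    }

    // Prints the status and exit status of every step of the most recent runs;
    // a step stuck in STARTED/UNKNOWN shows up here exactly as described above.
    public void printRecentExecutions(String jobName) {
        for (JobInstance instance : jobExplorer.getJobInstances(jobName, 0, 5)) {
            for (JobExecution execution : jobExplorer.getJobExecutions(instance)) {
                for (StepExecution step : execution.getStepExecutions()) {
                    System.out.println(step.getStepName()
                            + " status=" + step.getStatus()
                            + " exitStatus=" + step.getExitStatus().getExitCode());
                }
            }
        }
    }
}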
I was really able to resolve this thanks to Mr. Hassine, who responded several times above and pointed me in the right direction. The solution to prevent it in the future is indeed addressed in the link he provided in his first response: Spring Batch error (A Job Instance Already Exists) and RunIdIncrementer generates only once
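Along the same lines, one way to guarantee a fresh JobInstance on every launch is to pass a unique job parameter rather than relying only on the incrementer. A minimal sketch (hypothetical configuration class; the "launchTime" parameter name is illustrative, and spring.batch.job.enabled would typically be set to false so Boot's auto-configured runner does not also launch the job):
import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.JobParametersBuilder;
import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.boot.CommandLineRunner;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ManualJobLaunchConfig {

    // Launches the job with a unique parameter so every run creates a new
    // JobInstance, even if an earlier execution was left in STARTED/UNKNOWN.
    @Bean
    public CommandLineRunner launchDatabaseCursorJob(JobLauncher jobLauncher, Job databaseCursorJob) {
        return args -> {
            JobParameters params = new JobParametersBuilder()
                    .addLong("launchTime", System.currentTimeMillis())
                    .toJobParameters();
            jobLauncher.run(databaseCursorJob, params);
        };
    }
}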
Related
I can't really understand what's going on. I'm studying Spring Batch and I'd like to execute two steps for some reasons, one after the other.
Now please don't mind what the steps are currently doing, just keep in mind that I would like to perform two steps sequentially.
This is the code:
@Configuration
@EnableBatchProcessing
public class JobConfiguration {

    @Autowired
    private JobBuilderFactory jobBuilderFactory;

    @Autowired
    private StepBuilderFactory stepBuilderFactory;

    private List<Employee> employeesToSave = new ArrayList<Employee>();

    public JsonItemReader<Employee> jsonReader() {
        System.out.println("Try to read JSON");
        final ObjectMapper mapper = new ObjectMapper();
        final JacksonJsonObjectReader<Employee> jsonObjectReader = new JacksonJsonObjectReader<>(
                Employee.class);
        jsonObjectReader.setMapper(mapper);
        return new JsonItemReaderBuilder<Employee>().jsonObjectReader(jsonObjectReader)
                .resource(new ClassPathResource("input.json"))
                .name("myReader")
                .build();
    }

    public ListItemReader<Employee> listReader() {
        System.out.println("Read from list");
        return new ListItemReader<Employee>(employeesToSave);
    }
    public ItemProcessor<Employee, Employee> filterProcessor() {
        return employee -> {
            System.out.println("Processing JSON");
            return employee;
        };
    }

    public ItemWriter<Employee> filterWriter() {
        return listEmployee -> {
            employeesToSave.addAll(listEmployee);
            System.out.println("Save on list " + listEmployee.toString());
        };
    }

    public ItemWriter<Employee> insertToDBWriter() {
        System.out.println("Try to save on DB");
        return listEmployee -> {
            System.out.println("Save on DB " + listEmployee.toString());
        };
    }

    public Step filterStep() {
        StepBuilder stepBuilder = stepBuilderFactory.get("filterStep");
        SimpleStepBuilder<Employee, Employee> simpleStepBuilder = stepBuilder.chunk(5);
        return simpleStepBuilder.reader(jsonReader()).processor(filterProcessor()).writer(filterWriter()).build();
    }

    public Step insertToDBStep() {
        StepBuilder stepBuilder = stepBuilderFactory.get("insertToDBStep");
        SimpleStepBuilder<Employee, Employee> simpleStepBuilder = stepBuilder.chunk(5);
        return simpleStepBuilder.reader(listReader()).writer(insertToDBWriter()).build();
    }

    @Bean
    public Job myJob(JobRepository jobRepository, PlatformTransactionManager platformTransactionManager) {
        return jobBuilderFactory.get("myJob").incrementer(new RunIdIncrementer())
                .start(filterStep())
                .next(insertToDBStep())
                .build();
    }
}
Why doesn't the insertToDBStep start at the end of the filterStep, and why does it actually look like the filter is running at the same time? And why does it look like the job starts after the initialization of the Root WebApplicationContext?
This is the output.
2022-05-23 15:40:49.418 INFO 14008 --- [ restartedMain] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 1024 ms
Try to read JSON
Read from list
Try to save on DB
2022-05-23 15:40:49.882 INFO 14008 --- [ restartedMain] o.s.b.d.a.OptionalLiveReloadServer : LiveReload server is running on port 35729
2022-05-23 15:40:49.917 INFO 14008 --- [ restartedMain] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path ''
2022-05-23 15:40:49.926 INFO 14008 --- [ restartedMain] c.marco.firstbatch.TestBatchApplication : Started TestBatchApplication in 1.985 seconds (JVM running for 2.789)
2022-05-23 15:40:49.927 INFO 14008 --- [ restartedMain] o.s.b.a.b.JobLauncherApplicationRunner : Running default command line with: []
2022-05-23 15:40:49.928 WARN 14008 --- [ restartedMain] o.s.b.c.c.a.DefaultBatchConfigurer : No datasource was provided...using a Map based JobRepository
2022-05-23 15:40:49.928 WARN 14008 --- [ restartedMain] o.s.b.c.c.a.DefaultBatchConfigurer : No transaction manager was provided, using a ResourcelessTransactionManager
2022-05-23 15:40:49.943 INFO 14008 --- [ restartedMain] o.s.b.c.l.support.SimpleJobLauncher : No TaskExecutor has been set, defaulting to synchronous executor.
2022-05-23 15:40:49.972 INFO 14008 --- [ restartedMain] o.s.b.c.l.support.SimpleJobLauncher : Job: [SimpleJob: [name=myJob]] launched with the following parameters: [{run.id=1}]
2022-05-23 15:40:50.003 INFO 14008 --- [ restartedMain] o.s.batch.core.job.SimpleStepHandler : Executing step: [filterStep]
Processing JSON
Processing JSON
Processing JSON
Processing JSON
Processing JSON
Save on list [com.marco.firstbatch.Employee#958d6e7, com.marco.firstbatch.Employee#464d17f8, com.marco.firstbatch.Employee#705520ac, com.marco.firstbatch.Employee#1a9f8e93, com.marco.firstbatch.Employee#55bf8cc9]
Processing JSON
Processing JSON
Save on list [com.marco.firstbatch.Employee#55d706c0, com.marco.firstbatch.Employee#1bc46dd4]
2022-05-23 15:40:50.074 INFO 14008 --- [ restartedMain] o.s.batch.core.step.AbstractStep : Step: [filterStep] executed in 70ms
2022-05-23 15:40:50.081 INFO 14008 --- [ restartedMain] o.s.batch.core.job.SimpleStepHandler : Executing step: [insertToDBStep]
2022-05-23 15:40:50.084 INFO 14008 --- [ restartedMain] o.s.batch.core.step.AbstractStep : Step: [insertToDBStep] executed in 3ms
2022-05-23 15:40:50.088 INFO 14008 --- [ restartedMain] o.s.b.c.l.support.SimpleJobLauncher : Job: [SimpleJob: [name=myJob]] completed with the following parameters: [{run.id=1}] and the following status: [COMPLETED] in 96ms
Thanks in advance.
The steps are executed correctly in sequence. You are putting System.out.println statements in two kinds of places:
In the bean definition methods executed by Spring Framework when configuring the application context
In the code of batch artefacts (item processor, item writer) which are called by Spring Batch when running your job
In your case, Spring Framework will call the following bean definition methods in order to define the first step, filterStep():
jsonReader(): prints Try to read JSON. The file is not read at this time, only the json reader bean is defined. A more accurate log message would be: json reader bean created.
listReader(): prints Read from list. Same here, the reading has not actually started yet. A more accurate log message would be: list reader bean created.
filterProcessor(): prints nothing. The log statement is in the ItemProcessor#process method. This will be called by Spring Batch at runtime, not at this point, which is configuration time.
filterWriter(): same here, the print statement is in the write method, which is called at runtime and not at configuration time.
This results in the following output for filterStep():
Try to read JSON
Read from list
Now Spring Framework moves to defining the next step, insertToDBStep(). For this, it will call the following methods in order, according to your step definition:
listReader(): this bean has already been defined, so Spring will reuse the same instance (by default, Spring beans are singletons). Hence, there is no output from this method.
insertToDBWriter(): prints Try to save on DB. Same here, there is no actual save to DB here. A more accurate log message would be insertToDBWriter bean created (or even more accurate, attempting to create insertToDBWriter bean, in case the code that follows throws an exception).
You now have the following cumulative output:
Try to read JSON
Read from list
Try to save on DB
At this point, Spring Framework has finished its job of configuring the application context; Spring Batch takes over and starts the job. The actual processing of filterStep() begins:
The reader (ListItemReader) does not have any output in the read method.
The processor prints Processing JSON
The writer prints Save on list ...
You seem to have two chunks (the first with 5 items and the second with 2 items), which leads to the following output:
2022-05-23 15:40:50.003 INFO 14008 --- [ restartedMain] o.s.batch.core.job.SimpleStepHandler : Executing step: [filterStep]
Processing JSON
Processing JSON
Processing JSON
Processing JSON
Processing JSON
Save on list [com.marco.firstbatch.Employee#958d6e7, com.marco.firstbatch.Employee#464d17f8, com.marco.firstbatch.Employee#705520ac, com.marco.firstbatch.Employee#1a9f8e93, com.marco.firstbatch.Employee#55bf8cc9]
Processing JSON
Processing JSON
Save on list [com.marco.firstbatch.Employee#55d706c0, com.marco.firstbatch.Employee#1bc46dd4]
2022-05-23 15:40:50.074 INFO 14008 --- [ restartedMain] o.s.batch.core.step.AbstractStep : Step: [filterStep] executed in 70ms
Then, the next step starts its execution and you get the following output:
2022-05-23 15:40:50.081 INFO 14008 --- [ restartedMain] o.s.batch.core.job.SimpleStepHandler : Executing step: [insertToDBStep]
2022-05-23 15:40:50.084 INFO 14008 --- [ restartedMain] o.s.batch.core.step.AbstractStep : Step: [insertToDBStep] executed in 3ms
Here you might be asking why there are no items written by insertToDBWriter() (i.e. why there are no Save on DB .. logs). This is because listReader() is a singleton bean and you are using it in both steps, so when the second step calls its read method, it will still return null: the same instance is used, and it has already exhausted the list of items in step 1. Hence, this step ends immediately since there are no items to process. If you want to re-read the items from the list in the second step, you can annotate the reader method with @StepScope. This will create a distinct instance of the reader for each step.
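For completeness, a minimal sketch of the step-scoped reader suggested above (it is assumed to replace the listReader() method inside the JobConfiguration class shown earlier and to be declared as a bean so the scope applies; the annotations are org.springframework.context.annotation.Bean and org.springframework.batch.core.configuration.annotation.StepScope):
@Bean
@StepScope
public ListItemReader<Employee> listReader() {
    System.out.println("Read from list");
    // A new reader instance is created for each step execution, so the second
    // step re-reads the items that the first step stored in employeesToSave.
    return new ListItemReader<>(employeesToSave);
}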
On a legacy production application we were having an issue where the application crashed because it ran out of connections (the default was 100 connections). As a temporary solution we decided to increase the available connections to 500, but when the application reached 200 connections it just stopped itself, with no errors in the logs, just like a simple shutdown.
I added a couple of log statements that are generated every 15 seconds to clearly see the behavior of the connections; these logs print the idle and active connection counts as well as the full object of the DataSource properties. Before the application shut down, the following logs were written:
Datasource idle connections: 0, active connections: 200
Datasource properties: org.apache.tomcat.jdbc.pool.DataSource#20b2475a{ConnectionPool[defaultAutoCommit=null; defaultReadOnly=null; defaultTransactionIsolation=-1; defaultCatalog=null; driverClassName=com.mysql.jdbc.Driver; maxActive=500; maxIdle=500; minIdle=10; initialSize=10; maxWait=30000; testOnBorrow=true; testOnReturn=false; timeBetweenEvictionRunsMillis=5000; numTestsPerEvictionRun=0; minEvictableIdleTimeMillis=60000; testWhileIdle=false; testOnConnect=false; password=********; url=jdbc:mysql://127.0.0.1:3306/db_name?createDatabaseIfNotExist=true; username=username; validationQuery=SELECT 1; validationQueryTimeout=-1; validatorClassName=null; validationInterval=3000; accessToUnderlyingConnectionAllowed=true; removeAbandoned=false; removeAbandonedTimeout=60; logAbandoned=false; connectionProperties=null; initSQL=null; jdbcInterceptors=null; jmxEnabled=true; fairQueue=true; useEquals=true; abandonWhenPercentageFull=0; maxAge=0; useLock=false; dataSource=null; dataSourceJNDI=null; suspectTimeout=0; alternateUsernameAllowed=false; commitOnReturn=false; rollbackOnReturn=false; useDisposableConnectionFacade=true; logValidationErrors=false; propagateInterruptState=false; ignoreExceptionOnPreLoad=false; }
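(For reference, the 15-second logging described above could be implemented roughly as in the following sketch; PoolMonitor is a hypothetical class name, and it assumes the pool is the Tomcat JDBC org.apache.tomcat.jdbc.pool.DataSource and that scheduling is enabled with @EnableScheduling.)
import org.apache.tomcat.jdbc.pool.DataSource;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class PoolMonitor {

    private final DataSource dataSource;

    public PoolMonitor(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    // Every 15 seconds, print the same figures shown in the log lines above.
    @Scheduled(fixedRate = 15000)
    public void logPoolState() {
        System.out.println("Datasource idle connections: " + dataSource.getIdle()
                + ", active connections: " + dataSource.getActive());
        System.out.println("Datasource properties: " + dataSource);
    }
}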
After that, the application shut itself down and I found the following logs, with no errors before them:
2021-02-03 20:23:02.618 INFO 1 --- [ Thread-4] ationConfigEmbeddedWebApplicationContext : Closing org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext#8807e25: startup date [Wed Feb 03 19:49:09 GMT 2021]; root of context hierarchy
2021-02-03 20:23:02.623 INFO 1 --- [ Thread-4] o.s.c.support.DefaultLifecycleProcessor : Stopping beans in phase 0
2021-02-03 20:23:02.643 INFO 1 --- [ Thread-4] o.s.j.e.a.AnnotationMBeanExporter : Unregistering JMX-exposed beans on shutdown
2021-02-03 20:23:02.647 INFO 1 --- [ Thread-4] j.LocalContainerEntityManagerFactoryBean : Closing JPA EntityManagerFactory for persistence unit 'default'
A couple of relevant dependencies and their versions:
org.springframework:spring-webmvc:jar:4.3.6.RELEASE:compile
org.springframework.boot:spring-boot-starter-data-jpa:jar:1.5.1.RELEASE:compile
org.springframework.boot:spring-boot-starter-jdbc:jar:1.5.1.RELEASE:compile
org.apache.tomcat:tomcat-jdbc:jar:8.5.11:compile
org.hibernate:hibernate-core:jar:5.0.11.Final:compile
org.springframework.data:spring-data-jpa:jar:1.11.0.RELEASE:compile
org.springframework.boot:spring-boot-starter-web:jar:1.5.1.RELEASE:compile
org.liquibase:liquibase-core:jar:3.5.1:compile
org.liquibase.ext:liquibase-hibernate5:jar:3.6:compile
Finally, I am asking for help to understand why the application shuts itself down, and how I could fix it so it is able to reach 500 connections.
Spring Batch configuration file:
@Configuration
public class TransitionConfiguration {

    @Autowired
    public JobBuilderFactory jobBuilderFactory;

    @Autowired
    public StepBuilderFactory stepBuilderFactory;

    @Bean
    public Step step1() {
        return stepBuilderFactory.get("step1")
                .tasklet((contribution, chunkContext) -> {
                    System.out.println(">> This is step 1");
                    return RepeatStatus.FINISHED;
                }).build();
    }

    @Bean
    public Step step2() {
        return stepBuilderFactory.get("step2")
                .tasklet((contribution, chunkContext) -> {
                    System.out.println(">> This is step 2");
                    return RepeatStatus.FINISHED;
                }).build();
    }

    @Bean
    public Job jobSimpleNext() {
        System.out.println("starting job");
        return jobBuilderFactory.get("jobNext")
                .start(step1())
                .next(step2())
                .build();
    }
}
Here is the output when I start it from IDEA:
2019-06-09 23:32:02.052 INFO 19976 --- [ restartedMain] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Starting...
2019-06-09 23:32:02.190 INFO 19976 --- [ restartedMain] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Start completed.
starting job
2019-06-09 23:32:02.495 INFO 19976 --- [ restartedMain] o.s.b.c.r.s.JobRepositoryFactoryBean : No database type set, using meta data indicating: MYSQL
2019-06-09 23:32:02.601 INFO 19976 --- [ restartedMain] o.s.b.c.l.support.SimpleJobLauncher : No TaskExecutor has been set, defaulting to synchronous executor.
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.springframework.cglib.core.ReflectUtils (file:/home/sergii/.gradle/caches/modules-2/files-2.1/org.springframework/spring-core/5.2.0.M2/9a84d456ad8d5151da06fc8a85540da9cc95d734/spring-core-5.2.0.M2.jar) to method java.lang.ClassLoader.defineClass(java.lang.String,byte[],int,int,java.security.ProtectionDomain)
WARNING: Please consider reporting this to the maintainers of org.springframework.cglib.core.ReflectUtils
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
2019-06-09 23:32:02.705 INFO 19976 --- [ restartedMain] o.s.b.d.a.OptionalLiveReloadServer : LiveReload server is running on port 35729
2019-06-09 23:32:02.738 INFO 19976 --- [ restartedMain] o.v.s.b.l.s.t.TransitionsApplication : Started TransitionsApplication in 2.12 seconds (JVM running for 2.588)
2019-06-09 23:32:02.740 INFO 19976 --- [ restartedMain] o.s.b.a.b.JobLauncherCommandLineRunner : Running default command line with: []
2019-06-09 23:32:02.891 INFO 19976 --- [ restartedMain] o.s.b.c.l.support.SimpleJobLauncher : Job: [SimpleJob: [name=transitionJobNext]] launched with the following parameters: [{}]
2019-06-09 23:32:02.931 INFO 19976 --- [ restartedMain] o.s.batch.core.job.SimpleStepHandler : Step already complete or not restartable, so no action to execute: StepExecution: id=19, version=3, name=step1, status=COMPLETED, exitStatus=COMPLETED, readCount=0, filterCount=0, writeCount=0 readSkipCount=0, writeSkipCount=0, processSkipCount=0, commitCount=1, rollbackCount=0, exitDescription=
2019-06-09 23:32:02.945 INFO 19976 --- [ restartedMain] o.s.batch.core.job.SimpleStepHandler : Step already complete or not restartable, so no action to execute: StepExecution: id=20, version=3, name=step2, status=COMPLETED, exitStatus=COMPLETED, readCount=0, filterCount=0, writeCount=0 readSkipCount=0, writeSkipCount=0, processSkipCount=0, commitCount=1, rollbackCount=0, exitDescription=
2019-06-09 23:32:02.954 INFO 19976 --- [ restartedMain] o.s.b.c.l.support.SimpleJobLauncher : Job: [SimpleJob: [name=transitionJobNext]] completed with the following parameters: [{}] and the following status: [COMPLETED]
2019-06-09 23:32:02.963 INFO 19976 --- [ Thread-7] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Shutdown initiated...
2019-06-09 23:32:02.974 INFO 19976 --- [ Thread-7] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Shutdown completed.
Checking the logs, we can see that everything is completed.
BUT
The tasklets were not executed. I am really confused by this.
Checking the DB, I've detected that the job status is COMPLETED but the exit_code is NOOP.
I've also found that using .allowStartIfComplete(true) on a step triggers tasklet execution. But that feels like a hack, and it only helps for a single execution of the step within the same job (it doesn't cover the case of executing the same step twice in the same job). For example:
@Bean
public Step step2() {
    return stepBuilderFactory.get("step2")
            .allowStartIfComplete(true)
            .tasklet((contribution, chunkContext) -> {
                System.out.println(">> This is step 2");
                return RepeatStatus.FINISHED;
            }).build();
}
When I start the project from the terminal, it works correctly.
My env:
OpenJDK Runtime Environment Zulu11.29+3-CA (build 11.0.2+7-LTS)
Gradle 5.4.1
IntelliJ IDEA Community (just installed the latest version)
OS ubuntu 18.04
The command used to start the application from IDEA is:
/home/sergii/.sdkman/candidates/java/current/bin/java -javaagent:/home/sergii/IDE/idea-IC-191.7479.19/lib/idea_rt.jar=33229:/home/sergii/IDE/idea-IC-191.7479.19/bin -Dfile.encoding=UTF-8 -classpath /home/sergii/Development/projects/my/spring/spring-batch-learning/out/production/classes:/home/sergii/Development/projects/my/spring/spring-batch-learning/out/production/resources:/home/sergii/.gradle/caches/modules-2/files-2.1/org.springframework.boot/spring-boot-starter-actuator/2.2.0.M3/f7d3810d75b6fb01b7aa1d8df9e21b3bea8c2dbc/spring-boot-starter-actuator-2.2.0.M3.jar:/home/sergii/.gradle/caches/modules-2/files-2.1/org.springframework.boot/spring-boot-devtools/2.2.0.M3/853cb206490a2946646d4bac02446b8dc564be30/spring-boot-devtools-2.2.0.M3.jar:/home/sergii/.gradle/caches/modules-2/files-2.1/org.springframework.boot/spring-boot-starter-batch/2.2.0.M3/fe0bb4352d0c49d9b088db13c6a89fd9619bf2dc/spring-boot-starter-batch-2.2.0.M3.jar:/home/sergii/.gradle/caches/modules-2/files-2.1/org.springframework.boot/spring-boot-starter-mail/2.2.0.M3/ffc975519f3c14e7577d0407845240caa3fc10f9/spring-boot-starter-mail-2.2.0.M3.jar:/home/sergii/.gradle/caches/modules-2/files-2.1/org.springframework.boot/spring-boot-starter-jdbc/2.2.0.M3/fbd92e6461e5e37186c360f0080af27719780811/spring-boot-starter-jdbc-2.2.0.M3.jar:/home/sergii/.gradle/caches/modules-2/files-2.1/org.springframework.boot/spring-boot-starter/2.2.0.M3/d858f3131933381d6661c0f08b6bd9669f581123/spring-boot-starter-2.2.0.M3.jar:/home/sergii/.gradle/caches/modules-2/files-2.1/org.springframework.boot/spring-boot-actuator-autoconfigure/2.2.0.M3/3a6536cc550e7e555be62d16720c285b40c3c95a/spring-boot-actuator-autoconfigure-2.2.0.M3.jar:/home/sergii/.gradle/caches/modules-2/files-2.1/mysql/mysql-connector-java/8.0.16/6088b7a25188ab4b3ab865422a8ec77ade29236/mysql-connector-java-8.0.16.jar:/home/sergii/.gradle/caches/modules-2/files-2.1/org.springframework.batch/spring-batch-core/4.2.0.M2/60b52cb2d85ead44ecf7b0bf47dfeb6e672316b6/spring-batch-core-4.2.0.M2.jar:/home/sergii/.gradle/caches/modules-2/files-2.1/io.micrometer/micrometer-core/1.1.4/96eabfe2343a4a4676d215b2122cbbc4d4b6af9b/micrometer-core-1.1.4.jar:/home/sergii/.gradle/caches/modules-2/files-2.1/org.springframework/spring-context-support/5.2.0.M2/fa3009cdce6b4155da8833bb003577991cd71908/spring-context-support-5.2.0.M2.jar:/home/sergii/.gradle/caches/modules-2/files-2.1/com.sun.mail/jakarta.mail/1.6.3/787e007e377223bba85a33599d3da416c135f99b/jakarta.mail-1.6.3.jar:/home/sergii/.gradle/caches/modules-2/files-2.1/org.springframework.boot/spring-boot-autoconfigure/2.2.0.M3/3d990f3a7716875013570a1ddd9c79a5dc556390/spring-boot-autoconfigure-2.2.0.M3.jar:/home/sergii/.gradle/caches/modules-2/files-2.1/org.springframework.boot/spring-boot-actuator/2.2.0.M3/e08a8914653e13395ddaf23fde41570d919b8109/spring-boot-actuator-2.2.0.M3.jar:/home/sergii/.gradle/caches/modules-2/files-2.1/org.springframework.boot/spring-boot/2.2.0.M3/fc0b424da418b242c3d953bd6bbf06f03bfb1925/spring-boot-2.2.0.M3.jar:/home/sergii/.gradle/caches/modules-2/files-2.1/org.springframework.boot/spring-boot-starter-logging/2.2.0.M3/ae4dc76f8f14327ca3a792584f666d484f21b5/spring-boot-starter-logging-2.2.0.M3.jar:/home/sergii/.gradle/caches/modules-2/files-2.1/jakarta.annotation/jakarta.annotation-api/1.3.4/a858ec3f0ebd2b8d855c1ddded2cde9b381b0517/jakarta.annotation-api-1.3.4.jar:/home/sergii/.gradle/caches/modules-2/files-2.1/org.springframework/spring-context/5.2.0.M2/c85d8095c7765d8d38b9b6a357aa347b617bab79/spring-context-5.2.0.M2.jar:/home/sergii/.gra
dle/caches/modules-2/files-2.1/org.springframework/spring-jdbc/5.2.0.M2/ea5a41f2b01a2a5f88426e3a509f53479e2b6c41/spring-jdbc-5.2.0.M2.jar:/home/sergii/.gradle/caches/modules-2/files-2.1/org.springframework.batch/spring-batch-infrastructure/4.2.0.M2/b522e7c7f1c5ebb575136f149b0408462d391e22/spring-batch-infrastructure-4.2.0.M2.jar:/home/sergii/.gradle/caches/modules-2/files-2.1/org.springframework/spring-aop/5.2.0.M2/8deb00a5c5e18b6c594877afe4e82b8b8d64ccb/spring-aop-5.2.0.M2.jar:/home/sergii/.gradle/caches/modules-2/files-2.1/org.springframework/spring-tx/5.2.0.M2/7042f477e471796be677f71ec63575ac7c47a749/spring-tx-5.2.0.M2.jar:/home/sergii/.gradle/caches/modules-2/files-2.1/org.springframework/spring-beans/5.2.0.M2/c4aa2bb803602ebc26a7ee47628f6af106e1bf55/spring-beans-5.2.0.M2.jar:/home/sergii/.gradle/caches/modules-2/files-2.1/org.springframework.retry/spring-retry/1.2.4.RELEASE/e5a1e629b2743dc7bbe4a8d07ebe9ff6c3b816ce/spring-retry-1.2.4.RELEASE.jar:/home/sergii/.gradle/caches/modules-2/files-2.1/org.springframework/spring-expression/5.2.0.M2/cdf6909ed2decf704486ca85395e97177fd7535b/spring-expression-5.2.0.M2.jar:/home/sergii/.gradle/caches/modules-2/files-2.1/org.springframework/spring-core/5.2.0.M2/9a84d456ad8d5151da06fc8a85540da9cc95d734/spring-core-5.2.0.M2.jar:/home/sergii/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.9.8/11283f21cc480aa86c4df7a0a3243ec508372ed2/jackson-databind-2.9.8.jar:/home/sergii/.gradle/caches/modules-2/files-2.1/org.hdrhistogram/HdrHistogram/2.1.9/e4631ce165eb400edecfa32e03d3f1be53dee754/HdrHistogram-2.1.9.jar:/home/sergii/.gradle/caches/modules-2/files-2.1/org.latencyutils/LatencyUtils/2.0.3/769c0b82cb2421c8256300e907298a9410a2a3d3/LatencyUtils-2.0.3.jar:/home/sergii/.gradle/caches/modules-2/files-2.1/com.zaxxer/HikariCP/3.3.1/bb447db60818ecfdbb1b99e7bd096ba7a252d91a/HikariCP-3.3.1.jar:/home/sergii/.gradle/caches/modules-2/files-2.1/org.yaml/snakeyaml/1.24/13a9c0d6776483c3876e3ff9384f9bb55b17001b/snakeyaml-1.24.jar:/home/sergii/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.datatype/jackson-datatype-jsr310/2.9.8/28ad1bced632ba338e51c825a652f6e11a8e6eac/jackson-datatype-jsr310-2.9.8.jar:/home/sergii/.gradle/caches/modules-2/files-2.1/javax.batch/javax.batch-api/1.0/65392d027a6eb369fd9fcd1b75cae150e25ac03c/javax.batch-api-1.0.jar:/home/sergii/.gradle/caches/modules-2/files-2.1/org.codehaus.jettison/jettison/1.2/765a6181653f4b05c18c7a9e8f5c1f8269bf9b2/jettison-1.2.jar:/home/sergii/.gradle/caches/modules-2/files-2.1/com.sun.activation/jakarta.activation/1.2.1/8013606426a73d8ba6b568370877251e91a38b89/jakarta.activation-1.2.1.jar:/home/sergii/.gradle/caches/modules-2/files-2.1/ch.qos.logback/logback-classic/1.2.3/7c4f3c474fb2c041d8028740440937705ebb473a/logback-classic-1.2.3.jar:/home/sergii/.gradle/caches/modules-2/files-2.1/org.apache.logging.log4j/log4j-to-slf4j/2.11.2/6d37bf7b046c0ce2669f26b99365a2cfa45c4c18/log4j-to-slf4j-2.11.2.jar:/home/sergii/.gradle/caches/modules-2/files-2.1/org.slf4j/jul-to-slf4j/1.7.26/8031352b2bb0a49e67818bf04c027aa92e645d5c/jul-to-slf4j-1.7.26.jar:/home/sergii/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-annotations/2.9.0/7c10d545325e3a6e72e06381afe469fd40eb701/jackson-annotations-2.9.0.jar:/home/sergii/.gradle/caches/modules-2/files-2.1/org.springframework/spring-jcl/5.2.0.M2/988d5bac4a51ed2675626378ee79f8447eda2002/spring-jcl-5.2.0.M2.jar:/home/sergii/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-core/2.9.8/f5a654e4675769c716e5b387830d19b501c
a191/jackson-core-2.9.8.jar:/home/sergii/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-api/1.7.26/77100a62c2e6f04b53977b9f541044d7d722693d/slf4j-api-1.7.26.jar:/home/sergii/.gradle/caches/modules-2/files-2.1/ch.qos.logback/logback-core/1.2.3/864344400c3d4d92dfeb0a305dc87d953677c03c/logback-core-1.2.3.jar:/home/sergii/.gradle/caches/modules-2/files-2.1/org.apache.logging.log4j/log4j-api/2.11.2/f5e9a2ffca496057d6891a3de65128efc636e26e/log4j-api-2.11.2.jar org.vl.spring.batch.learning.springbatchlearning.transition.TransitionsApplication
Question
I guess the issue is with IDEA, because it works in the terminal. Is there any way to analyze and detect the reason in my case, and how can I fix it?
P.S.
I've made the following unsuccessful attempts to fix it:
reinstall IDEA;
recreate the project from the prepared sources and from Spring Initializr;
restart IDEA, invalidating caches.
It seems it was not an IntelliJ IDEA issue after all...
I tried removing my configured datasource and experimenting with the default one using an embedded DB. That works correctly. The next thing I did was use my original datasource configuration but with different step names. That works too. Checking the DB, I found that every executed step has COMPLETED status.
Continuing the investigation...
Every one of my steps has COMPLETED status in the DB. By default, Spring Batch doesn't allow completed steps to be executed again.
To make COMPLETED steps run one more time, we can:
update the COMPLETED status in the DB to STOPPED or FAILED - that is really a hack;
use the step builder method .allowStartIfComplete(true) - a feature of Spring Batch.
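A related option, used by the jobs earlier on this page, is to give the job a JobParametersIncrementer so that every launch creates a new JobInstance (with a new run.id parameter) instead of re-using the already COMPLETED one. A sketch against the configuration above (Spring Boot's command-line runner applies the incrementer when no parameters are passed, with the caveat discussed in the first answer on this page):
@Bean
public Job jobSimpleNext() {
    System.out.println("starting job");
    return jobBuilderFactory.get("jobNext")
            // RunIdIncrementer (org.springframework.batch.core.launch.support.RunIdIncrementer)
            // bumps the run.id job parameter, producing a new JobInstance per launch.
            .incrementer(new RunIdIncrementer())
            .start(step1())
            .next(step2())
            .build();
}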
Recently I've been doing an update of an application from Grails 2.5.5 to Grails 3.2.9.
The application is serving ~3K RPM.
The issue I currently have is poor performance after application startup. Our normal release process works in the following way (assuming a 2-node setup):
2 nodes with the service are running.
Turn off the first node.
Release the new version on the first node.
The service on the first node registers itself in Eureka and starts consuming requests.
(repeat the same on the second node)
Problems start to appear at the 4th step. The application responds quite slowly and the timing is really inconsistent - some responses are in the expected time range, but some are way off normal.
Sample logs:
2017-09-01 08:03:38,594 INFO [request][http-nio-12345-exec-72][][]- END controller=98ms
2017-09-01 08:03:38,911 INFO [request][http-nio-12345-exec-101][][]- END controller=134ms
2017-09-01 08:03:38,948 INFO [request][http-nio-12345-exec-56][][]- END controller=211ms
2017-09-01 08:03:39,156 INFO [request][http-nio-12345-exec-82][][]- END controller=95ms
2017-09-01 08:03:39,124 INFO [request][http-nio-12345-exec-111][][]- END controller=98ms
2017-09-01 08:03:39,184 INFO [request][http-nio-12345-exec-110][][]- END controller=4099ms
2017-09-01 08:03:39,399 INFO [request][http-nio-12345-exec-46][][]- END controller=24ms
2017-09-01 08:03:39,428 INFO [request][http-nio-12345-exec-43][][]- END controller=191ms
2017-09-01 08:03:39,744 INFO [request][http-nio-12345-exec-83][][]- END controller=117ms
2017-09-01 08:03:40,335 INFO [request][http-nio-12345-exec-56][][]- END controller=483ms
2017-09-01 08:03:45,595 INFO [request][http-nio-12345-exec-110][][]- END controller=5623ms
2017-09-01 08:03:45,618 INFO [request][http-nio-12345-exec-83][][]- END controller=5274ms
2017-09-01 08:03:45,629 INFO [request][http-nio-12345-exec-144][][]- END controller=2007ms
2017-09-01 08:03:45,671 INFO [request][http-nio-12345-exec-119][][]- END controller=4591ms
As you can see from it, a few requests went below 100 ms and some took more than 5 seconds.
My assumption is that this is happening due to the slow warm-up of the Grails 3 application and lazy class loading.
Things I've already done:
grails.gorm.autowire = false
grails.gorm.reactor.events = false
Delay registration of the service in Eureka by 30 seconds (to wait until the application is fully loaded)
The next thing that comes to my mind is to compile the project with the @CompileStatic annotation.
When using the spring-boot-starter-data-rest package, Spring automatically creates some endpoints named /profile for ALPS, as follows:
2017-03-08 22:09:12.737 INFO 8663 --- [ restartedMain] o.s.d.r.w.BasePathAwareHandlerMapping : Mapped "{[/profile],methods=[OPTIONS]}" onto public org.springframework.http.HttpEntity<?> org.springframework.data.rest.webmvc.ProfileController.profileOptions()
2017-03-08 22:09:12.738 INFO 8663 --- [ restartedMain] o.s.d.r.w.BasePathAwareHandlerMapping : Mapped "{[/profile],methods=[GET]}" onto org.springframework.http.HttpEntity<org.springframework.hateoas.ResourceSupport> org.springframework.data.rest.webmvc.ProfileController.listAllFormsOfMetadata()
2017-03-08 22:09:12.738 INFO 8663 --- [ restartedMain] o.s.d.r.w.BasePathAwareHandlerMapping : Mapped "{[/profile/{repository}],methods=[GET],produces=[application/schema+json]}" onto public org.springframework.http.HttpEntity<org.springframework.data.rest.webmvc.json.JsonSchema> org.springframework.data.rest.webmvc.RepositorySchemaController.schema(org.springframework.data.rest.webmvc.RootResourceInformation)
2017-03-08 22:09:12.738 INFO 8663 --- [ restartedMain] o.s.d.r.w.BasePathAwareHandlerMapping : Mapped "{[/profile/{repository}],methods=[OPTIONS],produces=[application/alps+json]}" onto org.springframework.http.HttpEntity<?> org.springframework.data.rest.webmvc.alps.AlpsController.alpsOptions()
2017-03-08 22:09:12.738 INFO 8663 --- [ restartedMain] o.s.d.r.w.BasePathAwareHandlerMapping : Mapped "{[/profile/{repository}],methods=[GET],produces=[application/alps+json || */*]}" onto org.springframework.http.HttpEntity<org.springframework.data.rest.webmvc.RootResourceInformation> org.springframework.data.rest.webmvc.alps.AlpsController.descriptor(org.springframework.data.rest.webmvc.RootResourceInformation)
The problem is that I have a RestRepository entitled profile as well, so I want to use that path for my own endpoints.
The question is:
how can I change this endpoint to something else, or even remove it entirely?
Add the following to the application.properties file.
endpoints.enabled=false
This will disable all public endpoints provided by Spring/Actuator.
management.context-path=/public
This will move the Spring-provided endpoints such as /profile under /public.
I stumbled upon the same issue 4 years later; per this bug, it appears there's a partial workaround to at least disable the functionality of the endpoints, though the /profile URL is still exposed.