My database limits the number of active connections, which results in the HikariPool initialization exception shown below. I want to suppress the full stack trace and instead catch the exception in my Main class.
Here's the stack trace from my log:
2020-11-18 16:27:16.619 INFO 9124 --- [ restartedMain] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Starting...
2020-11-18 16:27:18.344 ERROR 9124 --- [ restartedMain] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Exception during pool initialization.
java.sql.SQLSyntaxErrorException: User 6eX6BxR3TY already has more than 'max_user_connections' active connections
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:120)
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:97)
at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:122)
at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:836)
at com.mysql.cj.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:456)
at com.mysql.cj.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:246)
Here's the main class:
@SpringBootApplication(exclude = { DataSourceAutoConfiguration.class })
public class NseapiApplication {

    private static final Logger LOGGER = LoggerFactory.getLogger(NseapiApplication.class);

    private static String splunkUrl;

    @Value("${splunk.url}")
    public void setSplunkUrl(String splunkUrl) {
        NseapiApplication.splunkUrl = splunkUrl;
    }

    public static void main(String[] args) {
        SpringApplication.run(NseapiApplication.class, args);
        LOGGER.info("Forwarding logs to Splunk Cloud Instance : " + splunkUrl);
    }

    @Bean
    public static BeanFactoryPostProcessor dependsOnPostProcessor() {
        return bf -> {
            String[] jpa = bf.getBeanNamesForType(EntityManagerFactory.class);
            Stream.of(jpa).map(bf::getBeanDefinition).forEach(it -> it.setDependsOn("databaseStartupValidator"));
        };
    }

    @Bean
    public DatabaseStartupValidator databaseStartupValidator(DataSource dataSource) {
        DatabaseStartupValidator dsv = new DatabaseStartupValidator();
        dsv.setDataSource(dataSource);
        dsv.setValidationQuery(DatabaseDriver.MYSQL.getValidationQuery());
        return dsv;
    }
}
Here's the Database Configuration class:
@Configuration
public class DatasourceConfig {

    private static final Logger LOGGER = LoggerFactory.getLogger(DatasourceConfig.class);

    @Bean
    public DataSource datasource() {
        return DataSourceBuilder.create()
                .driverClassName("com.mysql.cj.jdbc.Driver")
                .url("jdbc:mysql://myDbUrl").username("myUserName").password("myPassword")
                .build();
    }
}
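For illustration, a minimal sketch of the kind of handling described above, assuming the pool-initialization failure propagates out of SpringApplication.run() once databaseStartupValidator gives up (an assumption, not verified against this exact setup):

public static void main(String[] args) {
    try {
        SpringApplication.run(NseapiApplication.class, args);
        LOGGER.info("Forwarding logs to Splunk Cloud Instance : " + splunkUrl);
    } catch (Exception e) {
        // Sketch: log a single line with the root cause instead of the full
        // stack trace. NestedExceptionUtils is from org.springframework.core.
        Throwable root = NestedExceptionUtils.getRootCause(e);
        LOGGER.error("Application startup failed: {}", (root != null ? root : e).getMessage());
    }
}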
I'm using Testcontainers for Redis cluster integration tests. Locally everything works as expected, but after creating a remote branch and pushing the code, the GitLab pipeline fails with the exception below. My Cassandra and PostgreSQL containers work fine; only Redis fails.
Update:
Previously, with FixedExposedPort, the tests worked locally, but after removing FixedExposedPort I now get the same error locally as well.
Caused by: redis.clients.jedis.exceptions.JedisClusterMaxAttemptsException: No more cluster attempts left.
at redis.clients.jedis.JedisClusterCommand.runWithRetries(JedisClusterCommand.java:156)
at redis.clients.jedis.JedisClusterCommand.runBinary(JedisClusterCommand.java:69)
at redis.clients.jedis.BinaryJedisCluster.setex(BinaryJedisCluster.java:496)
at redis.clients.jedis.commands.BinaryJedisClusterCommands.setex(BinaryJedisClusterCommands.java:74)
at org.springframework.data.redis.connection.jedis.JedisClusterStringCommands.setEx(JedisClusterStringCommands.java:175)
... 39 common frames omitted
Suppressed: redis.clients.jedis.exceptions.JedisConnectionException: Could not get a resource from the pool
at redis.clients.jedis.util.Pool.getResource(Pool.java:84)
at redis.clients.jedis.JedisPool.getResource(JedisPool.java:366)
at redis.clients.jedis.JedisSlotBasedConnectionHandler.getConnectionFromSlot(JedisSlotBasedConnectionHandler.java:129)
at redis.clients.jedis.JedisClusterCommand.runWithRetries(JedisClusterCommand.java:118)
... 43 common frames omitted
Caused by: redis.clients.jedis.exceptions.JedisConnectionException: Failed to create socket.
at redis.clients.jedis.DefaultJedisSocketFactory.createSocket(DefaultJedisSocketFactory.java:110)
at redis.clients.jedis.Connection.connect(Connection.java:226)
at redis.clients.jedis.BinaryClient.connect(BinaryClient.java:135)
at redis.clients.jedis.BinaryJedis.connect(BinaryJedis.java:309)
at redis.clients.jedis.BinaryJedis.initializeFromClientConfig(BinaryJedis.java:87)
at redis.clients.jedis.BinaryJedis.<init>(BinaryJedis.java:292)
at redis.clients.jedis.Jedis.<init>(Jedis.java:167)
at redis.clients.jedis.JedisFactory.makeObject(JedisFactory.java:177)
at org.apache.commons.pool2.impl.GenericObjectPool.create(GenericObjectPool.java:889)
at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:424)
at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:349)
at redis.clients.jedis.util.Pool.getResource(Pool.java:75)
... 46 common frames omitted
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.base/java.net.PlainSocketImpl.socketConnect(Native Method)
at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:412)
at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:255)
at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:237)
at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.base/java.net.Socket.connect(Socket.java:609)
at redis.clients.jedis.DefaultJedisSocketFactory.createSocket(DefaultJedisSocketFactory.java:80)
... 57 common frames omitted
Here is the Testcontainers configuration.
public class RedisClusterContainer extends GenericContainer<RedisClusterContainer> {

    public RedisClusterContainer() {
        super("grokzen/redis-cluster:6.2.8");
        withEnv("IP", "0.0.0.0");
        addExposedPorts(7000, 7001, 7002, 7003, 7004, 7005);
    }

    public String getNodeAddress() {
        return Stream.of(7000, 7001, 7002, 7003, 7004, 7005)
                .map(port -> {
                    Integer mappedPort = getMappedPort(port);
                    return getHost() + ":" + mappedPort;
                })
                .collect(Collectors.joining(","));
    }
}
I have now added two tests that create the container in different ways without FixedExposedPort, but neither of them works.
@Slf4j
@SpringBootTest
public class AbstractRedisClusterIT {
}

@Slf4j
class FirstRedisClusterIT extends AbstractRedisClusterIT {

    @Autowired
    private RedisTemplate<String, String> redisTemplate;

    static {
        final RedisClusterContainer redisClusterContainer = new RedisClusterContainer();
        redisClusterContainer.start();
        String redisNodes = redisClusterContainer.getNodeAddress();
        log.info("Redis container started on nodes: {}", redisNodes);
        System.setProperty("spring.redis.cluster.nodes", redisNodes);
    }

    @Test
    void firstRedisTestContainerTest() {
        redisTemplate.opsForValue().set("secondRedisKey", "secondRedisValue", 15, TimeUnit.MINUTES);
        String result = redisTemplate.opsForValue().get("secondRedisKey");
        assertThat(result).isEqualTo("secondRedisValue");
    }
}

@Slf4j
class SecondRedisClusterIT extends AbstractRedisClusterIT {

    @Autowired
    private RedisTemplate<String, String> redisTemplate;

    static {
        final GenericContainer<?> genericContainer = new GenericContainer<>(DockerImageName.parse("grokzen/redis-cluster:6.2.8"))
                .withEnv("IP", "0.0.0.0")
                .withExposedPorts(7000, 7001, 7002, 7003, 7004, 7005);
        genericContainer.start();
        String redisNodes = Stream.of(7000, 7001, 7002, 7003, 7004, 7005)
                .map(port -> {
                    Integer mappedPort = genericContainer.getMappedPort(port);
                    return genericContainer.getHost() + ":" + mappedPort;
                })
                .collect(Collectors.joining(","));
        System.setProperty("spring.redis.cluster.nodes", redisNodes);
        log.info("Redis container started on nodes: {}", redisNodes);
    }

    @Test
    void secondRedisTestContainerTest() {
        redisTemplate.opsForValue().set("firstRedisKey", "firstRedisValue", 15, TimeUnit.MINUTES);
        String result = redisTemplate.opsForValue().get("firstRedisKey");
        assertThat(result).isEqualTo("firstRedisValue");
    }
}
Here is the connection factory configuration for the cluster.
@Slf4j
@Configuration
public class JedisConfiguration {

    @Value("${spring.redis.cluster.nodes}")
    private String redisClusterNodes;

    @Value("${spring.redis.client-name:redis}")
    private String clientName;

    @Bean
    @Primary
    RedisConnectionFactory connectionFactory() {
        log.info("Cluster nodes: {}", redisClusterNodes);
        List<String> nodes = Arrays.stream(redisClusterNodes.split(",")).collect(toList());
        RedisClusterConfiguration clusterConfig = new RedisClusterConfiguration(nodes);
        JedisClientConfiguration clientConfig = JedisClientConfiguration.builder().clientName(clientName).usePooling().build();
        return new JedisConnectionFactory(clusterConfig, clientConfig);
    }

    @Bean
    RedisTemplate<String, String> redisTemplate(RedisConnectionFactory factory) {
        return new StringRedisTemplate(factory);
    }
}
In the pipeline logs I could see that the container started.
2022-12-15 14:14:19.804 INFO 87 --- [ Test worker] i.c.testenv.RedisContainerExtension : Starting Redis container
2022-12-15 14:14:19.814 INFO 87 --- [ Test worker] 🐳 [grokzen/redis-cluster:5.0.7] : Pulling docker image: grokzen/redis-cluster:5.0.7. Please be patient; this may take some time but only needs to be done once.
2022-12-15 14:14:20.170 INFO 87 --- [ream-2042455873] 🐳 [grokzen/redis-cluster:5.0.7] : Starting to pull image
................
2022-12-15 14:14:35.997 INFO 87 --- [ Test worker] 🐳 [grokzen/redis-cluster:5.0.7] : Creating container for image: grokzen/redis-cluster:5.0.7
2022-12-15 14:14:35.999 INFO 87 --- [ream-2042455873] 🐳 [grokzen/redis-cluster:5.0.7] : Pull complete. 17 layers, pulled in 15s (downloaded 176 MB at 11 MB/s)
2022-12-15 14:14:36.335 INFO 87 --- [ Test worker] 🐳 [grokzen/redis-cluster:5.0.7] : Container grokzen/redis-cluster:5.0.7 is starting: 47534017152ee0a974cf65d2030fbbab592da976a2d258613e5c27ad4b5b71e9
2022-12-15 14:14:39.206 INFO 87 --- [ Test worker] 🐳 [grokzen/redis-cluster:5.0.7] : Container grokzen/redis-cluster:5.0.7 started in PT19.39715S
Even though the container is started and getHost() returns docker as the host, I'm still getting the above exception. Does anyone know what I'm doing wrong?
An example based on this GitHub project can be found below.
The example was created using start.spring.io.
The trick is in the Config class: the spring.data.redis.cluster.nodes property is set with the host and the randomly mapped ports. However, the client tries to resolve both the random ports and the original ones that the cluster nodes advertise, so the customizer redirects the original ports to the mapped ones.
@Testcontainers
@SpringBootTest
class SpringBootRedisApplicationTests {

    @Container
    private static final RedisClusterContainer redisCluster = new RedisClusterContainer();

    @Autowired
    private RedisTemplate<String, String> redisTemplate;

    @DynamicPropertySource
    static void properties(DynamicPropertyRegistry registry) {
        registry.add("spring.data.redis.cluster.nodes", redisCluster::getNodeAddress);
    }

    @Test
    void contextLoads() {
        this.redisTemplate.opsForValue().set("redisKey", "redisValue");
        String result = redisTemplate.opsForValue().get("redisKey");
        assertThat(result).isEqualTo("redisValue");
    }

    static class RedisClusterContainer extends GenericContainer<RedisClusterContainer> {

        public RedisClusterContainer() {
            super("grokzen/redis-cluster:6.2.8");
            withEnv("IP", "0.0.0.0");
            addExposedPorts(7000, 7001, 7002, 7003, 7004, 7005);
            waitStrategy = Wait.forLogMessage(".*Cluster state changed: ok*\\n", 6);
        }

        public String getNodeAddress() {
            return Stream.of(7000, 7001, 7002, 7003, 7004, 7005)
                    .map(port -> {
                        Integer mappedPort = getMappedPort(port);
                        return getHost() + ":" + mappedPort;
                    })
                    .collect(Collectors.joining(","));
        }

        public Map<Integer, Integer> ports() {
            return Map.of(7000, getMappedPort(7000),
                    7001, getMappedPort(7001),
                    7002, getMappedPort(7002),
                    7003, getMappedPort(7003),
                    7004, getMappedPort(7004),
                    7005, getMappedPort(7005));
        }
    }

    @TestConfiguration
    static class Config {

        @Bean
        ClientResourcesBuilderCustomizer customizer() {
            return builder -> {
                Function<HostAndPort, HostAndPort> mappingFn = hostAndPort -> {
                    if (redisCluster.ports().containsKey(hostAndPort.getPort())) {
                        Integer mappedPort = redisCluster.ports().get(hostAndPort.getPort());
                        return HostAndPort.of(hostAndPort.getHostText(), mappedPort);
                    }
                    return hostAndPort;
                };
                builder.socketAddressResolver(MappingSocketAddressResolver.create(mappingFn));
            };
        }
    }
}
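Note that ClientResourcesBuilderCustomizer (Spring Boot's hook for customizing Lettuce's ClientResources) and MappingSocketAddressResolver (from Lettuce itself) assume the Lettuce driver, Spring Boot's default, rather than the Jedis connection factory shown in the question. Also note the answer sets spring.data.redis.cluster.nodes (the Spring Boot 3.x property name), while the question's configuration reads spring.redis.cluster.nodes (the 2.x name).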
I'm overriding the onFileCreate() method of the org.apache.commons.io.monitor.FileAlterationListener interface.
The method works, but I found that sometimes it spawns two threads and I don't fully understand what is triggering this behaviour.
WatchService Class
@Component
@Slf4j(topic = "watchService")
public class WatchService {

    private static RestTemplate restTemplate;
    private static Environment env;

    @Autowired
    public WatchService(RestTemplate restTemplate, Environment env) {
        WatchService.restTemplate = restTemplate;
        WatchService.env = env;
    }

    // When a new file is created inside a folder, the file content is sent to Kafka
    @Bean
    public static void startFolderPolling() throws Exception {
        FileAlterationObserver observer = new FileAlterationObserver(env.getRequiredProperty("folder"));
        FileAlterationMonitor monitor = new FileAlterationMonitor(5000);
        log.info("setup completed");
        FileAlterationListener listener = new FileAlterationListenerAdaptor() {
            @Override
            public void onFileCreate(File file) {
                log.info("are you single thread ?");
                try {
                    String data = FileUtils.readFileToString(file, "UTF-8");
                    HttpHeaders headers = new HttpHeaders();
                    headers.setContentType(MediaType.APPLICATION_JSON);
                    HttpEntity<String> entity = new HttpEntity<String>(data, headers);
                    log.info("Calling Kakfa microservice");
                    String answer = restTemplate.postForObject("http://kafka/api/messages/receiveSapOrder", entity, String.class);
                    log.info("sending SAP Order result:" + answer);
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        };
        observer.addListener(listener);
        monitor.addObserver(observer);
        monitor.start();
    }
}
Main Method
@SpringBootApplication
@EnableEurekaClient
public class DirectoryListenerApplication {

    public static void main(String[] args) throws Exception {
        SpringApplication.run(DirectoryListenerApplication.class, args);
        startFolderPolling();
    }
}
When the same file is created in the folder, the method sometimes logs two calls on two separate threads and sometimes logs only one call on a single thread.
2022-05-10 09:46:42.382 INFO 88330 --- [ main] watchService : setup completed
2022-05-10 09:46:42.397 INFO 88330 --- [nfoReplicator-0] com.netflix.discovery.DiscoveryClient : DiscoveryClient_SAP-LISTENER/192.168.2.63:sap-listener:8095 - registration status: 204
2022-05-10 09:46:57.394 INFO 88330 --- [ Thread-4] watchService : are you single thread ?
2022-05-10 09:46:57.423 INFO 88330 --- [ Thread-4] watchService : Calling Kakfa microservice
2022-05-10 09:46:58.788 INFO 88330 --- [ Thread-4] watchService : sending SAP Order result:{"message":"Uploaded the file successfully"}
2022-05-10 09:47:00.108 INFO 88330 --- [ Thread-2] watchService : are you single thread ?
2022-05-10 09:47:00.112 INFO 88330 --- [ Thread-2] watchService : Calling Kakfa microservice
2022-05-10 09:47:00.197 INFO 88330 --- [ Thread-2] watchService : sending SAP Order result:{"message":"Uploaded the file successfully"}
Is it possible to force the single-threaded behaviour?
I removed the Spring Boot @Bean annotation from my startFolderPolling() method, and now only one thread is created. With the annotation in place, Spring invoked the method once while building the context and main() invoked it a second time, so two FileAlterationMonitor instances, each with its own polling thread, were running.
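A minimal sketch of the corrected wiring (using @PostConstruct here is my assumption, not the poster's exact fix): let Spring start the polling exactly once after the bean is constructed, and drop the explicit call from main().

@Component
@Slf4j(topic = "watchService")
public class WatchService {

    private final RestTemplate restTemplate;
    private final Environment env;

    public WatchService(RestTemplate restTemplate, Environment env) {
        this.restTemplate = restTemplate;
        this.env = env;
    }

    // Invoked exactly once by Spring after dependency injection completes,
    // so only a single FileAlterationMonitor (and polling thread) exists.
    @PostConstruct
    public void startFolderPolling() throws Exception {
        FileAlterationObserver observer = new FileAlterationObserver(env.getRequiredProperty("folder"));
        FileAlterationMonitor monitor = new FileAlterationMonitor(5000);
        observer.addListener(new FileAlterationListenerAdaptor() {
            @Override
            public void onFileCreate(File file) {
                // same handling as in the question
            }
        });
        monitor.addObserver(observer);
        monitor.start();
    }
}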
I'm creating an application with Spring Boot + Quartz + Oracle and I would like to save the scheduling in the database (persistent, in case the server crashes). It works fine with RAMJobStore, but when I try to use JobStoreTX it doesn't work; it always uses RAMJobStore. Where could the problem be? I'm surely making a lot of mistakes, but it's my first application with Spring Boot + Quartz. Can you give me an idea?
The events will be created dynamically, with the information received in the controller.
application.yaml (In addition to Quartz, the application connects to the database to query tables, but the application and Quartz will use the same database)
hibernate:
  globally_quoted_identifiers: true
  show_sql: true
logging:
  level:
    org:
      hibernate:
        SQL: ${hibernate.logging}
  pattern:
    console: '%d{yyyy-MM-dd HH:mm:ss} %-5level %logger{36} - %msg%n'
server:
  port: ${port}
spring:
  activemq:
    broker-url: ${activemq-url}
    password: ${activemq-password}
    user: ${activemq-user}
  datasource:
    driver-class-name: ${driverClassName}
    password: ${ddbb-password}
    jdbcUrl: ${ddbb-url}
    username: ${ddbb-user}
  jpa:
    properties:
      hibernate:
        dialect: org.hibernate.dialect.Oracle12cDialect
  quartz:
    job-store-type: jdbc
    jdbc:
      initialize-schema: never
    properties:
      org:
        quartz:
          scheduler:
            instanceId: AUTO
          jobStore:
            useProperties: true
            isClustered: false
            clusterCheckinInterval: 5000
            class: org.quartz.impl.jdbcjobstore.JobStoreTX
            driverDelegateClass: org.quartz.impl.jdbcjobstore.oracle.OracleDelegate
            dataSource: quartzDataSource
          dataSource:
            quartzDataSource:
              driver: oracle.jdbc.driver.OracleDriver
              URL: ${ddbb-url}
              user: ${ddbb-user}
              password: ${ddbb-password}
Class SchedulerConfiguration
@Configuration
public class SchedulerConfiguration {

    @Bean
    public SchedulerFactoryBean schedulerFactory(ApplicationContext applicationContext) {
        SchedulerFactoryBean schedulerFactoryBean = new SchedulerFactoryBean();
        schedulerFactoryBean.setJobFactory(new AutoWiringSpringBeanJobFactory());
        return schedulerFactoryBean;
    }

    @Bean
    public Scheduler scheduler(ApplicationContext applicationContext) throws SchedulerException {
        Scheduler scheduler = schedulerFactory(applicationContext).getScheduler();
        scheduler.start();
        return scheduler;
    }

    @Bean
    @QuartzDataSource
    @ConfigurationProperties(prefix = "spring.datasource")
    public DataSource quartzDataSource() {
        return DataSourceBuilder.create().build();
    }
}
Class AutoWiringSpringBeanJobFactory
public class AutoWiringSpringBeanJobFactory extends SpringBeanJobFactory implements ApplicationContextAware {

    private transient AutowireCapableBeanFactory beanFactory;

    @Override
    public void setApplicationContext(final ApplicationContext context) {
        beanFactory = context.getAutowireCapableBeanFactory();
    }

    @Override
    protected Object createJobInstance(final TriggerFiredBundle bundle) throws Exception {
        final Object job = super.createJobInstance(bundle);
        beanFactory.autowireBean(job);
        return job;
    }
}
Job class
@Component
public class CampaignJob implements Job {

    @Override
    public void execute(JobExecutionContext jobExecutionContext) throws JobExecutionException {
        System.out.println("Hi, the job works");
    }
}
The class where the scheduler is created
public class ManagerServiceImpl implements ManagerService {

    @Autowired
    private ProducerQueue producer;

    @Autowired
    private ManagementDatabase managementDatabase;

    @Autowired
    private Scheduler scheduler;

    @Override
    public String processCampaign(ScheduleCampaign scheduleCampaign) {
        try {
            ZonedDateTime dateTime = ZonedDateTime.of(scheduleCampaign.getDateTime(), scheduleCampaign.getTimeZone());
            JobDetail jobDetail = buildJobDetail(scheduleCampaign);
            Trigger trigger = buildJobTrigger(jobDetail, dateTime);
            scheduler.scheduleJob(jobDetail, trigger);
        } catch (SchedulerException e) {
            System.out.println("There was an error creating the scheduler: " + e);
        }
        return "Scheduler created";
    }

    private JobDetail buildJobDetail(ScheduleCampaign scheduleCampaign) {
        JobDataMap jobDataMap = new JobDataMap();
        System.out.println("Function: buildJobDetail - campaign value: " + scheduleCampaign.getCampaign());
        jobDataMap.put("campaign", scheduleCampaign.getCampaign());
        return JobBuilder.newJob(CampaignJob.class)
                .withIdentity(UUID.randomUUID().toString(), "campaign-jobs")
                .requestRecovery(true)
                .storeDurably(true)
                .withDescription("campaign job planned")
                .usingJobData(jobDataMap)
                .storeDurably()
                .build();
    }

    private Trigger buildJobTrigger(JobDetail jobDetail, ZonedDateTime startAt) {
        return TriggerBuilder.newTrigger()
                .forJob(jobDetail)
                .withIdentity(jobDetail.getKey().getName(), "campaign-triggers")
                .withDescription("campaign job Trigger")
                .startAt(Date.from(startAt.toInstant()))
                .withSchedule(SimpleScheduleBuilder.simpleSchedule().withMisfireHandlingInstructionFireNow())
                .build();
    }
}
Logs
2020-12-28 16:16:08 INFO com.zaxxer.hikari.HikariDataSource - HikariPool-1 - Starting...
2020-12-28 16:16:09 INFO com.zaxxer.hikari.HikariDataSource - HikariPool-1 - Start completed.
2020-12-28 16:16:09 INFO o.h.jpa.internal.util.LogHelper - HHH000204: Processing PersistenceUnitInfo [name: default]
2020-12-28 16:16:10 INFO org.hibernate.Version - HHH000412: Hibernate ORM core version 5.4.23.Final
2020-12-28 16:16:10 INFO org.quartz.impl.StdSchedulerFactory - Using default implementation for ThreadExecutor
2020-12-28 16:16:10 INFO o.quartz.core.SchedulerSignalerImpl - Initialized Scheduler Signaller of type: class org.quartz.core.SchedulerSignalerImpl
2020-12-28 16:16:10 INFO org.quartz.core.QuartzScheduler - Quartz Scheduler v.2.3.2 created.
2020-12-28 16:16:10 INFO org.quartz.simpl.RAMJobStore - RAMJobStore initialized.
2020-12-28 16:16:10 INFO org.quartz.core.QuartzScheduler - Scheduler meta-data: Quartz Scheduler (v2.3.2) 'schedulerFactory' with instanceId 'NON_CLUSTERED'
Scheduler class: 'org.quartz.core.QuartzScheduler' - running locally.
NOT STARTED.
Currently in standby mode.
Number of jobs executed: 0
Using thread pool 'org.quartz.simpl.SimpleThreadPool' - with 10 threads.
Using job-store 'org.quartz.simpl.RAMJobStore' - which does not support persistence. and is not clustered.
Try autowiring your app's default DataSource into the scheduler factory if the Quartz jobs use the same database as the application. Also note that once you define your own SchedulerFactoryBean, Spring Boot's auto-configuration backs off and the spring.quartz.* properties are no longer applied, so you must pass the Quartz properties and the DataSource to the factory yourself; otherwise Quartz falls back to RAMJobStore.
The configuration below worked for me with PostgreSQL.
@Autowired
DataSource dataSource;

@Autowired
JobFactory jobFactory;

@Bean
public JobFactory jobFactory(ApplicationContext applicationContext) {
    AutoWiringSpringBeanJobFactory jobFactory = new AutoWiringSpringBeanJobFactory();
    jobFactory.setApplicationContext(applicationContext);
    return jobFactory;
}

@Bean
public SchedulerFactoryBean schedulerFactoryBean() throws IOException {
    SchedulerFactoryBean factory = new SchedulerFactoryBean();
    factory.setOverwriteExistingJobs(true);
    factory.setAutoStartup(true);
    factory.setDataSource(dataSource);
    factory.setJobFactory(jobFactory);
    factory.setQuartzProperties(quartzProperties());
    return factory;
}

@Bean
public Properties quartzProperties() throws IOException {
    PropertiesFactoryBean propertiesFactoryBean = new PropertiesFactoryBean();
    propertiesFactoryBean.setLocation(new ClassPathResource("/quartz.properties"));
    propertiesFactoryBean.afterPropertiesSet();
    return propertiesFactoryBean.getObject();
}
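For completeness, a minimal /quartz.properties matching the Oracle setup from the question might look like this (a sketch mirroring the question's YAML, not a verified configuration):

# Sketch: JDBC job store settings, mirroring the question's YAML
org.quartz.scheduler.instanceId=AUTO
org.quartz.jobStore.class=org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.driverDelegateClass=org.quartz.impl.jdbcjobstore.oracle.OracleDelegate
org.quartz.jobStore.useProperties=true
org.quartz.jobStore.isClustered=false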
I am working on a Spring Batch project in which I am reading a list of students, processing it, and writing it.
For now I have kept it simple: the processor just returns the student and the writer just prints it.
I was expecting to see the output every time the step runs, but I see it only once, when the step runs for the first time. Below is the output.
2020-04-03 01:33:16.153 INFO 14710 --- [ main] o.s.batch.core.job.SimpleStepHandler : Executing step: [xxxx]
[Student{id=1, name='ABC'}]
as
[Student{id=2, name='DEF'}]
as
[Student{id=3, name='GHI'}]
as
2020-04-03 01:33:16.187 INFO 14710 --- [ main] o.s.batch.core.step.AbstractStep : Step: [xxxx] executed in 33ms
2020-04-03 01:33:16.190 INFO 14710 --- [ main] o.s.b.c.l.support.SimpleJobLauncher : Job: [SimpleJob: [name=readStudents]] completed with the following parameters: [{}] and the following status: [COMPLETED] in 52ms
job triggered
2020-04-03 01:33:17.011 INFO 14710 --- [ scheduling-1] o.s.b.c.l.support.SimpleJobLauncher : Job: [SimpleJob: [name=readStudents]] launched with the following parameters: [{time=1585857797003}]
2020-04-03 01:33:17.017 INFO 14710 --- [ scheduling-1] o.s.batch.core.job.SimpleStepHandler : Executing step: [xxxx]
2020-04-03 01:33:17.022 INFO 14710 --- [ scheduling-1] o.s.batch.core.step.AbstractStep : Step: [xxxx] executed in 4ms
2020-04-03 01:33:17.024 INFO 14710 --- [ scheduling-1] o.s.b.c.l.support.SimpleJobLauncher : Job: [SimpleJob: [name=readStudents]] completed with the following parameters: [{time=1585857797003}] and the following status: [COMPLETED] in 11ms
I also notice that the first run has no job parameters while the later runs do, even though I supply job parameters every time I run the job.
Config file
@EnableBatchProcessing
public class Config {

    private JobRunner jobRunner;

    public Config(JobRunner jobRunner) {
        this.jobRunner = jobRunner;
    }

    @Scheduled(cron = "* * * * * *")
    public void scheduleJob() throws JobParametersInvalidException, JobExecutionAlreadyRunningException, JobRestartException, JobInstanceAlreadyCompleteException {
        System.out.println("job triggered");
        jobRunner.runJob();
    }
}
@Configuration
public class JobConfig {

    @Bean
    public Job job(JobBuilderFactory jobBuilderFactory,
                   StepBuilderFactory stepBuilderFactory,
                   ItemReader<Student> reader,
                   ItemProcessor<Student, Student> processor,
                   ItemWriter<Student> writer) {
        Step step = stepBuilderFactory.get("xxxx")
                .<Student, Student>chunk(1)
                .reader(reader)
                .processor(processor)
                .writer(writer)
                .build();
        return jobBuilderFactory
                .get("readStudents")
                .start(step)
                .build();
    }

    @Bean
    public ItemReader<Student> reader() {
        return new InMemoryStudentReader();
    }
}
Job runner file
public class JobRunner {

    private Job job;
    private JobLauncher simpleJobLauncher;

    @Autowired
    public JobRunner(Job job, JobLauncher jobLauncher) {
        this.simpleJobLauncher = jobLauncher;
        this.job = job;
    }

    public void runJob() throws JobParametersInvalidException, JobExecutionAlreadyRunningException, JobRestartException, JobInstanceAlreadyCompleteException {
        JobParameters jobParameters = new JobParametersBuilder()
                .addLong("time", System.currentTimeMillis())
                .toJobParameters();
        simpleJobLauncher.run(job, jobParameters);
    }
}
In memory student reader
public class InMemoryStudentReader implements ItemReader<Student> {

    private int nextStudentIndex;
    private List<Student> studentData;

    public InMemoryStudentReader() {
        initialize();
    }

    private void initialize() {
        Student s1 = new Student(1, "ABC");
        Student s2 = new Student(2, "DEF");
        Student s3 = new Student(3, "GHI");
        studentData = Collections.unmodifiableList(Arrays.asList(s1, s2, s3));
        nextStudentIndex = 0;
    }

    @Override
    public Student read() throws Exception {
        Student nextStudent = null;
        if (nextStudentIndex < studentData.size()) {
            nextStudent = studentData.get(nextStudentIndex);
            nextStudentIndex++;
        }
        return nextStudent;
    }
}
This happens because you call initialize() in the InMemoryStudentReader constructor. Spring instantiates InMemoryStudentReader only once and wires that single instance into your job, so after the first run nextStudentIndex is never reset to 0 and the reader has nothing left to read on subsequent runs.
If you want it to work, reset nextStudentIndex to 0 whenever the job starts.
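A minimal sketch of one way to do that (using Spring Batch's @BeforeStep callback is my suggestion, not the original answerer's code): add a listener method to the reader that resets the index at the start of every step execution.

public class InMemoryStudentReader implements ItemReader<Student> {

    private int nextStudentIndex;
    private List<Student> studentData;

    public InMemoryStudentReader() {
        initialize();
    }

    private void initialize() {
        studentData = Collections.unmodifiableList(Arrays.asList(
                new Student(1, "ABC"), new Student(2, "DEF"), new Student(3, "GHI")));
        nextStudentIndex = 0;
    }

    // Spring Batch invokes @BeforeStep methods on the reader before each step
    // execution, so every run of the job starts reading from the beginning.
    // (@BeforeStep is org.springframework.batch.core.annotation.BeforeStep.)
    @BeforeStep
    public void resetIndex(StepExecution stepExecution) {
        nextStudentIndex = 0;
    }

    @Override
    public Student read() {
        if (nextStudentIndex < studentData.size()) {
            return studentData.get(nextStudentIndex++);
        }
        return null;
    }
}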
I'm using Spring WebFlux, and as I understand it, the thread that receives the request and the thread that writes the response should be different.
My app is a simple CRUD app with a MySQL DB. I'm not using R2DBC but JDBC coupled with an Executor and a Scheduler.
As shown in the log below, the request is handled by thread [XNIO-1 I/O-6] and the response is written by the same thread.
From this, I'm assuming the thread is blocked until the DB operation has finished. How can I fix this?
Here's the log
2019-07-23 17:49:10.051 INFO 132 --- [ main] org.xnio : XNIO version 3.3.8.Final
2019-07-23 17:49:10.059 INFO 132 --- [ main] org.xnio.nio : XNIO NIO Implementation Version 3.3.8.Final
2019-07-23 17:49:10.114 INFO 132 --- [ main] o.s.b.w.e.undertow.UndertowWebServer : Undertow started on port(s) 8080 (http)
2019-07-23 17:49:10.116 INFO 132 --- [ main] c.n.webflux.demo.WebfluxFunctionalApp : Started WebfluxFunctionalApp in 1.262 seconds (JVM running for 2.668)
2019-07-23 17:49:10.302 DEBUG 132 --- [ XNIO-1 I/O-6] o.s.w.s.adapter.HttpWebHandlerAdapter : [4c85975] HTTP GET "/api/findall"
2019-07-23 17:49:10.322 DEBUG 132 --- [ XNIO-1 I/O-6] s.w.r.r.m.a.RequestMappingHandlerMapping : [4c85975] Mapped to public reactor.core.publisher.Mono<java.util.List<com.webflux.demo.model.TypeStatus>> com.webflux.demo.controller.MonitoringController.findAll()
2019-07-23 17:49:10.337 DEBUG 132 --- [ XNIO-1 I/O-6] o.s.w.r.r.m.a.ResponseBodyResultHandler : Using 'application/json;charset=UTF-8' given [*/*] and supported [application/json;charset=UTF-8, application/*+json;charset=UTF-8, text/event-stream]
2019-07-23 17:49:10.338 DEBUG 132 --- [ XNIO-1 I/O-6] o.s.w.r.r.m.a.ResponseBodyResultHandler : [4c85975] 0..1 [java.util.List<com.webflux.demo.model.TypeStatus>]
2019-07-23 17:49:10.347 INFO 132 --- [pool-1-thread-1] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Starting...
2019-07-23 17:49:10.785 INFO 132 --- [pool-1-thread-1] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Start completed.
2019-07-23 17:49:10.838 DEBUG 132 --- [pool-1-thread-1] org.springframework.web.HttpLogging : [4c85975] Encoding [[com.webflux.demo.model.TypeStatus#7b4509cb, com.webflux.demo.model.TypeStatus#22676ebe, (truncated)...]
2019-07-23 17:49:10.949 DEBUG 132 --- [ XNIO-1 I/O-6] o.s.w.s.adapter.HttpWebHandlerAdapter : [4c85975] Completed 200 OK
Also my dao is
@Repository
public class TypeStatusJdbcTemplate {

    private JdbcTemplate jdbcTemplate;

    public TypeStatusJdbcTemplate(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    private final static String SQL_FIND_ALL = "select * from `monitoring`.`type_status` limit 3";

    public List<TypeStatus> findAll() {
        return jdbcTemplate.query(SQL_FIND_ALL, new TypeStatusMapper());
    }
}
service is
@Service
public class MonitoringService {

    private final Scheduler scheduler;
    private TypeStatusJdbcTemplate repository;

    public MonitoringService(Scheduler scheduler, TypeStatusJdbcTemplate repository) {
        this.scheduler = scheduler;
        this.repository = repository;
    }

    public Mono<List<TypeStatus>> findAll() {
        return Mono.fromCallable(repository::findAll).subscribeOn(scheduler);
    }
}
controller is
@RestController
@RequestMapping("/api")
public class MonitoringController {

    private final MonitoringService monitoringService;
    private static final Logger logger = LoggerFactory.getLogger(MonitoringController.class);

    public MonitoringController(MonitoringService monitoringService) {
        this.monitoringService = monitoringService;
    }

    @GetMapping(value = "/findall")
    public Mono<List<TypeStatus>> findAll() {
        return monitoringService.findAll();
    }
}
main file (showing scheduler)
@SpringBootApplication
public class WebfluxFunctionalApp {

    public static void main(String[] args) {
        SpringApplication.run(WebfluxFunctionalApp.class, args);
    }

    @PostConstruct
    public void init() {
        // Set the JVM default time zone for the Spring Boot app
        TimeZone.setDefault(TimeZone.getTimeZone("UTC"));
    }

    @Bean
    public Scheduler jdbcScheduler() {
        return Schedulers.fromExecutor(Executors.newFixedThreadPool(30));
    }
}
Execution does not always have to happen on different threads. Taken from the Reactor documentation:
Reactive Schedulers
Obtaining a Flux or a Mono doesn’t necessarily mean it will run in a dedicated Thread. Instead, most operators continue working in the Thread on which the previous operator executed. Unless specified, the topmost operator (the source) itself runs on the Thread in which the subscribe() call was made.
So there is nothing that says it has to be a new thread. In fact, your log shows the blocking JDBC call already running on [pool-1-thread-1] (your jdbcScheduler), while [XNIO-1 I/O-6] only accepts the request and writes the response once the Mono completes, so the event-loop thread is not blocked during the DB operation.
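To verify this yourself, here is a small sketch of the poster's service method with thread logging added (the logger field is hypothetical; the original service has none):

public Mono<List<TypeStatus>> findAll() {
    return Mono.fromCallable(repository::findAll)
            // Because of subscribeOn below, the blocking JDBC call and this
            // callback run on a jdbcScheduler thread (e.g. pool-1-thread-1),
            // not on the XNIO event-loop thread.
            .doOnNext(list -> logger.info("DB call finished on {}", Thread.currentThread().getName()))
            .subscribeOn(scheduler);
}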