Spring Batch step not executing - java

I am working on a Spring Batch project in which I read a list of students, process them, and write them.
For now I have kept it simple: processing just returns the student and writing just prints it.
I was expecting to see the output every time the step runs, but I see it only once, when the step runs for the first time. Below is the output:
2020-04-03 01:33:16.153 INFO 14710 --- [ main] o.s.batch.core.job.SimpleStepHandler : Executing step: [xxxx]
[Student{id=1, name='ABC'}]
as
[Student{id=2, name='DEF'}]
as
[Student{id=3, name='GHI'}]
as
2020-04-03 01:33:16.187 INFO 14710 --- [ main] o.s.batch.core.step.AbstractStep : Step: [xxxx] executed in 33ms
2020-04-03 01:33:16.190 INFO 14710 --- [ main] o.s.b.c.l.support.SimpleJobLauncher : Job: [SimpleJob: [name=readStudents]] completed with the following parameters: [{}] and the following status: [COMPLETED] in 52ms
job triggered
2020-04-03 01:33:17.011 INFO 14710 --- [ scheduling-1] o.s.b.c.l.support.SimpleJobLauncher : Job: [SimpleJob: [name=readStudents]] launched with the following parameters: [{time=1585857797003}]
2020-04-03 01:33:17.017 INFO 14710 --- [ scheduling-1] o.s.batch.core.job.SimpleStepHandler : Executing step: [xxxx]
2020-04-03 01:33:17.022 INFO 14710 --- [ scheduling-1] o.s.batch.core.step.AbstractStep : Step: [xxxx] executed in 4ms
2020-04-03 01:33:17.024 INFO 14710 --- [ scheduling-1] o.s.b.c.l.support.SimpleJobLauncher : Job: [SimpleJob: [name=readStudents]] completed with the following parameters: [{time=1585857797003}] and the following status: [COMPLETED] in 11ms
I also notice that the first run has no job parameters while the later runs do, even though I supply job parameters every time I run the job.
Config file
@EnableBatchProcessing
public class Config {

    private JobRunner jobRunner;

    public Config(JobRunner jobRunner) {
        this.jobRunner = jobRunner;
    }

    @Scheduled(cron = "* * * * * *")
    public void scheduleJob() throws JobParametersInvalidException, JobExecutionAlreadyRunningException, JobRestartException, JobInstanceAlreadyCompleteException {
        System.out.println("job triggered");
        jobRunner.runJob();
    }
}
@Configuration
public class JobConfig {

    @Bean
    public Job job(JobBuilderFactory jobBuilderFactory,
                   StepBuilderFactory stepBuilderFactory,
                   ItemReader<Student> reader,
                   ItemProcessor<Student, Student> processor,
                   ItemWriter<Student> writer) {
        Step step = stepBuilderFactory.get("xxxx")
                .<Student, Student>chunk(1)
                .reader(reader)
                .processor(processor)
                .writer(writer)
                .build();
        return jobBuilderFactory
                .get("readStudents")
                .start(step)
                .build();
    }

    @Bean
    public ItemReader<Student> reader() {
        return new InMemoryStudentReader();
    }
}
Job runner file
public class JobRunner {

    private Job job;
    private JobLauncher simpleJobLauncher;

    @Autowired
    public JobRunner(Job job, JobLauncher jobLauncher) {
        this.simpleJobLauncher = jobLauncher;
        this.job = job;
    }

    public void runJob() throws JobParametersInvalidException, JobExecutionAlreadyRunningException, JobRestartException, JobInstanceAlreadyCompleteException {
        JobParameters jobParameters =
                new JobParametersBuilder()
                        .addLong("time", System.currentTimeMillis())
                        .toJobParameters();
        simpleJobLauncher.run(job, jobParameters);
    }
}
In memory student reader
public class InMemoryStudentReader implements ItemReader<Student> {

    private int nextStudentIndex;
    private List<Student> studentData;

    public InMemoryStudentReader() {
        initialize();
    }

    private void initialize() {
        Student s1 = new Student(1, "ABC");
        Student s2 = new Student(2, "DEF");
        Student s3 = new Student(3, "GHI");
        studentData = Collections.unmodifiableList(Arrays.asList(s1, s2, s3));
        nextStudentIndex = 0;
    }

    @Override
    public Student read() throws Exception {
        Student nextStudent = null;
        if (nextStudentIndex < studentData.size()) {
            nextStudent = studentData.get(nextStudentIndex);
            nextStudentIndex++;
        }
        return nextStudent;
    }
}

This happens because you call initialize() in the InMemoryStudentReader constructor. Spring instantiates InMemoryStudentReader only once and wires that single instance into your job, so after the first run nextStudentIndex is never reset to 0 and the reader has nothing left to read on subsequent runs.
If you want it to work, reset nextStudentIndex to 0 whenever the job starts. (As for the missing parameters: the first, parameter-less run is most likely Spring Boot executing the job automatically on startup; your scheduled runs, which do carry the time parameter, come after it. Setting spring.batch.job.enabled=false disables the automatic run.)
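One way to do that (a sketch; one option among several) is to implement Spring Batch's ItemStreamReader instead of ItemReader, because open() is called at the start of every step execution and can reset the index:
public class InMemoryStudentReader implements ItemStreamReader<Student> {

    private int nextStudentIndex;
    private final List<Student> studentData = Collections.unmodifiableList(Arrays.asList(
            new Student(1, "ABC"),
            new Student(2, "DEF"),
            new Student(3, "GHI")));

    @Override
    public void open(ExecutionContext executionContext) {
        nextStudentIndex = 0; // called at the start of every step execution
    }

    @Override
    public void update(ExecutionContext executionContext) {
        // no restart state to persist for this in-memory reader
    }

    @Override
    public void close() {
        // nothing to release
    }

    @Override
    public Student read() {
        return nextStudentIndex < studentData.size()
                ? studentData.get(nextStudentIndex++)
                : null;
    }
}
Alternatively, declaring the reader bean with @StepScope creates a fresh reader instance for every step execution, which resets the state without any extra code.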

Related

Testcontainers "JedisConnectionException: Could not get a resource from the pool" on Gitlab pipeline

I'm using Testcontainers for Redis cluster integration tests. Locally everything works as expected, but after creating a remote branch and pushing the code, the GitLab pipeline fails with the exception below. My Cassandra and PostgreSQL containers work fine; only Redis fails.
Update:
Previously, with a fixed exposed port, the tests worked locally; after removing the fixed exposed port I now get the same error locally as well.
Caused by: redis.clients.jedis.exceptions.JedisClusterMaxAttemptsException: No more cluster attempts left.
at redis.clients.jedis.JedisClusterCommand.runWithRetries(JedisClusterCommand.java:156)
at redis.clients.jedis.JedisClusterCommand.runBinary(JedisClusterCommand.java:69)
at redis.clients.jedis.BinaryJedisCluster.setex(BinaryJedisCluster.java:496)
at redis.clients.jedis.commands.BinaryJedisClusterCommands.setex(BinaryJedisClusterCommands.java:74)
at org.springframework.data.redis.connection.jedis.JedisClusterStringCommands.setEx(JedisClusterStringCommands.java:175)
... 39 common frames omitted
Suppressed: redis.clients.jedis.exceptions.JedisConnectionException: Could not get a resource from the pool
at redis.clients.jedis.util.Pool.getResource(Pool.java:84)
at redis.clients.jedis.JedisPool.getResource(JedisPool.java:366)
at redis.clients.jedis.JedisSlotBasedConnectionHandler.getConnectionFromSlot(JedisSlotBasedConnectionHandler.java:129)
at redis.clients.jedis.JedisClusterCommand.runWithRetries(JedisClusterCommand.java:118)
... 43 common frames omitted
Caused by: redis.clients.jedis.exceptions.JedisConnectionException: Failed to create socket.
at redis.clients.jedis.DefaultJedisSocketFactory.createSocket(DefaultJedisSocketFactory.java:110)
at redis.clients.jedis.Connection.connect(Connection.java:226)
at redis.clients.jedis.BinaryClient.connect(BinaryClient.java:135)
at redis.clients.jedis.BinaryJedis.connect(BinaryJedis.java:309)
at redis.clients.jedis.BinaryJedis.initializeFromClientConfig(BinaryJedis.java:87)
at redis.clients.jedis.BinaryJedis.<init>(BinaryJedis.java:292)
at redis.clients.jedis.Jedis.<init>(Jedis.java:167)
at redis.clients.jedis.JedisFactory.makeObject(JedisFactory.java:177)
at org.apache.commons.pool2.impl.GenericObjectPool.create(GenericObjectPool.java:889)
at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:424)
at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:349)
at redis.clients.jedis.util.Pool.getResource(Pool.java:75)
... 46 common frames omitted
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.base/java.net.PlainSocketImpl.socketConnect(Native Method)
at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:412)
at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:255)
at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:237)
at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.base/java.net.Socket.connect(Socket.java:609)
at redis.clients.jedis.DefaultJedisSocketFactory.createSocket(DefaultJedisSocketFactory.java:80)
... 57 common frames omitted
Here is the Testcontainers configuration.
public class RedisClusterContainer extends GenericContainer<RedisClusterContainer> {

    public RedisClusterContainer() {
        super("grokzen/redis-cluster:6.2.8");
        withEnv("IP", "0.0.0.0");
        addExposedPorts(7000, 7001, 7002, 7003, 7004, 7005);
    }

    public String getNodeAddress() {
        return Stream.of(7000, 7001, 7002, 7003, 7004, 7005)
                .map(port -> {
                    Integer mappedPort = getMappedPort(port);
                    return getHost() + ":" + mappedPort;
                })
                .collect(Collectors.joining(","));
    }
}
I have now added two tests that create the container in different ways, without the fixed exposed port, but neither of them works.
@Slf4j
@SpringBootTest
public class AbstractRedisClusterIT {
}

@Slf4j
class FirstRedisClusterIT extends AbstractRedisClusterIT {

    @Autowired
    private RedisTemplate<String, String> redisTemplate;

    static {
        final RedisClusterContainer redisClusterContainer = new RedisClusterContainer();
        redisClusterContainer.start();
        String redisNodes = redisClusterContainer.getNodeAddress();
        log.info("Redis container started on nodes: {}", redisNodes);
        System.setProperty("spring.redis.cluster.nodes", redisNodes);
    }

    @Test
    void firstRedisTestContainerTest() {
        redisTemplate.opsForValue().set("secondRedisKey", "secondRedisValue", 15, TimeUnit.MINUTES);
        String result = redisTemplate.opsForValue().get("secondRedisKey");
        assertThat(result).isEqualTo("secondRedisValue");
    }
}

@Slf4j
class SecondRedisClusterIT extends AbstractRedisClusterIT {

    @Autowired
    private RedisTemplate<String, String> redisTemplate;

    static {
        final GenericContainer<?> genericContainer = new GenericContainer<>(DockerImageName.parse("grokzen/redis-cluster:6.2.8"))
                .withEnv("IP", "0.0.0.0")
                .withExposedPorts(7000, 7001, 7002, 7003, 7004, 7005);
        genericContainer.start();
        String redisNodes = Stream.of(7000, 7001, 7002, 7003, 7004, 7005)
                .map(port -> {
                    Integer mappedPort = genericContainer.getMappedPort(port);
                    return genericContainer.getHost() + ":" + mappedPort;
                })
                .collect(Collectors.joining(","));
        System.setProperty("spring.redis.cluster.nodes", redisNodes);
        log.info("Redis container started on nodes: {}", redisNodes);
    }

    @Test
    void secondRedisTestContainerTest() {
        redisTemplate.opsForValue().set("firstRedisKey", "firstRedisValue", 15, TimeUnit.MINUTES);
        String result = redisTemplate.opsForValue().get("firstRedisKey");
        assertThat(result).isEqualTo("firstRedisValue");
    }
}
Here is the connection factory configuration for the cluster.
@Slf4j
@Configuration
public class JedisConfiguration {

    @Value("${spring.redis.cluster.nodes}")
    private String redisClusterNodes;

    @Value("${spring.redis.client-name:redis}")
    private String clientName;

    @Bean
    @Primary
    RedisConnectionFactory connectionFactory() {
        log.info("Cluster nodes: {}", redisClusterNodes);
        List<String> nodes = Arrays.stream(redisClusterNodes.split(",")).collect(toList());
        RedisClusterConfiguration clusterConfig = new RedisClusterConfiguration(nodes);
        JedisClientConfiguration clientConfig = JedisClientConfiguration.builder().clientName(clientName).usePooling().build();
        return new JedisConnectionFactory(clusterConfig, clientConfig);
    }

    @Bean
    RedisTemplate<String, String> redisTemplate(RedisConnectionFactory factory) {
        return new StringRedisTemplate(factory);
    }
}
In the pipeline logs I can see the container being started:
2022-12-15 14:14:19.804 INFO 87 --- [ Test worker] i.c.testenv.RedisContainerExtension : Starting Redis container
2022-12-15 14:14:19.814 INFO 87 --- [ Test worker] 🐳 [grokzen/redis-cluster:5.0.7] : Pulling docker image: grokzen/redis-cluster:5.0.7. Please be patient; this may take some time but only needs to be done once.
2022-12-15 14:14:20.170 INFO 87 --- [ream-2042455873] 🐳 [grokzen/redis-cluster:5.0.7] : Starting to pull image
................
2022-12-15 14:14:35.997 INFO 87 --- [ Test worker] 🐳 [grokzen/redis-cluster:5.0.7] : Creating container for image: grokzen/redis-cluster:5.0.7
2022-12-15 14:14:35.999 INFO 87 --- [ream-2042455873] 🐳 [grokzen/redis-cluster:5.0.7] : Pull complete. 17 layers, pulled in 15s (downloaded 176 MB at 11 MB/s)
2022-12-15 14:14:36.335 INFO 87 --- [ Test worker] 🐳 [grokzen/redis-cluster:5.0.7] : Container grokzen/redis-cluster:5.0.7 is starting: 47534017152ee0a974cf65d2030fbbab592da976a2d258613e5c27ad4b5b71e9
2022-12-15 14:14:39.206 INFO 87 --- [ Test worker] 🐳 [grokzen/redis-cluster:5.0.7] : Container grokzen/redis-cluster:5.0.7 started in PT19.39715S
Even though the container starts and getHost() returns docker as the host, I'm still getting the exception above. Does anyone know what I'm doing wrong?
An example based on this github project can be found below.
The example was created using start.spring.io.
The trick is in the Config class: the spring.data.redis.cluster.nodes property is set with the host and the random mapped ports. However, the cluster client also tries to resolve the original node ports (which it discovers from the cluster topology) in addition to the random ones, so the customizer redirects the original ports to the random mapped ones.
@Testcontainers
@SpringBootTest
class SpringBootRedisApplicationTests {

    @Container
    private static final RedisClusterContainer redisCluster = new RedisClusterContainer();

    @Autowired
    private RedisTemplate<String, String> redisTemplate;

    @DynamicPropertySource
    static void properties(DynamicPropertyRegistry registry) {
        registry.add("spring.data.redis.cluster.nodes", redisCluster::getNodeAddress);
    }

    @Test
    void contextLoads() {
        this.redisTemplate.opsForValue().set("redisKey", "redisValue");
        String result = redisTemplate.opsForValue().get("redisKey");
        assertThat(result).isEqualTo("redisValue");
    }

    static class RedisClusterContainer extends GenericContainer<RedisClusterContainer> {

        public RedisClusterContainer() {
            super("grokzen/redis-cluster:6.2.8");
            withEnv("IP", "0.0.0.0");
            addExposedPorts(7000, 7001, 7002, 7003, 7004, 7005);
            waitStrategy = Wait.forLogMessage(".*Cluster state changed: ok*\\n", 6);
        }

        public String getNodeAddress() {
            return Stream.of(7000, 7001, 7002, 7003, 7004, 7005)
                    .map(port -> {
                        Integer mappedPort = getMappedPort(port);
                        return getHost() + ":" + mappedPort;
                    })
                    .collect(Collectors.joining(","));
        }

        public Map<Integer, Integer> ports() {
            return Map.of(7000, getMappedPort(7000),
                    7001, getMappedPort(7001),
                    7002, getMappedPort(7002),
                    7003, getMappedPort(7003),
                    7004, getMappedPort(7004),
                    7005, getMappedPort(7005));
        }
    }

    @TestConfiguration
    static class Config {

        @Bean
        ClientResourcesBuilderCustomizer customizer() {
            return builder -> {
                Function<HostAndPort, HostAndPort> mappingFn = hostAndPort -> {
                    if (redisCluster.ports().containsKey(hostAndPort.getPort())) {
                        Integer mappedPort = redisCluster.ports().get(hostAndPort.getPort());
                        return HostAndPort.of(hostAndPort.getHostText(), mappedPort);
                    }
                    return hostAndPort;
                };
                builder.socketAddressResolver(MappingSocketAddressResolver.create(mappingFn));
            };
        }
    }
}

Make FileAlterationListenerAdaptor.onFileCreate() always single thread, apache.commons.io.monitor

I'm overriding the onFileCreate() method of org.apache.commons.io.monitor.FileAlterationListenerAdaptor (which implements the FileAlterationListener interface).
The method works, but I found that sometimes it runs on two threads, and I don't fully understand what triggers this behaviour.
WatchService Class
@Component
@Slf4j(topic = "watchService")
public class WatchService {

    private static RestTemplate restTemplate;
    private static Environment env;

    @Autowired
    public WatchService(RestTemplate restTemplate, Environment env) {
        WatchService.restTemplate = restTemplate;
        WatchService.env = env;
    }

    // When a new file is created inside a folder, the file content is sent to Kafka
    @Bean
    public static void startFolderPolling() throws Exception {
        FileAlterationObserver observer = new FileAlterationObserver(env.getRequiredProperty("folder"));
        FileAlterationMonitor monitor = new FileAlterationMonitor(5000);
        log.info("setup completed");
        FileAlterationListener listener = new FileAlterationListenerAdaptor() {
            @Override
            public void onFileCreate(File file) {
                log.info("are you single thread ?");
                try {
                    String data = FileUtils.readFileToString(file, "UTF-8");
                    HttpHeaders headers = new HttpHeaders();
                    headers.setContentType(MediaType.APPLICATION_JSON);
                    HttpEntity<String> entity = new HttpEntity<String>(data, headers);
                    log.info("Calling Kakfa microservice");
                    String answer = restTemplate.postForObject("http://kafka/api/messages/receiveSapOrder", entity, String.class);
                    log.info("sending SAP Order result:" + answer);
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        };
        observer.addListener(listener);
        monitor.addObserver(observer);
        monitor.start();
    }
}
Main Method
@SpringBootApplication
@EnableEurekaClient
public class DirectoryListenerApplication {

    public static void main(String[] args) throws Exception {
        SpringApplication.run(DirectoryListenerApplication.class, args);
        startFolderPolling();
    }
}
With the same file created in the folder, sometimes the method logs two calls on two separate threads, and sometimes it logs only one call on a single thread.
2022-05-10 09:46:42.382 INFO 88330 --- [ main] watchService : setup completed
2022-05-10 09:46:42.397 INFO 88330 --- [nfoReplicator-0] com.netflix.discovery.DiscoveryClient : DiscoveryClient_SAP-LISTENER/192.168.2.63:sap-listener:8095 - registration status: 204
2022-05-10 09:46:57.394 INFO 88330 --- [ Thread-4] watchService : are you single thread ?
2022-05-10 09:46:57.423 INFO 88330 --- [ Thread-4] watchService : Calling Kakfa microservice
2022-05-10 09:46:58.788 INFO 88330 --- [ Thread-4] watchService : sending SAP Order result:{"message":"Uploaded the file successfully"}
2022-05-10 09:47:00.108 INFO 88330 --- [ Thread-2] watchService : are you single thread ?
2022-05-10 09:47:00.112 INFO 88330 --- [ Thread-2] watchService : Calling Kakfa microservice
2022-05-10 09:47:00.197 INFO 88330 --- [ Thread-2] watchService : sending SAP Order result:{"message":"Uploaded the file successfully"}
Is it possible to force single-threaded behaviour?
I removed the Spring Boot @Bean annotation from my startFolderPolling() method and now only one thread is created. (Most likely, with @Bean present, Spring called startFolderPolling() once while building the context and main() called it a second time, so two monitors, each with its own polling thread, were watching the same folder.)
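A cleaner wiring (a sketch, not the poster's code; it assumes the listener is extracted into its own bean) is to let Spring create and start the monitor exactly once, so there is no second call from main():
@Configuration
public class FolderPollingConfig {

    // Spring calls this factory method once, so exactly one monitor
    // (and one polling thread) exists; stop() is invoked on shutdown.
    @Bean(destroyMethod = "stop")
    public FileAlterationMonitor fileAlterationMonitor(Environment env,
                                                       FileAlterationListener listener) throws Exception {
        FileAlterationObserver observer = new FileAlterationObserver(env.getRequiredProperty("folder"));
        observer.addListener(listener);
        FileAlterationMonitor monitor = new FileAlterationMonitor(5000, observer);
        monitor.start();
        return monitor;
    }
}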

How do I catch exceptions occurring during HikariPool intialization in Spring Boot?

My database limits the number of active connections, which results in the HikariPool initialization exception shown below. I want to suppress the full stack trace and catch the exception in my main class.
Here's a stack trace of my exception log:
2020-11-18 16:27:16.619 INFO 9124 --- [ restartedMain] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Starting...
2020-11-18 16:27:18.344 ERROR 9124 --- [ restartedMain] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Exception during pool initialization.
java.sql.SQLSyntaxErrorException: User 6eX6BxR3TY already has more than 'max_user_connections' active connections
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:120)
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:97)
at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:122)
at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:836)
at com.mysql.cj.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:456)
at com.mysql.cj.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:246)
Here's the main class:
@SpringBootApplication(exclude = { DataSourceAutoConfiguration.class })
public class NseapiApplication {

    private static final Logger LOGGER = LoggerFactory.getLogger(NseapiApplication.class);

    private static String splunkUrl;

    @Value("${splunk.url}")
    public void setSplunkUrl(String splunkUrl) {
        NseapiApplication.splunkUrl = splunkUrl;
    }

    public static void main(String[] args) {
        SpringApplication.run(NseapiApplication.class, args);
        LOGGER.info("Forwarding logs to Splunk Cloud Instance : " + splunkUrl);
    }

    @Bean
    public static BeanFactoryPostProcessor dependsOnPostProcessor() {
        return bf -> {
            String[] jpa = bf.getBeanNamesForType(EntityManagerFactory.class);
            Stream.of(jpa).map(bf::getBeanDefinition).forEach(it -> it.setDependsOn("databaseStartupValidator"));
        };
    }

    @Bean
    public DatabaseStartupValidator databaseStartupValidator(DataSource dataSource) {
        DatabaseStartupValidator dsv = new DatabaseStartupValidator();
        dsv.setDataSource(dataSource);
        dsv.setValidationQuery(DatabaseDriver.MYSQL.getValidationQuery());
        return dsv;
    }
}
Here's the Database Configuration class:
@Configuration
public class DatasourceConfig {

    private static final Logger LOGGER = LoggerFactory.getLogger(DatasourceConfig.class);

    @Bean
    public DataSource datasource() {
        return DataSourceBuilder.create()
                .driverClassName("com.mysql.cj.jdbc.Driver")
                .url("jdbc:mysql://myDbUrl").username("myUserName").password("myPassword")
                .build();
    }
}
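For reference, a minimal sketch (not from the original post): if pool initialization makes the application context fail, the exception propagates out of SpringApplication.run(), so it can be caught around that call:
public static void main(String[] args) {
    try {
        SpringApplication.run(NseapiApplication.class, args);
        LOGGER.info("Forwarding logs to Splunk Cloud Instance : " + splunkUrl);
    } catch (Exception ex) {
        // Startup failures, including DataSource/HikariPool ones, surface here
        // (usually wrapped in a BeanCreationException).
        LOGGER.error("Application failed to start: " + ex.getMessage());
    }
}
Note that Spring Boot logs the failure itself before run() throws, so fully suppressing the stack trace would also require adjusting the logging configuration.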

IllegalStateException: No function defined for Spring Cloud Function

I have defined a function in my Spring Cloud Function project, but when I execute the function (ScanQrCode) as an AWS Lambda I get:
2020-07-13 10:19:04.592 INFO 1 --- [ main] lambdainternal.LambdaRTEntry : Started LambdaRTEntry in 26.357 seconds (JVM running for 27.777)
2020-07-13 10:19:04.653 INFO 1 --- [ main] o.s.c.f.c.c.SimpleFunctionRegistry : Looking up function 'function' with acceptedOutputTypes: []
2020-07-13 10:19:04.731 INFO 1 --- [ main] o.s.c.f.c.c.SimpleFunctionRegistry : Looking up function 'consumer' with acceptedOutputTypes: []
2020-07-13 10:19:04.734 INFO 1 --- [ main] o.s.c.f.c.c.SimpleFunctionRegistry : Looking up function 'supplier' with acceptedOutputTypes: []
2020-07-13 10:19:04.754 INFO 1 --- [ main] o.s.c.f.c.c.SimpleFunctionRegistry : Looking up function '' with acceptedOutputTypes: []
2020-07-13 10:19:04.811 INFO 1 --- [ main] o.s.c.f.c.c.SimpleFunctionRegistry : Looking up function '' with acceptedOutputTypes: []
2020-07-13 10:19:04.811 INFO 1 --- [ main] o.s.c.f.c.c.SimpleFunctionRegistry : Looking up function '' with acceptedOutputTypes: []
No function defined: java.lang.IllegalStateException
java.lang.IllegalStateException: No function defined
at org.springframework.cloud.function.context.AbstractSpringFunctionAdapterInitializer.apply(AbstractSpringFunctionAdapterInitializer.java:187)
at org.springframework.cloud.function.adapter.aws.SpringBootRequestHandler.handleRequest(SpringBootRequestHandler.java:51)
The weird thing here is that I have commented out the function, consume, and supply beans... I don't know why Spring or AWS Lambda still considers them...
Here are my classes:
Spring Boot App
@SpringBootApplication
public class FoodhatQrApplication {

    public static void main(String[] args) {
        SpringApplication.run(FoodhatQrApplication.class, args);
    }

    // @Bean
    // public Function<String, String> function() {
    //     return input -> input;
    // }
    //
    // @Bean
    // public Consumer<String> consume() {
    //     return input -> System.out.println("Input: " + input);
    // }
    //
    // @Bean
    // public Supplier<String> supply() {
    //     return () -> "Hello World";
    // }
}
Function class
@Component
public class ScanQrCode implements Function<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {

    @Autowired
    private QrCodeRepository qrCodeRepository;

    @Override
    public APIGatewayProxyResponseEvent apply(APIGatewayProxyRequestEvent request) {
        APIGatewayProxyResponseEvent response = new APIGatewayProxyResponseEvent();
        response.setStatusCode(302);
        response.setHeaders(Collections.singletonMap("location", "http://www.google.de"));
        return response;
    }
}
What am I missing?
Function component scan: adding the spring.cloud.function.scan.packages property might help.
Reference: https://cloud.spring.io/spring-cloud-function/reference/html/spring-cloud-function.html#_function_component_scan
You definitely need to add your function name, in our case ScanQrCode, to the AWS Lambda environment variables:
FUNCTION_NAME: ScanQrCode

Spring Webflux: How to use different thread for request and response

I'm using Spring WebFlux, and as I understand it, the thread that receives the request and the thread that writes the response should be able to differ. However, whether I use Netty or Undertow, I end up seeing the same thread for both.
My app is a simple CRUD app backed by a MySQL DB. I'm not using R2DBC but plain JDBC combined with an Executor and a Scheduler.
As shown in the log below, the request is handled by thread [XNIO-1 I/O-6] and the response is written by the same one.
From this I assume the thread is blocked until the DB operation has finished. How can I fix this?
Here's the log
2019-07-23 17:49:10.051 INFO 132 --- [ main] org.xnio : XNIO version 3.3.8.Final
2019-07-23 17:49:10.059 INFO 132 --- [ main] org.xnio.nio : XNIO NIO Implementation Version 3.3.8.Final
2019-07-23 17:49:10.114 INFO 132 --- [ main] o.s.b.w.e.undertow.UndertowWebServer : Undertow started on port(s) 8080 (http)
2019-07-23 17:49:10.116 INFO 132 --- [ main] c.n.webflux.demo.WebfluxFunctionalApp : Started WebfluxFunctionalApp in 1.262 seconds (JVM running for 2.668)
2019-07-23 17:49:10.302 DEBUG 132 --- [ XNIO-1 I/O-6] o.s.w.s.adapter.HttpWebHandlerAdapter : [4c85975] HTTP GET "/api/findall"
2019-07-23 17:49:10.322 DEBUG 132 --- [ XNIO-1 I/O-6] s.w.r.r.m.a.RequestMappingHandlerMapping : [4c85975] Mapped to public reactor.core.publisher.Mono<java.util.List<com.webflux.demo.model.TypeStatus>> com.webflux.demo.controller.MonitoringController.findAll()
2019-07-23 17:49:10.337 DEBUG 132 --- [ XNIO-1 I/O-6] o.s.w.r.r.m.a.ResponseBodyResultHandler : Using 'application/json;charset=UTF-8' given [*/*] and supported [application/json;charset=UTF-8, application/*+json;charset=UTF-8, text/event-stream]
2019-07-23 17:49:10.338 DEBUG 132 --- [ XNIO-1 I/O-6] o.s.w.r.r.m.a.ResponseBodyResultHandler : [4c85975] 0..1 [java.util.List<com.webflux.demo.model.TypeStatus>]
2019-07-23 17:49:10.347 INFO 132 --- [pool-1-thread-1] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Starting...
2019-07-23 17:49:10.785 INFO 132 --- [pool-1-thread-1] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Start completed.
2019-07-23 17:49:10.838 DEBUG 132 --- [pool-1-thread-1] org.springframework.web.HttpLogging : [4c85975] Encoding [[com.webflux.demo.model.TypeStatus#7b4509cb, com.webflux.demo.model.TypeStatus#22676ebe, (truncated)...]
2019-07-23 17:49:10.949 DEBUG 132 --- [ XNIO-1 I/O-6] o.s.w.s.adapter.HttpWebHandlerAdapter : [4c85975] Completed 200 OK
Also my dao is
@Repository
public class TypeStatusJdbcTemplate {

    private JdbcTemplate jdbcTemplate;

    public TypeStatusJdbcTemplate(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    private final static String SQL_FIND_ALL = "select * from `monitoring`.`type_status` limit 3";

    public List<TypeStatus> findAll() {
        return jdbcTemplate.query(SQL_FIND_ALL, new TypeStatusMapper());
    }
}
service is
@Service
public class MonitoringService {

    private final Scheduler scheduler;
    private TypeStatusJdbcTemplate repository;

    public MonitoringService(Scheduler scheduler, TypeStatusJdbcTemplate repository) {
        this.scheduler = scheduler;
        this.repository = repository;
    }

    public Mono<List<TypeStatus>> findAll() {
        return Mono.fromCallable(repository::findAll).subscribeOn(scheduler);
    }
}
controller is
@RestController
@RequestMapping("/api")
public class MonitoringController {

    private final MonitoringService monitoringService;
    private static final Logger logger = LoggerFactory.getLogger(MonitoringController.class);

    public MonitoringController(MonitoringService monitoringService) {
        this.monitoringService = monitoringService;
    }

    @GetMapping(value = "/findall")
    public Mono<List<TypeStatus>> findAll() {
        return monitoringService.findAll();
    }
}
main file (showing scheduler)
@SpringBootApplication
public class WebfluxFunctionalApp {

    public static void main(String[] args) {
        SpringApplication.run(WebfluxFunctionalApp.class, args);
    }

    @PostConstruct
    public void init() {
        // Setting Spring Boot default time zone
        TimeZone.setDefault(TimeZone.getTimeZone("UTC"));
    }

    @Bean
    public Scheduler jdbcScheduler() {
        return Schedulers.fromExecutor(Executors.newFixedThreadPool(30));
    }
}
Execution does not always have to happen on different threads. From the Reactor documentation on schedulers:
Obtaining a Flux or a Mono doesn't necessarily mean it will run in a dedicated Thread. Instead, most operators continue working in the Thread on which the previous operator executed. Unless specified, the topmost operator (the source) itself runs on the Thread in which the subscribe() call was made.
So there is nothing that says it has to be a new thread.
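To relate that to the log above (an illustrative sketch, not from the original answer): subscribeOn did move the blocking JDBC call onto pool-1-thread-1 from the jdbcScheduler pool; only the final "Completed 200 OK" is written back on the XNIO event-loop thread, which is the expected behaviour:
// Sketch of what the service method above does with threads:
Mono<List<TypeStatus>> result = Mono
        .fromCallable(repository::findAll)   // blocking JDBC call
        .subscribeOn(jdbcScheduler)          // executes on a pool-1-thread-N thread,
                                             // keeping the event loop free
        .doOnNext(list -> System.out.println(
                "emitted on " + Thread.currentThread().getName())); // pool-1-thread-N
// The server adapter subscribes from its event-loop thread (XNIO-1 I/O-6)
// and writes the response there once the Mono completes.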
