I am NOT able to stop a JMS consumer dynamically using a Spring Boot REST endpoint.
The number of consumers stays as it was, and no exceptions are thrown.
IBM MQ Version: 9.2.0.5
pom.xml
<dependency>
    <groupId>com.ibm.mq</groupId>
    <artifactId>mq-jms-spring-boot-starter</artifactId>
    <version>2.0.8</version>
</dependency>
JmsConfig.java
@Configuration
@EnableJms
@Log4j2
public class JmsConfig {

    @Bean
    public MQQueueConnectionFactory mqQueueConnectionFactory() {
        MQQueueConnectionFactory mqQueueConnectionFactory = new MQQueueConnectionFactory();
        mqQueueConnectionFactory.setHostName("my-ibm-mq-host.com");
        try {
            mqQueueConnectionFactory.setTransportType(WMQConstants.WMQ_CM_CLIENT);
            mqQueueConnectionFactory.setCCSID(1208);
            mqQueueConnectionFactory.setChannel("my-channel");
            mqQueueConnectionFactory.setPort(1234);
            mqQueueConnectionFactory.setQueueManager("my-QM");
        } catch (Exception e) {
            log.error("Exception while creating JMS connection", e);
        }
        return mqQueueConnectionFactory;
    }
}
JmsListenerConfig.java
@Configuration
@Log4j2
public class JmsListenerConfig implements JmsListenerConfigurer {

    @Autowired
    private JmsConfig jmsConfig;
    private Map<String, String> queueMap = new HashMap<>();

    @Bean
    public DefaultJmsListenerContainerFactory mqJmsListenerContainerFactory() throws JMSException {
        DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        factory.setConnectionFactory(jmsConfig.mqQueueConnectionFactory());
        factory.setDestinationResolver(new DynamicDestinationResolver());
        factory.setSessionTransacted(true);
        factory.setConcurrency("5");
        return factory;
    }

    @Override
    public void configureJmsListeners(JmsListenerEndpointRegistrar registrar) {
        queueMap.put("my-queue-101", "101");
        log.info("queueMap: " + queueMap);
        queueMap.entrySet().forEach(e -> {
            SimpleJmsListenerEndpoint endpoint = new SimpleJmsListenerEndpoint();
            endpoint.setDestination(e.getKey());
            endpoint.setId(e.getValue());
            try {
                log.info("Reading message....");
                endpoint.setMessageListener(message -> {
                    try {
                        log.info("Received ID: {} Destination {}", message.getJMSMessageID(), message.getJMSDestination());
                    } catch (JMSException ex) {
                        log.error("Exception while reading message - " + ex.getMessage());
                    }
                });
                registrar.setContainerFactory(mqJmsListenerContainerFactory());
            } catch (JMSException ex) {
                log.error("Exception while reading message - " + ex.getMessage());
            }
            registrar.registerEndpoint(endpoint);
        });
    }
}
JmsController.java
@RestController
@RequestMapping("/jms")
@Log4j2
public class JmsController {

    @Autowired
    ApplicationContext context;

    @RequestMapping(value = "/stop", method = RequestMethod.GET)
    public @ResponseBody String haltJmsListener() {
        JmsListenerEndpointRegistry listenerEndpointRegistry = context.getBean(JmsListenerEndpointRegistry.class);
        Set<String> containerIds = listenerEndpointRegistry.getListenerContainerIds();
        log.info("containerIds: " + containerIds);
        // stops all consumers
        listenerEndpointRegistry.stop(); // DOESN'T WORK :(
        // stops a consumer by id, used when there are multiple consumers and you want to stop them individually
        // listenerEndpointRegistry.getListenerContainer("101").stop(); // DOESN'T WORK EITHER :(
        return "Jms Listener stopped";
    }
}
Here are the results I observed:
Initial # of consumers: 0 (as expected)
After server startup and queue connection, total # of consumers: 1 (as expected)
After hitting http://localhost:8080/jms/stop endpoint, total # of consumers: 1 (NOT as expected, should go back to 0)
Am I missing any configuration?
You also need to call shutDown() on the container; see my comment on this answer: DefaultMessageListenerContainer's "isActive" vs "isRunning"
start()/stop() set/reset running; initialize()/shutDown() set/reset active. It depends on what your requirements are. stop() just stops the consumers from getting new messages, but the consumers still exist. shutDown() closes the consumers. Most people call stop + shutdown and then initialize + start to restart. But if you just want to stop consuming for a short time, stop/start is all you need.
You will need to iterate over the containers and cast them to call shutDown().
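For example, a minimal sketch (assuming the containers are the default DefaultMessageListenerContainer; untested):

JmsListenerEndpointRegistry registry = context.getBean(JmsListenerEndpointRegistry.class);
registry.getListenerContainers().forEach(container -> {
    container.stop(); // stops the consumers from getting new messages
    if (container instanceof DefaultMessageListenerContainer) {
        ((DefaultMessageListenerContainer) container).shutDown(); // closes the consumers
    }
});

To restart later, call initialize() and then start() on each container.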
I have a Spring Boot application with two endpoints running asynchronously.
Register a user in an external system using a REST API. After a successful registration, save the user to the DB and the Redis cache.
The code is something like this:
@Service
public class UserRegistrationService {

    @Async("asyncExecutor")
    public String registerUser(DomainRequest request) throws SystemException {
        try {
            // External API call
            extResponse = extServiceImplInterface.registerUser(extRequest);
        } catch (Exception e) {
        }
        if (extResponse.getResCode() == 0) {
            // Success response from API - save to DB and redis cache
            savedUser = saveUser(extResponse);
        }
    }
}
Refresh each user in the DB table by calling an external REST API for each of them. To trigger this event I call my 2nd endpoint every 5 secs, and it executes the refreshUser() method.
The code is something like this:
@Service
public class UserRefreshService {

    @Autowired
    // External API call class
    GetLastChatResponse getLastChatResponse;

    @Async("asyncExecutor")
    public void refreshUser() {
        try {
            // Get all registered users from DB
            List<User> currentUsers = userRepositoryInterface.findAll();
            // Traverse through the list and call an external API
            if (!currentUsers.isEmpty()) {
                for (User item : currentUsers) {
                    getLastChatResponse.getLastResponse(item);
                }
            }
        }
        catch (Exception e) {
        }
    }
}
@Service
public class GetLastChatResponse {

    @Autowired
    JedisPool jedisPool;

    @Async("asyncExecutor")
    public void getLastResponse(User item) {
        // Call external rest API
        LastAgentResponse lastResponseMessage = getLastAgentResponse(item);
        try {
            if (lastResponseMessage != null) {
                // Set info to Redis cache
                Jedis jedis = jedisPool.getResource();
                jedis.set(item.getChatId() + Constants.LAST_INDEX, lastResponseMessage.getLastIndex());
                jedis.set(item.getChatId() + Constants.LAST_TEXT_TIME, LocalDateTime.now().toString());
            }
        } catch (SystemException e) {
            logger.error("Exception: {}", e);
        }
    }
}
I'm using this thread pool config:
@Bean(name = "asyncExecutor")
public Executor asyncExecutor() {
    ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
    executor.setCorePoolSize(100);
    executor.setMaxPoolSize(200);
    executor.setQueueCapacity(1);
    executor.setKeepAliveSeconds(5);
    executor.setThreadNamePrefix("AsyncThread-");
    executor.initialize();
    return executor;
}
Usually the DB table contains around 10 users, as expired users are removed from the table.
The problem I'm having is that I get this error when I call one of the endpoints, after running the application for some time.
{
"code": "500",
"type": "TaskRejectedException",
"message": "Executor [java.util.concurrent.ThreadPoolExecutor#7de76256[Running, pool size = 200, active threads = 200, queued tasks = 1, completed tasks = 5089]] did not accept task: org.springframework.cloud.sleuth.instrument.async.TraceCallable#325cf639"
}
I tried changing the pool configs but it didn't work.
executor.setCorePoolSize(2000);
executor.setMaxPoolSize(4000);
executor.setQueueCapacity(1);
executor.setKeepAliveSeconds(5);
Does anyone have an idea about this?
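(One thing worth noting: with queueCapacity = 1, any burst of submissions beyond maxPoolSize + 1 gets rejected outright. A hedged sketch of one common mitigation, a larger queue plus a CallerRunsPolicy so overflow tasks run on the submitting thread instead of being rejected; the numbers are illustrative, not recommendations:)

@Bean(name = "asyncExecutor")
public Executor asyncExecutor() {
    ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
    executor.setCorePoolSize(100);
    executor.setMaxPoolSize(200);
    executor.setQueueCapacity(500); // buffer bursts instead of rejecting at capacity 1
    executor.setKeepAliveSeconds(5);
    executor.setThreadNamePrefix("AsyncThread-");
    // Fall back to running the task on the caller's thread when pool and queue are full,
    // instead of throwing TaskRejectedException.
    executor.setRejectedExecutionHandler(new ThreadPoolExecutor.CallerRunsPolicy());
    executor.initialize();
    return executor;
}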
I'm working on a Spring-Batch application, which uses a REDIS connection to populate data.
Here are some relevant dependencies:
implementation 'org.springframework.boot:spring-boot-starter-data-redis'
implementation 'io.lettuce:lettuce-core:5.3.3.RELEASE'
RedisConnection is imported from org.springframework.data.redis.connection
PROBLEM STATEMENT:
There might be a case where the RedisConnection is active when we start the application, but while the application is running we lose the Redis connection. In that case, when execution enters the method below, the method will throw an error that the Redis connection is lost. Hence, we retry using the @Retryable logic.
But let's say that during the second retry the Redis connection is re-established; we want the retry to be able to detect that, re-connect to Redis, and go through the normal flow. But "THE REDIS-RECONNECTION IS NOT GETTING DETECTED".
TRIED: I tried following https://github.com/lettuce-io/lettuce-core/issues/338 and added lettuceConnectionFactory.validateConnection(); to the defaultRedisConnection as below, but to no avail:
@Qualifier("defaultRedisConnection")
@Bean
public RedisConnection defaultRedisConnectionDockerCluster() {
    RedisStandaloneConfiguration redisStandaloneConfiguration = new RedisStandaloneConfiguration();
    redisStandaloneConfiguration.setHostName("redis");
    LettuceConnectionFactory lettuceConnectionFactory = new LettuceConnectionFactory(redisStandaloneConfiguration);
    lettuceConnectionFactory.validateConnection();
    lettuceConnectionFactory.afterPropertiesSet();
    return lettuceConnectionFactory.getConnection();
}
Here is the class:
@Slf4j
@Service
public class PopulateRedisDataService {

    @Qualifier("defaultRedisConnection")
    private final RedisConnection redisConnection;
    private RedisClientData redisClientData = new RedisClientData();

    public PopulateRedisDataService(
            @Qualifier("defaultRedisConnection") RedisConnection redisConnection,
            RedisDataUtils redisDataUtils) {
        this.redisConnection = redisConnection;
    }

    @Retryable(maxAttemptsExpression = "3", backoff = @Backoff(delayExpression = "20_000",
            multiplierExpression = "100_000", maxDelayExpression = "100_000"))
    public RedisClientData populateData() {
        try {
            byte[] serObj = Objects.requireNonNull(redisConnection.get("SOME_KEY".getBytes()));
            RedisClientData redisClientData = new RedisClientData();
            // Some operations to load data from Redis/serObj into redisClientData object.
        } catch (Exception e) {
            // If Redis doesn't have the key, return empty redisClientData
            redisClientData = new RedisClientData();
            log.error("Failed to get ClientRegList", e);
        }
        return redisClientData;
    }

    @Recover
    public void recover(Exception e) {
        // Some operations
    }
}
Any suggestions to handle this case would be much appreciated.
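One thing that may explain the behavior (a sketch, not a verified fix): the bean above materializes a single RedisConnection once, at startup, so every retry keeps hitting the same dead connection. Injecting the LettuceConnectionFactory itself and fetching a fresh connection per attempt would let a re-established Redis become visible on retry; the structure below is illustrative:

@Retryable(maxAttempts = 3, backoff = @Backoff(delay = 20000, maxDelay = 100000))
public RedisClientData populateData() {
    // A fresh connection per attempt, instead of one cached for the application's lifetime.
    RedisConnection connection = lettuceConnectionFactory.getConnection();
    try {
        byte[] serObj = Objects.requireNonNull(connection.get("SOME_KEY".getBytes()));
        RedisClientData redisClientData = new RedisClientData();
        // ... load data from serObj into redisClientData ...
        return redisClientData;
    } finally {
        connection.close();
    }
}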
The configuration class (part):
public static RabbitQueueConfig clubProNotAvailableConfig =
new RabbitQueueConfig("club-pro-not-available", "club-pro-not-available", "club-pro-not-available-status", "3-3");
@Bean
public SimpleMessageListenerContainer listenerContainer5(ClubProNotAvailableListener listener, ConnectionFactory connectionFactory) {
    return initListenerContainer(listener, clubProNotAvailableConfig, connectionFactory);
}

private SimpleMessageListenerContainer initListenerContainer(
        ChannelAwareMessageListener listener,
        RabbitQueueConfig config,
        ConnectionFactory connectionFactory
) {
    SimpleMessageListenerContainer listenerContainer = new SimpleMessageListenerContainer();
    listenerContainer.setConnectionFactory(connectionFactory);
    listenerContainer.setQueueNames(config.getQueue());
    listenerContainer.setMessageListener(listener);
    listenerContainer.setAcknowledgeMode(AcknowledgeMode.MANUAL);
    listenerContainer.setConcurrency(config.getThreadPoolSize());
    listenerContainer.setPrefetchCount(1);
    return listenerContainer;
}
Method of sending a message:
try {
    success = clientRepository.updateAnketa(privatePersonProfile.getProfileId(), clubProAnketa, null);
} catch (ClubProNotAvailableException e) {
    ClubProNotAvailableRabbit clubProNotAvailableRabbit = new ClubProNotAvailableRabbit();
    clubProNotAvailableRabbit.setRequestContextRabbit(RequestContextRabbit.createContext(requestContextService.getContext()));
    clubProNotAvailableRabbit.setCountRetry(0L);
    clubProNotAvailableRabbit.setProfileId(privatePersonProfile.getProfileId());
    clubProNotAvailableRabbit.setNameMethod(ChangeMethod.CHANGE_ANKETA);
    clubProNotAvailableRabbit.setChangeAnketaData(anketa);
    rabbitTemplate.convertAndSend(config.getExchange(), config.getRoutingKey(), clubProNotAvailableRabbit, new MessagePostProcessor() {
        @Override
        public Message postProcessMessage(Message message) throws AmqpException {
            message.getMessageProperties().setHeader("x-delay", 10000);
            return message;
        }
    });
    throw new ClubProNotAvailableException();
}
Configuration in the broker:
Queue configuration:
Configuration of the exchange:
I've read the documentation, tried a couple of options, but I can't apply it to my code.
What am I doing wrong? I will be very grateful for your help.
It looks like you don't have the delayed exchange plugin; you have also declared the exchange as a simple fanout. This is what the exchange should look like:
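In Spring AMQP, a delayed exchange can be declared along these lines (a sketch; the exchange name and the underlying "x-delayed-type" are illustrative, and the rabbitmq_delayed_message_exchange plugin must be enabled on the broker):

@Bean
public CustomExchange delayedExchange() {
    Map<String, Object> args = new HashMap<>();
    args.put("x-delayed-type", "direct"); // how messages are routed once the delay expires
    return new CustomExchange("club-pro-delayed", "x-delayed-message", true, false, args);
}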
Also, to set the delay when sending, you should use:
template.convertAndSend(exchangeName, queue.getName(), "foo", message -> {
message.getMessageProperties().setDelay(1000);
return message;
});
I have the following configuration for the creation of two channels (by using the JmsChannelFactoryBean):
@Bean
public JmsChannelFactoryBean jmsChannel(ActiveMQConnectionFactory activeMQConnectionFactory) {
    JmsChannelFactoryBean fb = new JmsChannelFactoryBean(true);
    fb.setConnectionFactory(activeMQConnectionFactory);
    fb.setDestinationName("something.queue");
    fb.setErrorHandler(t -> log.error("something went wrong on jms channel", t));
    return fb;
}

@Bean
public JmsChannelFactoryBean jmsChannelDLQ(ActiveMQConnectionFactory activeMQConnectionFactory) {
    JmsChannelFactoryBean fb = new JmsChannelFactoryBean(true);
    fb.setConnectionFactory(activeMQConnectionFactory);
    fb.setDestinationName("something.queue.DLQ");
    fb.setErrorHandler(t -> log.error("something went wrong on jms channel", t));
    return fb;
}
The something.queue is configured to put dead letters on something.queue.DLQ. I'm using mostly the Java DSL to configure the app and, if possible, would like to keep it that way.
The case: a message is taken from jmsChannel and put to an SFTP outbound gateway; if there is a problem sending the file, the message is put back into jmsChannel as not delivered. After some retries it is designated as poisonous and put on something.queue.DLQ.
Is it possible to have the info on the error channel when that happens?
What is the best practice for handling errors when using JMS-backed message channels?
EDIT 2
The integration flow is defined as:
IntegrationFlows.from(filesToProcessChannel).handle(outboundGateway)
Where filesToProcessChannel is the JMS-backed channel and the outbound gateway is defined as:
@Bean
public SftpOutboundGateway outboundGateway(SftpRemoteFileTemplate sftpRemoteFileTemplate) {
    SftpOutboundGateway gateway = new SftpOutboundGateway(sftpRemoteFileTemplate, AbstractRemoteFileOutboundGateway.Command.PUT.getCommand(), EXPRESSION_PAYLOAD);
    ArrayList<Advice> adviceChain = new ArrayList<>();
    adviceChain.add(errorHandlingAdvice());
    gateway.setAdviceChain(adviceChain);
    return gateway;
}
I'm trying to grab the exception using an advice:
@Bean
public Advice errorHandlingAdvice() {
    RequestHandlerRetryAdvice advice = new RequestHandlerRetryAdvice();
    RetryTemplate retryTemplate = new RetryTemplate();
    SimpleRetryPolicy retryPolicy = new SimpleRetryPolicy();
    retryPolicy.setMaxAttempts(1);
    retryTemplate.setRetryPolicy(retryPolicy);
    advice.setRetryTemplate(retryTemplate);
    advice.setRecoveryCallback(new ErrorMessageSendingRecoverer(filesToProcessErrorChannel));
    return advice;
}
Is this the right way?
EDIT 3
There is certainly something wrong with SftpOutboundGateway and advices (or with me :/):
I used the following advice from the Spring Integration reference:
@Bean
public Advice expressionAdvice() {
    ExpressionEvaluatingRequestHandlerAdvice advice = new ExpressionEvaluatingRequestHandlerAdvice();
    advice.setSuccessChannelName("success.input");
    advice.setOnSuccessExpressionString("payload + ' was successful'");
    advice.setFailureChannelName("failure.input");
    advice.setOnFailureExpressionString(
            "payload + ' was bad, with reason: ' + #exception.cause.message");
    advice.setTrapException(true);
    return advice;
}

@Bean
public IntegrationFlow success() {
    return f -> f.handle(System.out::println);
}

@Bean
public IntegrationFlow failure() {
    return f -> f.handle(System.out::println);
}
And when I use:
return IntegrationFlows.from(filesToProcessChannel)
        .handle((GenericHandler<File>) (payload, headers) -> {
            if (payload.equals("x")) {
                return null;
            }
            else {
                throw new RuntimeException("some failure");
            }
        }, spec -> spec.advice(expressionAdvice()))
It gets called and I get the error message printed out (as expected), but when I try to use:
return IntegrationFlows.from(filesToProcessChannel)
        .handle(outboundGateway, spec -> spec.advice(expressionAdvice()))
The advice is not called, and the error message is put back to JMS.
The app is using Spring Boot v2.0.0.RELEASE, Spring v5.0.4.RELEASE.
EDIT 4
I managed to resolve the advice issue using the following configuration; I still don't understand why the handler spec will not work:
@Bean
IntegrationFlow files(SftpOutboundGateway outboundGateway,
                      ...
) {
    return IntegrationFlows.from(filesToProcessChannel)
            .handle(outboundGateway)
            ...
            .log(LoggingHandler.Level.INFO)
            .get();
}

@Bean
public SftpOutboundGateway outboundGateway(SftpRemoteFileTemplate sftpRemoteFileTemplate) {
    SftpOutboundGateway gateway = new SftpOutboundGateway(sftpRemoteFileTemplate, AbstractRemoteFileOutboundGateway.Command.PUT.getCommand(), EXPRESSION_PAYLOAD);
    ArrayList<Advice> adviceChain = new ArrayList<>();
    adviceChain.add(expressionAdvice());
    gateway.setAdviceChain(adviceChain);
    return gateway;
}

@Bean
public ExpressionEvaluatingRequestHandlerAdvice expressionAdvice() {
    ExpressionEvaluatingRequestHandlerAdvice advice = new ExpressionEvaluatingRequestHandlerAdvice();
    advice.setSuccessChannelName("success.input");
    advice.setOnSuccessExpressionString("payload + ' was successful'");
    advice.setFailureChannelName("failure.input");
    advice.setOnFailureExpressionString(
            "payload + ' was bad, with reason: ' + #exception.cause.message");
    advice.setTrapException(true);
    return advice;
}

@Bean
public IntegrationFlow success() {
    return f -> f.handle(System.out::println);
}

@Bean
public IntegrationFlow failure() {
    return f -> f.handle(System.out::println);
}
Since the movement to the DLQ is performed by the broker, the application has no mechanism to log the situation - it is not even aware that it happened.
You would have to catch the exceptions yourself and publish the message to the DLQ yourself, after some number of attempts (JMSXDeliveryCount header), instead of using the broker policy.
EDIT
Add an Advice to the .handle() step.
.handle(outboundGateway, e -> e.advice(myAdvice))
Where myAdvice implements MethodInterceptor.
In the invoke method, after a failure, you can check the delivery count header and, if it exceeds your threshold, publish the message to the DLQ (e.g. send it to another channel that has a JMS outbound adapter subscribed) and log the error; if the threshold has not been exceeded, simply return the result of the invocation.proceed() (or rethrow the exception).
That way, you control publishing to the DLQ rather than having the broker do it. You can also add more information, such as the exception, to headers.
EDIT2
You need something like this
public class MyAdvice implements MethodInterceptor {

    @Autowired
    private MessageChannel toJms;

    @Override
    public Object invoke(MethodInvocation invocation) throws Throwable {
        try {
            return invocation.proceed();
        }
        catch (Exception e) {
            Message<?> message = (Message<?>) invocation.getArguments()[0];
            Integer redeliveries = message.getHeaders().get("JMSXDeliveryCount", Integer.class);
            if (redeliveries != null && redeliveries > 3) {
                this.toJms.send(message); // maybe rebuild with additional headers about the error
                return null; // message handed to the DLQ flow; stop propagating the exception
            }
            else {
                throw e;
            }
        }
    }
}
(it should be close, but I haven't tested it). It assumes your broker populates that header.
I want to write a Spring Boot application which will monitor a directory in Windows, and when I change a subfolder, add a new one, or delete an existing one, I want to get information about that.
How can I do that?
I have read this one:
http://docs.spring.io/spring-integration/reference/html/files.html
and each result under 'spring file watcher' in Google,
but I can't find a solution...
Do you have a good article or example with something like this?
I want it to look like this:
@SpringBootApplication
@EnableIntegration
public class SpringApp {

    public static void main(String[] args) {
        SpringApplication.run(SpringApp.class, args);
    }

    @Bean
    public WatchService watcherService() {
        ... // define WatchService here
    }
}
Regards
spring-boot-devtools has FileSystemWatcher
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-devtools</artifactId>
</dependency>
FileWatcherConfig
@Configuration
public class FileWatcherConfig {

    @Bean
    public FileSystemWatcher fileSystemWatcher() {
        FileSystemWatcher fileSystemWatcher = new FileSystemWatcher(true, Duration.ofMillis(5000L), Duration.ofMillis(3000L));
        fileSystemWatcher.addSourceFolder(new File("/path/to/folder"));
        fileSystemWatcher.addListener(new MyFileChangeListener());
        fileSystemWatcher.start();
        System.out.println("started fileSystemWatcher");
        return fileSystemWatcher;
    }

    @PreDestroy
    public void onDestroy() throws Exception {
        fileSystemWatcher().stop();
    }
}
MyFileChangeListener
@Component
public class MyFileChangeListener implements FileChangeListener {

    @Override
    public void onChange(Set<ChangedFiles> changeSet) {
        for (ChangedFiles cfiles : changeSet) {
            for (ChangedFile cfile : cfiles.getFiles()) {
                if ( /* (cfile.getType().equals(Type.MODIFY)
                        || cfile.getType().equals(Type.ADD)
                        || cfile.getType().equals(Type.DELETE) ) && */ !isLocked(cfile.getFile().toPath())) {
                    System.out.println("Operation: " + cfile.getType()
                            + " On file: " + cfile.getFile().getName() + " is done");
                }
            }
        }
    }

    private boolean isLocked(Path path) {
        try (FileChannel ch = FileChannel.open(path, StandardOpenOption.WRITE); FileLock lock = ch.tryLock()) {
            return lock == null;
        } catch (IOException e) {
            return true;
        }
    }
}
From Java 7 onward there is WatchService - it will be the best solution.
The Spring configuration could be like the following:
@Slf4j
@Configuration
public class MonitoringConfig {

    @Value("${monitoring-folder}")
    private String folderPath;

    @Bean
    public WatchService watchService() {
        log.debug("MONITORING_FOLDER: {}", folderPath);
        WatchService watchService = null;
        try {
            watchService = FileSystems.getDefault().newWatchService();
            Path path = Paths.get(folderPath);
            if (!Files.isDirectory(path)) {
                throw new RuntimeException("incorrect monitoring folder: " + path);
            }
            path.register(
                    watchService,
                    StandardWatchEventKinds.ENTRY_DELETE,
                    StandardWatchEventKinds.ENTRY_MODIFY,
                    StandardWatchEventKinds.ENTRY_CREATE
            );
        } catch (IOException e) {
            log.error("exception for watch service creation:", e);
        }
        return watchService;
    }
}
And the bean for launching the monitoring itself:
@Slf4j
@Service
@AllArgsConstructor
public class MonitoringServiceImpl {

    private final WatchService watchService;

    @Async
    @PostConstruct
    public void launchMonitoring() {
        log.info("START_MONITORING");
        try {
            WatchKey key;
            while ((key = watchService.take()) != null) {
                for (WatchEvent<?> event : key.pollEvents()) {
                    log.debug("Event kind: {}; File affected: {}", event.kind(), event.context());
                }
                key.reset();
            }
        } catch (InterruptedException e) {
            log.warn("interrupted exception for monitoring service");
        }
    }

    @PreDestroy
    public void stopMonitoring() {
        log.info("STOP_MONITORING");
        if (watchService != null) {
            try {
                watchService.close();
            } catch (IOException e) {
                log.error("exception while closing the monitoring service");
            }
        }
    }
}
Also, you have to set @EnableAsync on your application class (or its configuration).
And a snippet from application.yml:
monitoring-folder: C:\Users\nazar_art
Tested with Spring Boot 2.3.1.
I also used this configuration for the async pool:
@Slf4j
@EnableAsync
@Configuration
@AllArgsConstructor
@EnableConfigurationProperties(AsyncProperties.class)
public class AsyncConfiguration implements AsyncConfigurer {

    private final AsyncProperties properties;

    @Override
    @Bean(name = "taskExecutor")
    public Executor getAsyncExecutor() {
        log.debug("Creating Async Task Executor");
        ThreadPoolTaskExecutor taskExecutor = new ThreadPoolTaskExecutor();
        taskExecutor.setCorePoolSize(properties.getCorePoolSize());
        taskExecutor.setMaxPoolSize(properties.getMaxPoolSize());
        taskExecutor.setQueueCapacity(properties.getQueueCapacity());
        taskExecutor.setThreadNamePrefix(properties.getThreadName());
        taskExecutor.initialize();
        return taskExecutor;
    }

    @Bean
    public TaskScheduler taskScheduler() {
        return new ConcurrentTaskScheduler();
    }

    @Override
    public AsyncUncaughtExceptionHandler getAsyncUncaughtExceptionHandler() {
        return new CustomAsyncExceptionHandler();
    }
}
Where the custom async exception handler is:
@Slf4j
public class CustomAsyncExceptionHandler implements AsyncUncaughtExceptionHandler {

    @Override
    public void handleUncaughtException(Throwable throwable, Method method, Object... objects) {
        log.error("Exception for Async execution: ", throwable);
        log.error("Method name - {}", method.getName());
        for (Object param : objects) {
            log.error("Parameter value - {}", param);
        }
    }
}
Configuration in the properties file:
async-monitoring:
core-pool-size: 10
max-pool-size: 20
queue-capacity: 1024
thread-name: 'async-ex-'
Where AsyncProperties:
@Getter
@Setter
@ConfigurationProperties("async-monitoring")
public class AsyncProperties {

    @NonNull
    private Integer corePoolSize;
    @NonNull
    private Integer maxPoolSize;
    @NonNull
    private Integer queueCapacity;
    @NonNull
    private String threadName;
}
For asynchronous execution I am processing an event like the following:
validatorService.processRecord(recordANPR, zipFullPath);
Where the validator service looks like:
@Async
public void processRecord(EvidentialRecordANPR record, String fullFileName) {
The main idea is that you configure the async configuration -> call it from the MonitoringService -> put the @Async annotation on the method of the other service which you call (it should be a method of another bean - invocation goes through a proxy).
You can use pure Java for this, no need for Spring: https://docs.oracle.com/javase/tutorial/essential/io/notification.html
See the Spring Integration Samples repo; there's a file sample under 'basic'.
There's a more recent and more sophisticated sample under applications file-split-ftp - it uses Spring Boot and Java configuration Vs. the xml used in the older sample.
Found a workaround:
you can annotate your task with @Scheduled(fixedDelay = Long.MAX_VALUE)
You can check the code:
@Scheduled(fixedDelay = Long.MAX_VALUE)
public void watchTask() {
    this.loadOnStartup();
    try {
        WatchService watcher = FileSystems.getDefault().newWatchService();
        Path file = Paths.get(propertyFile);
        Path dir = Paths.get(file.getParent().toUri());
        dir.register(watcher, ENTRY_MODIFY);
        logger.info("Watch Service registered for dir: " + dir.getFileName());
        while (true) {
            WatchKey key;
            try {
                key = watcher.take();
            } catch (InterruptedException ex) {
                return;
            }
            for (WatchEvent<?> event : key.pollEvents()) {
                WatchEvent.Kind<?> kind = event.kind();
                @SuppressWarnings("unchecked")
                WatchEvent<Path> ev = (WatchEvent<Path>) event;
                Path fileName = ev.context();
                logger.debug(kind.name() + ": " + fileName);
                if (kind == ENTRY_MODIFY &&
                        fileName.toString().equals(file.getFileName().toString())) {
                    // publish event here
                }
            }
            boolean valid = key.reset();
            if (!valid) {
                break;
            }
        }
    } catch (Exception ex) {
        logger.error(ex.getMessage(), ex);
    }
}
Without giving the details, here are a few pointers which might help you out.
You can take the directory WatchService code from Sławomir Czaja's answer:
You can use pure java for this no need for spring https://docs.oracle.com/javase/tutorial/essential/io/notification.html
and wrap that code into a runnable task. This task can notify your clients of a directory change using the SimpMessagingTemplate, as described here:
Websocket STOMP handle send
Then you can create a scheduler as described here:
Scheduling, which handles the start and recurrence of your task.
Don't forget to configure scheduling and websocket support in your mvc-config, as well as STOMP support on the client side (further reading here: STOMP over Websocket).
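A rough sketch of how those pieces could fit together (the destination name, folder path, and wiring are illustrative assumptions, not taken from the linked posts):

@Component
public class DirectoryWatchTask {

    private final SimpMessagingTemplate messagingTemplate;

    public DirectoryWatchTask(SimpMessagingTemplate messagingTemplate) {
        this.messagingTemplate = messagingTemplate;
    }

    // Runs once at startup and then blocks on take(), so the huge delay keeps it from re-firing.
    @Scheduled(fixedDelay = Long.MAX_VALUE)
    public void watch() throws IOException, InterruptedException {
        WatchService watcher = FileSystems.getDefault().newWatchService();
        Paths.get("C:/watched-folder").register(watcher,
                StandardWatchEventKinds.ENTRY_CREATE,
                StandardWatchEventKinds.ENTRY_DELETE,
                StandardWatchEventKinds.ENTRY_MODIFY);
        WatchKey key;
        while ((key = watcher.take()) != null) {
            for (WatchEvent<?> event : key.pollEvents()) {
                // Broadcast "KIND: path" to subscribed STOMP clients.
                messagingTemplate.convertAndSend("/topic/folder-changes",
                        event.kind().name() + ": " + event.context());
            }
            key.reset();
        }
    }
}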
Apache commons-io is another good alternative to watch changes to files/directories.
You can see the overview of pros and cons of using it in this answer:
https://stackoverflow.com/a/41013350/16470819
<dependency>
    <groupId>commons-io</groupId>
    <artifactId>commons-io</artifactId>
    <version>2.11.0</version>
</dependency>
Just in case, if somebody is looking for recursive sub-folder watcher, this link may help: How to watch a folder and subfolders for changes
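The core trick from that link is registering every subdirectory with the same WatchService, roughly like this (a sketch using the standard JDK API; newly created subdirectories must be registered too when their ENTRY_CREATE event arrives):

private static void registerAll(final Path start, final WatchService watcher) throws IOException {
    // Walk the tree and register each directory with the shared watcher.
    Files.walkFileTree(start, new SimpleFileVisitor<Path>() {
        @Override
        public FileVisitResult preVisitDirectory(Path dir, BasicFileAttributes attrs) throws IOException {
            dir.register(watcher,
                    StandardWatchEventKinds.ENTRY_CREATE,
                    StandardWatchEventKinds.ENTRY_DELETE,
                    StandardWatchEventKinds.ENTRY_MODIFY);
            return FileVisitResult.CONTINUE;
        }
    });
}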