Spring-Redis connection not re-connecting between retries - java

I'm working on a Spring-Batch application, which uses a Redis connection to populate data.
Here are some relevant dependencies:
implementation 'org.springframework.boot:spring-boot-starter-data-redis'
implementation 'io.lettuce:lettuce-core:5.3.3.RELEASE'
RedisConnection is imported from org.springframework.data.redis.connection
PROBLEM STATEMENT:
There might be a case where the RedisConnection is active when we start the application, but while the application is running we lose the Redis connection. In that case, when execution enters the method below, the method throws an error saying the Redis connection is lost, and we retry using the @Retryable logic.
But let's say the Redis connection is re-established during the second retry: we want the retry to detect that, reconnect to Redis, and proceed with the normal flow. However, the Redis re-connection is NOT getting detected.
TRIED: I tried following https://github.com/lettuce-io/lettuce-core/issues/338 and added lettuceConnectionFactory.validateConnection(); to the defaultRedisConnection bean as below, but to no avail:
@Qualifier("defaultRedisConnection")
@Bean
public RedisConnection defaultRedisConnectionDockerCluster() {
    RedisStandaloneConfiguration redisStandaloneConfiguration = new RedisStandaloneConfiguration();
    redisStandaloneConfiguration.setHostName("redis");
    LettuceConnectionFactory lettuceConnectionFactory = new LettuceConnectionFactory(redisStandaloneConfiguration);
    lettuceConnectionFactory.validateConnection();
    lettuceConnectionFactory.afterPropertiesSet();
    return lettuceConnectionFactory.getConnection();
}
Here is the class:
@Slf4j
@Service
public class PopulateRedisDataService {

    @Qualifier("defaultRedisConnection")
    private final RedisConnection redisConnection;

    private RedisClientData redisClientData = new RedisClientData();

    public PopulateRedisDataService(
            @Qualifier("defaultRedisConnection") RedisConnection redisConnection,
            RedisDataUtils redisDataUtils) {
        this.redisConnection = redisConnection;
    }

    @Retryable(maxAttemptsExpression = "3", backoff = @Backoff(delayExpression = "20_000",
            multiplierExpression = "100_000", maxDelayExpression = "100_000"))
    public RedisClientData populateData() {
        try {
            byte[] serObj = Objects.requireNonNull(redisConnection.get("SOME_KEY".getBytes()));
            RedisClientData redisClientData = new RedisClientData();
            // Some operations to load data from Redis/serObj into redisClientData object.
        } catch (Exception e) {
            // If Redis doesn't have the key, return empty redisClientData
            redisClientData = new RedisClientData();
            log.error("Failed to get ClientRegList", e);
        }
        return redisClientData;
    }

    @Recover
    public void recover(Exception e) {
        // Some operations
    }
}
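For reference, here is a minimal sketch of an alternative wiring (an assumption on my part, with illustrative names; it is not from the code above): expose the LettuceConnectionFactory itself as the bean, inject it into the service as a connectionFactory field, and fetch a connection inside the retried method, so each @Retryable attempt asks the factory for a connection instead of reusing the single one captured at startup.

@Bean
public LettuceConnectionFactory defaultRedisConnectionFactory() {
    RedisStandaloneConfiguration config = new RedisStandaloneConfiguration();
    config.setHostName("redis");
    // The factory is the bean; connections are fetched per call, not cached.
    return new LettuceConnectionFactory(config);
}

@Retryable(maxAttempts = 3, backoff = @Backoff(delay = 20_000))
public RedisClientData populateData() {
    // Ask the factory for a connection on every attempt, so a retry can pick
    // up a re-established connection rather than the stale one from startup.
    RedisConnection connection = connectionFactory.getConnection();
    try {
        byte[] serObj = Objects.requireNonNull(connection.get("SOME_KEY".getBytes()));
        RedisClientData data = new RedisClientData();
        // Some operations to load data from serObj into the data object.
        return data;
    } finally {
        connection.close();
    }
}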
Any suggestions to handle this case would be much appreciated.

Related

Not able to stop an IBM MQ JMS consumer in Spring Boot

I am NOT able to stop a JMS consumer dynamically using a Spring Boot REST endpoint.
The number of consumers stays as is. No exceptions either.
IBM MQ Version: 9.2.0.5
pom.xml
<dependency>
    <groupId>com.ibm.mq</groupId>
    <artifactId>mq-jms-spring-boot-starter</artifactId>
    <version>2.0.8</version>
</dependency>
JmsConfig.java
@Configuration
@EnableJms
@Log4j2
public class JmsConfig {

    @Bean
    public MQQueueConnectionFactory mqQueueConnectionFactory() {
        MQQueueConnectionFactory mqQueueConnectionFactory = new MQQueueConnectionFactory();
        mqQueueConnectionFactory.setHostName("my-ibm-mq-host.com");
        try {
            mqQueueConnectionFactory.setTransportType(WMQConstants.WMQ_CM_CLIENT);
            mqQueueConnectionFactory.setCCSID(1208);
            mqQueueConnectionFactory.setChannel("my-channel");
            mqQueueConnectionFactory.setPort(1234);
            mqQueueConnectionFactory.setQueueManager("my-QM");
        } catch (Exception e) {
            log.error("Exception while creating JMS connection...", e);
        }
        return mqQueueConnectionFactory;
    }
}
JmsListenerConfig.java
@Configuration
@Log4j2
public class JmsListenerConfig implements JmsListenerConfigurer {

    @Autowired
    private JmsConfig jmsConfig;

    private Map<String, String> queueMap = new HashMap<>();

    @Bean
    public DefaultJmsListenerContainerFactory mqJmsListenerContainerFactory() throws JMSException {
        DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        factory.setConnectionFactory(jmsConfig.mqQueueConnectionFactory());
        factory.setDestinationResolver(new DynamicDestinationResolver());
        factory.setSessionTransacted(true);
        factory.setConcurrency("5");
        return factory;
    }

    @Override
    public void configureJmsListeners(JmsListenerEndpointRegistrar registrar) {
        queueMap.put("my-queue-101", "101");
        log.info("queueMap: " + queueMap);
        queueMap.entrySet().forEach(e -> {
            SimpleJmsListenerEndpoint endpoint = new SimpleJmsListenerEndpoint();
            endpoint.setDestination(e.getKey());
            endpoint.setId(e.getValue());
            try {
                log.info("Reading message....");
                endpoint.setMessageListener(message -> {
                    try {
                        log.info("Received ID: {} Destination {}", message.getJMSMessageID(), message.getJMSDestination());
                    } catch (JMSException ex) {
                        log.error("Exception while reading message - " + ex.getMessage());
                    }
                });
                registrar.setContainerFactory(mqJmsListenerContainerFactory());
            } catch (JMSException ex) {
                log.error("Exception while reading message - " + ex.getMessage());
            }
            registrar.registerEndpoint(endpoint);
        });
    }
}
JmsController.java
@RestController
@RequestMapping("/jms")
@Log4j2
public class JmsController {

    @Autowired
    ApplicationContext context;

    @RequestMapping(value = "/stop", method = RequestMethod.GET)
    public @ResponseBody String haltJmsListener() {
        JmsListenerEndpointRegistry listenerEndpointRegistry = context.getBean(JmsListenerEndpointRegistry.class);
        Set<String> containerIds = listenerEndpointRegistry.getListenerContainerIds();
        log.info("containerIds: " + containerIds);
        // stops all consumers
        listenerEndpointRegistry.stop(); // DOESN'T WORK :(
        // stops a consumer by id, used when there are multiple consumers and we want to stop them individually
        // listenerEndpointRegistry.getListenerContainer("101").stop(); // DOESN'T WORK EITHER :(
        return "Jms Listener stopped";
    }
}
Here is the result that I noticed.
Initial # of consumers: 0 (as expected)
After server startup and queue connection, total # of consumers: 1 (as expected)
After hitting http://localhost:8080/jms/stop endpoint, total # of consumers: 1 (NOT as expected, should go back to 0)
Am I missing any configuration?
You also need to call shutdown() on the container; see my comment on this answer: DefaultMessageListenerContainer's "isActive" vs "isRunning".
start()/stop() set/reset running; initialize()/shutdown() set/reset active. It depends on what your requirements are. stop() just stops the consumers from getting new messages, but the consumers still exist. shutdown() closes the consumers. Most people call stop + shutdown and then initialize + start to restart. But if you just want to stop consuming for a short time, stop/start is all you need.
You will need to iterate over the containers and cast them to call shutdown().
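For example, a rough sketch of that iteration, reusing the controller from the question (whether you want stop-only or a full shutdown depends on the requirements above):

@RequestMapping(value = "/shutdown", method = RequestMethod.GET)
public @ResponseBody String shutdownJmsListeners() {
    JmsListenerEndpointRegistry registry = context.getBean(JmsListenerEndpointRegistry.class);
    for (MessageListenerContainer container : registry.getListenerContainers()) {
        container.stop(); // stops delivery; the consumers still exist
        if (container instanceof DefaultMessageListenerContainer) {
            // shutdown() closes the consumers; it is declared on the concrete
            // class, hence the iterate-and-cast.
            ((DefaultMessageListenerContainer) container).shutdown();
        }
    }
    return "Jms listeners shut down";
}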

Mixed up Test configuration when using @ResourceArg

TL;DR: When running tests with different @ResourceArgs, the configurations of different tests get thrown around and override others, breaking tests meant to run with specific configurations.
So, I have a service that has tests that run in different configuration setups. The main difference at the moment is that the service can either manage its own authentication or get it from an external source (Keycloak).
I first control this using test profiles, which seem to work fine. Unfortunately, in order to support both cases, the ResourceLifecycleManager I have set up supports starting a Keycloak instance and returns config values that break the config for self authentication. (This is primarily because I have not found out how to get the lifecycle manager to determine on its own what profile or config is currently running. If I could do this, I think I would be much better off than using @ResourceArg, so I would love to know if I missed something here.)
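Something like the following is the kind of check I mean, though whether the profile property is resolvable that early in the test bootstrap is an assumption I have not verified (untested sketch using org.eclipse.microprofile.config.ConfigProvider):

// Untested sketch: ask MicroProfile Config for the active profile from inside
// the lifecycle manager. Whether "quarkus.profile" resolves this early in the
// test bootstrap is an assumption, not a verified fact.
String activeProfile = ConfigProvider.getConfig()
        .getOptionalValue("quarkus.profile", String.class)
        .orElse("test");
log.info("Lifecycle manager sees profile: {}", activeProfile);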
To remedy this shortcoming, I have attempted to use @ResourceArgs to convey to the lifecycle manager when to set up for external auth. However, I have noticed some really odd execution timings, and the config that ends up at my test/service isn't what I intend based on the test class's annotations; it is obvious the lifecycle manager has set up for external auth.
Additionally, it should be noted that I have my tests ordered such that the profiles and configs shouldn't be running out of order: all the tests that don't care run first, then the 'normal' tests with self auth, then the tests with the external auth profile. I can see this working appropriately when I run in IntelliJ, and I can tell that time is being taken to start up the new service instance between the test profiles.
Looking at the logs when I throw a breakpoint in places, some odd things are obvious:
When breakpointing on an erring test (before the external-configured tests run):
The start() method of my TestResourceLifecycleManager has been called twice.
The first run ran with Keycloak starting, which would override/break the config.
Though the time I would expect Keycloak startup to take was not observed; a little confused here.
The second run is correct, not starting Keycloak.
The profile config is what is expected, except for what the Keycloak setup would override.
When breakpointing on an external-configured test (after all self-configured tests run):
The start() method has now been called 4 times; it appears things were started in the same order as before, again, for the new run of the app.
There could be some weirdness in how IntelliJ/Gradle shows logs, but I am interpreting this as:
Quarkus initializing two instances of the LifecycleManager when starting the app for some reason, with one's config overriding the other's, causing my woes.
The lifecycle manager is working as expected; it appropriately starts / doesn't start Keycloak when configured either way.
At this point I can't tell if I'm doing something wrong, or if there's a bug.
Test class example for self-auth test (same annotations for all tests in this (test) profile):
@Slf4j
@QuarkusTest
@QuarkusTestResource(TestResourceLifecycleManager.class)
@TestHTTPEndpoint(Auth.class)
class AuthTest extends RunningServerTest {
Test class example for external auth test (same annotations for all tests in this (externalAuth) profile):
@Slf4j
@QuarkusTest
@TestProfile(ExternalAuthTestProfile.class)
@QuarkusTestResource(value = TestResourceLifecycleManager.class, initArgs = @ResourceArg(name = TestResourceLifecycleManager.EXTERNAL_AUTH_ARG, value = "true"))
@TestHTTPEndpoint(Auth.class)
class AuthExternalTest extends RunningServerTest {
ExternalAuthTestProfile extends this, providing the appropriate profile name:
public class NonDefaultTestProfile implements QuarkusTestProfile {

    private final String testProfile;
    private final Map<String, String> overrides = new HashMap<>();

    protected NonDefaultTestProfile(String testProfile) {
        this.testProfile = testProfile;
    }

    protected NonDefaultTestProfile(String testProfile, Map<String, String> configOverrides) {
        this(testProfile);
        this.overrides.putAll(configOverrides);
    }

    @Override
    public Map<String, String> getConfigOverrides() {
        return new HashMap<>(this.overrides);
    }

    @Override
    public String getConfigProfile() {
        return testProfile;
    }

    @Override
    public List<TestResourceEntry> testResources() {
        return QuarkusTestProfile.super.testResources();
    }
}
Lifecycle manager:
@Slf4j
public class TestResourceLifecycleManager implements QuarkusTestResourceLifecycleManager {

    public static final String EXTERNAL_AUTH_ARG = "externalAuth";

    private static volatile MongodExecutable MONGO_EXE = null;
    private static volatile KeycloakContainer KEYCLOAK_CONTAINER = null;

    private boolean externalAuth = false;

    public synchronized Map<String, String> startKeycloakTestServer() {
        if (!this.externalAuth) {
            log.info("No need for keycloak.");
            return Map.of();
        }
        if (KEYCLOAK_CONTAINER != null) {
            log.info("Keycloak already started.");
        } else {
            KEYCLOAK_CONTAINER = new KeycloakContainer()
                    // .withEnv("hello","world")
                    .withRealmImportFile("keycloak-realm.json");
            KEYCLOAK_CONTAINER.start();
            log.info(
                    "Test keycloak started at endpoint: {}\tAdmin creds: {}:{}",
                    KEYCLOAK_CONTAINER.getAuthServerUrl(),
                    KEYCLOAK_CONTAINER.getAdminUsername(),
                    KEYCLOAK_CONTAINER.getAdminPassword()
            );
        }
        String clientId;
        String clientSecret;
        String publicKey = "";
        try (
                Keycloak keycloak = KeycloakBuilder.builder()
                        .serverUrl(KEYCLOAK_CONTAINER.getAuthServerUrl())
                        .realm("master")
                        .grantType(OAuth2Constants.PASSWORD)
                        .clientId("admin-cli")
                        .username(KEYCLOAK_CONTAINER.getAdminUsername())
                        .password(KEYCLOAK_CONTAINER.getAdminPassword())
                        .build();
        ) {
            RealmResource appsRealmResource = keycloak.realms().realm("apps");
            ClientRepresentation qmClientResource = appsRealmResource.clients().findByClientId("quartermaster").get(0);
            clientSecret = qmClientResource.getSecret();
            log.info("Got client id \"{}\" with secret: {}", "quartermaster", clientSecret);
            // get the realm's public key
            for (KeysMetadataRepresentation.KeyMetadataRepresentation curKey : appsRealmResource.keys().getKeyMetadata().getKeys()) {
                if (!SIG.equals(curKey.getUse())) {
                    continue;
                }
                if (!"RSA".equals(curKey.getType())) {
                    continue;
                }
                String publicKeyTemp = curKey.getPublicKey();
                if (publicKeyTemp == null || publicKeyTemp.isBlank()) {
                    continue;
                }
                publicKey = publicKeyTemp;
                log.info("Found a relevant key for public key use: {} / {}", curKey.getKid(), publicKey);
            }
        }
        // write public key
        // = new File(TestResourceLifecycleManager.class.getResource("/").toURI().toString() + "/security/testKeycloakPublicKey.pem");
        File publicKeyFile;
        try {
            publicKeyFile = File.createTempFile("oqmTestKeycloakPublicKey", ".pem");
            // publicKeyFile = new File(TestResourceLifecycleManager.class.getResource("/").toURI().toString().replace("/classes/java/", "/resources/") + "/security/testKeycloakPublicKey.pem");
            log.info("path of public key: {}", publicKeyFile);
            // if(publicKeyFile.createNewFile()){
            //     log.info("created new public key file");
            // } else {
            //     log.info("Public file already exists");
            // }
            try (
                    FileOutputStream os = new FileOutputStream(publicKeyFile);
            ) {
                IOUtils.write(publicKey, os, UTF_8);
            } catch (IOException e) {
                log.error("Failed to write out public key of keycloak: ", e);
                throw new IllegalStateException("Failed to write out public key of keycloak.", e);
            }
        } catch (IOException e) {
            log.error("Failed to create public key file: ", e);
            throw new IllegalStateException("Failed to create public key file", e);
        }
        String keycloakUrl = KEYCLOAK_CONTAINER.getAuthServerUrl().replace("/auth", "");
        return Map.of(
                "test.keycloak.url", keycloakUrl,
                "test.keycloak.authUrl", KEYCLOAK_CONTAINER.getAuthServerUrl(),
                "test.keycloak.adminName", KEYCLOAK_CONTAINER.getAdminUsername(),
                "test.keycloak.adminPass", KEYCLOAK_CONTAINER.getAdminPassword(),
                // TODO:: add config for server to talk to
                "service.externalAuth.url", keycloakUrl,
                "mp.jwt.verify.publickey.location", publicKeyFile.getAbsolutePath()
        );
    }

    public static synchronized void startMongoTestServer() throws IOException {
        if (MONGO_EXE != null) {
            log.info("Flapdoodle Mongo already started.");
            return;
        }
        Version.Main version = Version.Main.V4_0;
        int port = 27018;
        log.info("Starting Flapdoodle Test Mongo {} on port {}", version, port);
        IMongodConfig config = new MongodConfigBuilder()
                .version(version)
                .net(new Net(port, Network.localhostIsIPv6()))
                .build();
        try {
            MONGO_EXE = MongodStarter.getDefaultInstance().prepare(config);
            MongodProcess process = MONGO_EXE.start();
            if (!process.isProcessRunning()) {
                throw new IOException();
            }
        } catch (Throwable e) {
            log.error("FAILED to start test mongo server: ", e);
            MONGO_EXE = null;
            throw e;
        }
    }

    public static synchronized void stopMongoTestServer() {
        if (MONGO_EXE == null) {
            log.warn("Mongo was not started.");
            return;
        }
        MONGO_EXE.stop();
        MONGO_EXE = null;
    }

    public synchronized static void cleanMongo() throws IOException {
        if (MONGO_EXE == null) {
            log.warn("Mongo was not started.");
            return;
        }
        log.info("Cleaning Mongo of all entries.");
    }

    @Override
    public void init(Map<String, String> initArgs) {
        this.externalAuth = Boolean.parseBoolean(initArgs.getOrDefault(EXTERNAL_AUTH_ARG, Boolean.toString(this.externalAuth)));
    }

    @Override
    public Map<String, String> start() {
        log.info("STARTING test lifecycle resources.");
        Map<String, String> configOverride = new HashMap<>();
        try {
            startMongoTestServer();
        } catch (IOException e) {
            log.error("Unable to start Flapdoodle Mongo server");
        }
        configOverride.putAll(startKeycloakTestServer());
        return configOverride;
    }

    @Override
    public void stop() {
        log.info("STOPPING test lifecycle resources.");
        stopMongoTestServer();
    }
}
The app can be found here: https://github.com/Epic-Breakfast-Productions/OpenQuarterMaster/tree/main/software/open-qm-base-station
The tests are currently failing in the ways I am describing, so feel free to look around.
Note that to run this, you will need to run ./gradlew build publishToMavenLocal in https://github.com/Epic-Breakfast-Productions/OpenQuarterMaster/tree/main/software/libs/open-qm-core to install a dependency locally.
Github issue also tracking this: https://github.com/quarkusio/quarkus/issues/22025
Any use of @QuarkusTestResource() without restrictToAnnotatedClass set to true means that the QuarkusTestResourceLifecycleManager will be applied to all tests, no matter where the annotation is placed.
Hopefully restrictToAnnotatedClass will solve the problem.
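In other words, something like this on the external-auth test class (a sketch assembled from the annotations already shown in the question):

@Slf4j
@QuarkusTest
@TestProfile(ExternalAuthTestProfile.class)
@QuarkusTestResource(
        value = TestResourceLifecycleManager.class,
        restrictToAnnotatedClass = true,
        initArgs = @ResourceArg(name = TestResourceLifecycleManager.EXTERNAL_AUTH_ARG, value = "true"))
@TestHTTPEndpoint(Auth.class)
class AuthExternalTest extends RunningServerTest {
}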

TaskRejectedException in Spring Boot

I have a Spring Boot application with two endpoints that run asynchronously.
Register a user in an external system using a REST API. After a successful registration, put the user in the DB and the Redis cache.
The code is something like this:
@Service
public class UserRegistrationService {

    @Async("asyncExecutor")
    public String registerUser(DomainRequest request) throws SystemException {
        try {
            // External API call
            extResponse = extServiceImplInterface.registerUser(extRequest);
        } catch (Exception e) {
        }
        if (extResponse.getResCode() == 0) {
            // Success response from API - save to DB and redis cache
            savedUser = saveUser(extResponse);
        }
    }
}
Refresh each user in the DB table by calling an external REST API for each of them. To trigger this event I call my 2nd endpoint every 5 seconds, and it executes the refreshUser() method.
The code is something like this:
@Service
public class UserRefreshService {

    @Autowired
    // External API call class
    GetLastChatResponse getLastChatResponse;

    @Async("asyncExecutor")
    public void refreshUser() {
        try {
            // Get all registered users from DB
            List<User> currentUsers = userRepositoryInterface.findAll();
            // Traverse through the list and call an external API
            if (!currentUsers.isEmpty()) {
                for (User item : currentUsers) {
                    getLastChatResponse.getLastResponse(item);
                }
            }
        } catch (Exception e) {
        }
    }
}

@Service
public class GetLastChatResponse {

    @Autowired
    JedisPool jedisPool;

    @Async("asyncExecutor")
    public void getLastResponse(User item) {
        // Call external rest API
        LastAgentResponse lastResponseMessage = getLastAgentResponse(item);
        try {
            if (lastResponseMessage != null) {
                // Set info to Redis cache
                Jedis jedis = jedisPool.getResource();
                jedis.set(item.getChatId() + Constants.LAST_INDEX, lastResponseMessage.getLastIndex());
                jedis.set(item.getChatId() + Constants.LAST_TEXT_TIME, LocalDateTime.now().toString());
            }
        } catch (SystemException e) {
            logger.error("Exception: {}", e);
        }
    }
}
I'm using this thread pool config:
@Bean(name = "asyncExecutor")
public Executor asyncExecutor() {
    ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
    executor.setCorePoolSize(100);
    executor.setMaxPoolSize(200);
    executor.setQueueCapacity(1);
    executor.setKeepAliveSeconds(5);
    executor.setThreadNamePrefix("AsyncThread-");
    executor.initialize();
    return executor;
}
Usually the DB table contains around 10 users, as expired users are removed from the table.
The problem I'm having is that I get this error when I call one of the endpoints, after running the application for some time.
{
    "code": "500",
    "type": "TaskRejectedException",
    "message": "Executor [java.util.concurrent.ThreadPoolExecutor@7de76256[Running, pool size = 200, active threads = 200, queued tasks = 1, completed tasks = 5089]] did not accept task: org.springframework.cloud.sleuth.instrument.async.TraceCallable@325cf639"
}
I tried changing the pool configs but it didn't work.
executor.setCorePoolSize(2000);
executor.setMaxPoolSize(4000);
executor.setQueueCapacity(1);
executor.setKeepAliveSeconds(5);
Does anyone have an idea about this?
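Worth spelling out what the error message itself shows: the pool is saturated (pool size = 200, active threads = 200) and the single queue slot is taken, so the next submission is rejected. With queueCapacity = 1 there is essentially no buffer, and each refreshUser() run fans out one task per user on top of the endpoints' own tasks. A hedged sketch of one common mitigation (whether the latency trade-off is acceptable here is an open question): give the queue room to absorb bursts and fall back to running the task on the caller's thread instead of throwing.

@Bean(name = "asyncExecutor")
public Executor asyncExecutor() {
    ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
    executor.setCorePoolSize(100);
    executor.setMaxPoolSize(200);
    // A deeper queue gives bursts somewhere to wait instead of being rejected.
    executor.setQueueCapacity(500);
    executor.setKeepAliveSeconds(5);
    executor.setThreadNamePrefix("AsyncThread-");
    // When queue and pool are both full, run the task on the submitting thread
    // (back-pressure) rather than throwing TaskRejectedException.
    executor.setRejectedExecutionHandler(new ThreadPoolExecutor.CallerRunsPolicy());
    executor.initialize();
    return executor;
}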

Spring Batch Integration Remote Chunking error - Message contained wrong job instance id [25] should have been [24]

I'm running into this bug (more info here) which appears to mean that for multi-threaded batches using remote chunking you can't use a common response channel. I'm not exactly sure how to proceed to get this working. Surely there's a way to get this working, because without it I can't see much benefit to remote chunking.
Here's my DSL config that creates a JobLaunchRequest:
@Bean
IntegrationFlow newPollingJobsAdapter(JobLaunchingGateway jobLaunchingGateway) {
    // Start by polling the DB for new PollingJobs according to the polling rate
    return IntegrationFlows.from(jdbcPollingChannelAdapter(),
            c -> c.poller(Pollers.fixedRate(10000)
                    // Do the polling on one of 10 threads.
                    .taskExecutor(Executors.newFixedThreadPool(10))
                    // pull out up to 100 new ids for each poll.
                    .maxMessagesPerPoll(100)))
            .log(LoggingHandler.Level.WARN)
            // The polling adapter above returns a list of ids. Split them out into
            // individual ids
            .split()
            // Now push each one onto a separate thread for batch processing.
            .channel(MessageChannels.executor(Executors.newFixedThreadPool(10)))
            .log(LoggingHandler.Level.WARN)
            // Transform each one into a JobLaunchRequest
            .<Long, JobLaunchRequest>transform(id -> {
                logger.warn("Creating job for ID {}", id);
                JobParametersBuilder builder = new JobParametersBuilder()
                        .addLong("polling-job-id", id, true);
                return new JobLaunchRequest(job, builder.toJobParameters());
            })
            .handle(jobLaunchingGateway)
            // TODO: Notify somebody? No idea yet
            .<JobExecution>handle(exec -> System.out.println("GOT EXECUTION: " + exec))
            .get();
}
Nothing in here is particularly special, no odd configs that I'm aware of.
The job itself is pretty straightforward, too:
/**
 * This is the definition of the entire batch process that runs polling.
 * @return
 */
@Bean
Job pollingJobJob() {
    return jobBuilderFactory.get("pollingJobJob")
            .incrementer(new RunIdIncrementer())
            // Ship it down to the slaves for actual processing
            .start(remoteChunkingStep())
            // Now mark it as complete
            .next(markCompleteStep())
            .build();
}

/**
 * Sends the job to a remote slave via an ActiveMQ-backed JMS queue.
 */
@Bean
TaskletStep remoteChunkingStep() {
    return stepBuilderFactory.get("polling-job-step-remote-chunking")
            .<Long, String>chunk(20)
            .reader(runningPollingJobItemReader)
            .processor(toJsonProcessor())
            .writer(chunkWriter)
            .build();
}

/**
 * This step just marks the PollerJob as Complete.
 */
@Bean
Step markCompleteStep() {
    return stepBuilderFactory.get("polling-job-step-mark-complete")
            // We want each PollerJob instance to be a separate job in batch, and the
            // reader is using the id passed in via job params to grab the one we want,
            // so we don't need a large chunk size. One at a time is fine.
            .<Long, Long>chunk(1)
            .reader(runningPollingJobItemReader)
            .processor(new PassThroughItemProcessor<Long>())
            .writer(this.completeStatusWriter)
            .build();
}
Here's the chunk writer config:
/**
 * This is part of the bridge between spring-batch and spring-integration. Nothing special or weird is going
 * on, so see the RemoteChunkHandlerFactoryBean for a description.
 */
@Bean
RemoteChunkHandlerFactoryBean<PollerJob> remoteChunkHandlerFactoryBean() {
    RemoteChunkHandlerFactoryBean<PollerJob> factory = new RemoteChunkHandlerFactoryBean<>();
    factory.setChunkWriter(chunkWriter);
    factory.setStep(remoteChunkingStep());
    return factory;
}

/**
 * This is the writer that will actually send the chunk to the slaves. Note that it also configures the
 * internal channel on which replies are expected.
 */
@Bean
@StepScope
ChunkMessageChannelItemWriter<String> chunkWriter() {
    ChunkMessageChannelItemWriter<String> writer = new ChunkMessageChannelItemWriter<>();
    writer.setMessagingOperations(batchMessagingTemplate());
    writer.setReplyChannel(batchResponseChannel());
    writer.setThrottleLimit(1000);
    return writer;
}
The problem seems to be that last section sets up the ChunkMessageChannelItemWriter such that the replyChannel is the same one used by all of the writers, despite each writer being step-scoped. It would seem that I need to add a replyChannel header to one of the messages, but I'm not sure where in the chain to do that or how to process that (if I need to at all?).
Also, this is being sent to the slaves via JMS/ActiveMQ and I'd like to avoid having just a stupid number of nearly-identical queues on ActiveMQ just to support this.
What are my options?
Given that you are using a shared JMS infrastructure, you will need a router to get the responses back to the correct chunk writer.
If you use prototype scope on the batchResponseChannel() @Bean, you'll get a unique channel for each writer.
I don't have time to figure out how to set up a chunked batch job, so the following simulates your environment (a non-singleton bean that needs a unique reply channel for each instance). Hopefully it's self-explanatory...
@SpringBootApplication
public class So44806067Application {

    public static void main(String[] args) {
        ConfigurableApplicationContext context = SpringApplication.run(So44806067Application.class, args);
        SomeNonSingletonNeedingDistinctRequestAndReplyChannels chunker1 = context
                .getBean(SomeNonSingletonNeedingDistinctRequestAndReplyChannels.class);
        SomeNonSingletonNeedingDistinctRequestAndReplyChannels chunker2 = context
                .getBean(SomeNonSingletonNeedingDistinctRequestAndReplyChannels.class);
        if (chunker1.equals(chunker2)) {
            throw new IllegalStateException("Expected different instances");
        }
        chunker1.sendSome();
        chunker2.sendSome();
        ChunkResponse results = chunker1.getResults();
        if (results == null) {
            throw new IllegalStateException("No results1");
        }
        if (results.getJobId() != 1L) {
            throw new IllegalStateException("Incorrect routing1");
        }
        results = chunker2.getResults();
        if (results == null) {
            throw new IllegalStateException("No results2");
        }
        if (results.getJobId() != 2L) {
            throw new IllegalStateException("Incorrect routing2");
        }
        context.close();
    }

    @Bean
    public Map<Long, PollableChannel> registry() {
        // TODO: should clean up entry for jobId when job completes.
        return new ConcurrentHashMap<>();
    }

    @Bean
    @Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
    public SomeNonSingletonNeedingDistinctRequestAndReplyChannels chunker() {
        MessagingTemplate template = template();
        final PollableChannel replyChannel = replyChannel();
        SomeNonSingletonNeedingDistinctRequestAndReplyChannels bean =
                new SomeNonSingletonNeedingDistinctRequestAndReplyChannels(template, replyChannel);
        AbstractSubscribableChannel requestChannel = (AbstractSubscribableChannel) template.getDefaultDestination();
        requestChannel.addInterceptor(new ChannelInterceptorAdapter() {

            @Override
            public Message<?> preSend(Message<?> message, MessageChannel channel) {
                registry().putIfAbsent(((ChunkRequest<?>) message.getPayload()).getJobId(), replyChannel);
                return message;
            }

        });
        BridgeHandler bridge = bridge();
        requestChannel.subscribe(bridge);
        return bean;
    }

    @Bean
    @Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
    public MessagingTemplate template() {
        MessagingTemplate messagingTemplate = new MessagingTemplate();
        messagingTemplate.setDefaultChannel(requestChannel());
        return messagingTemplate;
    }

    @Bean
    @Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
    public DirectChannel requestChannel() {
        return new DirectChannel();
    }

    @Bean
    @Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
    public PollableChannel replyChannel() {
        return new QueueChannel();
    }

    @Bean
    @Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
    public BridgeHandler bridge() {
        BridgeHandler bridgeHandler = new BridgeHandler();
        bridgeHandler.setOutputChannel(outboundChannel());
        return bridgeHandler;
    }

    @Bean
    public DirectChannel outboundChannel() {
        return new DirectChannel();
    }

    @Bean
    public DirectChannel masterReplyChannel() {
        return new DirectChannel();
    }

    @ServiceActivator(inputChannel = "outboundChannel")
    public void simulateJmsChannelAdapterPair(ChunkRequest<?> request) {
        masterReplyChannel()
                .send(new GenericMessage<>(new ChunkResponse(request.getSequence(), request.getJobId(), null)));
    }

    @Router(inputChannel = "masterReplyChannel")
    public MessageChannel route(ChunkResponse reply) {
        // TODO: error checking - missing reply channel for jobId
        return registry().get(reply.getJobId());
    }

    public static class SomeNonSingletonNeedingDistinctRequestAndReplyChannels {

        private final static AtomicLong jobIds = new AtomicLong();

        private final long jobId = jobIds.incrementAndGet();

        private final MessagingTemplate template;

        private final PollableChannel replyChannel;

        public SomeNonSingletonNeedingDistinctRequestAndReplyChannels(MessagingTemplate template,
                PollableChannel replyChannel) {
            this.template = template;
            this.replyChannel = replyChannel;
        }

        public void sendSome() {
            ChunkRequest<String> cr = new ChunkRequest<>(0, Collections.singleton("foo"), this.jobId, null);
            this.template.send(new GenericMessage<>(cr));
        }

        public ChunkResponse getResults() {
            @SuppressWarnings("unchecked")
            Message<ChunkResponse> received = (Message<ChunkResponse>) this.replyChannel.receive(10_000);
            if (received != null) {
                if (received.getPayload().getJobId().equals(this.jobId)) {
                    System.out.println("Got the right one");
                }
                else {
                    System.out.println(
                            "Got the wrong one " + received.getPayload().getJobId() + " instead of " + this.jobId);
                }
                return received.getPayload();
            }
            return null;
        }

    }

}

EntityManagerFactory closed after context reloaded with #DirtiesContext

I have a Spring Boot app which uses JMS to connect to a queue and listen for incoming messages. In the app I have an integration test which sends some messages to a queue, then makes sure that the things that are supposed to happen when the listener picks up a new message actually happen.
I have annotated my test class with @DirtiesContext(classMode = ClassMode.AFTER_EACH_TEST_METHOD) to ensure my database is clean after each test. Each test passes when it is run in isolation. However, when they are all run together, the first test passes and then the next test fails with the exception below when the code under test attempts to save an entity to the database:
org.springframework.transaction.CannotCreateTransactionException: Could not open JPA EntityManager for transaction; nested exception is java.lang.IllegalStateException: EntityManagerFactory is closed
at org.springframework.orm.jpa.JpaTransactionManager.doBegin(JpaTransactionManager.java:431) ~[spring-orm-4.3.6.RELEASE.jar:4.3.6.RELEASE]
at org.springframework.transaction.support.AbstractPlatformTransactionManager.getTransaction(AbstractPlatformTransactionManager.java:373) ~[spring-tx-4.3.6.RELEASE.jar:4.3.6.RELEASE]
at org.springframework.transaction.interceptor.TransactionAspectSupport.createTransactionIfNecessary(TransactionAspectSupport.java:447) ~[spring-tx-4.3.6.RELEASE.jar:4.3.6.RELEASE]
at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:277) ~[spring-tx-4.3.6.RELEASE.jar:4.3.6.RELEASE]
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:96) ~[spring-tx-4.3.6.RELEASE.jar:4.3.6.RELEASE]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179) ~[spring-aop-4.3.6.RELEASE.jar:4.3.6.RELEASE]
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:213) ~[spring-aop-4.3.6.RELEASE.jar:4.3.6.RELEASE]
at com.sun.proxy.$Proxy95.handleWorkflowEvent(Unknown Source) ~[na:na]
at com.mottmac.processflow.infra.jms.EventListener.onWorkflowEvent(EventListener.java:51) ~[classes/:na]
at com.mottmac.processflow.infra.jms.EventListener.onMessage(EventListener.java:61) ~[classes/:na]
at org.apache.activemq.ActiveMQMessageConsumer.dispatch(ActiveMQMessageConsumer.java:1401) [activemq-client-5.14.3.jar:5.14.3]
at org.apache.activemq.ActiveMQSessionExecutor.dispatch(ActiveMQSessionExecutor.java:131) [activemq-client-5.14.3.jar:5.14.3]
at org.apache.activemq.ActiveMQSessionExecutor.iterate(ActiveMQSessionExecutor.java:202) [activemq-client-5.14.3.jar:5.14.3]
at org.apache.activemq.thread.PooledTaskRunner.runTask(PooledTaskRunner.java:133) [activemq-client-5.14.3.jar:5.14.3]
at org.apache.activemq.thread.PooledTaskRunner$1.run(PooledTaskRunner.java:48) [activemq-client-5.14.3.jar:5.14.3]
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) [na:1.8.0_77]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) [na:1.8.0_77]
at java.lang.Thread.run(Unknown Source) [na:1.8.0_77]
Caused by: java.lang.IllegalStateException: EntityManagerFactory is closed
at org.hibernate.jpa.internal.EntityManagerFactoryImpl.validateNotClosed(EntityManagerFactoryImpl.java:367) ~[hibernate-entitymanager-5.0.11.Final.jar:5.0.11.Final]
at org.hibernate.jpa.internal.EntityManagerFactoryImpl.internalCreateEntityManager(EntityManagerFactoryImpl.java:316) ~[hibernate-entitymanager-5.0.11.Final.jar:5.0.11.Final]
at org.hibernate.jpa.internal.EntityManagerFactoryImpl.createEntityManager(EntityManagerFactoryImpl.java:286) ~[hibernate-entitymanager-5.0.11.Final.jar:5.0.11.Final]
at org.springframework.orm.jpa.JpaTransactionManager.createEntityManagerForTransaction(JpaTransactionManager.java:449) ~[spring-orm-4.3.6.RELEASE.jar:4.3.6.RELEASE]
at org.springframework.orm.jpa.JpaTransactionManager.doBegin(JpaTransactionManager.java:369) ~[spring-orm-4.3.6.RELEASE.jar:4.3.6.RELEASE]
... 17 common frames omitted
My test class:
@RunWith(SpringRunner.class)
@SpringBootTest(classes = { TestGovernance.class })
@DirtiesContext(classMode = ClassMode.AFTER_EACH_TEST_METHOD)
public class ActivitiIntegrationTest
{
    private static final String TEST_PROCESS_KEY = "oneTaskProcess";
    private static final String FIRST_TASK_KEY = "theTask";
    private static final String NEXT_TASK_KEY = "nextTask";

    @Autowired
    private JmsTemplate jms;

    @Autowired
    private WorkflowEventRepository eventRepository;

    @Autowired
    private TaskService taskService;

    @Test
    public void workFlowEventForRunningTaskMovesItToTheNextStage() throws InterruptedException
    {
        sendMessageToCreateNewInstanceOfProcess(TEST_PROCESS_KEY);
        Task activeTask = getActiveTask();
        assertThat(activeTask.getTaskDefinitionKey(), is(FIRST_TASK_KEY));
        sendMessageToUpdateExistingTask(activeTask.getProcessInstanceId(), FIRST_TASK_KEY);
        Task nextTask = getActiveTask();
        assertThat(nextTask.getTaskDefinitionKey(), is(NEXT_TASK_KEY));
    }

    @Test
    public void newWorkflowEventIsSavedToDatabaseAndKicksOffTask() throws InterruptedException
    {
        sendMessageToCreateNewInstanceOfProcess(TEST_PROCESS_KEY);
        assertThat(eventRepository.findAll(), hasSize(1));
    }

    @Test
    public void newWorkflowEventKicksOffTask() throws InterruptedException
    {
        sendMessageToCreateNewInstanceOfProcess(TEST_PROCESS_KEY);
        Task activeTask = getActiveTask();
        assertThat(activeTask.getTaskDefinitionKey(), is(FIRST_TASK_KEY));
    }

    private void sendMessageToUpdateExistingTask(String processId, String event) throws InterruptedException
    {
        WorkflowEvent message = new WorkflowEvent();
        message.setRaisedDt(ZonedDateTime.now());
        message.setEvent(event);
        // Existing
        message.setIdWorkflowInstance(processId);
        jms.convertAndSend("workflow", message);
        Thread.sleep(5000);
    }

    private void sendMessageToCreateNewInstanceOfProcess(String event) throws InterruptedException
    {
        WorkflowEvent message = new WorkflowEvent();
        message.setRaisedDt(ZonedDateTime.now());
        message.setEvent(event);
        jms.convertAndSend("workflow", message);
        Thread.sleep(5000);
    }

    private Task getActiveTask()
    {
        // For some reason the tasks in the task service are hanging around even
        // though the context is being reloaded. This means we have to get the
        // ID of the only task in the database (since it has been cleaned
        // properly) and use it to look up the task.
        WorkflowEvent workflowEvent = eventRepository.findAll().get(0);
        Task activeTask = taskService.createTaskQuery().processInstanceId(workflowEvent.getIdWorkflowInstance().toString()).singleResult();
        return activeTask;
    }
}
The method that throws the exception in the application (repository is just a standard Spring Data CrudRepository):
@Override
@Transactional
public void handleWorkflowEvent(WorkflowEvent event)
{
    try
    {
        logger.info("Handling workflow event[{}]", event);
        // Exception is thrown here:
        repository.save(event);
        logger.info("Saved event to the database [{}]", event);
        if (event.getIdWorkflowInstance() == null)
        {
            String newWorkflow = engine.newWorkflow(event.getEvent(), event.getVariables());
            event.setIdWorkflowInstance(newWorkflow);
        }
        else
        {
            engine.moveToNextStage(event.getIdWorkflowInstance(), event.getEvent(), event.getVariables());
        }
    }
    catch (Exception e)
    {
        logger.error("Error while handling workflow event:", e);
    }
}
My test configuration class:
@SpringBootApplication
@EnableJms
@TestConfiguration
public class TestGovernance
{
    private static final String WORKFLOW_QUEUE_NAME = "workflow";

    @Bean
    public ConnectionFactory connectionFactory()
    {
        ConnectionFactory connectionFactory = new ActiveMQConnectionFactory("vm://localhost?broker.persistent=false");
        return connectionFactory;
    }

    @Bean
    public EventListenerJmsConnection connection(ConnectionFactory connectionFactory) throws NamingException, JMSException
    {
        // Look up ConnectionFactory and Queue
        Destination destination = new ActiveMQQueue(WORKFLOW_QUEUE_NAME);
        // Create Connection
        Connection connection = connectionFactory.createConnection();
        Session listenerSession = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
        MessageConsumer receiver = listenerSession.createConsumer(destination);
        EventListenerJmsConnection eventListenerConfig = new EventListenerJmsConnection(receiver, connection);
        return eventListenerConfig;
    }
}
The JMS message listener (not sure if that will help):
/**
 * Provides an endpoint which will listen for new JMS messages carrying
 * {@link WorkflowEvent} objects.
 */
@Service
public class EventListener implements MessageListener
{
    Logger logger = LoggerFactory.getLogger(EventListener.class);

    private WorkflowEventHandler eventHandler;
    private MessageConverter messageConverter;
    private EventListenerJmsConnection listenerConnection;

    @Autowired
    public EventListener(EventListenerJmsConnection listenerConnection, WorkflowEventHandler eventHandler, MessageConverter messageConverter)
    {
        this.eventHandler = eventHandler;
        this.messageConverter = messageConverter;
        this.listenerConnection = listenerConnection;
    }

    @PostConstruct
    public void setUpConnection() throws NamingException, JMSException
    {
        listenerConnection.setMessageListener(this);
        listenerConnection.start();
    }

    private void onWorkflowEvent(WorkflowEvent event)
    {
        logger.info("Received new workflow event [{}]", event);
        eventHandler.handleWorkflowEvent(event);
    }

    @Override
    public void onMessage(Message message)
    {
        try
        {
            message.acknowledge();
            WorkflowEvent fromMessage = (WorkflowEvent) messageConverter.fromMessage(message);
            onWorkflowEvent(fromMessage);
        }
        catch (Exception e)
        {
            logger.error("Error: ", e);
        }
    }
}
I've tried adding @Transactional to the test methods and removing it from the code under test, in various combinations, with no success. I've also tried adding various test execution listeners and I still can't get it to work. If I remove the @DirtiesContext then the exception goes away and all the tests run without exception (they do, however, fail with assertion errors, as I would expect).
Any help would be greatly appreciated. My searches so far haven't turned up anything; everything suggests that @DirtiesContext should work.
Using @DirtiesContext for this is a terrible idea (IMHO); what you should do is make your tests @Transactional. I would also suggest removing the Thread.sleep and using something like Awaitility instead.
In theory, when you execute a query, all pending changes should be flushed first, so you could use Awaitility to check for at most 6 seconds whether something has been persisted in the database. If that doesn't work, you can try adding a flush before the query.
@RunWith(SpringRunner.class)
@SpringBootTest(classes = { TestGovernance.class })
@Transactional
public class ActivitiIntegrationTest {

    private static final String TEST_PROCESS_KEY = "oneTaskProcess";
    private static final String FIRST_TASK_KEY = "theTask";
    private static final String NEXT_TASK_KEY = "nextTask";

    @Autowired
    private JmsTemplate jms;

    @Autowired
    private WorkflowEventRepository eventRepository;

    @Autowired
    private TaskService taskService;

    @Autowired
    private EntityManager em;

    @Test
    public void workFlowEventForRunningTaskMovesItToTheNextStage() throws InterruptedException
    {
        sendMessageToCreateNewInstanceOfProcess(TEST_PROCESS_KEY);
        await().atMost(6, SECONDS).until(() -> getActiveTask() != null);
        Task activeTask = getActiveTask();
        assertThat(activeTask.getTaskDefinitionKey(), is(FIRST_TASK_KEY));
        sendMessageToUpdateExistingTask(activeTask.getProcessInstanceId(), FIRST_TASK_KEY);
        Task nextTask = getActiveTask();
        assertThat(nextTask.getTaskDefinitionKey(), is(NEXT_TASK_KEY));
    }

    private Task getActiveTask()
    {
        em.flush(); // flush pending changes so the query sees them
        // For some reason the tasks in the task service are hanging around even
        // though the context is being reloaded. This means we have to get the
        // ID of the only task in the database (since it has been cleaned
        // properly) and use it to look up the task.
        WorkflowEvent workflowEvent = eventRepository.findAll().get(0);
        Task activeTask = taskService.createTaskQuery().processInstanceId(workflowEvent.getIdWorkflowInstance().toString()).singleResult();
        return activeTask;
    }
}
You might need/want to polish your getActiveTask a little so it can return null, or maybe this change even makes it behave the way you expected.
I just did a single method; the others you can probably figure out yourself. The gain with this approach is probably two-fold: it will no longer wait a full 5 seconds, and you don't have to reload your whole application between tests. Both should make your tests faster.
