I have a Spring Boot app which uses JMS to connect to a queue and listen for incoming messages. In the app I have an integration test which sends some messages to a queue, then makes sure that the things that are supposed to happen when the listener picks up a new message actually happen.
I have annotated my test class with @DirtiesContext(classMode = ClassMode.AFTER_EACH_TEST_METHOD)
to ensure my database is clean after each test. Each test passes when run in isolation. However, when they all run together, the first test passes and the next one fails with the exception below when the code under test attempts to save an entity to the database:
org.springframework.transaction.CannotCreateTransactionException: Could not open JPA EntityManager for transaction; nested exception is java.lang.IllegalStateException: EntityManagerFactory is closed
at org.springframework.orm.jpa.JpaTransactionManager.doBegin(JpaTransactionManager.java:431) ~[spring-orm-4.3.6.RELEASE.jar:4.3.6.RELEASE]
at org.springframework.transaction.support.AbstractPlatformTransactionManager.getTransaction(AbstractPlatformTransactionManager.java:373) ~[spring-tx-4.3.6.RELEASE.jar:4.3.6.RELEASE]
at org.springframework.transaction.interceptor.TransactionAspectSupport.createTransactionIfNecessary(TransactionAspectSupport.java:447) ~[spring-tx-4.3.6.RELEASE.jar:4.3.6.RELEASE]
at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:277) ~[spring-tx-4.3.6.RELEASE.jar:4.3.6.RELEASE]
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:96) ~[spring-tx-4.3.6.RELEASE.jar:4.3.6.RELEASE]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179) ~[spring-aop-4.3.6.RELEASE.jar:4.3.6.RELEASE]
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:213) ~[spring-aop-4.3.6.RELEASE.jar:4.3.6.RELEASE]
at com.sun.proxy.$Proxy95.handleWorkflowEvent(Unknown Source) ~[na:na]
at com.mottmac.processflow.infra.jms.EventListener.onWorkflowEvent(EventListener.java:51) ~[classes/:na]
at com.mottmac.processflow.infra.jms.EventListener.onMessage(EventListener.java:61) ~[classes/:na]
at org.apache.activemq.ActiveMQMessageConsumer.dispatch(ActiveMQMessageConsumer.java:1401) [activemq-client-5.14.3.jar:5.14.3]
at org.apache.activemq.ActiveMQSessionExecutor.dispatch(ActiveMQSessionExecutor.java:131) [activemq-client-5.14.3.jar:5.14.3]
at org.apache.activemq.ActiveMQSessionExecutor.iterate(ActiveMQSessionExecutor.java:202) [activemq-client-5.14.3.jar:5.14.3]
at org.apache.activemq.thread.PooledTaskRunner.runTask(PooledTaskRunner.java:133) [activemq-client-5.14.3.jar:5.14.3]
at org.apache.activemq.thread.PooledTaskRunner$1.run(PooledTaskRunner.java:48) [activemq-client-5.14.3.jar:5.14.3]
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) [na:1.8.0_77]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) [na:1.8.0_77]
at java.lang.Thread.run(Unknown Source) [na:1.8.0_77]
Caused by: java.lang.IllegalStateException: EntityManagerFactory is closed
at org.hibernate.jpa.internal.EntityManagerFactoryImpl.validateNotClosed(EntityManagerFactoryImpl.java:367) ~[hibernate-entitymanager-5.0.11.Final.jar:5.0.11.Final]
at org.hibernate.jpa.internal.EntityManagerFactoryImpl.internalCreateEntityManager(EntityManagerFactoryImpl.java:316) ~[hibernate-entitymanager-5.0.11.Final.jar:5.0.11.Final]
at org.hibernate.jpa.internal.EntityManagerFactoryImpl.createEntityManager(EntityManagerFactoryImpl.java:286) ~[hibernate-entitymanager-5.0.11.Final.jar:5.0.11.Final]
at org.springframework.orm.jpa.JpaTransactionManager.createEntityManagerForTransaction(JpaTransactionManager.java:449) ~[spring-orm-4.3.6.RELEASE.jar:4.3.6.RELEASE]
at org.springframework.orm.jpa.JpaTransactionManager.doBegin(JpaTransactionManager.java:369) ~[spring-orm-4.3.6.RELEASE.jar:4.3.6.RELEASE]
... 17 common frames omitted
My test class:
@RunWith(SpringRunner.class)
@SpringBootTest(classes = { TestGovernance.class })
@DirtiesContext(classMode = ClassMode.AFTER_EACH_TEST_METHOD)
public class ActivitiIntegrationTest
{
private static final String TEST_PROCESS_KEY = "oneTaskProcess";
private static final String FIRST_TASK_KEY = "theTask";
private static final String NEXT_TASK_KEY = "nextTask";
@Autowired
private JmsTemplate jms;
@Autowired
private WorkflowEventRepository eventRepository;
@Autowired
private TaskService taskService;
@Test
public void workFlowEventForRunningTaskMovesItToTheNextStage() throws InterruptedException
{
sendMessageToCreateNewInstanceOfProcess(TEST_PROCESS_KEY);
Task activeTask = getActiveTask();
assertThat(activeTask.getTaskDefinitionKey(), is(FIRST_TASK_KEY));
sendMessageToUpdateExistingTask(activeTask.getProcessInstanceId(), FIRST_TASK_KEY);
Task nextTask = getActiveTask();
assertThat(nextTask.getTaskDefinitionKey(), is(NEXT_TASK_KEY));
}
@Test
public void newWorkflowEventIsSavedToDatabaseAndKicksOffTask() throws InterruptedException
{
sendMessageToCreateNewInstanceOfProcess(TEST_PROCESS_KEY);
assertThat(eventRepository.findAll(), hasSize(1));
}
@Test
public void newWorkflowEventKicksOffTask() throws InterruptedException
{
sendMessageToCreateNewInstanceOfProcess(TEST_PROCESS_KEY);
Task activeTask = getActiveTask();
assertThat(activeTask.getTaskDefinitionKey(), is(FIRST_TASK_KEY));
}
private void sendMessageToUpdateExistingTask(String processId, String event) throws InterruptedException
{
WorkflowEvent message = new WorkflowEvent();
message.setRaisedDt(ZonedDateTime.now());
message.setEvent(event);
// Existing
message.setIdWorkflowInstance(processId);
jms.convertAndSend("workflow", message);
Thread.sleep(5000);
}
private void sendMessageToCreateNewInstanceOfProcess(String event) throws InterruptedException
{
WorkflowEvent message = new WorkflowEvent();
message.setRaisedDt(ZonedDateTime.now());
message.setEvent(event);
jms.convertAndSend("workflow", message);
Thread.sleep(5000);
}
private Task getActiveTask()
{
// For some reason the tasks in the task service are hanging around even
// though the context is being reloaded. This means we have to get the
// ID of the only task in the database (since it has been cleaned
// properly) and use it to look up the task.
WorkflowEvent workflowEvent = eventRepository.findAll().get(0);
Task activeTask = taskService.createTaskQuery().processInstanceId(workflowEvent.getIdWorkflowInstance().toString()).singleResult();
return activeTask;
}
}
The method that throws the exception in the application (repository is just a standard Spring Data CrudRepository):
@Override
@Transactional
public void handleWorkflowEvent(WorkflowEvent event)
{
try
{
logger.info("Handling workflow event[{}]", event);
// Exception is thrown here:
repository.save(event);
logger.info("Saved event to the database [{}]", event);
if(event.getIdWorkflowInstance() == null)
{
String newWorkflow = engine.newWorkflow(event.getEvent(), event.getVariables());
event.setIdWorkflowInstance(newWorkflow);
}
else
{
engine.moveToNextStage(event.getIdWorkflowInstance(), event.getEvent(), event.getVariables());
}
}
catch (Exception e)
{
logger.error("Error while handling workflow event:" , e);
}
}
My test configuration class:
@SpringBootApplication
@EnableJms
@TestConfiguration
public class TestGovernance
{
private static final String WORKFLOW_QUEUE_NAME = "workflow";
@Bean
public ConnectionFactory connectionFactory()
{
ConnectionFactory connectionFactory = new ActiveMQConnectionFactory("vm://localhost?broker.persistent=false");
return connectionFactory;
}
@Bean
public EventListenerJmsConnection connection(ConnectionFactory connectionFactory) throws NamingException, JMSException
{
// Look up ConnectionFactory and Queue
Destination destination = new ActiveMQQueue(WORKFLOW_QUEUE_NAME);
// Create Connection
Connection connection = connectionFactory.createConnection();
Session listenerSession = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
MessageConsumer receiver = listenerSession.createConsumer(destination);
EventListenerJmsConnection eventListenerConfig = new EventListenerJmsConnection(receiver, connection);
return eventListenerConfig;
}
}
The JMS message listener (not sure if that will help):
/**
* Provides an endpoint which will listen for new JMS messages carrying
* {@link WorkflowEvent} objects.
*/
@Service
public class EventListener implements MessageListener
{
Logger logger = LoggerFactory.getLogger(EventListener.class);
private WorkflowEventHandler eventHandler;
private MessageConverter messageConverter;
private EventListenerJmsConnection listenerConnection;
@Autowired
public EventListener(EventListenerJmsConnection listenerConnection, WorkflowEventHandler eventHandler, MessageConverter messageConverter)
{
this.eventHandler = eventHandler;
this.messageConverter = messageConverter;
this.listenerConnection = listenerConnection;
}
@PostConstruct
public void setUpConnection() throws NamingException, JMSException
{
listenerConnection.setMessageListener(this);
listenerConnection.start();
}
private void onWorkflowEvent(WorkflowEvent event)
{
logger.info("Recieved new workflow event [{}]", event);
eventHandler.handleWorkflowEvent(event);
}
@Override
public void onMessage(Message message)
{
try
{
message.acknowledge();
WorkflowEvent fromMessage = (WorkflowEvent) messageConverter.fromMessage(message);
onWorkflowEvent(fromMessage);
}
catch (Exception e)
{
logger.error("Error: ", e);
}
}
}
I've tried adding `@Transactional` to the test methods and removing it from the code under test, and various combinations, with no success. I've also tried adding various test execution listeners, and I still can't get it to work. If I remove the `@DirtiesContext`, the exception goes away and all the tests run without exception (they do, however, fail with assertion errors, as I would expect).
Any help would be greatly appreciated. My searches so far haven't turned up anything; everything suggests that @DirtiesContext should work.
Using @DirtiesContext for this is a terrible idea (imho); what you should do instead is make your tests @Transactional. I would also suggest removing the Thread.sleep and using something like Awaitility instead.
In theory, when you execute a query, all pending changes should be committed, so you could use Awaitility to check for at most 6 seconds to see if something has been persisted in the database. If that doesn't work, you can try adding a flush before the query.
@RunWith(SpringRunner.class)
@SpringBootTest(classes = { TestGovernance.class })
@Transactional
public class ActivitiIntegrationTest {
private static final String TEST_PROCESS_KEY = "oneTaskProcess";
private static final String FIRST_TASK_KEY = "theTask";
private static final String NEXT_TASK_KEY = "nextTask";
@Autowired
private JmsTemplate jms;
@Autowired
private WorkflowEventRepository eventRepository;
@Autowired
private TaskService taskService;
@Autowired
private EntityManager em;
@Test
public void workFlowEventForRunningTaskMovesItToTheNextStage() throws InterruptedException
{
sendMessageToCreateNewInstanceOfProcess(TEST_PROCESS_KEY);
await().atMost(6, SECONDS).until(() -> getActiveTask() != null);
Task activeTask = getActiveTask();
assertThat(activeTask.getTaskDefinitionKey(), is(FIRST_TASK_KEY));
sendMessageToUpdateExistingTask(activeTask.getProcessInstanceId(), FIRST_TASK_KEY);
Task nextTask = getActiveTask();
assertThat(nextTask.getTaskDefinitionKey(), is(NEXT_TASK_KEY));
}
private Task getActiveTask()
{
em.flush(); // simulate a commit
// For some reason the tasks in the task service are hanging around even
// though the context is being reloaded. This means we have to get the
// ID of the only task in the database (since it has been cleaned
// properly) and use it to look up the task.
WorkflowEvent workflowEvent = eventRepository.findAll().get(0);
Task activeTask = taskService.createTaskQuery().processInstanceId(workflowEvent.getIdWorkflowInstance().toString()).singleResult();
return activeTask;
}
}
You might need or want to polish your getActiveTask a little so that it can return null, or maybe this change even makes it behave the way you expected.
I only converted a single method; the others you can probably figure out yourself. The gain with this approach is probably twofold: it will no longer wait a fixed 5 seconds but only as long as needed, and you don't have to reload your whole application between tests. Both should make your tests faster.
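For one of the remaining tests, a minimal sketch along the same lines (assuming Awaitility is on the classpath; getActiveTaskOrNull is a hypothetical null-safe variant, and pollInSameThread() keeps the polling inside the test's transaction):

private Task getActiveTaskOrNull()
{
    em.flush(); // make the test's pending changes visible before querying
    List<WorkflowEvent> events = eventRepository.findAll();
    if (events.isEmpty())
    {
        return null; // nothing persisted yet; keep polling
    }
    return taskService.createTaskQuery()
            .processInstanceId(events.get(0).getIdWorkflowInstance().toString())
            .singleResult();
}

@Test
public void newWorkflowEventKicksOffTask()
{
    sendMessageToCreateNewInstanceOfProcess(TEST_PROCESS_KEY);
    // poll instead of Thread.sleep(5000): returns as soon as the task shows up
    await().pollInSameThread().atMost(6, SECONDS).until(() -> getActiveTaskOrNull() != null);
    assertThat(getActiveTaskOrNull().getTaskDefinitionKey(), is(FIRST_TASK_KEY));
}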
I have a gateway application with a customized load-balancing rule; here is the code, following the Spring Cloud official docs:
@RibbonClients(defaultConfiguration = CustomizedRibbonConfig.class)
public class RibbonClientConfiguration {
public static class BazServiceList extends ConfigurationBasedServerList {
public BazServiceList(IClientConfig config) {
super.initWithNiwsConfig(config);
}
}
}
@Configuration
class CustomizedRibbonConfig {
@Bean
public IRule ribbonRule() {
return new MetadataAwareRule();
}
@Bean
public ServerListUpdater ribbonServerListUpdater() {
return new EurekaNotificationServerListUpdater();
}
}
public class MetadataAwarePredicate extends AbstractDiscoveryEnabledPredicate {
/**
* {@inheritDoc}
*/
@Override
protected boolean apply(DiscoveryEnabledServer server) {
return true;
}
}
@Slf4j
public class MetadataAwareRule extends AbstractDiscoveryEnabledRule {
public static final ThreadLocal<String> CURRENT_LOAD_BALANCED_SERVICE_IP = new ThreadLocal<>();
/**
* Creates new instance of {@link MetadataAwareRule}.
*/
public MetadataAwareRule() {
this(new MetadataAwarePredicate());
}
/**
* Creates new instance of {@link MetadataAwareRule} with specific predicate.
*
* @param predicate the predicate, can't be {@code null}
* @throws IllegalArgumentException if predicate is {@code null}
*/
public MetadataAwareRule(AbstractDiscoveryEnabledPredicate predicate) {
super(predicate);
}
@Override
public Server choose(Object key) {
// ... my customized choose policy ...
}
}
And here is the thing: I need to refresh the application by firing a RefreshEvent, but doing so leads to a quite strange problem, which may be due to the Eureka or Zuul client version coming from the parent:
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-netflix</artifactId>
<version>2.2.5.RELEASE</version>
</parent>
To make the problem easy to reproduce, the function was simplified to the simple request shown below:
@GetMapping("/test/event")
public CommonResult testRaiseRefreshEvent() {
ApplicationContextHolder.getApplicationContext().publishEvent(new RefreshEvent(this, null, "test to trigger the problem"));
return CommonResult.succeed();
}
Once this API is requested, the application performs a refresh.
But sometimes the application hits this exception:
2022-10-19 11:29:32.947 [app:web-gateway,traceId:,spanId:,parentId:] [DiscoveryClient-CacheRefreshExecutor-0] ERROR | RedirectingEurekaHttpClient.java:83 | c.n.d.s.t.d.RedirectingEurekaHttpClient | Request execution error. endpoint=DefaultEndpoint{ serviceUrl='http://localhost:8000/eureka/}
javax.ws.rs.WebApplicationException: com.fasterxml.jackson.core.JsonParseException: processing aborted
at [Source: (GZIPInputStream); line: 1, column: 18]
at com.netflix.discovery.provider.DiscoveryJerseyProvider.readFrom(DiscoveryJerseyProvider.java:110)
at com.sun.jersey.api.client.ClientResponse.getEntity(ClientResponse.java:634)
at com.sun.jersey.api.client.ClientResponse.getEntity(ClientResponse.java:586)
at com.netflix.discovery.shared.transport.jersey.AbstractJerseyEurekaHttpClient.getApplicationsInternal(AbstractJerseyEurekaHttpClient.java:200)
at com.netflix.discovery.shared.transport.jersey.AbstractJerseyEurekaHttpClient.getApplications(AbstractJerseyEurekaHttpClient.java:167)
at com.netflix.discovery.shared.transport.decorator.EurekaHttpClientDecorator$6.execute(EurekaHttpClientDecorator.java:137)
at com.netflix.discovery.shared.transport.decorator.MetricsCollectingEurekaHttpClient.execute(MetricsCollectingEurekaHttpClient.java:73)
at com.netflix.discovery.shared.transport.decorator.EurekaHttpClientDecorator.getApplications(EurekaHttpClientDecorator.java:134)
at com.netflix.discovery.shared.transport.decorator.EurekaHttpClientDecorator$6.execute(EurekaHttpClientDecorator.java:137)
at com.netflix.discovery.shared.transport.decorator.RedirectingEurekaHttpClient.executeOnNewServer(RedirectingEurekaHttpClient.java:118)
at com.netflix.discovery.shared.transport.decorator.RedirectingEurekaHttpClient.execute(RedirectingEurekaHttpClient.java:79)
at com.netflix.discovery.shared.transport.decorator.EurekaHttpClientDecorator.getApplications(EurekaHttpClientDecorator.java:134)
at com.netflix.discovery.shared.transport.decorator.EurekaHttpClientDecorator$6.execute(EurekaHttpClientDecorator.java:137)
at com.netflix.discovery.shared.transport.decorator.RetryableEurekaHttpClient.execute(RetryableEurekaHttpClient.java:120)
at com.netflix.discovery.shared.transport.decorator.EurekaHttpClientDecorator.getApplications(EurekaHttpClientDecorator.java:134)
at com.netflix.discovery.shared.transport.decorator.EurekaHttpClientDecorator$6.execute(EurekaHttpClientDecorator.java:137)
at com.netflix.discovery.shared.transport.decorator.SessionedEurekaHttpClient.execute(SessionedEurekaHttpClient.java:77)
at com.netflix.discovery.shared.transport.decorator.EurekaHttpClientDecorator.getApplications(EurekaHttpClientDecorator.java:134)
at com.netflix.discovery.DiscoveryClient.getAndStoreFullRegistry(DiscoveryClient.java:1097)
at com.netflix.discovery.DiscoveryClient.fetchRegistry(DiscoveryClient.java:1011)
at com.netflix.discovery.DiscoveryClient.<init>(DiscoveryClient.java:440)
at com.netflix.discovery.DiscoveryClient.<init>(DiscoveryClient.java:282)
at com.netflix.discovery.DiscoveryClient.<init>(DiscoveryClient.java:278)
at org.springframework.cloud.netflix.eureka.CloudEurekaClient.<init>(CloudEurekaClient.java:67)
at org.springframework.cloud.netflix.eureka.EurekaClientAutoConfiguration$RefreshableEurekaClientConfiguration.eurekaClient(EurekaClientAutoConfiguration.java:316)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:154)
at org.springframework.beans.factory.support.ConstructorResolver.instantiate(ConstructorResolver.java:650)
at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:635)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1336)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1176)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:556)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:516)
at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$1(AbstractBeanFactory.java:363)
at org.springframework.cloud.context.scope.GenericScope$BeanLifecycleWrapper.getBean(GenericScope.java:389)
at org.springframework.cloud.context.scope.GenericScope.get(GenericScope.java:186)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:360)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:202)
at org.springframework.aop.target.SimpleBeanTargetSource.getTarget(SimpleBeanTargetSource.java:35)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:192)
at com.sun.proxy.$Proxy169.getApplications(Unknown Source)
at org.springframework.cloud.netflix.eureka.EurekaDiscoveryClient.getServices(EurekaDiscoveryClient.java:80)
at org.springframework.cloud.client.discovery.composite.CompositeDiscoveryClient.getServices(CompositeDiscoveryClient.java:67)
at org.springframework.cloud.netflix.zuul.filters.discovery.DiscoveryClientRouteLocator.locateRoutes(DiscoveryClientRouteLocator.java:121)
at org.springframework.cloud.netflix.zuul.filters.discovery.DiscoveryClientRouteLocator.locateRoutes(DiscoveryClientRouteLocator.java:44)
at org.springframework.cloud.netflix.zuul.filters.SimpleRouteLocator.doRefresh(SimpleRouteLocator.java:186)
at org.springframework.cloud.netflix.zuul.filters.discovery.DiscoveryClientRouteLocator.refresh(DiscoveryClientRouteLocator.java:171)
at org.springframework.cloud.netflix.zuul.filters.CompositeRouteLocator.refresh(CompositeRouteLocator.java:78)
at org.springframework.cloud.netflix.zuul.web.ZuulHandlerMapping.setDirty(ZuulHandlerMapping.java:79)
at org.springframework.cloud.netflix.zuul.ZuulServerAutoConfiguration$ZuulRefreshListener.reset(ZuulServerAutoConfiguration.java:315)
at org.springframework.cloud.netflix.zuul.ZuulServerAutoConfiguration$ZuulRefreshListener.resetIfNeeded(ZuulServerAutoConfiguration.java:310)
at org.springframework.cloud.netflix.zuul.ZuulServerAutoConfiguration$ZuulRefreshListener.onApplicationEvent(ZuulServerAutoConfiguration.java:304)
at org.springframework.context.event.SimpleApplicationEventMulticaster.doInvokeListener(SimpleApplicationEventMulticaster.java:172)
at org.springframework.context.event.SimpleApplicationEventMulticaster.invokeListener(SimpleApplicationEventMulticaster.java:165)
at org.springframework.context.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:139)
at org.springframework.context.support.AbstractApplicationContext.publishEvent(AbstractApplicationContext.java:404)
at org.springframework.context.support.AbstractApplicationContext.publishEvent(AbstractApplicationContext.java:361)
at org.springframework.cloud.netflix.eureka.CloudEurekaClient.onCacheRefreshed(CloudEurekaClient.java:123)
at com.netflix.discovery.DiscoveryClient.fetchRegistry(DiscoveryClient.java:1027)
at com.netflix.discovery.DiscoveryClient.refreshRegistry(DiscoveryClient.java:1533)
at com.netflix.discovery.DiscoveryClient$CacheRefreshThread.run(DiscoveryClient.java:1500)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:266)
at java.util.concurrent.FutureTask.run(FutureTask.java)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.fasterxml.jackson.core.JsonParseException: processing aborted
at [Source: (GZIPInputStream); line: 1, column: 18]
at com.netflix.discovery.converters.EurekaJacksonCodec$ApplicationsDeserializer.deserialize(EurekaJacksonCodec.java:805)
at com.netflix.discovery.converters.EurekaJacksonCodec$ApplicationsDeserializer.deserialize(EurekaJacksonCodec.java:791)
at com.fasterxml.jackson.databind.ObjectReader._unwrapAndDeserialize(ObjectReader.java:2196)
at com.fasterxml.jackson.databind.ObjectReader._bindAndClose(ObjectReader.java:2054)
at com.fasterxml.jackson.databind.ObjectReader.readValue(ObjectReader.java:1431)
at com.netflix.discovery.converters.EurekaJacksonCodec.readValue(EurekaJacksonCodec.java:213)
at com.netflix.discovery.converters.wrappers.CodecWrappers$LegacyJacksonJson.decode(CodecWrappers.java:314)
at com.netflix.discovery.provider.DiscoveryJerseyProvider.readFrom(DiscoveryJerseyProvider.java:103)
... 69 common frames omitted
and
2022-10-19 11:29:32.956 [app:web-gateway,traceId:,spanId:,parentId:] [DiscoveryClient-CacheRefreshExecutor-0] ERROR | DiscoveryClient.java:1018 | c.netflix.discovery.DiscoveryClient | DiscoveryClient_WEB-GATEWAY/192.168.56.1:web-gateway:8004:NEW_GATEWAY_DEFAULT_GROUP - was unable to refresh its cache! status = Cannot execute request on any known server
com.netflix.discovery.shared.transport.TransportException: Cannot execute request on any known server
at com.netflix.discovery.shared.transport.decorator.RetryableEurekaHttpClient.execute(RetryableEurekaHttpClient.java:112)
at com.netflix.discovery.shared.transport.decorator.EurekaHttpClientDecorator.getApplications(EurekaHttpClientDecorator.java:134)
at com.netflix.discovery.shared.transport.decorator.EurekaHttpClientDecorator$6.execute(EurekaHttpClientDecorator.java:137)
at com.netflix.discovery.shared.transport.decorator.SessionedEurekaHttpClient.execute(SessionedEurekaHttpClient.java:77)
at com.netflix.discovery.shared.transport.decorator.EurekaHttpClientDecorator.getApplications(EurekaHttpClientDecorator.java:134)
at com.netflix.discovery.DiscoveryClient.getAndStoreFullRegistry(DiscoveryClient.java:1097)
at com.netflix.discovery.DiscoveryClient.fetchRegistry(DiscoveryClient.java:1011)
at com.netflix.discovery.DiscoveryClient.<init>(DiscoveryClient.java:440)
at com.netflix.discovery.DiscoveryClient.<init>(DiscoveryClient.java:282)
at com.netflix.discovery.DiscoveryClient.<init>(DiscoveryClient.java:278)
at org.springframework.cloud.netflix.eureka.CloudEurekaClient.<init>(CloudEurekaClient.java:67)
at org.springframework.cloud.netflix.eureka.EurekaClientAutoConfiguration$RefreshableEurekaClientConfiguration.eurekaClient(EurekaClientAutoConfiguration.java:316)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:154)
at org.springframework.beans.factory.support.ConstructorResolver.instantiate(ConstructorResolver.java:650)
at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:635)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1336)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1176)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:556)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:516)
at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$1(AbstractBeanFactory.java:363)
at org.springframework.cloud.context.scope.GenericScope$BeanLifecycleWrapper.getBean(GenericScope.java:389)
at org.springframework.cloud.context.scope.GenericScope.get(GenericScope.java:186)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:360)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:202)
at org.springframework.aop.target.SimpleBeanTargetSource.getTarget(SimpleBeanTargetSource.java:35)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:192)
at com.sun.proxy.$Proxy169.getApplications(Unknown Source)
at org.springframework.cloud.netflix.eureka.EurekaDiscoveryClient.getServices(EurekaDiscoveryClient.java:80)
at org.springframework.cloud.client.discovery.composite.CompositeDiscoveryClient.getServices(CompositeDiscoveryClient.java:67)
at org.springframework.cloud.netflix.zuul.filters.discovery.DiscoveryClientRouteLocator.locateRoutes(DiscoveryClientRouteLocator.java:121)
at org.springframework.cloud.netflix.zuul.filters.discovery.DiscoveryClientRouteLocator.locateRoutes(DiscoveryClientRouteLocator.java:44)
at org.springframework.cloud.netflix.zuul.filters.SimpleRouteLocator.doRefresh(SimpleRouteLocator.java:186)
at org.springframework.cloud.netflix.zuul.filters.discovery.DiscoveryClientRouteLocator.refresh(DiscoveryClientRouteLocator.java:171)
at org.springframework.cloud.netflix.zuul.filters.CompositeRouteLocator.refresh(CompositeRouteLocator.java:78)
at org.springframework.cloud.netflix.zuul.web.ZuulHandlerMapping.setDirty(ZuulHandlerMapping.java:79)
at org.springframework.cloud.netflix.zuul.ZuulServerAutoConfiguration$ZuulRefreshListener.reset(ZuulServerAutoConfiguration.java:315)
at org.springframework.cloud.netflix.zuul.ZuulServerAutoConfiguration$ZuulRefreshListener.resetIfNeeded(ZuulServerAutoConfiguration.java:310)
at org.springframework.cloud.netflix.zuul.ZuulServerAutoConfiguration$ZuulRefreshListener.onApplicationEvent(ZuulServerAutoConfiguration.java:304)
at org.springframework.context.event.SimpleApplicationEventMulticaster.doInvokeListener(SimpleApplicationEventMulticaster.java:172)
at org.springframework.context.event.SimpleApplicationEventMulticaster.invokeListener(SimpleApplicationEventMulticaster.java:165)
at org.springframework.context.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:139)
at org.springframework.context.support.AbstractApplicationContext.publishEvent(AbstractApplicationContext.java:404)
at org.springframework.context.support.AbstractApplicationContext.publishEvent(AbstractApplicationContext.java:361)
at org.springframework.cloud.netflix.eureka.CloudEurekaClient.onCacheRefreshed(CloudEurekaClient.java:123)
at com.netflix.discovery.DiscoveryClient.fetchRegistry(DiscoveryClient.java:1027)
at com.netflix.discovery.DiscoveryClient.refreshRegistry(DiscoveryClient.java:1533)
at com.netflix.discovery.DiscoveryClient$CacheRefreshThread.run(DiscoveryClient.java:1500)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:266)
at java.util.concurrent.FutureTask.run(FutureTask.java)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
And no matter how often I refresh the application again, the gateway never gets a chance to correct its load balancer, and no request will go through the gateway, due to an exception like this:
java.lang.RuntimeException: com.netflix.client.ClientException: Load balancer does not have available server for client: web-message-center
at org.springframework.cloud.openfeign.ribbon.LoadBalancerFeignClient.execute(LoadBalancerFeignClient.java:90)
at org.springframework.cloud.sleuth.instrument.web.client.feign.TraceLoadBalancerFeignClient.execute(TraceLoadBalancerFeignClient.java:78)
at feign.SynchronousMethodHandler.executeAndDecode(SynchronousMethodHandler.java:119)
at feign.SynchronousMethodHandler.invoke(SynchronousMethodHandler.java:89)
at feign.ReflectiveFeign$FeignInvocationHandler.invoke(ReflectiveFeign.java:100)
at com.sun.proxy.$Proxy261.sendMessage(Unknown Source)
at com.wwstation.webgateway.components.GatewayUrlCountProcessor.sendAccessLogWithMq(GatewayUrlCountProcessor.java:221)
at com.wwstation.webgateway.components.GatewayUrlCountProcessor.run(GatewayUrlCountProcessor.java:82)
at org.springframework.cloud.sleuth.instrument.async.TraceRunnable.run(TraceRunnable.java:68)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:266)
at java.util.concurrent.FutureTask.run(FutureTask.java)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.netflix.client.ClientException: Load balancer does not have available server for client: web-message-center
at com.netflix.loadbalancer.LoadBalancerContext.getServerFromLoadBalancer(LoadBalancerContext.java:483)
at com.netflix.loadbalancer.reactive.LoadBalancerCommand$1.call(LoadBalancerCommand.java:184)
at com.netflix.loadbalancer.reactive.LoadBalancerCommand$1.call(LoadBalancerCommand.java:180)
at rx.Observable.unsafeSubscribe(Observable.java:10327)
at rx.internal.operators.OnSubscribeConcatMap.call(OnSubscribeConcatMap.java:94)
at rx.internal.operators.OnSubscribeConcatMap.call(OnSubscribeConcatMap.java:42)
at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:48)
at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:30)
at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:48)
at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:30)
at rx.Observable.subscribe(Observable.java:10423)
at rx.Observable.subscribe(Observable.java:10390)
at rx.observables.BlockingObservable.blockForSingle(BlockingObservable.java:443)
at rx.observables.BlockingObservable.single(BlockingObservable.java:340)
at com.netflix.client.AbstractLoadBalancerAwareClient.executeWithLoadBalancer(AbstractLoadBalancerAwareClient.java:112)
at org.springframework.cloud.openfeign.ribbon.LoadBalancerFeignClient.execute(LoadBalancerFeignClient.java:83)
... 15 common frames omitted
It can be seen from EurekaNotificationServerListUpdater that on each fetch interval a thread refreshes the server list. But once I fire a RefreshEvent, that refreshing thread is shut down by the environment refresh (or something else), and no heartbeat is triggered when Eureka's next fetch interval is reached, so my application never gets the latest server info from Eureka.
Because of that, there is another problem which can take place when firing a RefreshEvent:
The gateway can still redirect requests to the target service, but it will never get the latest server list from Eureka. Once the target service is down, the gateway fails my request instead of telling me the target service is not online (I have an exception handler for "Load balancer does not have available server for client").
These 2 problems never take place at the same time: when problem A occurs, problem B never does, and vice versa. Both only ever occur after a RefreshEvent has been fired.
I have no idea what's going on. Can anyone help me with this or give me some tips about where the cause might be?
After two days of working on it, the problem is solved and I have found the cause.
Raw use of com.netflix.niws.loadbalancer.EurekaNotificationServerListUpdater in a Zuul application will always run into this kind of situation:
At the very beginning, when an Eureka event is fired by the DiscoveryClient, the listeners registered through EurekaNotificationServerListUpdater receive it and try to update the server list; this is the normal case.
But when a RefreshEvent is fired, the application's instance re-registers, which causes the DiscoveryClient to become a new instance! That means the new DiscoveryClient no longer holds the listeners.
Also, the default use of EurekaNotificationServerListUpdater relies on a singleton DiscoveryClient which is never replaced by a RefreshEvent, so the listeners stay attached to the old DiscoveryClient, which is no longer managed by Eureka after the RefreshEvent.
Because of this, after a RefreshEvent Eureka's fetch heartbeat no longer triggers the listeners' refresh function, and my gateway crashes if some applications are down.
What I did to fix the problem is to record those listeners and then re-register them with the new DiscoveryClient once the refresh job is done.
Here is my code:
@Configuration
@Slf4j
public class RibbonDiscoveryClientListenerManager implements SmartApplicationListener {
private static EurekaClient discoveryClient;
/**
* record the listeners that are alive in the current EurekaClient
*/
private static final CopyOnWriteArraySet<EurekaEventListener> EUREKA_EVENT_LISTENER_SET = new CopyOnWriteArraySet<>();
/**
* judge whether to try a re-register
*
* @param listener
*/
static void register(EurekaEventListener listener) {
if (discoveryClient != null) {
registerEurekaListener(listener);
log.debug("discoveryClient update succeed");
} else {
log.warn("discoveryClient was not found waiting for scheduling...");
}
}
public static void registerEurekaListener(EurekaEventListener listener) {
if (!EUREKA_EVENT_LISTENER_SET.contains(listener)) {
EUREKA_EVENT_LISTENER_SET.add(listener);
discoveryClient.registerEventListener(listener);
}
}
@Override
public boolean supportsEventType(Class<? extends ApplicationEvent> eventType) {
return InstanceRegisteredEvent.class.isAssignableFrom(eventType);
}
@Override
public void onApplicationEvent(ApplicationEvent event) {
//clear cache
discoveryClient = null;
EUREKA_EVENT_LISTENER_SET.clear();
//try to get CloudEurekaClient
for (EurekaClient bean : ApplicationContextHolder.getBeans(EurekaClient.class)) {
if (CloudEurekaClient.class.isAssignableFrom(bean.getClass())) {
discoveryClient = bean;
}
}
}
}
Here is the customized ServerListUpdater:
@Slf4j
public class RibbonClientEurekaAutoCompensateServerListUpdater implements ServerListUpdater {
private static class LazyHolder {
private final static String CORE_THREAD = "EurekaNotificationServerListUpdater.ThreadPoolSize";
private final static String QUEUE_SIZE = "EurekaNotificationServerListUpdater.queueSize";
private final static LazyHolder SINGLETON = new LazyHolder();
private final DynamicIntProperty poolSizeProp = new DynamicIntProperty(CORE_THREAD, 2);
private final DynamicIntProperty queueSizeProp = new DynamicIntProperty(QUEUE_SIZE, 1000);
private final ThreadPoolExecutor defaultServerListUpdateExecutor;
private final Thread shutdownThread;
private LazyHolder() {
int corePoolSize = getCorePoolSize();
defaultServerListUpdateExecutor = new ThreadPoolExecutor(
corePoolSize,
corePoolSize * 5,
0,
TimeUnit.NANOSECONDS,
new ArrayBlockingQueue<Runnable>(queueSizeProp.get()),
new ThreadFactoryBuilder()
.setNameFormat("EurekaNotificationServerListUpdater-%d")
.setDaemon(true)
.build()
);
poolSizeProp.addCallback(new Runnable() {
@Override
public void run() {
int corePoolSize = getCorePoolSize();
defaultServerListUpdateExecutor.setCorePoolSize(corePoolSize);
defaultServerListUpdateExecutor.setMaximumPoolSize(corePoolSize * 5);
}
});
shutdownThread = new Thread(new Runnable() {
@Override
public void run() {
log.info("Shutting down the Executor for EurekaNotificationServerListUpdater");
try {
defaultServerListUpdateExecutor.shutdown();
Runtime.getRuntime().removeShutdownHook(shutdownThread);
} catch (Exception e) {
// this can happen in the middle of a real shutdown, and that's ok.
}
}
});
Runtime.getRuntime().addShutdownHook(shutdownThread);
}
private int getCorePoolSize() {
int propSize = poolSizeProp.get();
if (propSize > 0) {
return propSize;
}
return 2; // default
}
}
public static ExecutorService getDefaultRefreshExecutor() {
return LazyHolder.SINGLETON.defaultServerListUpdateExecutor;
}
/* visible for testing */ final AtomicBoolean updateQueued = new AtomicBoolean(false);
private final AtomicBoolean isActive = new AtomicBoolean(false);
private final AtomicLong lastUpdated = new AtomicLong(System.currentTimeMillis());
private final Provider<EurekaClient> eurekaClientProvider;
private final ExecutorService refreshExecutor;
private volatile EurekaEventListener updateListener;
private volatile EurekaClient eurekaClient;
public RibbonClientEurekaAutoCompensateServerListUpdater() {
this(new LegacyEurekaClientProvider());
}
public RibbonClientEurekaAutoCompensateServerListUpdater(final Provider<EurekaClient> eurekaClientProvider) {
this(eurekaClientProvider, getDefaultRefreshExecutor());
}
public RibbonClientEurekaAutoCompensateServerListUpdater(final Provider<EurekaClient> eurekaClientProvider, ExecutorService refreshExecutor) {
this.eurekaClientProvider = eurekaClientProvider;
this.refreshExecutor = refreshExecutor;
}
@Override
public synchronized void start(final UpdateAction updateAction) {
if (isActive.compareAndSet(false, true)) {
this.updateListener = new EurekaEventListener() {
@Override
public void onEvent(EurekaEvent event) {
if (event instanceof CacheRefreshedEvent) {
if (!updateQueued.compareAndSet(false, true)) { // if an update is already queued
log.info("an update action is already queued, returning as no-op");
return;
}
if (!refreshExecutor.isShutdown()) {
try {
refreshExecutor.submit(new Runnable() {
@Override
public void run() {
try {
updateAction.doUpdate();
lastUpdated.set(System.currentTimeMillis());
} catch (Exception e) {
log.warn("Failed to update serverList", e);
} finally {
updateQueued.set(false);
}
}
}); // fire and forget
} catch (Exception e) {
log.warn("Error submitting update task to executor, skipping one round of updates", e);
updateQueued.set(false); // if submit fails, need to reset updateQueued to false
}
} else {
log.debug("stopping EurekaNotificationServerListUpdater, as refreshExecutor has been shut down");
stop();
}
}
}
};
if (eurekaClient == null) {
eurekaClient = eurekaClientProvider.get();
}
if (eurekaClient != null) {
RibbonDiscoveryClientListenerManager.register(updateListener);
} else {
log.error("Failed to register an updateListener to eureka client, eureka client is null");
throw new IllegalStateException("Failed to start the updater, unable to register the update listener due to eureka client being null.");
}
// start a scheduled pool to check the new DiscoveryClient's listeners
new ScheduledThreadPoolExecutor(1,
new ThreadFactoryBuilder()
.setNameFormat("refreshListenerPool-%d")
.build())
.scheduleWithFixedDelay(() -> {
//schedule to invoke register defined in RibbonDiscoveryClientListenerManager
RibbonDiscoveryClientListenerManager.register(updateListener);
}, 10, 10, TimeUnit.SECONDS);
} else {
log.info("Update listener already registered, no-op");
}
}
@Override
public synchronized void stop() {
if (isActive.compareAndSet(true, false)) {
if (eurekaClient != null) {
eurekaClient.unregisterEventListener(updateListener);
}
} else {
log.info("Not currently active, no-op");
}
}
@Override
public String getLastUpdate() {
return new Date(lastUpdated.get()).toString();
}
@Override
public long getDurationSinceLastUpdateMs() {
return System.currentTimeMillis() - lastUpdated.get();
}
@Override
public int getNumberMissedCycles() {
return 0;
}
@Override
public int getCoreThreads() {
if (isActive.get()) {
if (refreshExecutor != null && refreshExecutor instanceof ThreadPoolExecutor) {
return ((ThreadPoolExecutor) refreshExecutor).getCorePoolSize();
}
}
return 0;
}
}
Config Class:
@RibbonClients(defaultConfiguration = CustomizedRibbonConfig.class)
public class RibbonClientConfiguration {
public static class BazServiceList extends ConfigurationBasedServerList {
public BazServiceList(IClientConfig config) {
super.initWithNiwsConfig(config);
}
}
}
@Configuration
class CustomizedRibbonConfig {
static final AtomicBoolean justRefreshed = new AtomicBoolean(false);
@Bean
public IRule ribbonRule() {
return new MetadataAwareRule();
}
@Bean
public ServerListUpdater ribbonServerListUpdater() {
return new RibbonClientEurekaAutoCompensateServerListUpdater();
}
}
I'm working on a Spring Batch application which uses a Redis connection to populate data.
Here are some relevant dependencies:
implementation 'org.springframework.boot:spring-boot-starter-data-redis'
implementation 'io.lettuce:lettuce-core:5.3.3.RELEASE'
RedisConnection is imported from org.springframework.data.redis.connection
PROBLEM STATEMENT:
There might be a case where the Redis connection is active when we start the application, but while the application is running we lose it. In that case, when execution enters the method below, it will throw an error that the Redis connection is lost, and we retry using the @Retryable logic.
But let's say that during the second retry the Redis connection is re-established: we want the retry to be able to detect that, reconnect to Redis, and go through the normal flow. However, THE REDIS RECONNECTION IS NOT GETTING DETECTED.
TRIED: I tried following https://github.com/lettuce-io/lettuce-core/issues/338 and added lettuceConnectionFactory.validateConnection(); to the defaultRedisConnection bean as below, but to no avail:
#Qualifier("defaultRedisConnection")
#Bean
public RedisConnection defaultRedisConnectionDockerCluster() {
RedisStandaloneConfiguration redisStandaloneConfiguration = new RedisStandaloneConfiguration();
redisStandaloneConfiguration.setHostName("redis");
LettuceConnectionFactory lettuceConnectionFactory = new LettuceConnectionFactory(redisStandaloneConfiguration);
lettuceConnectionFactory.validateConnection();
lettuceConnectionFactory.afterPropertiesSet();
return lettuceConnectionFactory.getConnection();
}
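For reference, a minimal sketch of an alternative wiring (my assumption, not something from the original setup): expose the LettuceConnectionFactory itself instead of a single cached RedisConnection, and fetch a connection per call, so that once the server comes back the next retry attempt gets a working connection:

@Bean
public LettuceConnectionFactory lettuceConnectionFactory() {
    RedisStandaloneConfiguration config = new RedisStandaloneConfiguration();
    config.setHostName("redis"); // host taken from the bean above
    return new LettuceConnectionFactory(config);
}

and in PopulateRedisDataService:

public RedisClientData populateData() {
    RedisClientData result = new RedisClientData();
    // ask the factory each time instead of holding one connection from startup
    RedisConnection connection = lettuceConnectionFactory.getConnection();
    try {
        byte[] serObj = Objects.requireNonNull(connection.get("SOME_KEY".getBytes()));
        // ... load data from serObj into result, as in the original method ...
    } finally {
        connection.close(); // release per call; the next retry starts from a fresh connection
    }
    return result;
}

Note also that for @Retryable to kick in at all, the exception has to propagate out of populateData(); the original catch (Exception e) block swallows it.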
Here is the class:
@Slf4j
@Service
public class PopulateRedisDataService {
@Qualifier("defaultRedisConnection")
private final RedisConnection redisConnection;
private RedisClientData redisClientData = new RedisClientData();
public PopulateRedisDataService(
@Qualifier("defaultRedisConnection") RedisConnection redisConnection,
RedisDataUtils redisDataUtils) {
this.redisConnection = redisConnection;
}
@Retryable(maxAttemptsExpression = "3", backoff = @Backoff(delayExpression = "20_000",
multiplierExpression = "100_000", maxDelayExpression = "100_000"))
public RedisClientData populateData() {
try {
byte[] serObj = Objects.requireNonNull(redisConnection.get("SOME_KEY".getBytes()));
RedisClientData redisClientData = new RedisClientData();
// Some operations to load data from Redis/serObj into redisClientData object.
} catch (Exception e) {
// If Redis doesn't have the key, return empty redisClientData
redisClientData = new RedisClientData();
log.error("Failed to get ClientRegList", e);
}
return redisClientData;
}
@Recover
public void recover(Exception e) {
// Some operations
}
}
Any suggestions to handle this case would be much appreciated.
I have a multi-module Maven project with several modules (parent, service, updater1, updater2).
The @SpringBootApplication is in the 'service' module, and the others don't have artifacts.
'updater1' is a module which has a Kafka listener and an HTTP client: when it receives a Kafka event, it launches a request to an external API. I want to create integration tests in this module with Testcontainers, so I've created the containers and a Kafka producer that sends messages through a KafkaTemplate to my consumer.
My problem is that the Kafka producer is autowired as null, so the tests throw a NullPointerException. I think it must be a Spring configuration problem, but I can't find it. Can you help me? Thanks!
This is my test class:
@ExtendWith(SpringExtension.class)
@ContextConfiguration(classes = {KafkaConfiguration.class, CacheConfiguration.class, ClientConfiguration.class})
public class InvoicingTest {
@ClassRule
public static final Containers containers = Containers.Builder.aContainer()
.withKafka()
.withServer()
.build();
private final MockHttpClient mockHttpClient =
new MockHttpClient(containers.getHost(SERVER),
containers.getPort(SERVER));
@Autowired
private KafkaEventProducer kafkaEventProducer;
@BeforeEach
@Transactional
void setUp() {
mockHttpClient.reset();
}
@Test
public void createElementSuccesfullResponse() throws ExecutionException, InterruptedException, TimeoutException {
mockHttpClient.whenPost("/v1/endpoint")
.respond(HttpStatusCode.OK_200);
kafkaEventProducer.produce("src/test/resources/event/invoiceCreated.json");
mockHttpClient.verify();
}
}
And this is the event producer:
@Component
public class KafkaEventProducer {
private final KafkaTemplate<String, String> kafkaTemplate;
private final String topic;
@Autowired
KafkaEventProducer(KafkaTemplate<String, String> kafkaTemplate,
@Value("${kafka.topic.invoicing.name}") String topic){
this.kafkaTemplate = kafkaTemplate;
this.topic = topic;
}
public void produce(String event){
kafkaTemplate.send(topic, event);
}
}
You haven't detailed how KafkaEventProducer is implemented (is it a @Component?), nor is your test class annotated with @SpringBootTest or run with @RunWith.
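For example, assuming JUnit 4 (the @ClassRule suggests it), the missing wiring could look roughly like this; note that @ExtendWith(SpringExtension.class) is JUnit 5 while @ClassRule is JUnit 4, so the two stacks should not be mixed:

@RunWith(SpringRunner.class)
@SpringBootTest(classes = {KafkaConfiguration.class, CacheConfiguration.class, ClientConfiguration.class})
public class InvoicingTest {
    // ... containers, mock client, @Autowired KafkaEventProducer and tests as in the question ...
}

With @SpringBootTest, the Spring context is started for the test, so the @Autowired KafkaEventProducer field is populated instead of staying null.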
Check out this sample, using the Apache Kafka KafkaProducer:
import org.apache.kafka.clients.producer.KafkaProducer;
public void sendRecord(String topic, String event) {
try (KafkaProducer<String, byte[]> producer = new KafkaProducer<>(producerProps(bootstrapServers, false))) {
send(producer, topic, event);
}
}
where
public void send(KafkaProducer<String, byte[]> producer, String topic, String event) {
try {
ProducerRecord<String, byte[]> record = new ProducerRecord<>(topic, event.getBytes());
producer.send(record).get();
} catch (InterruptedException | ExecutionException e) {
fail("Not expected exception: " + e.getMessage());
}
}
protected Properties producerProps(String bootstrapServer, boolean transactional) {
Properties producerProperties = new Properties();
producerProperties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServer);
producerProperties.put(KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
producerProperties.put(VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());
if (transactional) {
producerProperties.put(TRANSACTIONAL_ID_CONFIG, "my-transactional-id");
}
return producerProperties;
}
and bootstrapServers is taken from the Kafka container:
KafkaContainer kafka = new KafkaContainer();
kafka.start();
bootstrapServers = kafka.getBootstrapServers();
I am getting errors like "failed to create a child event loop / failed to open a new selector / Too many open files" when there are 30 or more concurrent requests. How do I solve these errors? Am I doing anything wrong? I am using Spring Boot and the Java Cassandra driver. Below is the connection file:
public class Connection {
public static Session getConnection() {
// poolingOptions is referenced below; its declaration is not shown here
final Cluster cluster = Cluster.builder().addContactPoint(ConnectionBean.getCASSANDRA_DB_IP())
.withQueryOptions(new QueryOptions().setConsistencyLevel(ConsistencyLevel.LOCAL_ONE))
.withCredentials(ConnectionBean.getCASSANDRA_USER(), ConnectionBean.getCASSANDRA_PASSWORD())
.withPoolingOptions(poolingOptions)
.build();
final Session session = cluster.connect(ConnectionBean.getCASSANDRA_DB_NAME());
return session;
}
}
Below is the ConnectionBean file which I used in the Connection file:
public class ConnectionBean {
public static String CASSANDRA_DB_IP;
public static String CASSANDRA_DB_NAME;
public static String CASSANDRA_USER;
public static String CASSANDRA_PASSWORD;
public ConnectionBean() {
}
public ConnectionBean(String CASSANDRA_DB_IP,String CASSANDRA_DB_NAME,String CASSANDRA_USER,String CASSANDRA_PASSWORD) {
this.CASSANDRA_DB_IP=CASSANDRA_DB_IP;
this.CASSANDRA_DB_NAME=CASSANDRA_DB_NAME;
this.CASSANDRA_USER=CASSANDRA_USER;
this.CASSANDRA_PASSWORD=CASSANDRA_PASSWORD;
}
public static String getCASSANDRA_DB_IP() {
return CASSANDRA_DB_IP;
}
public static void setCASSANDRA_DB_IP(String cASSANDRA_DB_IP) {
CASSANDRA_DB_IP = cASSANDRA_DB_IP;
}
public static String getCASSANDRA_DB_NAME() {
return CASSANDRA_DB_NAME;
}
public static void setCASSANDRA_DB_NAME(String cASSANDRA_DB_NAME) {
CASSANDRA_DB_NAME = cASSANDRA_DB_NAME;
}
public static String getCASSANDRA_USER() {
return CASSANDRA_USER;
}
public static void setCASSANDRA_USER(String cASSANDRA_USER) {
CASSANDRA_USER = cASSANDRA_USER;
}
public static String getCASSANDRA_PASSWORD() {
return CASSANDRA_PASSWORD;
}
public static void setCASSANDRA_PASSWORD(String cASSANDRA_PASSWORD) {
CASSANDRA_PASSWORD = cASSANDRA_PASSWORD;
}
}
Below is the class from which the ConnectionBean variables are initialized:
public class SecurityConfiguration extends WebSecurityConfigurerAdapter {
private static final String LOGIN_PROCESSING_URL = "/login";
private static final String LOGIN_FAILURE_URL = "/login?error";
private static final String LOGIN_URL = "/login";
@Autowired
private BCryptPasswordEncoder bCryptPasswordEncoder;
@Autowired
private DataSource dataSource;
@Value("${spring.queries.users-query}")
private String usersQuery;
@Value("${spring.queries.roles-query}")
private String rolesQuery;
@Value("${CASSANDRA_DB_IP}")
public String CASSANDRA_DB_IP;
@Value("${CASSANDRA_DB_NAME}")
public String CASSANDRA_DB_NAME;
@Value("${CASSANDRA_USER}")
public String CASSANDRA_USER;
@Value("${CASSANDRA_PASSWORD}")
public String CASSANDRA_PASSWORD;
@Override
protected void configure(AuthenticationManagerBuilder auth) throws Exception {
ConnectionBean cb = new ConnectionBean(CASSANDRA_DB_IP, CASSANDRA_DB_NAME, CASSANDRA_USER, CASSANDRA_PASSWORD);
auth.jdbcAuthentication().usersByUsernameQuery(usersQuery).authoritiesByUsernameQuery(rolesQuery)
.dataSource(dataSource).passwordEncoder(bCryptPasswordEncoder);
}
@Override
protected void configure(HttpSecurity http) throws Exception {
// Not using Spring CSRF here to be able to use plain HTML for the login page
http.csrf().disable()
// Register our CustomRequestCache, that saves unauthorized access attempts, so
// the user is redirected after login.
.requestCache().requestCache(new CustomRequestCache())
// Restrict access to our application.
.and().authorizeRequests()
// Allow all flow internal requests.
.requestMatchers(SecurityUtils::isFrameworkInternalRequest).permitAll()
// Allow all requests by logged in users.
.anyRequest().authenticated()
// Configure the login page.
.and().formLogin().loginPage(LOGIN_URL).permitAll().loginProcessingUrl(LOGIN_PROCESSING_URL)
.failureUrl(LOGIN_FAILURE_URL)
// Register the success handler that redirects users to the page they last tried
// to access
.successHandler(new SavedRequestAwareAuthenticationSuccessHandler())
// Configure logout
.and().logout().logoutSuccessUrl(LOGOUT_SUCCESS_URL);
}
/**
* Allows access to static resources, bypassing Spring security.
*/
@Override
public void configure(WebSecurity web) throws Exception {
web.ignoring().antMatchers(
// Vaadin Flow static resources
"/VAADIN/**",
// the standard favicon URI
"/favicon.ico",
// web application manifest
"/manifest.json", "/sw.js", "/offline-page.html",
// icons and images
"/icons/**", "/images/**",
// (development mode) static resources
"/frontend/**",
// (development mode) webjars
"/webjars/**",
// (development mode) H2 debugging console
"/h2-console/**",
// (production mode) static resources
"/frontend-es5/**", "/frontend-es6/**");
}
}
And finally, below is the class through which I am querying Cassandra data:
public class getData {
Session session;
public getData(){
session = Connection.getConnection();
getDataTable();
}
private void getDataTable() {
try {
String query = "SELECT * FROM tableName";
ResultSet rs = session.execute(query);
for (Row row : rs) {
/*Do some stuff here using row*/
}
} catch (Exception e) {
e.printStackTrace();
}
}
}
If getConnection() is being invoked for every request, you are creating a new Cluster instance each time.
This is discouraged because one connection is created between your client and a C* node for each Cluster instance, and for each Session a connection pool of at least one connection is created for each C* node.
If you are not closing your Cluster instances after a request completes, these connections will remain open. After a number of requests, you'll have so many connections open that you will run out of file descriptors in your OS.
To resolve this issue, create only one Cluster and one Session instance and reuse them between requests. This strategy is outlined in 4 simple rules when using the DataStax drivers for Cassandra:
Use one Cluster instance per (physical) cluster (per application lifetime)
Use at most one Session per keyspace, or use a single Session and explicitly specify the keyspace in your queries
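A minimal sketch of that strategy (class and member names are mine, not from the question):

// One Cluster and one Session for the whole application lifetime.
public final class CassandraSessionHolder {
    private static final Cluster CLUSTER = Cluster.builder()
            .addContactPoint(ConnectionBean.getCASSANDRA_DB_IP())
            .withQueryOptions(new QueryOptions().setConsistencyLevel(ConsistencyLevel.LOCAL_ONE))
            .withCredentials(ConnectionBean.getCASSANDRA_USER(), ConnectionBean.getCASSANDRA_PASSWORD())
            .build();
    // Session is thread-safe and holds the connection pool; share it across requests.
    private static final Session SESSION = CLUSTER.connect(ConnectionBean.getCASSANDRA_DB_NAME());

    private CassandraSessionHolder() {
    }

    public static Session getSession() {
        return SESSION;
    }

    // Call once on application shutdown to release the connections and file descriptors.
    public static void shutdown() {
        SESSION.close();
        CLUSTER.close();
    }
}

getData would then call CassandraSessionHolder.getSession() instead of Connection.getConnection(), and never close the session per request.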
I want to use Spring Batch with OSGi to run a job daily.
Here is what I did:
@Component
@EnableBatchProcessing
public class BatchConfiguration {
private JobBuilderFactory jobs;
public JobBuilderFactory getJobs() {
return jobs;
}
public void setJobs(JobBuilderFactory jobs) {
this.jobs = jobs;
}
private StepBuilderFactory steps;
private EmployeeRepository employeeRepository; //spring data repository
public EmployeeRepository getEmployeeRepository() {
return employeeRepository;
}
@Reference
public void setEmployeeRepository(EmployeeRepository employeeRepository) {
this.employeeRepository = employeeRepository;
}
public Step syncEmployeesStep() throws Exception{
RepositoryItemWriter writer = new RepositoryItemWriter();
writer.setRepository(employeeRepository);
writer.setMethodName("save");
return steps.get("syncEmployeesStep")
.<Employee, Employee> chunk(10)
.reader(reader())
.writer(writer)
.build();
}
public Job importEmpJob()throws Exception {
return jobs.get("importEmpJob")
.incrementer(new RunIdIncrementer())
.start(syncEmployeesStep())
.next(syncEmployeesStep())
.build();
}
public ItemReader<Employee> reader() throws Exception {
String jpqlQuery = "select a from Employee a";
ServerEMF entityManager = new ServerEMF();
JpaPagingItemReader<Employee> reader = new JpaPagingItemReader<>();
reader.setQueryString(jpqlQuery);
reader.setEntityManagerFactory(entityManager.getEntityManagerFactory());
reader.setPageSize(3);
reader.afterPropertiesSet();
reader.setSaveState(true);
return reader;
}
}
Here I want to run this job to sync between two databases. My problem is how to run this job inside OSGi.
@EnableScheduling
@Component
public class JobRunner {
private JobLauncher jobLauncher;
private Job job ;
private BatchConfiguration batchConfig;
//private JobBuilderFactory jobs;
//private JobRepository jobrepo;
final static Logger logger = LoggerFactory.getLogger(BatchConfiguration.class);
BundleContext ctx;
#SuppressWarnings("rawtypes")
ServiceTracker servicetracker;
#Activate
public void start(BundleContext context) {
batchConfig = new BatchConfiguration();
//jobs = new JobBuilderFactory(jobRepository)
try {
job = batchConfig.importEmpJob(); //job is null because i don't know how to use it
} catch (Exception e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
ctx = context;
servicetracker= new ServiceTracker(ctx, BatchConfiguration.class, null);
servicetracker.open();
new Thread() {
public void run() { findAndRunJob(); }
}.start();
}
@Deactivate
public void stop() {
servicetracker.close();
}
@Scheduled(fixedRate = 5000)
protected void findAndRunJob() {
logger.info("job created.");
try {
String dateParam = new Date().toString();
JobParameters param = new JobParametersBuilder().addString("date", dateParam).toJobParameters();
System.out.println(dateParam);
JobExecution execution = jobLauncher.run(job, param);
System.out.println("Exit Status : " + execution.getStatus());
} catch (Exception e) {
//e.printStackTrace();
}
}
For sure, I get a java.lang.NullPointerException because the job is null.
Could anyone help me with that?
After updates:
@Component
@EnableBatchProcessing
public class BatchConfiguration {
private EmployeeRepository employeeRepository; //spring data repository
public EmployeeRepository getEmployeeRepository() {
return employeeRepository;
}
@Reference
public void setEmployeeRepository(EmployeeRepository employeeRepository) {
this.employeeRepository = employeeRepository;
}
public Step syncEmployeesStep() throws Exception{
RepositoryItemWriter writer = new RepositoryItemWriter();
writer.setRepository(employeeRepository);
writer.setMethodName("save");
return steps.get("syncEmployeesStep")
.<Employee, Employee> chunk(10)
.reader(reader())
.writer(writer)
.build();
}
public Job importEmpJob(JobRepository jobRepository, PlatformTransactionManager transactionManager)throws Exception {
JobBuilderFactory jobs= new JobBuilderFactory(jobRepository);
StepBuilderFactory stepBuilderFactory = new StepBuilderFactory(jobRepository, transactionManager);
return jobs.get("importEmpJob")
.incrementer(new RunIdIncrementer())
.start(syncEmployeesStep())
.next(syncEmployeesStep())
.build();
}
public ItemReader<Employee> reader() throws Exception {
String jpqlQuery = "select a from Employee a";
ServerEMF entityManager = new ServerEMF();
JpaPagingItemReader<Employee> reader = new JpaPagingItemReader<>();
reader.setQueryString(jpqlQuery);
reader.setEntityManagerFactory(entityManager.getEntityManagerFactory());
reader.setPageSize(3);
reader.afterPropertiesSet();
reader.setSaveState(true);
return reader;
}
}
job runner class
private JobLauncher jobLauncher;
private PlatformTransactionManager transactionManager;
private JobRepository jobRepository;
Job importEmpJob;
private BatchConfiguration batchConfig;
#SuppressWarnings("deprecation")
#Activate
public void start(BundleContext context) {
try {
batchConfig = new BatchConfiguration();
this.transactionManager = new ResourcelessTransactionManager();
MapJobRepositoryFactoryBean repositorybean = new MapJobRepositoryFactoryBean();
repositorybean.setTransactionManager(transactionManager);
this.jobRepository = repositorybean.getJobRepository(); //error after executing this statement
// setup job launcher
SimpleJobLauncher simpleJobLauncher = new SimpleJobLauncher();
simpleJobLauncher.setTaskExecutor(new SyncTaskExecutor());
simpleJobLauncher.setJobRepository(jobRepository);
this.jobLauncher = simpleJobLauncher;
//System.out.println(job);
} catch (Exception e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
ctx = context;
configAdminTracker = new ServiceTracker(ctx, BatchConfiguration.class.getName(), null);
configAdminTracker.open();
new Thread() {
public void run() { findAndRunJob(); }
}.start();
}
@Deactivate
public void stop() {
configAdminTracker.close();
}
protected void findAndRunJob() {
logger.info("job created.");
try {
String dateParam = new Date().toString();
// creating the job
Job job = batchConfig.importEmpJob(jobRepository, transactionManager);
// running the job
JobExecution execution = this.jobLauncher.run(job, new JobParameters());
System.out.println("Exit Status : " + execution.getStatus());
} catch (Exception e) {
//e.printStackTrace();
}
}
What I get after running is "java.lang.IllegalArgumentException: interface org.springframework.batch.core.repository.JobRepository is not visible from class loader". Could anyone help me with that error?
In Short
If you're just trying to kick off something simple and don't need all the Spring Batch goodness, I would look into the Apache Sling Commons Scheduler, which has a simple job processor on top of Quartz for scheduling [1].
In General
There are a couple of considerations here depending on what you are trying to do. Are you deploying the Spring Batch jars to the OSGi container with the assumption that the code written for the jobs (steps, tasks, etc.) will live in separate bundles? OSGi's purpose is to develop modular code, so my answer assumes that this is your end goal.
The folks at Pivotal have dropped OSGi support on their artifacts, so to make this work you'll need to determine what you need to export from the Batch jar files. This can be done with bnd; I would recommend checking out the new bnd Maven plugin [2]. I would configure Export-Package to export the interfaces you need to write the jobs, so that you can write them in separate, modular bundles.
Then I would probably embed the Spring Batch jars in a bundle and write a small wrapper around the JobLauncher, as sketched below. This confines all the actual batch code to a single classloader, so you don't have to worry about OSGi trying to pull in classes dynamically. The downside is that this prevents you from using many of the batch annotations outside of the Spring Batch bundle you created, but it provides the modularity you'd be looking for by implementing this type of solution with OSGi.
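As a rough illustration of that wrapper idea (names are hypothetical; a sketch, not a drop-in solution), the bundle embedding the Spring Batch jars could expose something like:

import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.JobParametersBuilder;
import org.springframework.batch.core.launch.JobLauncher;

// Registered as an OSGi service by the bundle that embeds the batch jars, so every
// Spring Batch class is loaded by this one bundle's classloader.
public class BatchJobRunner {
    private final JobLauncher jobLauncher;

    public BatchJobRunner(JobLauncher jobLauncher) {
        this.jobLauncher = jobLauncher;
    }

    // Other bundles only call this method; they never see JobRepository and friends.
    public JobExecution runWithDate(Job job) throws Exception {
        JobParameters params = new JobParametersBuilder()
                .addString("date", java.time.LocalDate.now().toString())
                .toJobParameters();
        return jobLauncher.run(job, params);
    }
}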
[1] https://sling.apache.org/documentation/bundles/apache-sling-eventing-and-job-handling.html
[2] http://njbartlett.name/2015/03/27/announcing-bnd-maven-plugin.html