Resilience4j Retry - Memory consumption - Java

I am using resilience4j with Spring Boot 2.x.
What is the impact of the Retry and CircuitBreaker modules on memory and CPU?
Also, what is the memory impact if I have 2,000 events/s incoming, each payload around 10 MB, and I have set the retry wait duration to 15 seconds with an exponential backoff multiplier of 2?
I have 8 GB of application memory.

The best way is to monitor your application with a profiler like VisualVM. Then you can see where the bottleneck is.
One thing that I know matters is where you create your CircuitBreaker instance. There is eventually a collector that removes unused instances, but in your case it seems to be a good idea not to create the circuit breaker inside the request method, like here:
@Service
public static class DemoControllerService {

    private RestTemplate rest;
    private CircuitBreakerFactory cbFactory;

    public DemoControllerService(RestTemplate rest, CircuitBreakerFactory cbFactory) {
        this.rest = rest;
        this.cbFactory = cbFactory;
    }

    public String slow() {
        // A new CircuitBreaker instance is created on every request
        return cbFactory.create("slow")
                .run(() -> rest.getForObject("/slow", String.class), throwable -> "fallback");
    }
}
Instead, create the circuit breaker in the constructor:
@Service
public class DemoControllerService {

    private RestTemplate restTemplate = new RestTemplate();
    private CircuitBreakerFactory circuitBreakerFactory;
    private CircuitBreaker circuitBreaker;

    @Autowired
    public DemoControllerService(CircuitBreakerFactory circuitBreakerFactory) {
        this.circuitBreakerFactory = circuitBreakerFactory;
        // Created once per service instance instead of once per request
        this.circuitBreaker = circuitBreakerFactory.create("circuitbreaker");
    }
}
There are also discussions about creating one circuit breaker per host. Another thing you can do is remove the instance from the Registry yourself, instead of waiting for the circuit breaker component to remove it in the future:
registry.remove(circuitBreakerName);
There is also a discussion about cleaning up the Registry's memory.
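For illustration, a minimal sketch of that manual cleanup using the core Resilience4j registry API (the instance name "backend-a" and the call site are made up for the example):
CircuitBreakerRegistry registry = CircuitBreakerRegistry.ofDefaults();
CircuitBreaker cb = registry.circuitBreaker("backend-a"); // created and cached in the registry
// ... decorate and execute calls with cb ...
registry.remove("backend-a"); // drop the instance from the registry once it is no longer needed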

Related

@Retryable annotation not working for non-Spring-bean class method

I am new to spring-retry. Basically, for retrying calls to REST APIs, I have integrated spring-retry into my Spring Boot application. To do this, I made the following changes:
Added spring-retry to pom.xml.
Added the following configuration:
@Configuration
@EnableRetry
public class RetryConfiguration {
}
Finally, added the @Retryable annotation to the method of the class (this class is not a Spring bean) that I would like to be retried on various exceptions, as follows:
public class OAuth1RestClient extends OAuthRestClient {

    @Override
    @Retryable(maxAttempts = 3,
            value = { Exception.class },
            backoff = @Backoff(delay = 100, multiplier = 3))
    public Response executeRequest(OAuthRequest request)
            throws InterruptedException, ExecutionException, IOException {
        System.out.println("Inside Oauth1 client");
        return myService.execute(request);
    }
}
Now, the executeRequest method is not being retried, and I am not able to understand what I am missing here.
Could anyone please help? Thanks.
If your class is not Spring-managed (e.g. via @Component or @Bean), the annotation processor for @Retryable won't pick it up.
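One way to fix that is to make the class Spring-managed, so the retry proxy can wrap it. A minimal sketch (the no-arg constructor is an assumption; your real constructor may take dependencies):
@Configuration
public class OAuthClientConfiguration {

    // Register the client as a bean so the @Retryable proxy can be applied to it
    @Bean
    public OAuth1RestClient oAuth1RestClient() {
        return new OAuth1RestClient();
    }
}
The retry behavior then only applies when executeRequest() is called through the injected bean, not on a self-instantiated object.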
You can always manually define a RetryTemplate and wrap calls with it:
RetryTemplate retryTemplate = RetryTemplate.builder()
        .maxAttempts(2)
        .exponentialBackoff(100, 10, 1000)
        .retryOn(RestClientException.class)
        .traversingCauses()
        .build();
and then
retryTemplate.execute(context -> myService.execute(request));
If you want to retry on multiple exceptions, you can do it via a custom RetryPolicy:
Map<Class<? extends Throwable>, Boolean> exceptionsMap = new HashMap<>();
exceptionsMap.put(InternalServerError.class, true);
exceptionsMap.put(RestClientException.class, true);
SimpleRetryPolicy policy = new SimpleRetryPolicy(5, exceptionsMap, true);

RetryTemplate retryTemplate = RetryTemplate.builder()
        .customPolicy(policy)
        .exponentialBackoff(100, 10, 1000)
        .build();
FYI: RetryTemplate is blocking, so you might want to explore a non-blocking approach such as the async-retry library. Also note that retryOn() accepts a list of exceptions.
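For instance, a brief sketch of the list form, assuming the RetryTemplateBuilder overload that accepts a list:
RetryTemplate retryTemplate = RetryTemplate.builder()
        .maxAttempts(5)
        .retryOn(Arrays.asList(InternalServerError.class, RestClientException.class))
        .exponentialBackoff(100, 10, 1000)
        .build();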

Strategies to implement a callback mechanism / notification when all the asynchronous Spring Integration flows/threads have completed execution

I have a Spring Integration flow that gets triggered once a day; it pulls all parties from the database and sends each party to an executorChannel.
The next flow pulls data for each party and then processes the parties in parallel by sending them to a different executor channel.
The challenge I'm facing is how to know when this entire process ends. Any ideas on how to achieve this?
Here's my pseudo-code of the executor channels and integration flows:
@Bean
public IntegrationFlow fileListener() {
    return IntegrationFlows.from(Files.inboundAdapter(new File("pathtofile")))
            .channel("mychannel")
            .get();
}

@Bean
public IntegrationFlow flowOne() throws ParserConfigurationException {
    return IntegrationFlows.from("mychannel")
            .handle("serviceHandlerOne", "handle")
            .nullChannel();
}

@Bean
public IntegrationFlow parallelFlowOne() throws ParserConfigurationException {
    return IntegrationFlows.from("executorChannelOne")
            .handle("parallelServiceHandlerOne", "handle")
            .nullChannel();
}

@Bean
public IntegrationFlow parallelFlowTwo() throws ParserConfigurationException {
    return IntegrationFlows.from("executorChannelTwo")
            .handle("parallelServiceHandlerTwo", "handle")
            .nullChannel();
}

@Bean
public MessageChannel executorChannelOne() {
    return new ExecutorChannel(Executors.newFixedThreadPool(10));
}

@Bean
public MessageChannel executorChannelTwo() {
    return new ExecutorChannel(Executors.newFixedThreadPool(10));
}
@Component
@Scope("prototype")
public class ServiceHandlerOne {

    @Autowired
    MessageChannel executorChannelOne;

    @ServiceActivator
    public Message<?> handle(Message<?> message) {
        List<?> rowDatas = repository.findAll("parties");
        rowDatas.stream().forEach(data -> {
            // renamed so the lambda variable does not shadow the method parameter
            Message<?> partyMessage = MessageBuilder.withPayload(data).build();
            executorChannelOne.send(partyMessage);
        });
        return message;
    }
}
@Component
@Scope("prototype")
public class ParallelServiceHandlerOne {

    @Autowired
    MessageChannel executorChannelTwo;

    @ServiceActivator
    public Message<?> handle(Message<?> message) {
        List<?> rowDatas = repository.findAll("party");
        rowDatas.stream().forEach(data -> {
            Message<?> partyMessage = MessageBuilder.withPayload(data).build();
            executorChannelTwo.send(partyMessage);
        });
        return message;
    }
}
First of all, there is no reason to make your services @Scope("prototype"): I don't see any state held in your services, so they are stateless and can simply be singletons. Second, since you end your flows with nullChannel(), there is no point in returning anything from your service methods. Just make them void and the flow will end there naturally.
Another observation: you call executorChannelOne.send(message) directly in the code of your service method. The same would be achieved if you just returned that new message from your service method and had that executorChannelOne as the next .channel() in your flow definition after handle("parallelServiceHandlerOne", "handle").
Since it looks like you do that in a loop, you might consider adding a .split() in between: the handler returns your List<?> rowDatas, and the splitter takes care of iterating over that data and sending each item to that executorChannelOne.
Now about your original question.
There is really no easy way to tell that your executors are no longer busy. They might appear idle at the moment you check simply because a task's message has not reached an executor channel yet.
Typically we recommend using an async synchronizer for your data. The aggregator is a good way to correlate several in-flight messages: the aggregator collects a group and does not emit a reply until that group is complete.
The splitter I mentioned above adds sequence detail headers by default, so a subsequent aggregator can track a message group easily.
Since your flow has layers, it looks like you need several aggregators: two for your executor channels after splitting, and one top-level aggregator for the file. The two lower ones would reply to the top-level one for the final, per-file grouping.
You may also think about making those parties and party calls in parallel using a PublishSubscribeChannel, which can also be configured with applySequence = true. This info will then be used by the top-level aggregator for the per-file grouping.
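To make that concrete, here is a rough Java DSL sketch of the splitter/aggregator idea (channel names like "doneChannel" are illustrative, not from the original flows):
@Bean
public IntegrationFlow flowOne() {
    return IntegrationFlows.from("mychannel")
            .handle("serviceHandlerOne", "handle") // now returns List<?> instead of sending manually
            .split()                               // one message per party, with sequence headers
            .channel("executorChannelOne")
            .get();
}

@Bean
public IntegrationFlow parallelFlowOne() {
    return IntegrationFlows.from("executorChannelOne")
            .handle("parallelServiceHandlerOne", "handle")
            .aggregate()                           // correlates on the splitter's sequence headers
            .channel("doneChannel")                // emits one message when the whole group is done
            .get();
}
When the aggregator releases its group to "doneChannel", you know all the parallel work for that batch has completed.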
See more in docs:
https://docs.spring.io/spring-integration/docs/current/reference/html/core.html#channel-implementations-publishsubscribechannel
https://docs.spring.io/spring-integration/docs/current/reference/html/message-routing.html#splitter
https://docs.spring.io/spring-integration/docs/current/reference/html/message-routing.html#aggregator

Caching lookups on application startup doesn't work

I am using Spring Boot 1.5.9 on Tomcat 9.0.2, and I am trying to cache lookups using Spring's @Cacheable by scheduling a cache refresh job that runs on application startup and repeats every 24 hours, as follows:
@Component
public class RefreshCacheJob {

    private static final Logger logger = LoggerFactory.getLogger(RefreshCacheJob.class);

    @Autowired
    private CacheService cacheService;

    @Scheduled(fixedRate = 3600000 * 24, initialDelay = 0)
    public void refreshCache() {
        try {
            cacheService.refreshAllCaches();
        } catch (Exception e) {
            logger.error("Exception in RefreshCacheJob", e);
        }
    }
}
and the cache service is as follows:
@Service
public class CacheService {

    private static final Logger logger = LoggerFactory.getLogger(CacheService.class);

    @Autowired
    private CouponTypeRepository couponTypeRepository;

    @CacheEvict(cacheNames = Constants.CACHE_NAME_COUPONS_TYPES, allEntries = true)
    public void clearCouponsTypesCache() {}

    public void refreshAllCaches() {
        clearCouponsTypesCache();
        List<CouponType> couponTypeList = couponTypeRepository.getCoupons();
        logger.info("######### couponTypeList: " + couponTypeList.size());
    }
}
the repository code:
public interface CouponTypeRepository extends JpaRepository<CouponType, BigInteger> {

    @Query("from CouponType where active=true and expiryDate > CURRENT_DATE order by priority")
    @Cacheable(cacheNames = Constants.CACHE_NAME_COUPONS_TYPES)
    List<CouponType> getCoupons();
}
Later, in my web service, I try to get the lookup as follows:
@GET
@Produces(MediaType.APPLICATION_JSON + ";charset=utf-8")
@Path("/getCoupons")
@ApiOperation(value = "")
public ServiceResponse getCoupons(@HeaderParam("token") String token, @HeaderParam("lang") String lang) throws Exception {
    try {
        List<CouponType> couponsList = couponRepository.getCoupons();
        logger.info("###### couponsList: " + couponsList.size());
        return new ServiceResponse(ErrorCodeEnum.SUCCESS_CODE, couponsList, errorCodeRepository, lang);
    } catch (Exception e) {
        logger.error("Exception in getCoupons webservice: ", e);
        return new ServiceResponse(ErrorCodeEnum.SYSTEM_ERROR_CODE, errorCodeRepository, lang);
    }
}
On the first call, the web service gets the lookup from the database, and only subsequent calls get it from the cache, even though it should come from the cache on the first call too, since the scheduled job has already populated it.
Why am I getting this behavior, and how can I fix it?
The issue was fixed after upgrading to Tomcat 9.0.4.
While it doesn't affect the scheduled task per se, when refreshAllCaches() is invoked in the CacheService, the @CacheEvict on clearCouponsTypesCache() is bypassed, because it is invoked from within the same class (see this answer). This leads to the cache not being purged before
List<CouponType> couponTypeList = couponTypeRepository.getCoupons();
is invoked. As a result, the @Cacheable getCoupons() method will not query the database, but will instead return values from the cache.
This means the scheduled cache refresh does its work properly only once, when the cache is still empty. After that it is useless.
The @CacheEvict annotation should be moved to the refreshAllCaches() method with the beforeInvocation = true parameter added, so the cache is purged before being repopulated, not after.
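A minimal sketch of that change applied to the service from the question:
@Service
public class CacheService {

    @Autowired
    private CouponTypeRepository couponTypeRepository;

    // Purge the cache before the method body runs, then repopulate it
    @CacheEvict(cacheNames = Constants.CACHE_NAME_COUPONS_TYPES, allEntries = true, beforeInvocation = true)
    public void refreshAllCaches() {
        couponTypeRepository.getCoupons(); // @Cacheable call now hits the database and refills the cache
    }
}
Since the eviction is handled by the proxy before refreshAllCaches() executes, the self-invocation problem disappears.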
Also, when using Spring 4 / Spring Boot 1.x, these bugs should be taken into consideration:
https://github.com/spring-projects/spring-boot/issues/8331
https://jira.spring.io/browse/SPR-15271
While those bugs don't seem to affect this specific program, it might be a good idea to keep the @Cacheable annotation separate from the JpaRepository interface until migrating to Spring 5 / Spring Boot 2.x.
@CacheEvict won't be applied when the annotated method is called from within the same service. This is because Spring creates a proxy around the service, and only calls coming from "outside" go through the caching proxy.
The solution is to either add
@CacheEvict(cacheNames = Constants.CACHE_NAME_COUPONS_TYPES, allEntries = true)
to refreshAllCaches() as well, or to move refreshAllCaches() into a new service that calls ICacheService.clearCouponsTypeCache().

How to implement a round-robin queue consumer in Spring Boot

I am building a message-driven service in Spring that will run in a cluster and needs to pull messages from a RabbitMQ queue in a round-robin manner. The implementation currently pulls messages off the queue on a first-come basis, leading to some servers getting backed up while others sit idle.
The current QueueConsumerConfiguration.java looks like:
@Configuration
public class QueueConsumerConfiguration extends RabbitMqConfiguration {

    private Logger LOG = LoggerFactory.getLogger(QueueConsumerConfiguration.class);

    private static final int DEFAULT_CONSUMERS = 2;

    @Value("${eventservice.inbound}")
    protected String inboudEventQueue;

    @Value("${eventservice.consumers}")
    protected int queueConsumers;

    @Autowired
    private EventHandler eventtHandler;

    @Bean
    public RabbitTemplate rabbitTemplate() {
        RabbitTemplate template = new RabbitTemplate(connectionFactory());
        template.setRoutingKey(this.inboudEventQueue);
        template.setQueue(this.inboudEventQueue);
        template.setMessageConverter(jsonMessageConverter());
        return template;
    }

    @Bean
    public Queue inboudEventQueue() {
        return new Queue(this.inboudEventQueue);
    }

    @Bean
    public SimpleMessageListenerContainer listenerContainer() {
        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
        container.setConnectionFactory(connectionFactory());
        container.setQueueNames(this.inboudEventQueue);
        container.setMessageListener(messageListenerAdapter());
        if (this.queueConsumers > 0) {
            LOG.info("Starting queue consumers:" + this.queueConsumers);
            container.setMaxConcurrentConsumers(this.queueConsumers);
            container.setConcurrentConsumers(this.queueConsumers);
        } else {
            LOG.info("Starting default queue consumers:" + DEFAULT_CONSUMERS);
            container.setMaxConcurrentConsumers(DEFAULT_CONSUMERS);
            container.setConcurrentConsumers(DEFAULT_CONSUMERS);
        }
        return container;
    }

    @Bean
    public MessageListenerAdapter messageListenerAdapter() {
        return new MessageListenerAdapter(this.eventtHandler, jsonMessageConverter());
    }
}
Is it a case of just adding
container.setChannelTransacted(true);
to the configuration?
RabbitMQ treats all consumers the same: it makes no distinction between multiple consumers in one container and one consumer in each of multiple containers (e.g. on different hosts). Each is simply a consumer from Rabbit's perspective.
If you want more control over server affinity, you need to use multiple queues with each container listening to its own queue.
You then control the distribution on the producer side - e.g. using a topic or direct exchange and specific routing keys to route messages to a specific queue.
This tightly binds the producer to the consumers (it has to know how many there are).
Or you could have your producer use routing keys rk.0, rk.1, ..., rk.29 (repeatedly, resetting to 0 when 30 is reached).
Then you can bind the consumer queues with multiple bindings: consumer 1 gets rk.0 to rk.9, consumer 2 gets rk.10 to rk.19, and so on.
If you later decide to increase the number of consumers, just refactor the bindings appropriately to redistribute the work, as sketched below.
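A hedged sketch of such bindings using Spring AMQP's BindingBuilder and Declarables (the exchange and queue names are invented for the example):
// Bind consumer 1's queue to its slice of the routing-key space, rk.0 through rk.9
@Bean
public Declarables consumerOneBindings(DirectExchange eventsExchange, Queue consumerOneQueue) {
    List<Declarable> bindings = new ArrayList<>();
    for (int i = 0; i <= 9; i++) {
        bindings.add(BindingBuilder.bind(consumerOneQueue).to(eventsExchange).with("rk." + i));
    }
    return new Declarables(bindings);
}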
The container will scale up to maxConcurrentConsumers on demand but, in practice, scaling down only occurs when the entire container has been idle for some time.

Cache implementation on DAO with custom refresh and evictions - Java

In my application, I have a scenario where I have to refresh the cache every 24 hours.
I'm expecting database downtime, so I need the refresh to happen after 24 hours only if the database is up and running.
I'm using Spring with Ehcache, and I have implemented a simple cache that refreshes every 24 hours, but I'm unable to get my head around how to retain the cached data during database downtime.
Conceptually, you could split the scheduling and the cache eviction into two modules and only clear your cache if a certain condition (in this case, the database's health check returning true) is met:
SomeCachedService.java:
class SomeCachedService {

    @Autowired
    private YourDao dao;

    @Cacheable("your-cache")
    public YourData getData() {
        return dao.queryForData();
    }

    @CacheEvict("your-cache")
    public void evictCache() {
        // no body needed
    }
}
CacheMonitor.java:
class CacheMonitor {

    @Autowired
    private SomeCachedService service;

    @Autowired
    private YourDao dao;

    // Annotation attributes must be compile-time constants,
    // so TimeUnit.DAYS.toMillis(1) cannot be used here.
    @Scheduled(fixedDelay = 24 * 60 * 60 * 1000L)
    public void conditionallyClearCache() {
        if (dao.isDatabaseUp()) {
            service.evictCache();
        }
    }
}
Ehcache also allows you to create a custom eviction algorithm, but the documentation doesn't seem too helpful in this case.
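If you do explore that route, here is a rough sketch assuming Ehcache 3's programmatic EvictionAdvisor API (YourData and its isStillFresh() check are hypothetical; details may differ by version):
CacheConfiguration<String, YourData> config = CacheConfigurationBuilder
        .newCacheConfigurationBuilder(String.class, YourData.class, ResourcePoolsBuilder.heap(100))
        // advise the cache against evicting entries that are still considered fresh
        .withEvictionAdvisor((key, value) -> value.isStillFresh())
        .build();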
