Resilience4j Retry - logging retry attempts from client? - java

Is it possible to log retry attempts on the client side with resilience4j, please? Maybe via some kind of configuration or setting.
Currently I am using resilience4j with Spring Boot WebFlux, annotation based.
It is working great; the project is amazing.
We already have logs on the server side showing that the same HTTP call was made because of a retry (we log time, client IP, request ID, etc.). Would it be possible to have client-side logs too?
I was expecting to see something like "Resilience4j - client side: 1st attempt failed because of someException, retrying with attempt number 2. 2nd attempt failed because of someException, retrying with attempt number 3. 3rd attempt successful!"
Something like that. Is there a property, some config, or some setup that can help do this easily, please? Without adding too much boilerplate code.
@RestController
public class TestController {

    private final WebClient webClient;

    public TestController(WebClient.Builder webClientBuilder) {
        this.webClient = webClientBuilder.baseUrl("http://localhost:8443/serviceBgreeting").build();
    }

    @GetMapping("/greeting")
    public Mono<String> greeting() {
        System.out.println("Greeting method is invoked ");
        return someRestCall();
    }

    @Retry(name = "greetingRetry")
    public Mono<String> someRestCall() {
        return this.webClient.get().retrieve().bodyToMono(String.class);
    }
}
Thank you

Fortunately (or unfortunately) there is an undocumented feature :)
You can add a RegistryEventConsumer Bean in order to add event consumers to any Retry instance.
@Bean
public RegistryEventConsumer<Retry> myRetryRegistryEventConsumer() {
    return new RegistryEventConsumer<Retry>() {
        @Override
        public void onEntryAddedEvent(EntryAddedEvent<Retry> entryAddedEvent) {
            entryAddedEvent.getAddedEntry().getEventPublisher()
                .onEvent(event -> LOG.info(event.toString()));
        }

        @Override
        public void onEntryRemovedEvent(EntryRemovedEvent<Retry> entryRemovedEvent) {
        }

        @Override
        public void onEntryReplacedEvent(EntryReplacedEvent<Retry> entryReplacedEvent) {
        }
    };
}
Log entries look as follows:
2020-10-26T13:00:19.807034700+01:00[Europe/Berlin]: Retry 'backendA', waiting PT0.1S until attempt '1'. Last attempt failed with exception 'org.springframework.web.client.HttpServerErrorException: 500 This is a remote exception'.
2020-10-26T13:00:19.912028800+01:00[Europe/Berlin]: Retry 'backendA', waiting PT0.1S until attempt '2'. Last attempt failed with exception 'org.springframework.web.client.HttpServerErrorException: 500 This is a remote exception'.
2020-10-26T13:00:20.023250+01:00[Europe/Berlin]: Retry 'backendA' recorded a failed retry attempt. Number of retry attempts: '3'. Giving up. Last exception was: 'org.springframework.web.client.HttpServerErrorException: 500 This is a remote exception'.

There seems to be a lot of information about this on the web if you Google for "resilience4j retry example logging". I found this as a potential solution:
RetryConfig config = RetryConfig.ofDefaults();
RetryRegistry registry = RetryRegistry.of(config);
Retry retry = registry.retry("flightSearchService", config);
...
Retry.EventPublisher publisher = retry.getEventPublisher();
publisher.onRetry(event -> System.out.println(event.toString()));
where you can register a callback to get an event whenever a retry occurs. This came from https://reflectoring.io/retry-with-resilience4j.
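To make the callback pattern concrete without any framework on the classpath, here is a plain-Java sketch of the same idea: a retry loop that reports every failed attempt to a registered consumer. `RetryLoggingSketch` and its method names are mine, not Resilience4j API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.function.Consumer;

// Framework-free sketch of the onRetry callback pattern: every failed
// attempt is reported to a registered consumer before the next try.
public class RetryLoggingSketch {

    static <T> T callWithRetry(Callable<T> task, int maxAttempts,
                               Consumer<String> onRetry) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return task.call();
            } catch (Exception e) {
                last = e;
                if (attempt < maxAttempts) {
                    onRetry.accept("attempt " + attempt + " failed with "
                            + e.getClass().getSimpleName() + ", retrying");
                }
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        List<String> log = new ArrayList<>();
        int[] calls = {0};
        // Fails twice, then succeeds -- mirrors the hoped-for log output.
        String result = callWithRetry(() -> {
            if (++calls[0] < 3) throw new IllegalStateException("boom");
            return "ok";
        }, 3, log::add);
        System.out.println(result);      // ok
        System.out.println(log.size()); // 2
    }
}
```

With Resilience4j itself you do not write this loop; you only register the consumer, as in the snippet above.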

Configured with application.properties, and using the @Retry annotation, I managed to get some output with
resilience4j.retry.instances.myRetry.maxAttempts=3
resilience4j.retry.instances.myRetry.waitDuration=1s
resilience4j.retry.instances.myRetry.enableExponentialBackoff=true
resilience4j.retry.instances.myRetry.exponentialBackoffMultiplier=2
resilience4j.retry.instances.myRetry.retryExceptions[0]=java.lang.Exception
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Service;
import io.github.resilience4j.retry.RetryRegistry;
import io.github.resilience4j.retry.annotation.Retry;
@Service
public class MyService {

    private static final Logger LOG = LoggerFactory.getLogger(MyService.class);

    public MyService(RetryRegistry retryRegistry) {
        // all
        retryRegistry.getAllRetries()
            .forEach(retry -> retry
                .getEventPublisher()
                .onRetry(event -> LOG.info("{}", event))
            );
        // or single
        retryRegistry
            .retry("myRetry")
            .getEventPublisher()
            .onRetry(event -> LOG.info("{}", event));
    }

    @Retry(name = "myRetry")
    public void doSomething() {
        throw new RuntimeException("It failed");
    }
}
e.g.:
2021-03-31T07:42:23 [http-nio-8083-exec-1] INFO [myService] - 2021-03-31T07:42:23.228892500Z[UTC]: Retry 'myRetry', waiting PT1S until attempt '1'. Last attempt failed with exception 'java.lang.RuntimeException: It failed'.
2021-03-31T07:42:24 [http-nio-8083-exec-1] INFO [myService] - 2021-03-31T07:42:24.231504600Z[UTC]: Retry 'myRetry', waiting PT2S until attempt '2'. Last attempt failed with exception 'java.lang.RuntimeException: It failed'.

Related

Can I configure the pool size of a specific Quarkus Vertx ConsumeEvent?

The parameter quarkus.vertx.worker-pool-size allows me to configure the "Thread size of the worker thread pool", according to the Quarkus guide "All configuration options".
Is it possible to configure the pool size of a specific Quarkus ConsumeEvent like this:
@io.quarkus.vertx.ConsumeEvent(value = "my-consume-event", blocking = true)
public void start(String value) {
    // do the work
}
I would like to set the number of threads that can process this my-consume-event without changing the global quarkus.vertx.worker-pool-size.
SmallRye Reactive Messaging example of configurable thread pool
The SmallRye Reactive Messaging guide has an example of what I want to do.
There, I can use a Blocking annotation, give it a name, and configure the thread pool:
@Outgoing("Y")
@Incoming("X")
@Blocking("my-custom-pool")
public String process(String s) {
    return s.toUpperCase();
}
Specifying the concurrency for the above worker pool requires the following configuration property to be defined:
smallrye.messaging.worker.my-custom-pool.max-concurrency=3
In this example, I can configure the size of the thread pool that will process messages on my-custom-pool.
Thanks
Edit: my attempt with @io.smallrye.common.annotation.Blocking("my-custom-pool")
I tried to pass a value to the @io.smallrye.common.annotation.Blocking annotation, but I receive the following error:
The attribute value is undefined for the annotation type Blocking
I'm using this dependency:
<dependency>
<groupId>io.quarkus</groupId>
<artifactId>quarkus-vertx</artifactId>
</dependency>
I also created this project on my GitHub account to do this test.
Class that does the test
package org.acme;
import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.core.Response;
import org.jboss.logging.Logger;
import io.vertx.core.eventbus.EventBus;
@Path("/hello")
public class GreetingResource {

    private static final Logger LOG = Logger.getLogger(GreetingResource.class);

    @Inject
    EventBus eventBus;

    @GET
    public Response hello() {
        LOG.info("hello()");
        eventBus.send("my-consume-event", null);
        return Response
                .status(Response.Status.ACCEPTED)
                .build();
    }

    @io.quarkus.vertx.ConsumeEvent("my-consume-event")
    // @io.smallrye.common.annotation.Blocking("my-custom-pool")
    @io.smallrye.common.annotation.Blocking
    public void start(String value) {
        try {
            LOG.info("before the sleep");
            Thread.sleep(5000);
            LOG.info("after the sleep");
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
You should be able to use the @io.smallrye.common.annotation.Blocking annotation:
@ConsumeEvent("my-consume-event")
@Blocking("my-custom-pool")
public void start(String value) {
    // do the work
}
And configure your pool size in application.properties:
smallrye.messaging.worker.my-custom-pool.max-concurrency=3
EDIT
Actually, the @io.smallrye.reactive.messaging.annotations.Blocking annotation is not supported on methods annotated with @ConsumeEvent.
Also, according to "ConsumeEvent with Blocking Threading on wrong ExecutorService #19911", it seems events are executed on the default Quarkus executor.
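Given that limitation, one framework-free workaround idea is to cap the concurrency of a specific handler yourself with a Semaphore inside the handler method, leaving the global worker pool size untouched. This is a sketch in plain Java, not Quarkus API; `BoundedHandler` is a made-up name.

```java
import java.util.concurrent.Semaphore;

// Sketch: cap how many threads may run one specific handler at a time,
// independent of the size of the shared worker pool.
public class BoundedHandler {

    private final Semaphore permits;

    BoundedHandler(int maxConcurrency) {
        this.permits = new Semaphore(maxConcurrency);
    }

    void handle(Runnable work) throws InterruptedException {
        permits.acquire();   // blocks while maxConcurrency handlers are busy
        try {
            work.run();
        } finally {
            permits.release();
        }
    }
}
```

In a @ConsumeEvent method you would acquire/release the semaphore around the blocking work; the downside is that waiting handlers still occupy a worker thread.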

Need to return value from scheduled method in Spring MVC

I am writing a scheduler in my web application for notification purposes. Its task is simple: it hits a third-party centralised database and checks for the availability of data, returning true if data is available and false otherwise.
But I am stuck here. I want to show a notification based on the result (true/false) returned by my scheduler, but I cannot figure out how to implement this. I thought of binding the variable in the session, but because it is a timed event, a session is not possible here.
Suppose the scheduler returns true. I want this value inside my JSP page (the dashboard page), where I can show the message "Data is available" in the user's dashboard. I need this value for a condition like
if(true)
"data is available"
else
no notification
Please see my code and advise.
package com.awzpact.uam.scheduler;
import com.awzpact.prayas.dao.HRMSPickSalaryDataDAO;
import com.awzpact.uam.domain.SalaryDetailReport;
import java.util.List;
import javax.servlet.http.HttpSession;
import org.apache.log4j.Logger;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
@Component
public class PayrollDataNotificationScheduler {

    private static boolean AVAIL_STATUS = false;
    private static final Logger LOGGER = Logger.getLogger(PayrollDataNotificationScheduler.class);

    @Autowired
    HRMSPickSalaryDataDAO salaryDataDAO;

    public boolean checkDataAvailability() {
        try {
            List<SalaryDetailReport> list = salaryDataDAO.findAll();
            if (list.size() > 0) {
                AVAIL_STATUS = true;
                return AVAIL_STATUS;
            }
            return false;
        } catch (Exception e) {
            e.printStackTrace();
            LOGGER.info("Data is not available for migrate");
            return false;
        }
    }
}
Run your scheduled task periodically; if some data is retrieved, save it to your DB.
// in your scheduled @Component
@Autowired
private SomeDataDAO someDataDAO;

@Scheduled(cron = "...")
public void fetchThirdPartyData() {
    SomeData thirdPartyData = getThirdPartyData();
    someDataDAO.save(thirdPartyData);
}

private SomeData getThirdPartyData() {
    // calling their API...
}
Then create a controller that reads the data from the DB if it exists (note the Optional interface - you can use this in your DAO method):
// a rest controller
@RestController
@RequestMapping("/someData")
public class SomeController {

    @Autowired
    private SomeDataDAO someDataDAO;

    @GetMapping
    public SomeData getSomeData() {
        return someDataDAO.getSomeData().orElse(null);
    }
}
Now, in your frontend, you make an AJAX call (depending on what you're using there), and then you can do your check and print the message.
Scheduling means that you want to perform some action on a schedule.
Waiting for a response looks more like request/response communication between client and server.
To check whether data is available, it's better to use a simple method invocation via a REST controller and not use a scheduler at all.
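To make the suggested shape concrete, here is a framework-free sketch of the same idea: a background task refreshes a flag, and the request side just reads the latest value. `AvailabilityCache` is a made-up name; in Spring the refresh would be a @Scheduled method and the read side a controller endpoint.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.function.Supplier;

// Sketch of the answer's idea without Spring: a scheduled task refreshes
// a flag, and the "controller" side reads the latest value on demand.
public class AvailabilityCache {

    private final AtomicBoolean available = new AtomicBoolean(false);
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    void start(Supplier<Boolean> check, long periodSeconds) {
        scheduler.scheduleAtFixedRate(
                () -> available.set(check.get()), 0, periodSeconds, TimeUnit.SECONDS);
    }

    boolean isAvailable() {   // what a GET endpoint would return to the JSP
        return available.get();
    }

    void stop() {
        scheduler.shutdownNow();
    }
}
```

The dashboard then polls the read side (e.g. via AJAX) instead of trying to get a return value out of the scheduler itself.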

Issue testing spring cloud SQS Listener

Environment
Spring Boot: 1.5.13.RELEASE
Cloud: Edgware.SR3
Cloud AWS: 1.2.2.RELEASE
Java 8
OSX 10.13.4
Problem
I am trying to write an integration test for SQS.
I have a local running localstack docker container with SQS running on TCP/4576
In my test code I define an SQS client with the endpoint set to local 4576 and can successfully connect and create a queue, send a message and delete a queue. I can also use the SQS client to receive messages and pick up the message that I sent.
My problem is that if I remove the code that manually receives the message, in order to allow another component to get it, nothing seems to happen. I have a Spring component annotated as follows:
Listener
@Component
public class MyListener {

    @SqsListener(value = "my_queue", deletionPolicy = ON_SUCCESS)
    public void receive(final MyMsg msg) {
        System.out.println("GOT THE MESSAGE: " + msg.toString());
    }
}
Test
@RunWith(SpringRunner.class)
@SpringBootTest(properties = "spring.profiles.active=test")
public class MyTest {

    @Autowired
    private AmazonSQSAsync amazonSQS;

    @Autowired
    private SimpleMessageListenerContainer container;

    private String queueUrl;

    @Before
    public void setUp() {
        queueUrl = amazonSQS.createQueue("my_queue").getQueueUrl();
    }

    @After
    public void tearDown() {
        amazonSQS.deleteQueue(queueUrl);
    }

    @Test
    public void name() throws InterruptedException {
        amazonSQS.sendMessage(new SendMessageRequest(queueUrl, "hello"));
        System.out.println("isRunning:" + container.isRunning());
        System.out.println("isActive:" + container.isActive());
        System.out.println("isRunningOnQueue:" + container.isRunning("my_queue"));
        Thread.sleep(30_000);
        System.out.println("GOT MESSAGE: " + amazonSQS.receiveMessage(queueUrl).getMessages().size());
    }

    @TestConfiguration
    @EnableSqs
    public static class SQSConfiguration {

        @Primary
        @Bean(destroyMethod = "shutdown")
        public AmazonSQSAsync amazonSQS() {
            final AwsClientBuilder.EndpointConfiguration endpoint = new AwsClientBuilder.EndpointConfiguration("http://127.0.0.1:4576", "eu-west-1");
            return new AmazonSQSBufferedAsyncClient(AmazonSQSAsyncClientBuilder
                    .standard()
                    .withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials("key", "secret")))
                    .withEndpointConfiguration(endpoint)
                    .build());
        }
    }
}
In the test logs I see:
o.s.c.a.m.listener.QueueMessageHandler : 1 message handler methods found on class MyListener: {public void MyListener.receive(MyMsg)=org.springframework.cloud.aws.messaging.listener.QueueMessageHandler$MappingInformation#1cd4082a}
2018-05-31 22:50:39.582 INFO 16329 --- o.s.c.a.m.listener.QueueMessageHandler : Mapped "org.springframework.cloud.aws.messaging.listener.QueueMessageHandler$MappingInformation#1cd4082a" onto public void MyListener.receive(MyMsg)
Followed by:
isRunning:true
isActive:true
isRunningOnQueue:false
GOT MESSAGE: 1
This demonstrates that during the 30-second pause after sending the message the container didn't pick it up, and when I manually poll for the message it is still on the queue and I can consume it.
My question is, why isn't the listener being invoked and why is the isRunningOnQueue:false line suggesting that it's not auto started for that queue?
Note that I also tried setting my own SimpleMessageListenerContainer bean with auto-startup set to true explicitly (the default anyway) and observed no change in behaviour. I thought that the org.springframework.cloud.aws.messaging.config.annotation.SqsConfiguration#simpleMessageListenerContainer that is set up by @EnableSqs ought to configure an auto-started SimpleMessageListenerContainer that should be polling for the message.
I have also set
logging.level.org.apache.http=DEBUG
logging.level.org.springframework.cloud=DEBUG
in my test properties and can see the HTTP calls that create the queue, send a message, delete, etc., but no HTTP calls to receive (apart from my manual one at the end of the test).
I figured this out after some tinkering.
Even if the simple message container factory is set not to auto start, it seems to do its initialisation anyway, which involves determining whether the queue exists.
In this case, the queue is created in my test's setup method - but sadly this runs after the Spring context is set up, which means an exception occurs.
I fixed this by simply moving the queue creation into the creation of the SQS client bean (which happens before the message container is created), i.e.:
@Bean(destroyMethod = "shutdown")
public AmazonSQSAsync amazonSQS() {
    final AwsClientBuilder.EndpointConfiguration endpoint = new AwsClientBuilder.EndpointConfiguration("http://localhost:4576", "eu-west-1");
    final AmazonSQSBufferedAsyncClient client = new AmazonSQSBufferedAsyncClient(AmazonSQSAsyncClientBuilder
            .standard()
            .withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials("dummyKey", "dummySecret")))
            .withEndpointConfiguration(endpoint)
            .build());
    client.createQueue("test-queue");
    return client;
}

Hystrix Javanica AsyncResult Future.get Throwing Exception

I have a Spring Cloud setup running on my local Tomcat. I am using a Feign client to invoke a remote service wrapped inside a Hystrix command, one direct and the other async, as below.
@HystrixCommand(fallbackMethod = "fallBackEmployeeCall")
public List<EmployeeBean> getEmployees() {
    // Call through Feign client
    return empInterface.getEmployees();
}

// Async version
@HystrixCommand(fallbackMethod = "fallBackEmployeeCall")
public Future<List<EmployeeBean>> getEmployeesAsync() {
    return new AsyncResult<List<EmployeeBean>>() {
        @Override
        public List<EmployeeBean> invoke() {
            return empInterface.getEmployees();
        }
    };
}
When I call getEmployeesAsync().get(), I get the exception below:
java.lang.UnsupportedOperationException: AsyncResult is just a stab and cannot be used as complete implementation of Future
It is similar to this issue: https://github.com/Netflix/Hystrix/issues/1179
According to the docs, the solution is to configure the HystrixCommandAspect class, which I did as below:
@Configuration
@EnableAspectJAutoProxy
public class HystrixConfiguration {

    @Bean
    public HystrixCommandAspect hystrixAspect() {
        return new HystrixCommandAspect();
    }
}
But I am still getting the same exception. It seems I am missing some configuration.
Note: my sync method is working fine.
You can try calling getEmployeesAsync from another class that has an instance of the class containing getEmployeesAsync injected into it. I had this exception too, and doing it this way worked for me.
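The likely reason this works is Spring AOP self-invocation: the @HystrixCommand aspect is applied by a proxy, and a this.getEmployeesAsync() call from inside the same bean bypasses that proxy. Here is a minimal plain-Java sketch of the effect, with a JDK dynamic proxy standing in for the Spring proxy (all names here are mine, not Hystrix or Spring API):

```java
import java.lang.reflect.Proxy;

// Calls entering through the proxy get the "aspect" applied; a
// this.method() self-call inside the target object skips it.
public class SelfInvocationDemo {

    public interface Service {
        String work();
        String viaSelf();
    }

    public static class ServiceImpl implements Service {
        public String work() { return "raw"; }
        public String viaSelf() { return this.work(); }  // bypasses any proxy
    }

    public static Service proxied(Service target) {
        return (Service) Proxy.newProxyInstance(
                Service.class.getClassLoader(), new Class<?>[]{Service.class},
                (p, m, a) -> {
                    Object r = m.invoke(target, a);
                    // "aspect" wraps only the work() method
                    return "work".equals(m.getName()) ? "wrapped:" + r : r;
                });
    }

    public static void main(String[] args) {
        Service s = proxied(new ServiceImpl());
        System.out.println(s.work());     // wrapped:raw  (aspect applied)
        System.out.println(s.viaSelf());  // raw          (aspect skipped)
    }
}
```

Moving the @HystrixCommand method into another bean means the call always enters through the proxy, so the aspect can replace the stub AsyncResult with a real Future.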

Apache Camel 2.16 enrich - No consumers available on endpoint in JUnit

I upgraded to camel 2.16 and one of my route Unit Tests started failing.
Here is my route definition:
public class Route extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        from(start).enrich("second");

        from("direct:second")
            .log(LoggingLevel.DEBUG, "foo", "Route [direct:second] started.");
    }
}
Here is my test:
@RunWith(MockitoJUnitRunner.class)
public class RouteTest extends CamelTestSupport {

    private Route builder;

    @Produce(uri = "direct:start")
    protected ProducerTemplate template;

    @Before
    public void config() {
        BasicConfigurator.configure();
    }

    @Override
    protected RouteBuilder createRouteBuilder() {
        builder = new Route();
        return builder;
    }

    @Override
    protected CamelContext createCamelContext() throws Exception {
        SimpleRegistry registry = new SimpleRegistry();
        return new DefaultCamelContext(registry);
    }

    @Test
    public void testPrimeRouteForSubscriptionId() {
        Exchange exchange = ExchangeBuilder.anExchange(new DefaultCamelContext()).build();
        exchange.getIn().setBody(new String("test"));
        template.send(exchange);
    }
}
The error I'm getting when I run the test is:
org.apache.camel.component.direct.DirectConsumerNotAvailableException: No consumers available on endpoint: Endpoint[direct://second]. Exchange[][Message: test]
Worthy of note is the following line in the camel 2.16 notes:
http://camel.apache.org/camel-2160-release.html
The resourceUri and resourceRef attributes on enrich and pollEnrich have been removed, as these EIPs now support dynamic URIs computed from an Expression.
Thanks in advance for any help.
Swap the order so that the direct route is started before the enrich.
http://camel.apache.org/configuring-route-startup-ordering-and-autostartup.html
Or use seda instead of direct in your unit test: http://camel.apache.org/seda
Or use ?block=true in the direct uri to tell Camel to block and wait for a consumer to be started and ready before it sends a message to it: http://camel.apache.org/direct
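For intuition, ?block=true essentially makes the producer wait until a consumer has attached instead of failing fast with DirectConsumerNotAvailableException. A framework-free sketch of that behaviour (`BlockingEndpoint` is a made-up name, not Camel internals):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

// Sketch of what direct:...?block=true buys you: the producer waits
// until a consumer has registered instead of failing immediately.
public class BlockingEndpoint {

    private volatile Consumer<String> consumer;
    private final CountDownLatch ready = new CountDownLatch(1);

    void setConsumer(Consumer<String> c) {
        this.consumer = c;
        ready.countDown();
    }

    void send(String body, long timeoutMs) throws InterruptedException {
        if (!ready.await(timeoutMs, TimeUnit.MILLISECONDS)) {
            throw new IllegalStateException("No consumers available on endpoint");
        }
        consumer.accept(body);
    }
}
```

In Camel the registration happens when the consuming route starts, which is why startup ordering alone can also fix the problem.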
This is a somewhat old issue, but since I pulled most of my hair out last night trying to figure out why it was OK to use to("direct:myEndpoint") but not enrich("direct:myEndpoint"), I'll post the answer anyway - maybe it'll save somebody else from getting bald spots ;-)
It turns out to be a test issue. In the case of direct endpoints, enrich checks whether there is a running route in the context before passing the Exchange to it, but it does so by looking at the CamelContext held by the Exchange it is currently handling. Since you passed your ProducerTemplate an Exchange that was created with a new DefaultCamelContext(), it has no "direct:second" route available.
Luckily there are a couple of simple solutions. Either create the Exchange using the CamelContext from CamelTestSupport, or use the ProducerTemplate sendBody(...) method instead:
@Test
public void testWithSendBody() {
    template.sendBody(new String("test"));
}

@Test
public void testPrimeRouteForSubscriptionId() {
    Exchange exchange = ExchangeBuilder.anExchange(context()).build();
    exchange.getIn().setBody(new String("test"));
    template.send(exchange);
}
My blueprint test kept throwing the same "No consumers available" exception.
My scenario was that I had an OSGi service which exposed a method that could be called from any other OSGi service.
The exposed service method makes a call to a direct endpoint:
@EndpointInject(uri = "direct-vm:toRestCall")
ProducerTemplate toRestCall;

public void svcMethod(Exchange exchange) {
    exchange.setOut(
        toRestCall.send("seda:toDirectCall", xch -> {
            try {
                xch.getIn().setBody("abc");
            } catch (Exception ex) {
                ex.getMessage();
            }
        }).getIn());
}
And when I tested the direct endpoint it calls, the Blueprint advice with JUnit kept throwing the following exception:
org.apache.camel.component.direct.DirectConsumerNotAvailableException:
No consumers available on endpoint: Endpoint. Exchange[Message: {..........
