How to simulate timeout in response to a Rest request in Spring? - java

I have a REST API implemented with Spring Boot 2. To check some client behavior on timeout, how can I simulate that condition in my testing environment? The server should still receive and process the request normally (in production, timeouts happen due to random network slowdowns and large response payloads).
Would adding a long sleep be the proper simulation technique? Is there any better method to really have the server "drop" the response?

Needing sleeps to test your code is considered bad practice. Instead, replicate the exception the client receives on a timeout. With RestTemplate.exchange, the underlying java.net.SocketTimeoutException arrives wrapped in a ResourceAccessException (and Mockito will refuse to throw a checked exception the mocked method does not declare, so throw the wrapper).
Then you can write a test as such:
public class FooTest {

    @Mock
    RestTemplate restTemplate;

    @Before
    public void setup() {
        // RestTemplate wraps the socket timeout in an unchecked ResourceAccessException
        when(restTemplate.exchange(...))
            .thenThrow(new ResourceAccessException("timeout",
                    new java.net.SocketTimeoutException()));
    }

    @Test
    public void test() {
        // TODO
    }
}
That way you won't be twiddling your thumbs waiting for something to happen.

Sleep is one way to do it, but if you're writing dozens of tests like that then having to wait for a sleep will create a really long-running test suite.
The alternative would be to lower the timeout threshold on the client side for testing. If in production your client is supposed to wait 5 seconds for a response, then in test change it to 0.5 seconds (assuming your server takes longer than that to respond) while keeping the same error handling logic.
The latter might not work in all scenarios, but it will definitely save you from having a test suite that takes 10+ mins to run.
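For example, a test profile could build a RestTemplate with much shorter timeouts. This is a sketch using Spring's SimpleClientHttpRequestFactory; the 500 ms values are illustrative, not from the original answer:

```java
import org.springframework.http.client.SimpleClientHttpRequestFactory;
import org.springframework.web.client.RestTemplate;

// Short client-side timeouts for the test profile; the production
// configuration would keep the real 5-second value.
SimpleClientHttpRequestFactory factory = new SimpleClientHttpRequestFactory();
factory.setConnectTimeout(500); // milliseconds until the connection must be established
factory.setReadTimeout(500);    // milliseconds until a response read times out

RestTemplate restTemplate = new RestTemplate(factory);
```

With this in place, the same error-handling path that fires after 5 seconds in production fires after half a second in tests.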

Here is one thing you can do, which worked in my case.
When my application is running in a production environment, we keep polling trades from the API, and sometimes it drops the connection by throwing an SSLProtocolException.
What we did:
int retryCount = 5;
int count = 0;
while (true) {
    count++;
    try {
        // send the API request here
        break; // success, stop retrying
    } catch (Exception e) {
        if (retryCount == count) {
            throw e;
        }
        // here we can do a thread sleep, and try to reconnect after that period
    }
}
Similarly, in your case the call will throw some exception: catch it, put your thread to sleep for a while, and then try the connection again. You can modify retryCount as per your requirement; in my case it was 5.
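The snippet above can be turned into a small self-contained helper; withRetry, maxAttempts, and backoffMillis are illustrative names, not from the original answer:

```java
import java.util.concurrent.Callable;

public class RetryExample {

    // Runs the action up to maxAttempts times, sleeping backoffMillis
    // between attempts, and rethrows the last failure if all attempts fail.
    static <T> T withRetry(Callable<T> action, int maxAttempts, long backoffMillis)
            throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return action.call();
            } catch (Exception e) {
                last = e;
                if (attempt < maxAttempts) {
                    Thread.sleep(backoffMillis);
                }
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        int[] calls = {0};
        // Simulated flaky call: fails twice, then succeeds on the third attempt
        String result = withRetry(() -> {
            if (++calls[0] < 3) {
                throw new RuntimeException("transient failure");
            }
            return "ok";
        }, 5, 10);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```

The `break` on success and the rethrow on the final attempt mirror the structure of the answer's loop.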

Related

Maximum number of retries possible concurrently in resilience4j

I implemented the retry mechanism offered by resilience4j for a project that makes HTTP calls asynchronously. I can see that the HTTP calls are being retried properly. However, these calls are asynchronous, so multiple HTTP calls are made and several of them may fall into a retry at the same time. Yet at any one time I only see 7-9 retries being attempted. My question is: why is there a cap on this? Is it possible to configure it?
Let's say I have a method like this (pseudocode):
@Async
@Retry(name = "retryA", fallbackMethod = "fallbackRetry")
public ResponseObj getExternalHttpRepsonse(String payload) {
    ClientResponse resp = webUtils.postRequest(payload);
    boolean validatePredicate = responsePredicate.test(resp);
    if (!validatePredicate) {
        throw new PredicateValidationFailedException();
    }
    return new ResponseObj(resp);
}
Whenever failures occur, I continuously see 7-9 attempts in the logs (attempt 1s failed, attempt 2s failed). Why is this capped at 7-9 and not more?

How to wait for Redis cache to cache the information

I am using spring-data-redis and trying to write a JUnit test with which I can verify my caching logic. The test case works only sporadically. I guess it works when the caching logic completes before the second method call is invoked, and fails otherwise. If someone has faced a similar issue, I would like to understand how they made it work. As of now I am using Thread.sleep(), but I am looking for an alternative.
@Test
public void getUserById() {
    User user = new User("name", "1234");
    when(userRepository.findbyId("1234")).thenReturn(Optional.ofNullable(user));

    // first method call
    User user1 = userService.findbyId("1234");
    assertThat(user.getName()).isEqualTo(user1.getName());
    assertThat(user.getId()).isEqualTo(user1.getId());

    // sleeping the thread to give the caching aspect sufficient time
    // to cache the information
    try {
        Thread.sleep(1000);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }

    // second method call, expecting the cache to work
    User userCache = userService.findbyId("1234");
    verify(userRepository, never()).findbyId("1234");
    assertThat(user.getName()).isEqualTo(userCache.getName());
    assertThat(user.getId()).isEqualTo(userCache.getId());
}
Timing issues like this are really common in distributed systems. To remedy the need to wait (too long) for a test assertion, there is a little tool called Awaitility.
With it you can do a much cleverer wait by querying an assertion multiple times, at certain intervals, until a given timeout is reached (…and much more).
Regarding your example, try this:
Awaitility.await()
    .pollInterval(new Duration(1, TimeUnit.SECONDS))
    .atMost(new Duration(10, TimeUnit.SECONDS))
    .untilAsserted(() -> {
        User user1 = userService.findbyId("1234");
        assertThat(user1.getName()).isEqualTo(user.getName());
    });
Regarding the other part of your question: in an integration test you could actually perform some kind of prewarming of your Redis instance, or, if you have a containerized integration test (e.g. Docker), you could fire some first requests at it or wait for a certain condition before starting your tests.
The actual issue was not the thread wait time. For the Redis cache to work, a separate thread needs to be spawned. For my service test, I verified the caching via a separate test case.
@Test
public void getUserById() {
    User user = new User("name", "1234");
    when(userRepository.findbyId("1234")).thenReturn(Optional.ofNullable(user));
    // first method call
    User user1 = userService.findbyId("1234");
    assertThat(user.getName()).isEqualTo(user1.getName());
    assertThat(user.getId()).isEqualTo(user1.getId());
}

// ensure this test case is executed after getUserById. I used
// @FixMethodOrder(MethodSorters.NAME_ASCENDING)
@Test
public void getUserById_cache() {
    User user1 = userService.findbyId("1234");
    Mockito.verify(userRepository, never()).findbyId("1234");
    assertThat(user1.getName()).isEqualTo("name");
    assertThat(user1.getId()).isEqualTo("1234");
}

Akka actors are getting stopped

I'm using Akka actors to achieve parallel processing of some HTTP requests. I've initialized a pool of actors using RoundRobinPool like:
ActorRef myActorPool = actorSystem.actorOf(new RoundRobinPool(200).props(Props.create(MyActor.class, args)), MyActor.class.getSimpleName());
It is working fine, but after the process has been running for some time, I'm getting the following error:
java.util.concurrent.CompletionException: akka.pattern.AskTimeoutException: Recipient[Actor[akka://web_server/user/MyActor#-769383443]] had already been terminated. Sender[null] sent the message of type "com.data.model.Request".
So I've overridden the postStop method and added a log statement there.
@Override
public void postStop() {
    log.warn("Actor is stopped");
}
Now, I can see in the logs that the actors are getting stopped, but I'm not sure for which request it is happening. Once all the actors in the pool terminate (200 is the pool size I've set), I get the AskTimeoutException mentioned before. Is there any way to debug why the actors are getting terminated?
EDIT 1
In the controller, I'm using the created actor pool like
CompletableFuture<Object> s = ask(myActorPool, request, 1000000000).toCompletableFuture();
return s.join();
The actor processes one kind of messages only.
@Override
public AbstractActor.Receive createReceive() {
    return receiveBuilder()
            .match(Request.class, this::process)
            .build();
}
private void process(Request request) {
    try {
        // code here
    } catch (Exception e) {
        log.error(e.getMessage(), e);
        getSender().tell(new akka.actor.Status.Failure(e), getSelf());
    }
}
From what you have described, it seems you are processing your data inside the ask call and taking more time than the ask timeout, which is why you are getting the error.
You can either increase the ask timeout or do less processing inside the ask call.
You should not do CPU-bound operations inside the ask call; that can slow down your whole system. It is recommended to do only I/O-bound operations inside the ask call. That way you can leverage the actors.
For example:
x = x + 1 // CPU-bound operation, should not be done inside the ask call
A DB call, on the other hand, is fine to do inside the ask call.
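To increase the ask timeout in the question's controller code, a sketch with the Java Patterns.ask API might look like this (the 30-second value is illustrative, not a recommendation):

```java
import java.time.Duration;
import java.util.concurrent.CompletableFuture;

import akka.pattern.Patterns;

// Give slow, I/O-bound processing more room before AskTimeoutException
// fires; tune the duration to your workload rather than passing a huge
// raw millisecond number as in the original snippet.
CompletableFuture<Object> s =
        Patterns.ask(myActorPool, request, Duration.ofSeconds(30))
                .toCompletableFuture();
return s.join();
```

Note that raising the timeout only hides the problem if the actors are dying; it buys time, it does not fix the termination.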

How do I stop a Camel route when JVM gets to a certain heap size?

I am using Apache Camel to connect to various endpoints, including JMS topics, and write to a database. Sometimes the database connection fails (for whatever reason, database issue, network blip, etc) and the messages from the topic subscriber start backing up. At a certain point, there are so many messages backed up waiting to be written to the database that the application throws an out of memory error. So far I understand all that.
The problem I have is the following: When the application is frantically trying to garbage collect before eventually giving up and accepting that it is out of memory, the application stops working, but is still alive. This means that the topic subscriber is still seen as active by the JMS provider, but not reading anything off the topic, so the provider starts queueing up the messages. Eventually the provider falls over also when the maximum depth runs out.
How can I configure my application to either disconnect when reaching a certain heap usage, or kill itself completely much much faster when running out of memory? I believe there are some JVM parameters that allow the application to kill itself much quicker when running out of memory, but I am wondering if that is the best solution or whether there is another way?
First of all I think you should use a JDBC connection pool that is capable of refreshing failed connections. So you do not run into the described scenario in the first place. At least not if the DB/network issue is short lived.
Next I'd protect the message broker by applying producer flow control (at least that's what it is called in ActiveMQ), i.e. prevent message producers from submitting more messages once a certain memory threshold has been breached. If the thresholds are set correctly, that will prevent your message broker from falling over.
As for your original question: I'd use JMX to monitor the VM. If some metric, e.g. memory, breaches a threshold then you can suspend or shut down the route or the whole Camel context via the MBeans Camel exposes.
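As a plain-JDK sketch of that metric check (the 0.8 threshold and the class/method names are illustrative), MemoryMXBean exposes the heap numbers without any JMX remoting:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class HeapMonitor {

    // Returns true when used heap exceeds the given fraction of the max heap.
    static boolean heapAboveThreshold(double fraction) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = memory.getHeapMemoryUsage();
        return (double) heap.getUsed() / heap.getMax() > fraction;
    }

    public static void main(String[] args) {
        // In the Camel scenario you would suspend or stop the route here
        // instead of just printing a message.
        if (heapAboveThreshold(0.8)) {
            System.out.println("heap above threshold");
        } else {
            System.out.println("heap ok");
        }
    }
}
```

The same check can feed whatever action you choose: stopping the route, suspending the Camel context, or shutting the JVM down cleanly.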
You can control (start/stop and suspend/resume) Camel routes using the Camel context methods .stop(), .start(), .suspend() and .resume().
You can spin a separate thread that monitors the current VM memory and stops the required route when a certain condition is met.
new Thread() {
    @Override
    public void run() {
        while (true) {
            long free = Runtime.getRuntime().freeMemory();
            try {
                boolean routeRunning = camelContext.isRouteStarted("yourRoute");
                if (free < threshold && routeRunning) {
                    camelContext.stopRoute("yourRoute");
                } else if (free > threshold && !routeRunning) {
                    camelContext.startRoute("yourRoute");
                }
                // Check every 10 seconds
                Thread.sleep(10000);
            } catch (Exception e) {
                // stopRoute/startRoute declare checked exceptions; log and keep monitoring
            }
        }
    }
}.start();
As commented in the other answer, relying on this is not particularly robust, but at least a little more robust than getting an OutOfMemoryException. Note that you need to .stop() the route, .suspend() does not deallocate resources, which means the connection with the queue provider is still open and the service looks like it is open for business.
You can also stop the route as part of the error handling of the route itself. This is possibly more robust, but it would require manual intervention to restart the route once the error is cleared (or a scheduled route that periodically checks whether the error condition still exists and restarts the route when it is gone). The thing to keep in mind is that you cannot stop a route from the same thread that is servicing the route at the time, so you need to spin up a separate thread that does the stopping. For example:
from("jms://myqueue").routeId("sample")
    // Handle SQL exceptions by shutting down the route
    .onException(SQLException.class)
        .process(new Processor() {
            // This processor spawns a new thread that stops the current route
            Thread stop;

            @Override
            public void process(final Exchange exchange) throws Exception {
                if (stop == null) {
                    stop = new Thread() {
                        @Override
                        public void run() {
                            try {
                                // Stop the current route
                                exchange.getContext().stopRoute("sample");
                            } catch (Exception e) {
                                // ignore
                            }
                        }
                    };
                    // start the thread in the background (only once)
                    stop.start();
                }
            }
        })
    .end()
    // Standard route processors go here
    .to(...);

Annoying SocketTimeoutException when debugging GAE

I have an Android app with a GAE backend. I'm encountering java.net.SocketTimeoutException, probably due to the fetch time limitations of GAE.
However, all the operation does there is write a pretty simple object to the datastore and return it to the user. The extra time that debugging in Eclipse adds makes it too long, I guess…
What would be the way to increase timeout time in such usage:
Gameendpoint.Builder builder = new Gameendpoint.Builder(AndroidHttp.newCompatibleTransport(), new JacksonFactory(), null);
builder = CloudEndpointUtils.updateBuilder(builder);
Gameendpoint endpoint = builder.build();
try {
    Game game = endpoint.createGame().execute();
} catch (Exception e) {
    e.printStackTrace();
}
Well, it was a silly mistake. The limit for such an operation is 30 seconds, which should be enough. However, inside createGame() there was an infinite loop. I have a feeling the GAE framework recognizes such a situation and raises the SocketTimeoutException before the 30 seconds actually pass.
Sockets on endpoints have a 2000 ms timeout. This is ample time if you are running short processes: a quick query (continuous queries are handled differently) or a write operation. If you overload the process and try to do too much (my issue), then you will get this error. What you need to do is run many smaller endpoint operations rather than handling too much at one time. You can override the timeout on the HTTP transport if needed, but it is not advised.
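If you do need to override it, a sketch with the google-http-client API would pass an HttpRequestInitializer in place of the null in the question's builder (the 60-second values are illustrative, not recommended defaults):

```java
import com.google.api.client.http.HttpRequest;
import com.google.api.client.http.HttpRequestInitializer;

// Raise the per-request socket timeouts for every call made through
// this endpoint client. Values are in milliseconds.
HttpRequestInitializer timeoutInitializer = new HttpRequestInitializer() {
    @Override
    public void initialize(HttpRequest request) {
        request.setConnectTimeout(60_000);
        request.setReadTimeout(60_000);
    }
};

Gameendpoint.Builder builder = new Gameendpoint.Builder(
        AndroidHttp.newCompatibleTransport(), new JacksonFactory(), timeoutInitializer);
```

As noted above, though, fixing the server-side loop is the real cure; widening the timeout only masks slow calls.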
