I am using spring-data-redis and trying to write a JUnit test with which I can test my caching logic. The test case works only sporadically. I guess that if the caching logic completes before the invocation of the second method call then it works, else it fails. If someone has faced a similar issue, I would like to understand how they made it work. As of now I am using Thread.sleep(), but I am looking for an alternative.
@Test
public void getUserById() {
    User user = new User("name", "1234");
    when(userRepository.findbyId("1234")).thenReturn(Optional.ofNullable(user));
    // first method call
    User user1 = userService.findbyId("1234");
    assertThat(user.getName()).isEqualTo(user1.getName());
    assertThat(user.getId()).isEqualTo(user1.getId());
    // sleeping the thread to give the caching aspect sufficient time
    // to cache the information
    try {
        Thread.sleep(1000);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
    // second method call, expecting the cache to work
    User userCache = userService.findbyId("1234");
    verify(userRepository, never()).findbyId("1234");
    assertThat(user.getName()).isEqualTo(userCache.getName());
    assertThat(user.getId()).isEqualTo(userCache.getId());
}
Runtime issues caused by waiting a short, fixed amount of time are really common in a distributed system. To remedy the need to wait too long for a test assertion, there is a little tool called Awaitility.
With it you can basically do a much cleverer wait by querying an assertion multiple times, at certain intervals, until a given timeout is reached (…and much more).
Regarding your example, try this:
Awaitility.await()
    .pollInterval(new Duration(1, TimeUnit.SECONDS))
    .atMost(new Duration(10, TimeUnit.SECONDS))
    .untilAsserted(() -> {
        User user1 = userService.findbyId("1234");
        assertThat(user1.getName()).isEqualTo(user.getName());
    });
Regarding the other part of your question: in an integration test you could actually perform some kind of prewarming of your Redis instance, or, if you have a containerized integration test (e.g. Docker), you could fire some initial requests at it or wait for a certain condition before starting your tests.
The actual issue was not with the thread wait time. For the Redis cache to work, a separate thread needs to be spawned. For my service test, I verified the caching via a separate test case.
@Test
public void getUserById() {
    User user = new User("name", "1234");
    when(userRepository.findbyId("1234")).thenReturn(Optional.ofNullable(user));
    // first method call
    User user1 = userService.findbyId("1234");
    assertThat(user.getName()).isEqualTo(user1.getName());
    assertThat(user.getId()).isEqualTo(user1.getId());
}
// ensure this test case is executed after getUserById. I used
// @FixMethodOrder(MethodSorters.NAME_ASCENDING) on the test class
@Test
public void getUserById_cache() {
    User user1 = userService.findbyId("1234");
    Mockito.verify(userRepository, never()).findbyId("1234");
    assertThat(user1.getName()).isEqualTo("name");
    assertThat(user1.getId()).isEqualTo("1234");
}
Related
I have a REST API implemented with Spring Boot 2. To check some client behavior on timeout, how can I simulate that condition in my testing environment? The server should still receive the request and process it (in production, timeouts happen due to random network slowdowns and large response payloads).
Would adding a long sleep be the proper simulation technique? Is there any better method to really have the server "drop" the response?
Needing sleeps to test your code is considered bad practice. Instead you want to replicate the exception you receive on timeout; when using RestTemplate.exchange, the underlying java.net.SocketTimeoutException arrives wrapped in an unchecked ResourceAccessException.
Then you can write a test as such:
public class FooTest {
    @Mock
    RestTemplate restTemplate;

    @Before
    public void setup() {
        // RestTemplate surfaces the socket timeout wrapped in the unchecked
        // ResourceAccessException, so stub that rather than the raw checked exception
        when(restTemplate.exchange(...))
            .thenThrow(new ResourceAccessException("timeout", new SocketTimeoutException()));
    }

    @Test
    public void test() {
        // TODO
    }
}
That way you won't be twiddling your thumbs waiting for something to happen.
Sleep is one way to do it, but if you're writing dozens of tests like that then having to wait for a sleep will create a really long-running test suite.
The alternative would be to change the 'threshold' for timeout on the client side for testing. If in production your client is supposed to wait 5 seconds for a response, then in test change it to 0.5 seconds (assuming your server takes longer than that to respond) but keeping the same error handling logic.
The latter might not work in all scenarios, but it will definitely save you from having a test suite that takes 10+ mins to run.
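One way to make the shortened client-side timeout fire deterministically, without slowing the real server down, is to point the client at a local socket that accepts the connection but never writes a response. A JDK-only sketch (the class and method names here are mine, not from the question):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class TimeoutDemo {
    // Returns true if the client read timed out, which is the condition
    // the client code under test must handle.
    public static boolean timesOut(int timeoutMillis) throws IOException {
        try (ServerSocket server = new ServerSocket(0)) {       // ephemeral port
            try (Socket client = new Socket()) {
                client.connect(new InetSocketAddress("localhost", server.getLocalPort()), timeoutMillis);
                client.setSoTimeout(timeoutMillis);             // the short test-side threshold
                client.getInputStream().read();                 // blocks; no response ever arrives
                return false;
            } catch (SocketTimeoutException e) {
                return true;                                    // timeout fired as intended
            }
        }
    }
}
```

With a 200 ms threshold the test completes in well under a second, instead of waiting out a production-sized timeout.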
You can do one thing which I did in my case.
In my case, when the application is running in a production environment, we keep polling trades from the API, and sometimes it drops the connection by throwing an SSLProtocolException.
What we did:

int retryCount = 5;
int count = 0;
while (true) {
    count++;
    try {
        // send an api request here
        break; // success, stop retrying
    } catch (Exception e) {
        if (count == retryCount) {
            throw e;
        }
        // here we can do a thread sleep and, after that period, try to reconnect
    }
}
Similarly, in your case: catch the exception that is thrown, put your thread to sleep for a while, and then retry the connection. You can adjust retryCount to your requirements; in my case it was 5.
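The retry idea above can be written out as a small reusable helper; this is a sketch with names and a fixed back-off of my own choosing, not the original production code:

```java
import java.util.concurrent.Callable;

public class Retry {
    // Runs the action up to maxAttempts times, sleeping between failures,
    // and rethrows the last exception if every attempt fails.
    public static <T> T withRetry(Callable<T> action, int maxAttempts, long sleepMillis) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return action.call();          // e.g. send the API request here
            } catch (Exception e) {
                last = e;                      // remember the failure
                if (attempt < maxAttempts) {
                    Thread.sleep(sleepMillis); // back off before reconnecting
                }
            }
        }
        throw last;                            // give up after maxAttempts
    }
}
```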
I have a central data cache which is updated by a number of threads running queries on a SQL database. There is also a mechanism that checks when each specific item (MyData) in the data cache was last retrieved and shuts down the corresponding thread if a certain time threshold has been reached (i.e. if the data has not been retrieved in the last 30 minutes). This mechanism is in place to reduce the demand on the database and minimize long-running queries.
There are also a number of threads getting data from the cache, and hence the problem is that even though a specific item (MyData) is no longer being updated, it may still be requested by a thread at some point. When this happens there is a check to see if the relevant thread is running, and if it's not, it's started up again, as follows...
private static HashMap<String, MyData> dataMap = new HashMap<>(10);

public static MyData getMyData(String identifier) {
    if (!MyThreadManager.getInstance().isRunning(identifier)) {
        LOG.info("Thread with identifier={} possibly stopped, restarting.", identifier);
        MyThreadManager.getInstance().startThread(identifier);
    }
    MyData myData = null;
    if (dataMap.containsKey(identifier)) {
        myData = dataMap.get(identifier);
    } else {
        LOG.debug("No data found in dataMap for identifier={}, thread possibly terminated. Restarting.", identifier);
    }
    return myData;
}
... the mechanism itself works fine, but I run the risk that when dataMap.get(identifier) is called, it may still contain only the "outdated" version of myData (as the thread that was restarted may still be processing). I want to guarantee that the data being returned has already been updated. To do this I could add a one-second sleep after restarting the thread, which should be enough time for the thread to have updated its data in dataMap before dataMap.get(identifier) runs.
if(!MyThreadManager.getInstance().isRunning(identifier)) {
LOG.info("Thread with identifier={} possibly stopped, restarting.", identifier);
MyThreadManager.getInstance().startThread(identifier);
try {
Thread.sleep(1000L);
} catch (Exception e) {
LOG.error(e.getMessage());
}
}
The big problem I have with this implementation is that it may have a negative knock-on impact on other threads trying to retrieve information.
Question: How would I implement a thread safe / non-locking way to "wait" for the cache to be updated in the event of a thread restart, without impacting the other threads using the same data cache.
OK, so I ran a few tests. If the getMyData method is being called from within a thread...
public class MyRunnable implements Runnable {
    private String identifier = "";

    public MyRunnable(String identifier) {
        this.identifier = identifier;
    }

    @Override
    public void run() {
        MyData myData = MyDataManager.getInstance().getMyData(identifier);
        System.out.println(String.format("Data retrieved : %s", myData.toString()));
    }
}
... then introducing the Thread.sleep(1000L); to restart the data-gathering thread will have no impact on any other concurrent threads, unless of course one adds the synchronized keyword to the method, which forces one-at-a-time execution to avoid race conditions.
As these requests ultimately originate from HTTP requests handled by a servlet container, the above will work fine. Servlet containers let the JVM handle each request in a separate Java thread, so each call to the getMyData method runs on its own thread and any "waiting" will not impact the other requests.
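If the fixed sleep ever becomes a problem, the "wait until the data is fresh" requirement can instead be expressed by caching a future per key rather than the value itself: the first caller triggers the load, and concurrent callers block only until that load completes, with no blind sleep. A JDK-only sketch under assumed names (the stub load method stands in for the real SQL query):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;

public class FutureCache {
    private final ConcurrentHashMap<String, CompletableFuture<String>> cache = new ConcurrentHashMap<>();

    public String get(String id) throws Exception {
        // computeIfAbsent is atomic: only one loader task is started per key,
        // and every caller gets the same future to wait on
        CompletableFuture<String> f = cache.computeIfAbsent(id,
                key -> CompletableFuture.supplyAsync(() -> load(key)));
        return f.get(5, TimeUnit.SECONDS); // bounded wait instead of a blind sleep
    }

    private String load(String id) {
        return "data-for-" + id; // stands in for the SQL query feeding the cache
    }
}
```

A refresh would then replace the entry with a new future, so readers never observe a half-updated value.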
I am running a Spring Boot application and have multiple threads calling a MongoRepository. This, however, leads to weird timeout behavior.
This is my MongoRepository:
public interface EquipmentRepository extends MongoRepository<Equipment, String> {
Optional<Equipment> findByEquipmentSerialNumber(String equipmentSerialNumber);
}
This is a reduced version of my code highlighting the problem
ExecutorService taskExecutor = Executors.newFixedThreadPool(4);
taskExecutor.execute(() -> {
LOG.info("Executing query...");
Optional<Equipment> equipment = equipmentRepository.findByEquipmentSerialNumber("21133"); // guaranteed to be found
LOG.info("Query done: {}", equipment.get().getEquipmentSerialNumber());
});
taskExecutor.shutdown();
LOG.info("taskExecutor shut down");
try {
taskExecutor.awaitTermination(30, TimeUnit.SECONDS);
LOG.info("taskExecutor done");
} catch (InterruptedException e) {
System.out.println("Error");
}
The produced output looks like this:
taskExecutor shut down
Executing query...
<30 second pause>
taskExecutor done
Query done: 21133
If I increase the timeout of awaitTermination() the pause increases accordingly. So somehow my code inside the execute() lambda is "paused" and only continues after the timeout is reached.
If I remove the call to equipmentRepository, everything works as expected and there is no 30-second pause.
What is keeping my code from completing without reaching the timeout?
It looks like the Mongo repository waits for the 'main' thread to perform the query (very strange, though).
Not an answer to your specific question, but probably a solution to your problem: Spring Data can perform repository queries asynchronously (see the reference docs).
I have an application in Spring MVC. In a controller, the user can run a process (via a button). The process runs for one hour (call it "longProces").
So I want to run this "longProces" method in another thread.
While "longProces" is running, any other user who wants to run it should get the message "this process is running", and his request should return immediately (not wait for its turn).
How can I do this? Any ideas?
Thanks for any answer.
You can use ExecutorService instead of Executor.
Submit the task to the executor service and keep the Future references, say in some list or map.
ExecutorService threadExecutor = Executors.newSingleThreadExecutor();
Future<?> future = threadExecutor.submit(new Task());
// store the future in a collection

On every new request you can check the already submitted task and its status:

// retrieve the future instance from the collection
if (future.isDone()) {
    // accept the new request
} else {
    // "process already running"
}
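The Future.isDone() check above can be sketched as a small guard class; this is a JDK-only illustration with names of my own, not the asker's code:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.atomic.AtomicReference;

public class LongProcessGuard {
    // Daemon thread so the background job does not keep the JVM alive
    private final ExecutorService executor = Executors.newSingleThreadExecutor(r -> {
        Thread t = new Thread(r);
        t.setDaemon(true);
        return t;
    });
    private final AtomicReference<Future<?>> current = new AtomicReference<>();

    /** Starts the job and returns true, or returns false if one is still running. */
    public synchronized boolean tryStart(Runnable job) {
        Future<?> running = current.get();
        if (running != null && !running.isDone()) {
            return false; // tell the caller "this process is running"
        }
        current.set(executor.submit(job));
        return true;
    }
}
```

The controller would call tryStart(...) and, on false, immediately return the "this process is running" message to the second user.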
It is a design-based question.
For this scenario, my straightforward solution is: maintain the long process's status in a database or another persistence layer, so that every request first checks whether a long process is already active there. Based on this condition you can decide your flow.
One more critical point: the persistence-layer approach is the right solution for handling multiple user requests against one common shared process.
Below is my sniped code:
#RequestMapping("/generate")
public String generate(HttpServletResponse response, Model model) throws Exception {
String communique = "The process is launching";
Runnable runnable = () -> {
try {
generatorService.runLongProces();
} catch (IOException e) {
e.printStackTrace();
}
};
Executor taskExecutor = Executors.newSingleThreadExecutor();
try {
    taskExecutor.execute(runnable);
} catch (Exception ex) {
    communique = "The process failed";
}
response.setStatus(HttpServletResponse.SC_ACCEPTED);
model.addAttribute("communique", communique);
return "institutions/download";
}
I thought I had set a single thread in the constructor.
Maybe I should create a singleton bean for a Spring TaskExecutor?
I have an application which accepts user commands to perform database lookups. I parse the commands/arguments and then iterate each one, calling a handler which takes the value and builds a query condition.
However, this process has been entirely synchronous until now. Each handler would execute and add the condition and after the loop was done I could execute the query.
Now, one of these handlers has to contact a remote server to convert some data which means it returns a ListenableFuture.
ListenableFuture<Profile> profile = getProfile("someUserName");
profile.addListener(new Runnable() {
@Override
public void run() {
try {
query.addCondition(Condition.of("user.$id", MatchRule.EQUALS, profile.get().getUniqueId()));
} catch (InterruptedException | ExecutionException e) {
// handle
}
}
}, MoreExecutors.sameThreadExecutor());
I'm trying to determine the best way to ensure that I don't execute my query until this ListenableFuture has completed. Part of what's confusing me is that I need to assume there could be multiple pending requests.
For example if someone enters p=user1,user2 we need to make two requests, so there will be two ListenableFuture<Profile>s pending.
Only once both have completed, can I execute the final query.
How can I do this?
Futures.allAsList lets you take a bunch of ListenableFutures and get out one ListenableFuture that completes when all its inputs have completed and returns the list of results. That sounds like what you need.
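For reference, the same "wait for all, then collect" pattern also exists in the JDK itself via CompletableFuture.allOf, in case Guava is not on the classpath. A small sketch (not the asker's code; the String type stands in for Profile):

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;

public class AllOfExample {
    // Blocks until every input future has completed, then collects the results
    // in the original order, mirroring what Futures.allAsList returns.
    public static List<String> awaitAll(List<CompletableFuture<String>> futures) {
        CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).join();
        return futures.stream()
                .map(CompletableFuture::join) // completed, so join returns immediately
                .collect(Collectors.toList());
    }
}
```

In the p=user1,user2 case you would build one future per profile lookup, await them all, add all the conditions, and only then execute the query.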