I have several services that return their data at different times, but I would like some of them to release their answers at the same moment even though they are separate services.
I am using Java 11 with Spring Boot and a REST API. Below is an example of one of the services.
@GetMapping(value = "/caract/opcoes/acoes/disponiveis")
public ResponseEntity<List<GroupByData>> getDatasCaractOpcoesAcoesDisponiveis() {
    List<GroupByData> result = caractOpcoesAcoesServico.findGroupByIdentityRptDt();
    if (result.isEmpty()) {
        return new ResponseEntity<List<GroupByData>>(HttpStatus.NO_CONTENT);
    }
    return new ResponseEntity<List<GroupByData>>(result, HttpStatus.OK);
}
How can I hold back the responses of several requests that become ready at different times and release them only once all of them are ready?
There are two ways to do this. First I will explain the nicer way, which is reactive and uses WebFlux.
@GetMapping(value = "/anothertest")
public Mono<String> rest() {
    log.info("request number " + reqCounter++);
    CompletableFuture<String> stringCompletableFuture =
            sendRequestWithJavaHttpClient().thenApply(x -> "test: " + x);
    Duration between = Duration.between(
            LocalTime.now(),
            LocalTime.parse("14:01:00") // I am assuming there is a fixed time at which we send data back
    );
    return Mono.delay(between).then(Mono.fromFuture(stringCompletableFuture));
}

private CompletableFuture<String> sendRequestWithJavaHttpClient() {
    return CompletableFuture.supplyAsync(() -> {
        // do some logic here
        return "hello world.";
    });
}
Because we tell the Mono to delay, it waits until the configured time arrives and only then performs the call. This is the nice way because nothing is blocked while the response is pending; all clients simply wait. You will need Spring WebFlux for this.
The second, less elegant way is to block the thread. This one uses Spring MVC.
@GetMapping(value = "/caract/opcoes/acoes/disponiveis*")
public ResponseEntity<Object> getDatasCaractOpcoesAcoesDisponiveis() throws Exception {
    log.info("request number " + reqCounter++);
    Duration between = Duration.between(
            LocalTime.now(),
            LocalTime.parse("14:10:00")
    );
    log.info("will sleep " + between.toMillis());
    Thread.sleep(between.toMillis());
    return new ResponseEntity<Object>("hello world", HttpStatus.OK);
}
This blocks a server thread until the time arrives. The problem with this is Tomcat's thread count: the default is 200, so your application can hold at most 200 in-flight requests, after which Tomcat cannot accept any more connections. You can increase it by setting server.tomcat.max-threads=500 in application.properties (on Spring Boot 2.3 and later the property is called server.tomcat.threads.max).
You can find running example code here: https://github.com/ozkanpakdil/spring-examples/tree/master/response-wait-methods
If you ask my opinion, neither of these two ways is nice from a design point of view, because clients should not have to wait. I would respond with a "not ready" status until the time comes, and once the time is right, respond with the real result. That way the client side can poll as often as it wants without creating load, and the server side carries no load because nothing is blocked.
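For completeness, here is a minimal sketch of that polling-friendly variant, reusing the endpoint and service names from the question; the fixed releaseTime value is an assumption:

@GetMapping(value = "/caract/opcoes/acoes/disponiveis")
public ResponseEntity<List<GroupByData>> getDatasCaractOpcoesAcoesDisponiveis() {
    LocalTime releaseTime = LocalTime.parse("14:01:00"); // assumed publication time
    if (LocalTime.now().isBefore(releaseTime)) {
        // data not released yet: answer immediately with 204, nothing is blocked
        return ResponseEntity.noContent().build();
    }
    List<GroupByData> result = caractOpcoesAcoesServico.findGroupByIdentityRptDt();
    return result.isEmpty() ? ResponseEntity.noContent().build() : ResponseEntity.ok(result);
}

The client simply retries until it gets a 200, so no server thread is ever parked.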
Related question:
I want to handle a Flux so that the concurrent HTTP requests made by a List of Mono are limited.
When some requests are done (responses received), the service should issue further requests so that the number of requests in flight stays at 15.
A single request returns a list and triggers further requests depending on the result.
At this point, I want to send those requests with limited concurrency,
because on the consumer side too many simultaneous HTTP requests put the remote server in trouble.
I used flatMapMany like below.
public Flux<JsonNode> syncData() {
    return service1
        .getData(param1)
        .flatMapMany(res -> {
            List<Mono<JsonNode>> totalTask = new ArrayList<>();
            Map<String, Object> originData = service2.getDataFromDB(param2);
            res.withArray("data").forEach(row -> {
                String id = row.get("id").asText();
                if (originData.containsKey(id)) {
                    totalTask.add(service1.updateRequest(param3));
                } else {
                    totalTask.add(service1.deleteRequest(param4));
                }
                originData.remove(id);
            });
            // whatever is left in originData has no matching row and needs a create request
            originData.keySet().forEach(leftId -> totalTask.add(service1.createRequest(param5)));
            return Flux.merge(totalTask);
        });
}
void syncData() {
syncDataService.syncData().????;
}
I tried chaining .window(15), but it doesn't work. All the requests are sent simultaneously.
How can I handle the Flux to achieve this?
I am afraid Project Reactor doesn't provide a built-in implementation of either a rate or a time limit.
However, you can find a number of third-party libraries that provide such functionality and are compatible with Project Reactor. As far as I know, resilience4j-reactor supports that and is also compatible with the Spring and Spring Boot frameworks.
The RateLimiterOperator checks if a downstream subscriber/observer can acquire a permission to subscribe to an upstream Publisher. If the rate limit would be exceeded, the RateLimiterOperator could either delay requesting data from the upstream or it can emit a RequestNotPermitted error to the downstream subscriber.
RateLimiter rateLimiter = RateLimiter.ofDefaults("name");
Mono.fromCallable(backendService::doSomething)
    .transformDeferred(RateLimiterOperator.of(rateLimiter));
More about the RateLimiter module itself here: https://resilience4j.readme.io/docs/ratelimiter
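Here is a slightly fuller, hypothetical sketch with a custom configuration (the limiter name, the limits and the backendService call are assumptions; the resilience4j-reactor module must be on the classpath). Note that a rate limiter caps requests per time window, which is related to, but not exactly the same as, capping the number of in-flight requests:

RateLimiterConfig config = RateLimiterConfig.custom()
        .limitForPeriod(15)                        // at most 15 permits...
        .limitRefreshPeriod(Duration.ofSeconds(1)) // ...per second
        .timeoutDuration(Duration.ofSeconds(5))    // wait up to 5s for a permit
        .build();
RateLimiter rateLimiter = RateLimiter.of("sync-data", config);

Mono.fromCallable(backendService::doSomething)
        .transformDeferred(RateLimiterOperator.of(rateLimiter));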
You can use limitRate on a Flux; you will probably need to restructure your code a bit, but see the docs here: https://projectreactor.io/docs/core/release/api/reactor/core/publisher/Flux.html#limitRate-int-
flatMap takes a concurrency parameter: https://projectreactor.io/docs/core/release/api/reactor/core/publisher/Flux.html#flatMap-java.util.function.Function-int-
Mono<User> getById(int userId) { ... }
Flux.just(1, 2, 3, 4).flatMap(client::getById, 2)
will limit the number of concurrent requests to 2.
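Applied to the syncData() method from the question, that would mean replacing the unbounded Flux.merge(totalTask) at the end of the flatMapMany block with something along these lines (a sketch, keeping the totalTask list built above):

return Flux.fromIterable(totalTask)
        .flatMap(task -> task, 15); // subscribe to at most 15 of the Monos (requests) at a time

Flux.merge(Flux.fromIterable(totalTask), 15) achieves the same effect.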
I am new to Vert.x and was exploring request-reply using the event bus.
I want to implement the flow below:
The user requests some data
The controller sends a message on the event bus to a redis-processor verticle
The redis-processor waits up to n seconds until the value is available in Redis (there is a background process that keeps refreshing the cache, hence the wait)
The redis-processor sends the reply back to the controller
The controller responds to the user
In short I want to do something like this:
Now I want to implement this in Vert.x, since Vert.x runs asynchronously. Using the event bus I can isolate the controller from the processor, so the controller can accept multiple user requests and stay responsive under load.
(I hope I am right about this!)
I have implemented this in a very crude fashion in Java/Vert.x. I am stuck at the part below.
//receive request from controller
vertx.eventBus().consumer(REQUEST_PROCESSOR, evtHandler -> {
    String txnId = evtHandler.body().toString();
    LOGGER.info("Received message:: {}", txnId);
    this.redisAPI.get(txnId, result -> { // <=====
        String value = result.result().toString();
        LOGGER.info("Value in redis : {}", value);
        evtHandler.reply(value); // reply to controller
    });
});
Please see the line marked by the arrow. How can I wait for x seconds there without blocking the event loop?
Please help.
That's actually very simple: you need a timer. Please see the docs for details, but you will need more or less something like this:
vertx.setTimer(1000, id -> {
    this.redisAPI.get(txnId, result -> {
        String value = result.result().toString();
        LOGGER.info("Value in redis : {}", value);
        evtHandler.reply(value); // reply to controller
    });
});
You might want to store the timer IDs somewhere so that you can cancel them, or so that you at least know something is still running when a shutdown request comes in for your verticle and can delay it. But this all depends on your needs.
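For example, a minimal sketch of keeping the ids around so the timers can be cancelled later; the pendingTimers set is an assumption, not part of the original code:

Set<Long> pendingTimers = ConcurrentHashMap.newKeySet();

long timerId = vertx.setTimer(1000, id -> {
    pendingTimers.remove(id);
    // ... do the redis lookup and reply, as shown above ...
});
pendingTimers.add(timerId);

// later, e.g. when the verticle is undeployed:
pendingTimers.forEach(vertx::cancelTimer);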
As @mohamnag said, you could use a Vert.x timer.
Here is another example of how to use a timer; note that the timer value is in milliseconds.
As an improvement over the answer above, I recommend checking that the callback has succeeded before attempting to get the value from redisAPI. This is done using the succeeded() method.
In an asynchronous environment, getting that result could fail due to several issues (network errors, etc.):
vertx.setTimer(n * 1000, id -> {
    this.redisAPI.get(txnId, result -> {
        if (result.succeeded()) { // the callback succeeded in getting a value from redis
            String value = result.result().toString();
            LOGGER.info("Value in redis : {}", value);
            evtHandler.reply(value); // reply to controller
        } else {
            LOGGER.error("Value could not be read from redis : {}", result.cause());
            evtHandler.fail(someIntegerCode, result.cause().getMessage()); // reply with failure-related info
        }
    });
});
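For the controller half of the flow described in the question (send over the event bus, then answer the HTTP caller once the reply arrives), a rough sketch could look like the following; it assumes Vert.x Web, and the route, address and timeout are made up for illustration. On Vert.x 3.x before 3.8 the equivalent call is eventBus().send(...) with a reply handler instead of request(...):

router.get("/data/:txnId").handler(ctx -> {
    String txnId = ctx.pathParam("txnId");
    DeliveryOptions options = new DeliveryOptions().setSendTimeout(10_000); // give the processor time to wait for redis
    vertx.eventBus().request(REQUEST_PROCESSOR, txnId, options, reply -> {
        if (reply.succeeded()) {
            ctx.response().putHeader("content-type", "text/plain").end(reply.result().body().toString());
        } else {
            ctx.response().setStatusCode(504).end("value not available: " + reply.cause().getMessage());
        }
    });
});

Because request(...) is non-blocking, the controller verticle stays free to accept other requests while the processor waits for the value.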
I'm writing a Spring Boot based project where I have some synchronous (e.g. REST API calls) and asynchronous (JMS) pieces of code (the broker I use is a dockerized instance of ActiveMQ, in case there's some kind of trick/workaround).
One of the problems I'm currently struggling with: my application receives a REST API call (I'll call it "a sync call"), does some processing, and then sends a JMS message to a queue (async), which is then handled and processed (let's say the processing is heavy, which is why I want it to be asynchronous).
Everything works fine when running the application; async messages are enqueued and dequeued as expected.
When I'm writing tests (and I'm testing the whole service, which involves the sync call and the async processing in rapid succession), the test code is too fast and the message is still waiting to be dequeued (we are talking about milliseconds, but that's the problem).
Basically, as soon as I receive the response from the API call, the message is still in the queue, so if, for example, I run a query to check for the existence of the created object: ka-boom, the test fails because (obviously) it doesn't find the object (which is probably still being processed and created in the meantime).
Is there any way, or any pattern, I can use to make my test wait for that async message to be dequeued? I can attach my implementation code if needed; it's a bachelor's degree thesis project.
One obvious solution I'm temporarily using is adding a hundred-millisecond sleep between the method call and the assert section (hoping everything is done and persisted by then), but honestly I dislike this solution because it seems so non-deterministic to me. Also, building a latch shared between production code and test code doesn't sound good to me.
Here's the code I use as the entry point to all the flow I explained before:
public TransferResponseDTO transfer(Long userId, TransferRequestDTO transferRequestDTO) {
    //Preconditions.checkArgument(transferRequestDTO.amount.compareTo(BigDecimal.ZERO) < 0);
    Preconditions.checkArgument(userHelper.existsById(userId));
    Preconditions.checkArgument(walletHelper.existsByUserIdAndSymbol(userId, transferRequestDTO.symbol));
    TransferMessage message = new TransferMessage();
    message.userId = userId;
    message.symbol = transferRequestDTO.symbol;
    message.destination = transferRequestDTO.destination;
    message.amount = transferRequestDTO.amount;
    messageService.send(message);
    TransferResponseDTO response = new TransferResponseDTO();
    response.status = PENDING;
    return response;
}
And here's the code that receives the message (although you probably won't need it):
public void handle(TransferMessage transferMessage) {
    Wallet source = walletHelper.findByUserIdAndSymbol(transferMessage.userId, transferMessage.symbol);
    Wallet destination = walletHelper.findById(transferMessage.destination);
    try {
        walletHelper.withdraw(source, transferMessage.amount);
    } catch (InsufficientBalanceException ex) {
        String u = userHelper.findEmailByUserId(transferMessage.userId);
        EmailMessage email = new EmailMessage();
        email.subject = "Insufficient Balance in your account";
        email.to = u;
        email.text = "Your transfer of " + transferMessage.amount + " " + transferMessage.symbol + " has been DECLINED due to insufficient balance.";
        messageService.send(email);
        return; // do not deposit if the withdrawal was declined
    }
    walletHelper.deposit(destination, transferMessage.amount);
    String u = userHelper.findEmailByUserId(transferMessage.userId);
    EmailMessage email = new EmailMessage();
    email.subject = "Transfer executed";
    email.to = u;
    email.text = "Your transfer of " + transferMessage.amount + " " + transferMessage.symbol + " has been ACCEPTED.";
    messageService.send(email);
}
I'm sorry if the code looks a little sketchy or wrong; it's an early implementation.
I'm willing to write a utility and share it with you all if that's what it takes, but, as you've probably noticed, I'm low on ideas right now.
I'm an ActiveMQ developer working mainly on ActiveMQ Artemis (the next-gen broker from ActiveMQ). We run into this kind of problem all the time in our test-suite given the asynchronous nature of the broker, and we developed a little utility class that automates & simplifies basic polling operations.
For example, starting a broker is asynchronous so it's common for our tests to include an assertion to ensure the broker is started before proceeding. Using old-school Java 6 syntax it would look something like this:
Wait.assertTrue(new Condition() {
    @Override
    public boolean isSatisfied() throws Exception {
        return server.isActive();
    }
});
Using a Java 8 lambda would look like this:
Wait.assertTrue(() -> server.isActive());
Or using a Java 8 method reference:
Wait.assertTrue(server::isActive);
The utility is quite flexible as the Condition you use can test anything you want as long as it ultimately returns a boolean. Furthermore, it is deterministic unlike using Thread.sleep() (as you noted) and it keeps testing code separate from the "product" code.
In your case you can check to see if the "object" being created by your JMS process can be found. If it's not found then it can keep checking until either the object is found or the timeout elapses.
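The Wait class is part of the ActiveMQ Artemis test utilities, but the idea is easy to reproduce. Here is a minimal, stand-alone sketch of such a helper (not the actual Artemis class), followed by a hypothetical use in your test; walletHelper and the condition are assumptions based on your code:

public final class AwaitUtil {

    @FunctionalInterface
    public interface Condition {
        boolean isSatisfied() throws Exception;
    }

    // poll the condition until it is true or the timeout expires
    public static void assertTrue(Condition condition, long timeoutMillis, long sleepMillis) throws Exception {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (condition.isSatisfied()) {
                return;
            }
            Thread.sleep(sleepMillis);
        }
        throw new AssertionError("Condition not met within " + timeoutMillis + " ms");
    }
}

// in the test, after calling transfer(...):
AwaitUtil.assertTrue(() -> walletHelper.findByUserIdAndSymbol(userId, symbol) != null, 5_000, 50);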
I am trying to make a synchronous Volley networking request. I am using request futures to wait on a response, but the future.get() call always times out (no matter how long the timeout is set to). I have tested every component individually and nothing seems to be causing the error other than my use of futures. Any ideas on what I've done wrong here?
Activity.java: persistData()
postCampaign(campaign);
Activity.java: postCampaign()
private boolean postCampaign(final Campaign c) {
    RequestFuture<String> future = RequestFuture.newFuture();
    StringRequest request = new StringRequest(Request.Method.POST, url, future, future) {
        @Override
        protected Map<String, String> getParams() {
            Map<String, String> params = new HashMap<>();
            // put data
            return params;
        }
    };
    NetworkController.getInstance(this).addToRequestQueue(request);
    try {
        String response = future.get(5, TimeUnit.SECONDS);
        Log.d("Volley", "" + response);
        return !response.contains("Duplicate");
    } catch (InterruptedException | ExecutionException | TimeoutException e) {
        Log.d("Volley", "[FAILED] " + e.getClass());
        return false;
    }
}
Strangely enough though, when stepping through the code, it appears that the RequestFuture's onResponse method is invoked with the appropriate response. So it seems like the RequestFuture just isn't handling the response properly.
I think I've come to the conclusion that either Volley is not capable of fully synchronous networking requests, or at the very least it isn't worth it. I ended up just showing a spinner on start and stopping it in the Volley request's onResponse method when the server responded with a success message. As far as the user is concerned it works the same way.
I think I've come to the conclusion that either Volley is not capable of fully synchronous networking requests or at the very least it isn't worth it.
Not exactly. You can do it but it's not in the manner that we're used to when doing synchronous network calls.
There are a couple of things you need to remember when trying to do a synchronous request in Volley:
The synchronous request needs to run on another thread. This is almost obvious anyway; recent versions of Android will not let you do network calls on the main thread at all.
Once you launch that request on another thread, you cannot block the main thread waiting for it to finish, or the future.get(...) call fails.
So in the above example you can simply use an executor like this:
val campaign = ...
val executor = Executors.newSingleThreadExecutor()
val f = executor.submit {
    postCampaign(campaign)
}
With that same strategy you CANNOT wait on the future f to complete by doing this:
f.get() // this blocks the calling thread; your Volley requests will time out and you will be sad
Now you're probably asking: how do I update the UI based on the result of future.get() if I can't wait for it to finish on the main thread? It's simple: thanks to closure variable capture you can still use the result of future.get() and update your UI, you just do the UI part on the main thread. With coroutines this is rather easy:
val response = future.get(5, TimeUnit.SECONDS)
Log.d("Volley", "" + response)
CoroutineScope(Dispatchers.Main).launch {
    // you can do whatever UI updates here based on the value of response
    binding.textView.text = response
}
You can also use View::post, but it has a bit more boilerplate and is less elegant/flexible than coroutines (IMO).
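For reference, here is a rough plain-Java equivalent of the same pattern without coroutines, using a Handler to hop back to the main thread; statusTextView and campaign are assumptions, not part of the original code:

ExecutorService executor = Executors.newSingleThreadExecutor();
Handler mainHandler = new Handler(Looper.getMainLooper());

executor.execute(() -> {
    // blocks this worker thread, not the main thread, so Volley can still deliver the response
    boolean ok = postCampaign(campaign);
    mainHandler.post(() -> {
        // back on the main thread: safe to touch views here
        statusTextView.setText(ok ? "Campaign posted" : "Posting failed");
    });
});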
I am using Spring @RequestMapping for synchronous REST services consuming and producing JSON. I now want to add asynchronous responses where the client sends a list of ids and the server sends the details back, as it gets them, to only that one client.
I have been searching for a while and have not found what I am looking for. I have seen two different approaches for Spring. The most common is a message-broker approach where it appears that everybody gets every message by subscribing to a queue or topic. This is VERY unacceptable since this is private data. I also have a finite number of data points to return. The other approach is Callable, AsyncResult or DeferredResult. This appears to keep the data private, but I want to send multiple responses.
I have seen something similar to what I want, but it uses Jersey SSE on the server. I would like to stick with Spring.
This is what I currently have using pseudo code.
@RequestMapping(value = BASE_PATH + "/balances", method = RequestMethod.POST, consumes = MediaType.APPLICATION_JSON_VALUE, produces = MediaType.APPLICATION_JSON_VALUE)
public GetAccountBalancesResponse getAccountBalances(@RequestBody GetAccountBalancesRequest request) {
    GetAccountBalancesResponse ret = new GetAccountBalancesResponse();
    ret.setBalances(synchronousService.getBalances(request.getIds()));
    return ret;
}
This is what I am looking to do. It is rather rough since I have no clue about the details. Once I figure out the sending part I will work on the asynchronous part, but I would take any suggestions.
@RequestMapping(value = BASE_PATH + "/balances", method = RequestMethod.POST, consumes = MediaType.APPLICATION_JSON_VALUE, produces = MediaType.APPLICATION_JSON_VALUE)
public ???<BalanceDetails> getAccountBalances(@RequestBody GetAccountBalancesRequest request) {
    final ???<BalanceDetails> ret = new ???<>();
    new Thread(new Runnable() {
        public void run() {
            List<Future<BalanceDetails>> futures = asynchronousService.getBalances(request.getIds());
            while (!stillWaiting(futures)) {
                // Probably use something like a Condition to block until there are some details.
                ret.send(getFinishedDetails(futures));
            }
            ret.close();
        }
    }).start();
    return ret;
}
Thanks, Wes.
It doesn't work like this: you are using plain Spring MVC controllers, which are meant to be processed on a single thread that possibly blocks until the full response is computed. You don't create threads inside controllers, or at least not in that way.
If the computation really lasts long and you want to give your users visual feedback, these are the steps:
optimize the procedure :) use indices, caching, whatever
if that still isn't enough, the computation still lasts forever and users demand feedback, you have two options:
poll with JavaScript and show visual feedback (easier). Basically you submit the task to a thread pool and return immediately, and there is another controller method that reads the current state of the computation and returns it to the user. This method is called by JavaScript every 10 seconds or so (see the sketch after this list).
use a backchannel (server push, WebSocket) - not so easy, because you have to implement both the client and the server part. Obviously there are libraries and protocols that reduce this to a handful of lines of code, but if you have never tried it before you'll spend some time understanding the setup - plus debugging WebSockets is not as easy as regular HTTP because of the tooling.
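Since the polling option is only described in prose above, here is a rough, hypothetical sketch of what it could look like; the controller class, the job map, the balanceService.loadBalances(...) call and the paths are all made up for illustration, and balanceService is assumed to be an injected service:

@RestController
public class BalanceJobController {

    private final ExecutorService executor = Executors.newFixedThreadPool(4);
    private final Map<String, Future<List<BalanceDetails>>> jobs = new ConcurrentHashMap<>();

    // submit the slow computation and return a job id immediately
    @RequestMapping(value = "/balances/jobs", method = RequestMethod.POST)
    public String submit(@RequestBody GetAccountBalancesRequest request) {
        String jobId = UUID.randomUUID().toString();
        jobs.put(jobId, executor.submit(() -> balanceService.loadBalances(request.getIds())));
        return jobId;
    }

    // the client polls this endpoint every few seconds
    @RequestMapping(value = "/balances/jobs/{jobId}", method = RequestMethod.GET)
    public ResponseEntity<List<BalanceDetails>> poll(@PathVariable String jobId) throws Exception {
        Future<List<BalanceDetails>> job = jobs.get(jobId);
        if (job == null) {
            return ResponseEntity.notFound().build();
        }
        if (!job.isDone()) {
            return ResponseEntity.accepted().build(); // 202: still computing
        }
        return ResponseEntity.ok(job.get());
    }
}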
It took a lot of digging, but it looks like Spring Web 4.2 does support server-sent events. I was using Spring Boot 1.2.7, which uses Spring Web 4.1.7. Switching to Spring Boot 1.3.0.RC1 brings in the SseEmitter.
Here is my pseudo code.
@RequestMapping(value = BASE_PATH + "/getAccountBalances", method = RequestMethod.GET)
public SseEmitter getAccountBalances(@RequestParam("accountId") Integer[] accountIds) {
    final SseEmitter emitter = new SseEmitter();
    new Thread(new Runnable() {
        @Override
        public void run() {
            try {
                for (int xx = 0; xx < accountIds.length; xx++) {
                    Thread.sleep(2000L + rand.nextInt(2000));
                    BalanceDetails balance = new BalanceDetails();
                    ...
                    emitter.send(SseEmitter.event().name("accountBalance").id(String.valueOf(accountIds[xx]))
                            .data(balance, MediaType.APPLICATION_JSON));
                }
                emitter.send(SseEmitter.event().name("complete").data("complete"));
                emitter.complete();
            } catch (Exception ee) {
                ee.printStackTrace();
                emitter.completeWithError(ee);
            }
        }
    }).start();
    return emitter;
}
I am still working out how to close the channel gracefully and how to parse the JSON objects using Jersey EventSource, but it is already a lot better than a message bus.
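For anyone following along, here is a rough sketch of the client side with Jersey's SSE module (jersey-media-sse, Jersey 2.x); the class names come from that module and the event names from the emitter code above, but the URL is made up and the whole thing should be treated as an untested outline:

Client client = ClientBuilder.newBuilder().register(SseFeature.class).build();
WebTarget target = client.target("http://localhost:8080/api/getAccountBalances?accountId=1&accountId=2");
EventSource eventSource = EventSource.target(target).build();
eventSource.register(event -> {
    if ("accountBalance".equals(event.getName())) {
        String json = event.readData(); // parse with Jackson/Gson into BalanceDetails
        // handle one balance as it arrives
    } else if ("complete".equals(event.getName())) {
        // server signalled completion; close the EventSource from outside the listener
    }
});
eventSource.open();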
Also, spawning a new thread and using a sleep are just for the POC. I wouldn't need either, since we already have an asynchronous process for accessing the slow back-end system.
Wes.