Multiple Private WebSocket Messages with Spring - java

I am using Spring @RequestMapping for synchronous REST services consuming and producing JSON. I now want to add asynchronous responses where the client sends a list of ids and the server sends back the details as it gets them, to only that one client.
I have been searching for a while and have not found what I am looking for. I have seen two different approaches for Spring. The most common is a message broker approach where it appears that everybody gets every message by subscribing to a queue or topic. This is VERY unacceptable since this is private data. I also have a finite number of data points to return. The other approach is a Callable, AsyncResult or DeferredResult. This appears to keep the data private, but I want to send multiple responses.
I have seen something similar to what I want, but it uses Jersey SSE on the server. I would like to stick with Spring.
This is what I currently have using pseudo code.
@RequestMapping(value = BASE_PATH + "/balances", method = RequestMethod.POST, consumes = MediaType.APPLICATION_JSON_VALUE, produces = MediaType.APPLICATION_JSON_VALUE)
public GetAccountBalancesResponse getAccountBalances(@RequestBody GetAccountBalancesRequest request) {
    GetAccountBalancesResponse ret = new GetAccountBalancesResponse();
    ret.setBalances(synchronousService.getBalances(request.getIds()));
    return ret;
}
This is what I am looking to do. It is rather rough since I have no clue about the details. Once I figure out the sending I will work on the asynchronous part, but I would take any suggestions.
@RequestMapping(value = BASE_PATH + "/balances", method = RequestMethod.POST, consumes = MediaType.APPLICATION_JSON_VALUE, produces = MediaType.APPLICATION_JSON_VALUE)
public ???<BalanceDetails> getAccountBalances(@RequestBody GetAccountBalancesRequest request) {
    final ???<BalanceDetails> ret = new ???<>();
    new Thread(new Runnable() {
        public void run() {
            List<Future<BalanceDetails>> futures = asynchronousService.getBalances(request.getIds());
            while (!stillWaiting(futures)) {
                // Probably use something like a Condition to block until there are some details.
                ret.send(getFinishedDetails(futures));
            }
            ret.close();
        }
    }).start();
    return ret;
}
Thanks, Wes.

It doesn't work like this: you are using plain Spring controller actions, which are intended to be processed in a single thread that possibly blocks until the full request is computed. You don't create threads inside controllers, or at least not in that way.
If the computation lasts really long and you want to give your users a visual feedback, these are the steps:
optimize the procedure :) use indices, caching, whatever
if that still isn't enough, the computation still lasts forever and users demand feedback, you'll have two options:
poll with javascript and show visual feedback (easier). Basically you submit the task to a thread pool and return immediately, and there's another controller method that reads the current state of the computation and returns it to the user. This method is called by javascript every 10 seconds or so.
use a backchannel (server push, websocket) - not so easy, because you have to implement both the client and the server part. Obviously there are libraries and protocols that will make this only a handful of lines of code, but if you have never tried it before you'll spend some time understanding the setup - plus debugging websockets is not as easy as regular HTTP because of the debugging tools.
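The polling option above can be sketched without committing to any particular framework. The class below is a minimal, framework-agnostic illustration of the submit-then-poll pattern; `TaskRegistry`, its method names, and the status strings are all invented for the example. In a real Spring app, `submit` and `status` would be called from two separate controller endpoints, with javascript polling the second one.

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch of the submit-then-poll pattern: one endpoint submits the slow task
// and returns a token immediately; a second endpoint reports its state.
class TaskRegistry {
    private final ExecutorService pool = Executors.newFixedThreadPool(4);
    private final Map<String, Future<String>> tasks = new ConcurrentHashMap<>();

    // Called by the "start" controller method: submit and return a token at once.
    String submit(Callable<String> slowTask) {
        String token = UUID.randomUUID().toString();
        tasks.put(token, pool.submit(slowTask));
        return token;
    }

    // Called by the "status" controller method that javascript polls.
    String status(String token) throws Exception {
        Future<String> f = tasks.get(token);
        if (f == null) return "UNKNOWN";
        return f.isDone() ? "DONE: " + f.get() : "RUNNING";
    }

    void shutdown() { pool.shutdown(); }
}
```

The key point is that the HTTP thread never blocks on the computation: it only reads the current state of the `Future`.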

It took a lot of digging, but it looks like Spring Web 4.2 does support server-sent events. I was using Spring Boot 1.2.7, which uses Spring Web 4.1.7. Switching to Spring Boot 1.3.0.RC1 adds the SseEmitter.
Here is my pseudo code.
@RequestMapping(value = BASE_PATH + "/getAccountBalances", method = RequestMethod.GET)
public SseEmitter getAccountBalances(@RequestParam("accountId") Integer[] accountIds) {
    final SseEmitter emitter = new SseEmitter();
    new Thread(new Runnable() {
        @Override
        public void run() {
            try {
                for (int xx = 0; xx < accountIds.length; xx++) {
                    Thread.sleep(2000L + rand.nextInt(2000)); // simulate a slow back end
                    BalanceDetails balance = new BalanceDetails();
                    ...
                    emitter.send(SseEmitter.event().name("accountBalance").id(String.valueOf(accountIds[xx]))
                            .data(balance, MediaType.APPLICATION_JSON));
                }
                emitter.send(SseEmitter.event().name("complete").data("complete"));
                emitter.complete();
            } catch (Exception ee) {
                ee.printStackTrace();
                emitter.completeWithError(ee);
            }
        }
    }).start();
    return emitter;
}
Still working out closing the channel gracefully and parsing the JSON object using Jersey EventSource, but it is a lot better than a message bus.
Also spawning a new thread and using a sleep are just for the POC. I wouldn't need either since we already have an asynchronous process to access a slow back end system.
Wes.


How to limit concurrent http requests with Mono & Flux

I want to handle a Flux to limit concurrent HTTP requests made from a List of Monos.
When some requests are done (responses received), the service sends further requests, so that the total count of in-flight requests stays at 15.
A single request returns a list and triggers another request depending on the result.
At this point, I want to send requests with limited concurrency.
On the consumer side, too many HTTP requests put the opposite server in trouble.
I used flatMapMany like below.
public Flux<JsonNode> syncData() {
    return service1
            .getData(param1)
            .flatMapMany(res -> {
                List<Mono<JsonNode>> totalTask = new ArrayList<>();
                Map<String, Object> originData = service2.getDataFromDB(param2);
                res.withArray("data").forEach(row -> {
                    String id = row.get("id").asText();
                    if (originData.containsKey(id)) {
                        totalTask.add(service1.updateRequest(param3));
                    } else {
                        totalTask.add(service1.deleteRequest(param4));
                    }
                    originData.remove(id);
                });
                // create requests for whatever is left in originData
                originData.forEach((id, value) -> totalTask.add(service1.createRequest(param5)));
                return Flux.merge(totalTask);
            });
}

void syncData() {
    syncDataService.syncData().????;
}
I tried chaining .window(15), but it doesn't work. All the requests are sent simultaneously.
How can I handle Flux for my goal?
I am afraid Project Reactor doesn't provide any implementation of either rate or time limits.
However, you can find a bunch of 3rd-party libraries that provide such functionality and are compatible with Project Reactor. As far as I know, resilience4j-reactor supports that and is also compatible with the Spring and Spring Boot frameworks.
The RateLimiterOperator checks if a downstream subscriber/observer can acquire a permission to subscribe to an upstream Publisher. If the rate limit would be exceeded, the RateLimiterOperator could either delay requesting data from the upstream or it can emit a RequestNotPermitted error to the downstream subscriber.
RateLimiter rateLimiter = RateLimiter.ofDefaults("name");
Mono.fromCallable(backendService::doSomething)
    .transformDeferred(RateLimiterOperator.of(rateLimiter));
More about RateLimiter module itself here: https://resilience4j.readme.io/docs/ratelimiter
You can use limitRate on a Flux. You probably need to reformat your code a bit, but see the docs here: https://projectreactor.io/docs/core/release/api/reactor/core/publisher/Flux.html#limitRate-int-
flatMap takes a concurrency parameter: https://projectreactor.io/docs/core/release/api/reactor/core/publisher/Flux.html#flatMap-java.util.function.Function-int-
Mono<User> getById(int userId) { ... }
Flux.just(1, 2, 3, 4).flatMap(client::getById, 2)
will limit the number of concurrent requests to 2.
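For intuition about what that concurrency argument enforces, here is a plain-Java sketch of the same idea: cap the number of tasks running at once with a bounded pool. `BoundedRunner` and everything in it is invented for the demo; Reactor implements the cap differently (by limiting how many inner publishers `flatMap` subscribes to at a time), but the observable effect is the same.

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.function.Function;
import java.util.stream.Collectors;

class BoundedRunner {
    // Run `task` over every input, but never more than `limit` at the same time.
    static <T, R> List<R> runAll(List<T> inputs, Function<T, R> task, int limit) {
        ExecutorService pool = Executors.newFixedThreadPool(limit); // the concurrency cap
        try {
            List<Future<R>> futures = inputs.stream()
                    .map(in -> pool.submit((Callable<R>) () -> task.apply(in)))
                    .collect(Collectors.toList());
            return futures.stream().map(f -> {
                try {
                    return f.get(); // wait for each result, preserving input order
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            }).collect(Collectors.toList());
        } finally {
            pool.shutdown();
        }
    }
}
```

With `limit = 15` this mirrors the goal in the question: 30 queued requests, at most 15 in flight at any moment, the rest start as slots free up.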

Is there any way to wait for a JMS message to be dequeued in a unit test?

I'm writing a Spring Boot based project where I have some synchronous (e.g. REST API calls) and asynchronous (JMS) pieces of code (the broker I use is a dockerized instance of ActiveMQ, in case there's some kind of trick/workaround).
One of the problems I'm currently struggling with is: my application receives a REST API call (I'll call it "a sync call"), does some processing, and then sends a JMS message to a queue (async), which is then handled and processed (let's say I have a heavy load to perform, so that's why I want it to be async).
Everything works fine when running the application; async messages are enqueued and dequeued as expected.
When I'm writing tests (and I'm testing the whole service, which includes the sync and async calls in rapid succession), it happens that the test code is too fast and the message is still waiting to be dequeued (we are talking about milliseconds, but that's the problem).
Basically, as soon as I receive the response from the API call, the message is still in the queue, so if, for example, I make a query to check for its existence -> ka-boom, the test fails because (obviously) it doesn't find the object (which probably meanwhile is being processed and created).
Is there any way, or any pattern, I can use to make my test wait for that async message to be dequeued? I can attach code to my implementation if needed; it's a bachelor's degree thesis project.
One obvious solution I'm temporarily using is adding a hundred-millisecond sleep between the method call and the assert section (hoping everything is done and persisted), but honestly I kind of dislike this solution because it seems so non-deterministic to me. Also, creating a latch shared between production code and test code doesn't sound really good to me.
Here's the code I use as an entry-point to al the mess I explained before:
public TransferResponseDTO transfer(Long userId, TransferRequestDTO transferRequestDTO) {
    //Preconditions.checkArgument(transferRequestDTO.amount.compareTo(BigDecimal.ZERO) < 0);
    Preconditions.checkArgument(userHelper.existsById(userId));
    Preconditions.checkArgument(walletHelper.existsByUserIdAndSymbol(userId, transferRequestDTO.symbol));
    TransferMessage message = new TransferMessage();
    message.userId = userId;
    message.symbol = transferRequestDTO.symbol;
    message.destination = transferRequestDTO.destination;
    message.amount = transferRequestDTO.amount;
    messageService.send(message);
    TransferResponseDTO response = new TransferResponseDTO();
    response.status = PENDING;
    return response;
}
And here's the code that receives the message (although you wouldn't need it):
public void handle(TransferMessage transferMessage) {
    Wallet source = walletHelper.findByUserIdAndSymbol(transferMessage.userId, transferMessage.symbol);
    Wallet destination = walletHelper.findById(transferMessage.destination);
    try {
        walletHelper.withdraw(source, transferMessage.amount);
    } catch (InsufficientBalanceException ex) {
        String u = userHelper.findEmailByUserId(transferMessage.userId);
        EmailMessage email = new EmailMessage();
        email.subject = "Insufficient Balance in your account";
        email.to = u;
        email.text = "Your transfer of " + transferMessage.amount + " " + transferMessage.symbol + " has been DECLINED due to insufficient balance.";
        messageService.send(email);
        return; // without this, the deposit below would still run after a failed withdrawal
    }
    walletHelper.deposit(destination, transferMessage.amount);
    String u = userHelper.findEmailByUserId(transferMessage.userId);
    EmailMessage email = new EmailMessage();
    email.subject = "Transfer executed";
    email.to = u;
    email.text = "Your transfer of " + transferMessage.amount + " " + transferMessage.symbol + " has been ACCEPTED.";
    messageService.send(email);
}
I'm sorry if the code looks a little sketchy or wrong; it's a primordial implementation.
I'm willing to write a utility to share with you all if that's the case, but, as you've probably noticed, I'm low on ideas right now.
I'm an ActiveMQ developer working mainly on ActiveMQ Artemis (the next-gen broker from ActiveMQ). We run into this kind of problem all the time in our test-suite given the asynchronous nature of the broker, and we developed a little utility class that automates & simplifies basic polling operations.
For example, starting a broker is asynchronous so it's common for our tests to include an assertion to ensure the broker is started before proceeding. Using old-school Java 6 syntax it would look something like this:
Wait.assertTrue(new Condition() {
    @Override
    public boolean isSatisfied() throws Exception {
        return server.isActive();
    }
});
Using a Java 8 lambda would look like this:
Wait.assertTrue(() -> server.isActive());
Or using a Java 8 method reference:
Wait.assertTrue(server::isActive);
The utility is quite flexible as the Condition you use can test anything you want as long as it ultimately returns a boolean. Furthermore, it is deterministic unlike using Thread.sleep() (as you noted) and it keeps testing code separate from the "product" code.
In your case you can check to see if the "object" being created by your JMS process can be found. If it's not found then it can keep checking until either the object is found or the timeout elapses.
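The real Wait class lives in the ActiveMQ Artemis test utilities; the idea is small enough that a sketch shows the whole mechanism. The version below is an illustrative reimplementation, not the actual Artemis code, and the poll interval is arbitrary:

```java
import java.util.function.BooleanSupplier;

// Illustrative polling assertion helper: repeatedly evaluate a condition
// until it becomes true or a timeout elapses, then fail the test.
class Wait {
    static void assertTrue(BooleanSupplier condition, long timeoutMillis)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (!condition.getAsBoolean()) {
            if (System.currentTimeMillis() > deadline) {
                throw new AssertionError("condition not met within " + timeoutMillis + " ms");
            }
            Thread.sleep(25); // poll interval, arbitrary
        }
    }
}
```

If you'd rather not roll your own, the Awaitility library provides the same pattern off the shelf (e.g. `await().atMost(...).until(...)`) and is commonly used in Spring Boot tests for exactly this kind of async assertion.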

Synchronize request response in Java

I have some services that produce their information at different times, but I would like several of these services to deliver all their answers at the same time, despite being different services.
I am using Java 11 with Spring Boot and a REST API. I will leave below an example of one of the services.
@GetMapping(value = "/caract/opcoes/acoes/disponiveis")
public ResponseEntity<List<GroupByData>> getDatasCaractOpcoesAcoesDisponiveis() {
    List<GroupByData> result = caractOpcoesAcoesServico.findGroupByIdentityRptDt();
    if (result.isEmpty()) {
        return new ResponseEntity<List<GroupByData>>(HttpStatus.NO_CONTENT);
    }
    return new ResponseEntity<List<GroupByData>>(result, HttpStatus.OK);
}
How can I release the responses of several requests that arrive at different times only after everyone is ready?
There are 2 ways to do this. First I will explain the nice way, which is reactive with WebFlux.
@GetMapping(value = "/anothertest")
public Mono<String> rest() {
    log.info("request number " + reqCounter++);
    CompletableFuture<String> stringCompletableFuture = sendRequestWithJavaHttpClient().thenApply(x -> "test: " + x);
    Duration between = Duration.between(
            LocalTime.now(),
            LocalTime.parse("14:01:00") // I am assuming there is a time we send data back
    );
    return Mono.first(Mono.delay(between)).then(Mono.fromFuture(stringCompletableFuture));
}

private CompletableFuture<String> sendRequestWithJavaHttpClient() {
    return CompletableFuture.supplyAsync(() -> {
        // do some logic here
        return "hello world.";
    });
}
As we tell the first Mono to delay the response, it will wait until the time comes and then make the function call. This is the nice way, because there is no blocking on the response; all clients simply wait. You will need Spring WebFlux for this.
The second, less elegant way is to block the thread. This one uses Spring MVC.
@GetMapping(value = "/caract/opcoes/acoes/disponiveis*")
public ResponseEntity<Object> getDatasCaractOpcoesAcoesDisponiveis() throws Exception {
    log.info("request number " + reqCounter++);
    Duration between = Duration.between(
            LocalTime.now(),
            LocalTime.parse("14:10:00")
    );
    log.info("will sleep " + between.toMillis());
    Thread.sleep(between.toMillis());
    return new ResponseEntity<Object>("hello world", HttpStatus.OK);
}
This will block the server thread until the time comes. The problem with this is Tomcat's thread count: the default value is 200, so your application can serve at most 200 concurrent requests, and beyond that Tomcat cannot accept any more connections. You can increase it by setting server.tomcat.max-threads=500 in application.properties.
you can find running example code here https://github.com/ozkanpakdil/spring-examples/tree/master/response-wait-methods
If you ask my opinion, neither of these two ways is nice from a design point of view, because clients should not wait. I would go with responding NOK until the time comes, and once the time is right, responding with the real result. This way the client side can request as often as it wants without load, and the server side won't have any load because nothing is blocked.
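That "respond NOK until the time comes" idea can be sketched as a tiny time gate. `TimeGatedResponder` and its strings are invented for the illustration; in a real controller the "NOK" branch would be an HTTP response such as 204 or 503 with a retry hint, and the handler never blocks a thread:

```java
import java.time.LocalTime;

// Sketch of a non-blocking, time-gated reply: before the release time the
// caller gets a retry signal; after it, the real payload.
class TimeGatedResponder {
    private final LocalTime releaseTime;

    TimeGatedResponder(LocalTime releaseTime) {
        this.releaseTime = releaseTime;
    }

    String respond(String realResult) {
        if (LocalTime.now().isBefore(releaseTime)) {
            return "NOK"; // client should retry later; no server thread is held
        }
        return realResult;
    }
}
```

Each request is answered immediately, so Tomcat's thread pool is never tied up waiting for the release time.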

How to make a Thrift client for multiple threads?

I have a working Thrift client in the below snippet.
TTransport transport = new THttpClient(new Uri("http://localhost:8080/api/"));
TProtocol protocol = new TBinaryProtocol(transport);
TMultiplexedProtocol mp = new TMultiplexedProtocol(protocol, "UserService");
UserService.Client userServiceClient = new UserService.Client(mp);
System.out.println(userServiceClient.getUserById(100));
When running the client within multi-threaded environment
threads[i] = new Thread(new Runnable() {
    @Override
    public void run() {
        System.out.println(userServiceClient.getUserById(someId));
    }
});
I got an exception: out of sequence response
org.apache.thrift.TApplicationException: getUserById failed: out of sequence response
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:76)
I guess the reason is that the Thrift-generated Client is not thread-safe.
But if I want multiple clients to call the same method getUserById() simultaneously, how can I make it work?
Thrift clients are not designed to be shared across threads. If you need multiple client threads, set up one Thrift client per thread.
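One common way to get "one client per thread" without changing the calling code is a ThreadLocal that lazily builds a client for each thread. The sketch below uses a stand-in `FakeClient` because the real `UserService.Client` is Thrift-generated; in practice the `withInitial` supplier would build the THttpClient/TBinaryProtocol/TMultiplexedProtocol stack from the question. All class names here are invented for the demo:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Stand-in for the Thrift-generated UserService.Client.
class FakeClient {
    String getUserById(int id) {
        return "user-" + id;
    }
}

class PerThreadClient {
    private static final AtomicInteger created = new AtomicInteger();

    // Each thread that calls get() lazily builds, then reuses, its own client.
    private static final ThreadLocal<FakeClient> CLIENT = ThreadLocal.withInitial(() -> {
        created.incrementAndGet(); // count constructions for the demo
        return new FakeClient();   // real code: open transport + protocol here
    });

    static FakeClient get() { return CLIENT.get(); }

    static int clientsCreated() { return created.get(); }
}
```

Because every thread owns its own client (and thus its own connection and sequence numbers), the "out of sequence response" error cannot occur.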
But if I want multiple clients to call the same method getUserById() simultaneously, how can I make it work?
We don't know much about the context, so I have to guess a bit. If the issue is that there are a lot of such calls coming in at a time, a possible solution could be to group calls to save roundtrip time:
service wtf {
    list<string> getUsersById(1: list<int> userIds)
}
That's just a short idea. Maybe you want to return list<user_data_struct> instead. For practical reasons I would also recommend wrapping the returned list in a struct, so the whole thing becomes extensible.

Play Framework SSE Closing Chunked Response

I'm trying to implement a Server-Sent Events server in Play Framework 1.2.5.
How can I know if the client called EventSource.close() (or closed its browser window, for example)? This is a simplified piece of server code I'm using:
public class SSE extends Controller {
    public static void updater() {
        response.contentType = "text/event-stream";
        response.encoding = "UTF-8";
        response.status = 200;
        response.chunked = true;
        while (true) {
            Promise<String> promise = Producer.getNextMessage();
            String msg = await(promise);
            response.writeChunk("data: " + msg + "\n\n");
        }
    }
}
Producer should deal with queuing, Promise objects, and produce the output, but I should know when to stop it (filling its queue). I would expect some exception thrown by response.writeChunk() if the output stream is closed, but there isn't any.
There's a similar example that does not deal with SSE, but only chunks instead, at http://www.playframework.com/documentation/1.2.5/asynchronous#HTTPresponsestreaming
Since play.mvc.Controller doesn't let me know if the output stream is closed during the execution, I solved the problem through the Producer itself:
In Producer.getNextMessage(), the current time is remembered.
In Producer.putMessage(String), the time since the last 'get' is checked. If it's greater than some threshold, we can consider that the SSE channel is closed.
There's also this class play.libs.F.EventStream which can be useful within the Producer.
Plus, Producer might not be the right name here, since it's more of a dispatching queue...
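The liveness heuristic described above can be sketched as a small dispatching queue that stamps each poll and refuses new messages once the stamp goes stale. The class name, the threshold, and the boolean return of putMessage are all illustrative (the real Producer hands out Promises; this sketch simplifies to synchronous polling):

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

class DispatchQueue {
    private final long thresholdMillis;
    private volatile long lastGetMillis = System.currentTimeMillis();
    private final Queue<String> queue = new ConcurrentLinkedQueue<>();

    DispatchQueue(long thresholdMillis) {
        this.thresholdMillis = thresholdMillis;
    }

    // Called by the SSE controller loop; stamps the poll time.
    String getNextMessage() {
        lastGetMillis = System.currentTimeMillis();
        return queue.poll(); // null when nothing is pending
    }

    // Returns false once the consumer has gone quiet, so the producer can stop.
    boolean putMessage(String msg) {
        if (System.currentTimeMillis() - lastGetMillis > thresholdMillis) {
            return false; // no recent poll: assume the SSE channel is closed
        }
        queue.offer(msg);
        return true;
    }
}
```

The trade-off is that disconnect detection is delayed by up to one threshold interval, which is usually acceptable for an SSE heartbeat.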
