Get JAX-RS AsyncResponse, but suspend later - java

Consider the following code to listen for an update with long-polling:
Map<String, List<AsyncResponse>> tagMap = new ConcurrentGoodStuff();

// This endpoint listens for notifications of the tag
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
@GET
@Path("listen/{tag}")
public void listenForUpdates(
        @PathParam("tag") final String tag,
        @Suspended final AsyncResponse response) {
    tagMap.get(tag).add(response);
}

// This endpoint is for push-style notifications
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
@PUT
@Path("update/{tag}/{value}")
public Response updateTag(
        @PathParam("tag") final String tag,
        @PathParam("value") final String value) {
    for (AsyncResponse response : tagMap.get(tag)) {
        // Resumes all previously suspended responses
        response.resume(value);
    }
    return Response.ok("cool whatever").build();
}
The client adds a listener with the normal Jersey client's AsyncInvoker, calls the asynchronous task, and then another task calls the update method.
When I'm testing this, I run into a race condition. Right after I add the listener asynchronously on listenForUpdates(), I make an update on the endpoint with updateTag() synchronously. But the update gets run before the listener is added, and the asynchronous response fails to resume.
A solution to this is to call the suspend() method on the response after adding it to the listeners. But it's not clear how to do that, given that @Suspended provides an already-suspended AsyncResponse object. What should I do so that the async response is suspended only after it has been added to the listeners? Will that actually call the suspend method? How can I get this to work with the Jersey async client, or should I use a different long-polling client?
For solutions, I'm open to different libraries, like Atmosphere or Guava. I am not open to adding a Thread.sleep() in my test, since that is an intermittent failure waiting to happen.
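For context, the client-side sequence that exposes the race looks roughly like this. This is only a minimal sketch using the standard JAX-RS client API (imports omitted, as in the snippets above); the base URL, tag, and callback bodies are assumptions rather than my actual test code:

Client client = ClientBuilder.newClient();
WebTarget target = client.target("http://localhost:8080/api"); // assumed base URL

// 1. Register the listener asynchronously; the server suspends this response.
Future<String> pending = target.path("listen/mytag")
        .request(MediaType.APPLICATION_JSON)
        .async()
        .get(new InvocationCallback<String>() {
            @Override
            public void completed(String value) {
                System.out.println("Received update: " + value);
            }

            @Override
            public void failed(Throwable throwable) {
                throwable.printStackTrace();
            }
        });

// 2. Fire the update synchronously. If this request is handled before the
//    listener has been added to tagMap on the server, the suspended
//    response above is never resumed.
Response update = target.path("update/mytag/42")
        .request(MediaType.APPLICATION_JSON)
        .put(Entity.json(""));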

I ended up using RxJava, but not before coming up with a just-as-good solution using BlockingQueue instead of List in the Map. It goes something like this:
ConcurrentMap<String, BlockingQueue<AsyncResponse>> tagMap = new ConcurrentGoodStuff();

// This endpoint initializes the listener queue for the tag
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
@GET
@Path("initListen/{tag}")
public void listenForUpdates(
        @PathParam("tag") final String tag) {
    tagMap.putIfAbsent(tag, new LinkedBlockingQueue<>());
}

// This endpoint listens for notifications of the tag
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
@GET
@Path("listen/{tag}")
public void listenForUpdates(
        @PathParam("tag") final String tag,
        @Suspended final AsyncResponse response) {
    BlockingQueue<AsyncResponse> responses = tagMap.get(tag);
    if (responses != null) {
        responses.add(response);
    }
}

// This endpoint is for push-style notifications
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
@PUT
@Path("update/{tag}/{value}")
public Response updateTag(
        @PathParam("tag") final String tag,
        @PathParam("value") final String value) {
    BlockingQueue<AsyncResponse> responses = tagMap.get(tag);
    if (responses == null) {
        return Response.noContent().build();
    }
    if (responses.isEmpty()) {
        // Block-wait for an async listener
        try {
            AsyncResponse response = responses.poll(15, TimeUnit.SECONDS);
            if (response == null) {
                return Response.noContent().build();
            }
            response.resume(value);
        } catch (InterruptedException e) {
            return Response.noContent().build();
        }
    } else {
        for (AsyncResponse response : responses) {
            // Resumes all previously suspended responses
            response.resume(value);
        }
    }
    return Response.ok("cool whatever").build();
}
I haven't tested this exact code, but I used some version of it in the past. As long as you call the initListen endpoint synchronously first, you can call the asynchronous listen endpoint and then the synchronous update endpoint, and there won't be any significant race condition.
There is still a slight hint of a race condition in the update endpoint, but it's minor: the responses queue could be emptied while it is being iterated, or it may be updated by multiple sources at once. To alleviate this, I've used the drainTo(Collection) method to copy the listeners into a per-request data structure, as sketched below. This still does not cover the use case where multiple clients try to update the same tag of listeners concurrently, but I do not need that use case.
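A minimal sketch of that drainTo variant for the body of the update endpoint, reusing the names from the code above (untested, like the rest of this answer):

BlockingQueue<AsyncResponse> responses = tagMap.get(tag);
if (responses == null) {
    return Response.noContent().build();
}
// Copy the currently registered listeners into a per-request list so the
// iteration is unaffected by listeners added or removed concurrently.
List<AsyncResponse> current = new ArrayList<>();
responses.drainTo(current);
for (AsyncResponse response : current) {
    response.resume(value);
}
return Response.ok("cool whatever").build();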

Related

Multiple blocking calls wrapped in fromCallable in WebFlux

I'm using a Feign client in reactive Java. The Feign client has an interceptor that sends a blocking request to get an auth token and adds it as a header to the Feign request.
The Feign request is wrapped in Mono.fromCallable with Schedulers.boundedElastic().
My question is: is the inner call to get the auth token considered a blocking call?
I get that both calls will be on a thread from Schedulers.boundedElastic(), but I'm not sure whether it's OK to execute them on the same thread or whether I should change it so they run on different threads.
Feign client:
@FeignClient(name = "remoteRestClient", url = "${remote.url}",
        configuration = AuthConfiguration.class, decode404 = true)
@Profile({ "!test" })
public interface RemoteRestClient {

    @GetMapping(value = "/getSomeData")
    Data getData();
}
interceptor:
public class ClientRequestInterceptor implements RequestInterceptor {

    private IAPRequestBuilder iapRequestBuilder;
    private String clientName;

    public ClientRequestInterceptor(String clientName, String serviceAccount, String jwtClientId) {
        this.iapRequestBuilder = new IAPRequestBuilder(serviceAccount, jwtClientId);
        this.clientName = clientName;
    }

    @Override
    public void apply(RequestTemplate template) {
        try {
            HttpRequest httpRequest = iapRequestBuilder.buildIapRequest(); // <---- blocking call
            template.header(HttpHeaders.AUTHORIZATION, httpRequest.getHeaders().getAuthorization());
        } catch (IOException e) {
            log.error("Building an IAP request has failed: {}", e.getMessage(), e);
            throw new InterceptorException(String.format("failed to build IAP request for %s", clientName), e);
        }
    }
}
feign configuration:
public class AuthConfiguration {

    @Value("${serviceAccount}")
    private String serviceAccount;

    @Value("${jwtClientId}")
    private String jwtClientId;

    @Bean
    public ClientRequestInterceptor getClientRequestInterceptor() {
        return new ClientRequestInterceptor("Entitlement", serviceAccount, jwtClientId);
    }
}
and feign client call:
private Mono<Data> getData() {
    return Mono.fromCallable(() -> remoteRestClient.getData())
            .publishOn(Schedulers.boundedElastic());
}
You can sort of tell that it is a blocking call, since it returns a concrete class and not a Future (Mono or Flux). To be able to return a concrete class, the thread needs to wait for the response before it can return it.
So yes, it is most likely a blocking call.
Reactor recommends that you use the subscribeOn operator when doing blocking calls; this will place the entire chain of operators on its own thread pool.
You have chosen to use publishOn, and it is worth pointing out the following from the docs:
affects where the subsequent operators execute
In practice, this means that up until the publishOn operator, all actions will be executed on any available anonymous thread,
but all calls after it will be executed on the defined thread pool.
private Mono<Data> getData() {
    return Mono.fromCallable(() -> remoteRestClient.getData())
            .publishOn(Schedulers.boundedElastic());
}
You have chosen to place it after the fromCallable, so the thread-pool switch happens after the call to getData.
publishOn's placement in the chain matters, while subscribeOn affects the entire chain of operators, which means its placement does not matter.
So to answer your question again: yes, it is most likely a blocking call (I can't confirm it 100% since I have not looked into the source code), and whether you solve it with publishOn or subscribeOn is up to you.
Or look into whether there is a reactive alternative library you could use.
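For comparison, here is a minimal sketch of the subscribeOn variant recommended above, assuming remoteRestClient is the injected Feign client from the question:

private Mono<Data> getData() {
    // subscribeOn affects the whole chain, so both the callable and the
    // interceptor's blocking token request run on the boundedElastic pool.
    return Mono.fromCallable(() -> remoteRestClient.getData())
            .subscribeOn(Schedulers.boundedElastic());
}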

multiple volley requests at same time

I need to send 4 HTTP requests at the same time and wait until all of them have finished (I'm using Volley).
I've tried sending them separately on 4 threads and using Thread.join(), but it seems that the onResponse and onError methods run on the main thread, so the request threads finish right after calling queue.add(jsonArrayRequest).
I can't use CountDownLatch because, as far as I know, first it doesn't run threads at the same time (it runs them in a queue), and second it blocks the main thread.
What's your suggestion? Let me know if there's a better way to do this using Retrofit, OkHttp, or other libraries.
To achieve this without any extra patterns or libraries, you can mark each request as finished when it responds and, in each response listener, call the method you want to execute once all the requests are finished. In that method, you just check whether all the requests are done.
Example:
isRequest1Finished = false;
isRequest2Finished = false;
response1 = null;
response2 = null;

volleyRequest1(new Response.Listener<Something>() {
    @Override
    public void onResponse(Something response) {
        isRequest1Finished = true;
        response1 = response;
        doSomething();
    }
});

volleyRequest2(new Response.Listener<Something>() {
    @Override
    public void onResponse(Something response) {
        isRequest2Finished = true;
        response2 = response;
        doSomething();
    }
});

// ...do this in all of your requests

and in your doSomething() method:

void doSomething() {
    if (isRequest1Finished && isRequest2Finished) {
        // do something with response1, response2, etc.
    }
}
But my suggestion is to use RxJava, where you can apply the zip operator, which combines all of your asynchronous responses into one result:
Example:
Observable<Something> request1 = getRequest1();
Observable<Something> request2 = getRequest2();

Observable.zip(request1, request2,
        new BiFunction<Something, Something, Pair<Something, Something>>() {
            @Override
            public Pair<Something, Something> apply(Something response1, Something response2) {
                // you can create a custom object to handle all of the responses
                return new Pair<>(response1, response2);
            }
        })
        .map(pair -> /* do something with your responses */ pair);
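To actually run the zipped request you still have to subscribe to it. A minimal sketch, assuming RxJava 2 with RxAndroid for the main-thread scheduler and android.util.Pair for the combined result:

Observable.zip(request1, request2,
        (response1, response2) -> new Pair<>(response1, response2))
        .subscribeOn(Schedulers.io())
        .observeOn(AndroidSchedulers.mainThread())
        .subscribe(
                pair -> {
                    // all responses are available here: pair.first, pair.second
                },
                error -> Log.e("Requests", "One of the requests failed", error));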

How do I get the status code for a response I subscribe to using the JDK's HttpClient?

Java 11 introduced a new standard HTTP client. A request is sent using HttpClient::send, which returns an HttpResponse.
The HttpResponse::statusCode method can be used to find the HTTP status of the response.
HttpClient::send also takes a BodyHandler, which is used to handle the body of the response. A useful family of BodyHandlers are those which wrap a Flow.Subscriber, created with BodyHandlers::fromSubscriber and relatives. These are a useful way of dealing with infinite streams of data, such as server-sent events.
However, it seems that if you use one of these BodyHandlers, the flow is delivered on the thread which called HttpClient::send, and so for an infinite stream, that method never returns. Since it never returns, you never get an HttpResponse with which you can determine the status.
So, how do I get the status code for a response I subscribe to?
As noted in the documentation, these BodyHandlers
do not examine the status code, meaning the body is always accepted
with a hint that
a custom handler can be used to examine the status code and headers, and return a different body subscriber, of the same type, as appropriate
There does not seem to be a convenience method or class for this, but such a thing is moderately straightforward:
// a subscriber which expresses a complete lack of interest in the body
private static class Unsubscriber implements HttpResponse.BodySubscriber<Void> {
    @Override
    public CompletionStage<Void> getBody() {
        return CompletableFuture.completedStage(null);
    }
    @Override
    public void onSubscribe(Flow.Subscription subscription) {
        subscription.cancel();
    }
    @Override
    public void onNext(List<ByteBuffer> item) {}
    @Override
    public void onError(Throwable throwable) {}
    @Override
    public void onComplete() {}
}

// wraps another handler, and only uses it for an expected status
private static HttpResponse.BodyHandler<Void> expectingStatus(int expected, HttpResponse.BodyHandler<Void> handler) {
    return responseInfo -> responseInfo.statusCode() == expected ? handler.apply(responseInfo) : new Unsubscriber();
}

// used like this
Flow.Subscriber<String> subscriber = createSubscriberSomehow();
HttpResponse<Void> response = HttpClient.newHttpClient()
        .send(HttpRequest.newBuilder()
                        .uri(URI.create("http://example.org/api"))
                        .build(),
                expectingStatus(200, HttpResponse.BodyHandlers.fromLineSubscriber(subscriber)));
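With that in place, send() returns promptly for a non-matching status (the Unsubscriber cancels the subscription immediately), and the status code can then be read off the response as usual, for example:

if (response.statusCode() != 200) {
    // The body was discarded by the Unsubscriber; react to the unexpected status here.
    System.err.println("Unexpected status: " + response.statusCode());
}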

Handling on Jersey (JAX-RS) on XMLHttpRequest abort

I'm using Jersey for the application's REST API; consider the method below:

@POST
public String writeSomething() {
    someVeryIntensiveTaskWhichTaking("5 seconds");
    Log.info("get request fulfilled"); // don't want the logging to happen if the user cancelled on the UI
    return "ok";
}

Assume this POST request does some intensive task that might take up to 5 seconds. Then, on the UI, the user decides to cancel the POST request via XMLHttpRequest.abort() at the 2-second mark.
Is there any way to detect this abort and prevent some action from being done? Something like checking IsClientConnected?
Update #1
Thanks to peeskillet's tips, but I'm still unable to get the callback triggered upon the XHR abort. Below is my code:

@POST
public void writeSomething(@Suspended final AsyncResponse asyncResponse) {
    asyncResponse.register(new ConnectionCallback() {
        @Override
        public void onDisconnect(AsyncResponse asyncResponse) {
            System.out.println("This is canceled, do whatever you want"); // this is not triggered after the XHR is aborted
        }
    });
    someVeryIntensiveAsyncTaskWhichTaking("5 seconds", asyncResponse); // this async function will trigger asyncResponse.resume() upon completion
}

Jersey/JAX-RS 2 AsyncResponse - how to keep track of current long-polling callers

My goal is to support long-polling for multiple web service callers, and to keep track of which callers are currently "parked" on a long poll (i.e., connected). By "long polling," I mean that a caller calls a web service and the server (the web service) does not return immediately, but keeps the caller waiting for some preset period of time (an hour in my application), or returns sooner if the server has a message to send to the caller (in which case the server returns the message by calling asyncResponse.resume("MESSAGE")).
I'll break this into two questions.
First question: is this a reasonable way to "park" the callers who are long-polling?
@GET
@Produces(MediaType.TEXT_PLAIN)
@ManagedAsync
@Path("/poll/{id}")
public Response poller(@Suspended final AsyncResponse asyncResponse, @PathParam("id") String callerId) {

    // add this asyncResponse to a HashMap that is persisted across web service calls by Jersey.
    // other application components that may have a message to send to a caller will look up the
    // caller by callerId in this HashMap and call resume() on its asyncResponse.
    callerIdAsyncResponseHashMap.put(callerId, asyncResponse);

    asyncResponse.setTimeout(3600, TimeUnit.SECONDS);
    asyncResponse.setTimeoutHandler(new TimeoutHandler() {
        @Override
        public void handleTimeout(AsyncResponse asyncResponse) {
            asyncResponse.resume(Response.ok("TIMEOUT").build());
        }
    });

    return Response.ok("COMPLETE").build();
}
This works fine. I'm just not sure if it's following best practices. It seems odd to have the "return Response..." line at the end of the method. This line is executed when the caller first connects, but, as I understand it, the "COMPLETE" result is never actually returned to the caller. The caller either gets a "TIMEOUT" response or some other response message sent by the server via asyncResponse.resume() when the server needs to notify the caller of an event.
Second question: my current challenge is to accurately reflect the population of currently-polling callers in the HashMap. When a caller stops polling, I need to remove its entry from the HashMap. A caller can leave for three reasons: 1) the 3600 seconds elapse and so it times out, 2) another application component looks up the caller in the HashMap and calls asyncResponse.resume("MESSAGE"), and 3) the HTTP connection is broken for some reason, such as somebody turning off the computer running the client application.
So, JAX-RS has two callbacks I can register to be notified of connections ending: CompletionCallback (for my end-poll reasons #1 and #2 above), and ConnectionCallback (for my end-poll reason #3 above).
I can add these to my web service method like this:
@GET
@Produces(MediaType.TEXT_PLAIN)
@ManagedAsync
@Path("/poll/{id}")
public Response poller(@Suspended final AsyncResponse asyncResponse, @PathParam("id") String callerId) {

    asyncResponse.register(new CompletionCallback() {
        @Override
        public void onComplete(Throwable throwable) {
            //?
        }
    });

    asyncResponse.register(new ConnectionCallback() {
        @Override
        public void onDisconnect(AsyncResponse disconnected) {
            //?
        }
    });

    // add this asyncResponse to a HashMap that is persisted across web service calls by Jersey.
    // other application components that may have a message to send to a caller will look up the
    // caller by callerId in this HashMap and call resume() on its asyncResponse.
    callerIdAsyncResponseHashMap.put(callerId, asyncResponse);

    asyncResponse.setTimeout(3600, TimeUnit.SECONDS);
    asyncResponse.setTimeoutHandler(new TimeoutHandler() {
        @Override
        public void handleTimeout(AsyncResponse asyncResponse) {
            asyncResponse.resume(Response.ok("TIMEOUT").build());
        }
    });

    return Response.ok("COMPLETE").build();
}
The challenge, as I said, is to use these two callbacks to remove no-longer-polling callers from the HashMap. The ConnectionCallback is actually the easier of the two. Since it receives an asyncResponse instance as a parameter, I can use that to remove the corresponding entry from the HashMap, like this:
asyncResponse.register(new ConnectionCallback() {
    @Override
    public void onDisconnect(AsyncResponse disconnected) {
        Iterator<Map.Entry<String, AsyncResponse>> iterator = callerIdAsyncResponseHashMap.entrySet().iterator();
        while (iterator.hasNext()) {
            Map.Entry<String, AsyncResponse> entry = iterator.next();
            if (entry.getValue().equals(disconnected)) {
                iterator.remove();
                break;
            }
        }
    }
});
For the CompletionCallback, though, since the asyncResponse is already done or cancelled at the time the callback is triggered, no asyncResponse parameter is passed in. As a result, it seems the only solution is to run through the HashMap entries checking for done/cancelled ones and removing them, like the following. (Note that I don't need to know whether a caller left because resume() was called or because it timed out, so I don't look at the "throwable" parameter).
asyncResponse.register(new CompletionCallback() {
    @Override
    public void onComplete(Throwable throwable) {
        Iterator<Map.Entry<String, AsyncResponse>> iterator = callerIdAsyncResponseHashMap.entrySet().iterator();
        while (iterator.hasNext()) {
            Map.Entry<String, AsyncResponse> entry = iterator.next();
            if (entry.getValue().isDone() || entry.getValue().isCancelled()) {
                iterator.remove();
            }
        }
    }
});
Any feedback would be appreciated. Does this approach seem reasonable? Is there a better or more Jersey/JAX-RS way to do it?
Your poller() method does not need to return a Response in order to participate in asynchronous processing. It can return void. If you are doing anything complex in the poller, however, you should consider wrapping the whole method in a try/catch block that resumes your AsyncResponse object with the exception, to ensure that any RuntimeExceptions or other unchecked Throwables are not lost. Logging these exceptions in the catch block also seems like a good idea.
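A minimal sketch of that shape, reusing the names from the question (void return, whole body wrapped so unchecked exceptions resume the response instead of being lost):

@GET
@Produces(MediaType.TEXT_PLAIN)
@ManagedAsync
@Path("/poll/{id}")
public void poller(@Suspended final AsyncResponse asyncResponse, @PathParam("id") String callerId) {
    try {
        callerIdAsyncResponseHashMap.put(callerId, asyncResponse);
        asyncResponse.setTimeout(3600, TimeUnit.SECONDS);
        asyncResponse.setTimeoutHandler(new TimeoutHandler() {
            @Override
            public void handleTimeout(AsyncResponse timedOut) {
                timedOut.resume(Response.ok("TIMEOUT").build());
            }
        });
    } catch (RuntimeException e) {
        // Log and resume with the exception so the caller is not left hanging.
        asyncResponse.resume(e);
    }
}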
I'm currently researching the question of how to reliably catch an asynchronous request being cancelled by the client, and have read at least one question suggesting the mechanism isn't working for the questioner [1]. I'll leave it to others to fill in this information for the moment.
[1] AsyncResponse ConnectionCallback does not fire in Jersey
