I am developing a REST-based web application which will call a third system asynchronously for some data (using websockets). So
Browser -> REST -> My WebApp -> Another App -> My WebApp -> Browser
The communication between My WebApp and Another App is asynchronous, and I can only match responses to requests using some identifiers.
So I send a request as <counter>.C and the response arrives as <counter>.Response, where both counters are the same.
To map the response to the request, I set the command, counter, and flag on a bean. I keep a while loop that keeps checking whether the flag has been set. Once I get the response, the flag is set, the while loop exits, and I know that the data is available.
Is this the right way? Is there a way I can make this better? I feel (I might be wrong!) that keeping an open while loop is incorrect.
The bean is set up like below:
public void setAllProperties() {
    bean.setCommand(commandString);
    bean.setCounter(counter);
    bean.hasResponse(false);
}
The snippet in the web service is:
bean.setAllProperties();
sendToApplication(bean);
int checkCounter = 0;
// busy-wait until the response flag is set on the bean
while (!bean.hasResponse) {
    checkCounter++;
    // loggers and other logic here
}
The loop defeats a lot of the value of the asynchronous operation. It also consumes a significant amount of CPU time (try adding a delay - a quick sleep - when using such a loop).
I recommend using wait() and notify()/notifyAll() instead.
In the code that's waiting for a response, do something like this:
synchronized (bean) {
    while (!bean.hasResponse) {
        bean.wait();
    }
}
In the code that processes the response and updates the bean:
synchronized (bean) {
    bean.hasResponse = true;
    bean.notifyAll();
}
I am new to Vertx and was exploring request-reply using the event bus.
I want to implement the flow below:
User requests for a data
controller sends a message on event bus to a redis-processor verticle
redis-processor will wait for n seconds till value is available in redis (there will be a background process which will keep on refreshing cache, hence the wait)
redis-processor will send reply back to controller
controller responds to user
In short, I want to do something like the flow above.
Now I want to implement this in Vertx, since Vertx can run asynchronously. Using the event bus I can isolate the controller from the processor, so the controller can accept multiple user requests and stay responsive under load.
(I hope I am right about this!)
I have implemented this in a very crude fashion in Java Vertx and am stuck at the part below.
// receive request from controller
vertx.eventBus().consumer(REQUEST_PROCESSOR, evtHandler -> {
    String txnId = evtHandler.body().toString();
    LOGGER.info("Received message:: {}", txnId);
    this.redisAPI.get(txnId, result -> { // <=====
        String value = result.result().toString();
        LOGGER.info("Value in redis : {}", value);
        evtHandler.reply(value); // reply to controller
    });
});
Please see the line marked with the arrow. How can I wait for x seconds without blocking the event loop?
Please help.
That's actually very simple, you need a timer. Please see the docs for details, but you will need more or less something like this:
vertx.setTimer(1000, id -> {
    this.redisAPI.get(txnId, result -> {
        String value = result.result().toString();
        LOGGER.info("Value in redis : {}", value);
        evtHandler.reply(value); // reply to controller
    });
});
You might want to store the timer IDs somewhere so that you can cancel them, or so that you at least know something is still running when a shutdown request comes in for your verticle and can delay it. But this all depends on your needs.
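For example, here is just a sketch assuming the consumer lives inside an AbstractVerticle (the class, field and method names are made up): keep the pending timer IDs in a set and cancel them when the verticle is undeployed.

import io.vertx.core.AbstractVerticle;
import io.vertx.core.eventbus.Message;
import io.vertx.redis.client.RedisAPI;

import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class RedisProcessorVerticle extends AbstractVerticle {

    private RedisAPI redisAPI; // initialised in start(), omitted here
    private final Set<Long> pendingTimers = ConcurrentHashMap.newKeySet();

    private void scheduleLookup(String txnId, Message<Object> request) {
        long timerId = vertx.setTimer(1000, id -> {
            pendingTimers.remove(id); // this timer has fired, forget it
            redisAPI.get(txnId, result -> request.reply(result.result().toString()));
        });
        pendingTimers.add(timerId);
    }

    @Override
    public void stop() {
        // Cancel anything still pending so the verticle can shut down cleanly.
        pendingTimers.forEach(vertx::cancelTimer);
    }
}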
As @mohamnag said, you could use a Vertx timer.
Here is another example of how to use a timer.
Note that the timer value is in milliseconds.
As an improvement, I recommend checking that the callback has succeeded before attempting to get the value from redisAPI. This is done using the succeeded() method.
In an asynchronous environment, getting that result could fail due to several issues (network errors etc.)
vertx.setTimer(n * 1000, id -> {
    this.redisAPI.get(txnId, result -> {
        if (result.succeeded()) { // the callback succeeded in getting a value from redis
            String value = result.result().toString();
            LOGGER.info("Value in redis : {}", value);
            evtHandler.reply(value); // reply to controller
        } else {
            LOGGER.error("Value could not be fetched from redis : {}", result.cause());
            evtHandler.fail(someIntegerCode, result.cause().getMessage()); // reply with failure-related info
        }
    });
});
As I wrote in the title, in our project we need one thread to notify or execute a method of another thread. This implementation is part of long polling. In the following text I describe and show my implementation.
So the requirements are:
UserX sends a request from the client to the server (poll action) immediately after he gets the response to the previous one. In the service a Spring async method is executed, where a thread immediately checks the cache for a notification that there are new data in the database. I know that a cache is usually used for methods where a specific input is expected to produce a specific output. This is not that case, because I use the cache to reduce database calls and the output of my method is always different. So the cache helps me store a notification about whether I should check the database or not. This checking runs in a while loop which ends when the thread finds the notification to read the database in the cache, or when the time expires.
Assume that the UserX thread (poll action) is currently in the while loop, checking the cache.
At that moment UserY (push action) sends some data to the server; the data are stored in the database in a separate thread, and the userId of the recipient is also stored in the cache.
So when UserX checks the cache he finds the id of the recipient (the id of the recipient == his id in this case), breaks the loop and fetches the data.
So in my implementation I use the Google Guava cache, which allows manual writes.
private static Cache<Long, Long> cache = CacheBuilder.newBuilder()
        .maximumSize(100)
        .expireAfterWrite(5, TimeUnit.MINUTES)
        .build();
In the create method I store the id of the user who should read the data.
public void create(Data data) {
    dataRepository.save(data);
    // Guava's Cache does not accept null values, so store the recipient id as the value too
    cache.put(data.getRecipient(), data.getRecipient());
    System.out.println("SAVED " + data.getRecipient() + " in " + Thread.currentThread().getName());
}
and here is the method for polling data:
@Async
public CompletableFuture<List<Data>> pollData(Long previousMessageId, Long userId) throws InterruptedException {
    // check the db first; if there are new data there is no need to go into the loop and wait
    List<Data> data = findRecent(previousMessageId, userId);
    // data not found, so jump into the loop for some time
    if (data.size() == 0) {
        short c = 0;
        while (c < 100) {
            // check if some new data has been added; if yes, break the loop
            if (cache.getIfPresent(userId) != null) {
                break;
            }
            c++;
            Thread.sleep(1000);
            System.out.println("SEQUENCE: " + c + " in " + Thread.currentThread().getName());
        }
        // check the database at the end of the loop or after breaking out of it
        data = findRecent(previousMessageId, userId);
    }
    // clear the cache entry for that recipient and return the result
    cache.invalidate(userId);
    return CompletableFuture.completedFuture(data);
}
After UserX gets the response he sends a poll request again and the whole process is repeated.
Can you tell me whether this application design for long polling in Java (Spring) is correct, or whether there is a better way? The key point is that when a user makes a poll request, the request should be held open waiting for new data for some time rather than answered immediately. The solution I show above works, but the question is whether it will also work for many users (1000+). I worry about this because of the paused threads, which could slow down other requests when no threads are available in the pool. Thanks in advance for your effort.
Check out WebSockets. Spring supports them from version 4 onwards. They don't require the client to initiate polling; instead the server pushes the data to the client in real time.
Check the below:
https://spring.io/guides/gs/messaging-stomp-websocket/
http://www.baeldung.com/websockets-spring
Note - WebSockets open a persistent connection between client and server and thus may result in more resource usage in the case of a large number of users. So, if you are not looking for real-time updates and are fine with some delay, then polling might be a better approach. Also, not all browsers support WebSockets.
Web Sockets vs Interval Polling
Longpolling vs Websockets
In what situations would AJAX long/short polling be preferred over HTML5 WebSockets?
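For reference, along the lines of the Spring guides linked above, enabling STOMP over WebSocket is mostly a single configuration class. This is only a sketch: the endpoint and destination names are placeholders, and on Spring 4 you would extend the abstract configurer adapter instead of implementing the Spring 5 interface shown here.

import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.simp.config.MessageBrokerRegistry;
import org.springframework.web.socket.config.annotation.EnableWebSocketMessageBroker;
import org.springframework.web.socket.config.annotation.StompEndpointRegistry;
import org.springframework.web.socket.config.annotation.WebSocketMessageBrokerConfigurer;

@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig implements WebSocketMessageBrokerConfigurer {

    @Override
    public void configureMessageBroker(MessageBrokerRegistry config) {
        config.enableSimpleBroker("/topic");               // server -> client pushes
        config.setApplicationDestinationPrefixes("/app");  // client -> server messages
    }

    @Override
    public void registerStompEndpoints(StompEndpointRegistry registry) {
        registry.addEndpoint("/poll-ws").withSockJS();     // handshake endpoint (placeholder name)
    }
}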
In your current approach, if you are concerned about a large number of threads running on the server for multiple users, you can instead trigger the polling from the front-end each time. That way only short-lived request threads are started from the UI, looking for any update in the cache. If there is an update, another call can be made to retrieve the data. However, don't hit the server every other second as you are doing now, otherwise you will have high CPU utilization and user request threads may also suffer. You should do some optimization of your timing.
Instead of hitting the cache after a delay of 1 second 100 times, you can apply a more intelligent algorithm by analyzing the pattern of cache/DB updates over a period of time.
Knowing the pattern, you can trigger the polling in an exponential back-off manner to hit the cache when an update is most likely. That way you will hit the cache less frequently and more accurately, as the sketch below illustrates.
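To make that concrete, here is a small self-contained sketch of such a back-off schedule; the base delay, cap and jitter are arbitrary example values, not taken from your setup:

public class BackoffSchedule {

    /** Delay before the next poll: exponential growth, capped, with a little jitter. */
    static long nextDelayMs(int attempt, long baseMs, long maxMs) {
        long exponential = baseMs * (1L << Math.min(attempt, 16)); // 1s, 2s, 4s, ...
        long capped = Math.min(exponential, maxMs);
        long jitter = (long) (Math.random() * baseMs);             // spread clients out
        return capped + jitter;
    }

    public static void main(String[] args) {
        // Example: base 1 s, cap 30 s -> waits of roughly 1s, 2s, 4s, ... 30s between polls.
        for (int attempt = 0; attempt < 8; attempt++) {
            System.out.println("attempt " + attempt + " -> wait ~" + nextDelayMs(attempt, 1000, 30_000) + " ms");
        }
    }
}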
I am using the StreamObserver class found in the grpc-java project to set up some bidirectional streaming.
When I run my program, I make an undetermined number of requests to the server, and I only want to call onCompleted() on the requestObserver once I have finished making all of the requests.
Currently, to solve this, I am using a variable "inFlight" to keep track of the requests that have been issued, and when a response comes back, I decrement "inFlight". So, something like this.
// issuing requests
while (haveRequests) {
    MessageRequest request = mkRequest();
    this.requestObserver.onNext(request);
    this.inFlight++;
}
// response observer
StreamObserver<Message> responseObserver = new StreamObserver<Message>() {
    @Override
    public void onNext(Message response) {
        if (--this.inFlight == 0) {
            this.requestObserver.onCompleted();
        }
        // work on message
    }
    // other methods
};
A bit pseudo-codey, but this logic works. However, I would like to get rid of the "inFlight" variable if possible. Is there anything within the StreamObserver class that allows this sort of functionality, without the need of an additional variable to track state? Something that would tell the number of requests issued and when they completed.
I've tried inspecting the object within the intellij IDE debugger, but nothing is popping out to me.
To answer your direct question, you can simply call onCompleted right after the while loop, once all the messages have been passed to onNext. Under the hood, gRPC will send what is called a "half close", indicating that it won't send any more messages, but is still willing to receive them. Specifically:
// issuing requests
while (haveRequests) {
    MessageRequest request = mkRequest();
    this.requestObserver.onNext(request);
}
requestObserver.onCompleted();
This ensures that all messages are sent, and in the order that you sent them. On the server side, when it sees the corresponding onCompleted callback, it can half-close its side of the connection by calling onCompleted on its observer. (There are two observers on the server side: one for receiving info from the client, one for sending info.)
Back on the client side, you just need to wait for the server to half-close to know that all messages were received and processed. Note that if there were any errors, you would get an onError callback instead.
If you don't know how many requests you are going to make on the client side, you might consider using an AtomicInteger, and call decrementAndGet when you get back a response. If the return value is 0, you'll know all the requests have completed.
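A sketch of that counting idea, at the same pseudo-code level as your question (haveRequests, mkRequest and requestObserver come from your snippet; the extra initial count of 1 stands for "still sending" so the counter cannot hit zero before you are done issuing requests):

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

// Start at 1: the extra count represents "still sending".
final AtomicInteger inFlight = new AtomicInteger(1);
final CountDownLatch allResponsesIn = new CountDownLatch(1);

// Sending side:
while (haveRequests) {
    inFlight.incrementAndGet();            // one count per outstanding request
    requestObserver.onNext(mkRequest());
}
requestObserver.onCompleted();             // half-close: no more requests
if (inFlight.decrementAndGet() == 0) {     // drop the "still sending" count
    allResponsesIn.countDown();
}

// Response side (inside responseObserver.onNext, after handling the message):
if (inFlight.decrementAndGet() == 0) {
    allResponsesIn.countDown();            // every request now has a response
}

// Elsewhere: block (possibly with a timeout) until everything has come back.
allResponsesIn.await();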
I want to draw five different routes in Google Maps API v3, GWT 2.5.1. I initialize a route which sets its DirectionDisplay and DirectionsRequest in this class.
When I start my web project, sometimes only my first route is shown, sometimes all five, so I decided to make a System.out.print(m);.
The results:
01234 -> as expected, all routes shown
10234 -> error, only first route shown.
Why does Google serve my second request before my first? I tried using Thread.sleep(1000) to give my requests time to come back in order, and also Timer/TimerTask, with no success. Any ideas?
DirectionsService o = DirectionsService.newInstance();
for (Integer i = 0; i < 4; i++) { // routes.size()
    final int m = i;
    final Route route = new Route("Route " + i.toString());
    route.initRoute(m, getRoutingPresenter(), adressData, addressIndex);
    // here I initialize the DirectionsRequest and its display, which
    // I set in this class after execution.
    o.route(directionsRequest, new DirectionsResultHandler() {
        @Override
        public void onCallback(DirectionsResult result, DirectionsStatus status) {
            if (status == DirectionsStatus.OK) {
                System.out.print(m);
                ...
            }
        }
    });
}
Google can take as long as they like to handle your requests and you should code accordingly. This is true of any HTTP traffic. Even if the remote server guaranteed a fixed service time for all requests, the Internet does not, and your requests could be taking any old route through it.
You can either write your handling code so that the response order doesn't matter, or write it so that it waits until all responses are back and then sorts out the order itself.
I would recommend the first unless there are very specific and important reasons not to.
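For the second approach, here is a sketch built around your own snippet: buildRequest and renderRoutes are hypothetical helpers, and the counter uses the usual single-element-array idiom, which is safe because GWT callbacks all run on the one browser thread.

// Collect results by request index and only render once every callback has returned.
final int total = 5; // routes.size()
final DirectionsResult[] results = new DirectionsResult[total];
final int[] remaining = { total };

DirectionsService service = DirectionsService.newInstance();
for (int i = 0; i < total; i++) {
    final int m = i;
    service.route(buildRequest(m), new DirectionsResultHandler() {
        @Override
        public void onCallback(DirectionsResult result, DirectionsStatus status) {
            if (status == DirectionsStatus.OK) {
                results[m] = result;   // remember which request this answer belongs to
            }
            if (--remaining[0] == 0) {
                renderRoutes(results); // all answers are back, draw them in request order
            }
        }
    });
}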
I don't have much knowledge on Java EE but am currently learning it.
I've come up with a project which involves a long running task (up to several minutes) invoked by the user. The task consists of several steps. Of course I would like to show the progress to the user.
The project uses Java EE with JPA, JSF and Icefaces. It runs on Glassfish.
An experienced colleague advised the following pattern to me:
Create a stateless, asynchronous EJB which creates a response object and processes the request
Persist the response object after each step
In the backing bean, query and display the response object
This works well. My only problem is updating the status page to mirror the progress. Currently I am doing this with a simple JavaScript page reload every x seconds.
Do you know a way/pattern to reflect the current step from the stateless EJB in the JSF backing bean?
Or, and I would prefer that, do you know a way to query the value of a backing bean every x seconds?
Edit:
I am aware of the Icefaces push mechanism, but I want the status update page to be decoupled from the calculation EJB for the following reasons:
The backing bean might already be destroyed because the user left the page, and he may return later to fetch the result
Multiple sessions and therefore multiple beans may exist for one user
Having a clean design
There are several options for passing this information back. If the EJB lives in the same JVM,
you may as well use a singleton Map and store the progress under a certain key (e.g. the session ID) - see the sketch after this list.
If this is not the case, you will need some shared state or communication. There are several options:
store it in a database accessible from both tiers (SQL, JNDI, LDAP - a better solution would be a key-value store like Redis, if you have it)
use some messaging to deposit the state of processing on the web tier side
store the state in a hash on the EJB tier side, and provide another SLSB method to retrieve this state
Your choice is not easy - all of these solutions suck in different ways.
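If both tiers do share a JVM, the singleton map option can be as small as the sketch below. ProgressRegistry and the task-id key are made-up names; the idea is to inject it with @EJB into both the asynchronous bean (which writes) and the JSF backing bean (which reads).

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.ejb.Lock;
import javax.ejb.LockType;
import javax.ejb.Singleton;

// Shared progress registry: the async EJB writes, the backing bean reads.
@Singleton
@Lock(LockType.READ) // the ConcurrentHashMap handles its own synchronisation
public class ProgressRegistry {

    private final Map<String, Integer> progressByTask = new ConcurrentHashMap<>();

    public void update(String taskId, int percentDone) {
        progressByTask.put(taskId, percentDone);
    }

    public int progressOf(String taskId) {
        return progressByTask.getOrDefault(taskId, 0);
    }

    public void remove(String taskId) {
        progressByTask.remove(taskId);
    }
}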
I accomplished this using a threaded polling model in conjunction with a ProgressBar component.
public void init()
{
    // This method is called by the constructor.
    // It doesn't matter where you define the PortableRenderer, as long as it's before it's used.
    PushRenderer.addCurrentSession("fullFormGroup");
    portableRenderer = PushRenderer.getPortableRenderer();
}

public void someBeanMethod(ActionEvent evt)
{
    // This is a backing bean method called by some UI event (e.g. clicking a button)
    // Since it is part of a JSF/HTTP request, you cannot call portableRenderer.render
    copyExecuting = true;
    // Create a status thread and start it
    Thread statusThread = new Thread(new Runnable() {
        public void run() {
            try {
                // message and progress are both linked to components, which change on a portableRenderer.render("fullFormGroup") call
                message = "Copying...";
                // initiates render. Note that this cannot be called from a thread which is already part of an HTTP request
                portableRenderer.render("fullFormGroup");
                do {
                    progress = getProgress();
                    portableRenderer.render("fullFormGroup"); // render the updated progress
                    Thread.sleep(5000); // sleep for a while until it's time to poll again
                } while (copyExecuting);
                progress = getProgress();
                message = "Finished!";
                portableRenderer.render("fullFormGroup"); // push a render one last time
            } catch (InterruptedException e) {
                System.out.println("Child interrupted.");
            }
        }
    });
    statusThread.start();

    // create a thread which initiates the script and triggers the termination of statusThread
    Thread copyThread = new Thread(new Runnable() {
        public void run() {
            File someBigFile = new File("/tmp/foobar/large_file.tar.gz");
            scriptResult = copyFile(someBigFile); // this will take a long time, which is why we spawn a new thread
            copyExecuting = false; // this will cause the statusThread's do..while loop to terminate
        }
    });
    copyThread.start();
}
As you are using ICEfaces, you could use the ICEpush mechanism for rendering your updates.