I'm experiencing some issues using DeferredResult with Spring; I think I misunderstand something about them.
These DeferredResults are used for long polling.
I have an application on which multiple users can be logged in, and which displays a list of items from a database. The same items are visible to all the connected users. Any user can, at a given time, select one of these items and interact with it, but then the other users cannot interact with this specific item anymore (say an item is "locked" for the others as soon as someone selects it, until that user "releases" it).
Anyway, I'm using long polling to notify the others, when one user "takes" an item, that this item is now "locked", so they can update their interface.
Say I have, for example, a URL for the polling like /polling and another like /take/{itemId}.
I have a web application using Spring MVC with the "old-style" XML configuration and the parameters for asynchronous processing: <task:annotation-driven> in my servlet config, and more importantly (I think) <async-supported>true</async-supported> in the web.xml.
When a user calls /polling, the request is returning a DeferredResult:
private final BlockingQueue<DeferredResult<List<MyItem>>> pendingRequests = new LinkedBlockingQueue<>();

@RequestMapping("/polling")
public DeferredResult<List<MyItem>> poll(...) {
    final DeferredResult<List<MyItem>> def = new DeferredResult<>(60000L);
    // ... deferred result init like set onCompletion(), onTimeout()...
    // add the deferred to a queue
    pendingRequests.add(def);
    return def;
}
Then when a user "takes" an item and calls /take/{itemId}, something like this:
#RequestMapping("/take/{itemId}")
public void takeItem(...) {
// ..marking the item as taken and saving it into DB..
// and then, notifying the other pending requests
// the item has been taken by someone
List<MyItem> updatedItemsList = getLastItemsFromDb();
for (DeferredResult<...> d : pendingRequests) {
d.setResult(updatedItemsList);
}
}
Note that in the updatedItemsList list, the specific item is now marked as "taken".
So here it is: this seems to work fine that way. Immediately after the result is set on each DeferredResult, the corresponding request resumes and "breaks" the long-polling wait without waiting for the request timeout, so that the front-end JavaScript can update the list and issue a new long request.
The problem is that I recently tried to "convert" this web application to Spring Boot and the "Java/annotation-based" configuration, and this behavior does not work anymore. It's as if setting the DeferredResult (in the for loop of the second request) no longer triggers the pending requests to resume; they have to wait for the timeout to return the result.
However, I found that calling the setResult method on the DeferredResult objects from a TaskExecutor, for example, makes everything work like before.
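For illustration, here is roughly what that workaround looks like. This is only a sketch: the ItemNotifier class and the injected TaskExecutor are just how I wired it up, and MyItem is the item class from the snippets above.

import java.util.List;
import java.util.concurrent.BlockingQueue;
import org.springframework.core.task.TaskExecutor;
import org.springframework.web.context.request.async.DeferredResult;

// Sketch only: completing the pending DeferredResults on a separate thread
// instead of on the request thread that handles /take/{itemId}.
public class ItemNotifier {

    private final TaskExecutor taskExecutor;
    private final BlockingQueue<DeferredResult<List<MyItem>>> pendingRequests;

    public ItemNotifier(TaskExecutor taskExecutor,
                        BlockingQueue<DeferredResult<List<MyItem>>> pendingRequests) {
        this.taskExecutor = taskExecutor;
        this.pendingRequests = pendingRequests;
    }

    public void notifyPendingRequests(List<MyItem> updatedItemsList) {
        // Setting the results off the calling request thread is what made the
        // pending /polling requests resume immediately in my case.
        taskExecutor.execute(() -> {
            for (DeferredResult<List<MyItem>> d : pendingRequests) {
                d.setResult(updatedItemsList);
            }
        });
    }
}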
My question is: why? Why does this work fine with the XML-style configuration, without any executor or explicit background setResult() call, and why does it not work anymore with the Java-based configuration?
Did I miss something ?
FYI, the Java configuration is rather "classic": I set @EnableAutoConfiguration and @EnableAsync, and extend WebMvcConfigurerAdapter.
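In case it matters, the configuration class is roughly the following; a simplified sketch, the class name is just a placeholder:

import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.EnableAsync;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurerAdapter;

@Configuration
@EnableAutoConfiguration
@EnableAsync
public class WebConfig extends WebMvcConfigurerAdapter {
    // no configureAsyncSupport() override and no explicit AsyncTaskExecutor:
    // only the defaults provided by Spring Boot's auto-configuration
}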
Thanks in advance for reading this and taking time to reply !
Related
I have a Spring REST controller endpoint:
@PostMapping("/someheavyjob")
public Response indexAllReports() {
    someService.startLongRunningOperation();
    return Response.ok().build();
}
and then in the service class I have a method which performs many computations (database reads, other API calls, etc.):
public void startLongRunningOperation() {
    List<String> involvedIds = otherApi.getAllActiveMembersIds(..);
    involvedIds.forEach(id -> {
        anotherComputationMethod(id);
    });
}
I know that this approach blocks the user's request until the job completes. I can solve that; it is in this state just to keep the example clear.
Question: what I am wondering about is:
There should be only one instance of this heavy method running at a time (method: startLongRunningOperation).
Right now, as this method can be invoked from the REST controller, each API call will start this heavy method again in a new request thread. There are mechanisms to rate-limit user requests in Java (e.g. bucket4j), but I wanted to ask you: what is the best way to handle this case? Just one instance of the long-running task, fired from a REST API call into a Spring service (which is, and should be, stateless).
Edit 1:
To make it clear, by "blocking" I mean the user has to wait for the response until the whole task is finished, and that can take minutes.
What I want to achieve is not to make this service method synchronized, but rather: when a request comes in while the long-running task is already working, reject that second request.
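One possible direction I'm considering, shown only as a rough sketch (the class names, the reuse of the /someheavyjob mapping and the 409 status are placeholder assumptions): guard the method with an AtomicBoolean so a second call is rejected while a run is in progress, and run the actual work with @Async so the first caller gets an immediate response.

import java.util.concurrent.atomic.AtomicBoolean;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.scheduling.annotation.Async;
import org.springframework.stereotype.Service;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RestController;

@Service
class ReportIndexingService {

    private final AtomicBoolean running = new AtomicBoolean(false);

    /** Atomically claims the "slot"; returns false if a run is already in progress. */
    public boolean tryStart() {
        return running.compareAndSet(false, true);
    }

    @Async // requires @EnableAsync on a configuration class
    public void startLongRunningOperation() {
        try {
            // ... the heavy work: database reads, other API calls, etc.
        } finally {
            running.set(false); // always release, even if the work fails
        }
    }
}

@RestController
class ReportController {

    private final ReportIndexingService service;

    ReportController(ReportIndexingService service) {
        this.service = service;
    }

    @PostMapping("/someheavyjob")
    public ResponseEntity<Void> indexAllReports() {
        if (!service.tryStart()) {
            // A job is already running: reject this second request.
            return ResponseEntity.status(HttpStatus.CONFLICT).build();
        }
        service.startLongRunningOperation(); // returns immediately (async)
        return ResponseEntity.accepted().build();
    }
}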
I built a CorDapp based on the Java template. On top of that, I created a React front-end. Now I want to start a flow from my front-end. To do so, I modified the template server so that the controller starts my flow:
@GetMapping(value = "/templateendpoint", produces = "text/plain")
private String templateendpoint() {
    proxy.startTrackedFlowDynamic(issueTokens.class, 30, "O=Bob, L=Berlin, C=DE");
    return "The flow was started";
}
This operation does start the flow that issues 30 tokens to Bob. I can see that the flow was successful, by querying Bob's vault. However, I get the following error on the template server:
RPCClientProxyHandler.onRemoval - A hot observable returned from an RPC was never subscribed to.
This wastes server-side resources because it was queueing observations for retrieval.
It is being closed now, but please adjust your code to call .notUsed() on the observable to close it explicitly. (Java users: subscribe to it then unsubscribe).
If you aren't sure where the leak is coming from, set -Dnet.corda.client.rpc.trackRpcCallSites=true on the JVM command line and you will get a stack trace with this warning.
After this first transaction, I cannot start another flow. The .notUsed() method only works for Kotlin, and I couldn't find a working way to subscribe to and then unsubscribe from the observable.
Could anyone give me an example on how to implement this with the Corda flow? Moreover, what is the most practical way to pass information from the front-end to the controller class, in order to use that as flow arguments?
The reason that the error appears is that the Observable on the client-side gets garbage collected.
The solution has been provided in the brackets of the warning itself:
(Java users: subscribe to it then unsubscribe)
So in your case, you can do something like this:
// "updates" is the Observable you got back over RPC (for a tracked flow,
// e.g. the progress observable of the returned flow handle)
Subscription subs = updates.subscribe();
subs.unsubscribe();
Probably a more practical way is to keep the observable instance as a private attribute, so that it won't get garbage-collected, i.e.:
private Observable observable;
Ref: https://docs.corda.net/docs/corda-os/4.4/clientrpc.html#observables
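Applied to the controller from the question, that could look roughly like the snippet below. This is only a sketch: it assumes Corda's RPC client with RxJava 1 (rx.Subscription) and keeps the hard-coded flow arguments from the question; the flow's return type is left as a wildcard since it isn't shown.

import net.corda.core.messaging.FlowProgressHandle;
import org.springframework.web.bind.annotation.GetMapping;
import rx.Subscription;

@GetMapping(value = "/templateendpoint", produces = "text/plain")
private String templateendpoint() {
    // Keep the handle so we can reach its progress observable.
    FlowProgressHandle<?> handle =
            proxy.startTrackedFlowDynamic(issueTokens.class, 30, "O=Bob, L=Berlin, C=DE");

    // Subscribe and immediately unsubscribe, so the node stops queueing
    // progress observations for this client ("subscribe to it then unsubscribe").
    Subscription subscription = handle.getProgress().subscribe();
    subscription.unsubscribe();

    return "The flow was started";
}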
I am maintaining code which looks like this:
@Asynchronous
@TransactionTimeout(value = 1, unit = TimeUnit.HOURS)
public void downloadFile(Long fileId) {
    // This method takes more than 1 hour
    service.download(fileId);
    // This method should be called even when the download finished with an error
    service.fileDownloadedFinishedNotification(fileId);
}
This is just example code; to fileDownloadedFinishedNotification we are passing the message we want to display, etc., and inside it we want to mark the process as finished with error/success.
So, as you can see, the download can hit the timeout, and after that fileDownloadedFinishedNotification won't be called, because the transaction failed due to the timeout.
I was thinking about extracting the notification to another method and calling it like this:
@Asynchronous
@TransactionTimeout(value = 1, unit = TimeUnit.HOURS)
public Future<String> downloadFile(Long fileId) {
    // This method takes more than 1 hour
    service.download(fileId);
    return new AsyncResult<String>("Test");
}

public void example() {
    long id = 15;
    String msg = "default stuff";
    try {
        msg = downloadFile(id).get();
    } catch (Exception e) {
        e.printStackTrace();
    }
    service.fileDownloadedFinishedNotification(id, msg);
}
But I am not sure if it is a good idea; maybe there is some other functionality I could use when the timeout is reached, something like an onTimeout callback.
Some considerations:
There is no simple way to handle a transaction timeout with a listener, AFAIK.
Annotations use dynamic proxies under the covers, so they won't be applied on an inner call; you have to call your downloadFile from outside (on a bean injected into your caller).
The current transaction will already be aborted when fileDownloadedFinishedNotification is called, so all operations on a transacted resource (DB, etc.) will be rolled back; you may have to invoke that method within a dedicated transaction (e.g. annotate it with @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)).
Assuming the download method retrieves the content across the network, and unless you access it through a dedicated JCA adapter, no exception will be thrown on transaction timeout: the transaction reaper only marks the current transaction as aborted and releases the related resources, but it does not interrupt the thread; only a subsequent access to a MANAGED resource (DataSource, JMS, etc.) will throw an exception.
Regarding the last point, while interacting with an un-managed resource the only way to know whether the current transaction is still active is to regularly check its state using EJBContext.getRollbackOnly() or to make a dummy access to a managed resource (see the sketch after this list).
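A minimal sketch of the REQUIRES_NEW notification plus the getRollbackOnly() check, under the assumption of a JBoss-style container and the @TransactionTimeout annotation from the question; the bean names, the success flag and the extra parameter on the notification method are illustrative only:

import java.util.concurrent.TimeUnit;
import javax.annotation.Resource;
import javax.ejb.Asynchronous;
import javax.ejb.EJB;
import javax.ejb.SessionContext;
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import org.jboss.ejb3.annotation.TransactionTimeout;

@Stateless
public class DownloadService {

    public void download(Long fileId) {
        // the long-running download from the question
    }

    // Own transaction: it commits even if the caller's transaction
    // was already marked rollback-only by the timeout.
    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void fileDownloadedFinishedNotification(Long fileId, boolean success) {
        // persist the outcome, prepare the user-facing message, etc.
    }
}

// Caller bean (separate file), so the annotations are applied through the proxy:
@Stateless
public class DownloadBean {

    @Resource
    private SessionContext ctx;

    @EJB
    private DownloadService service;

    @Asynchronous
    @TransactionTimeout(value = 1, unit = TimeUnit.HOURS)
    public void downloadFile(Long fileId) {
        boolean success = false;
        try {
            service.download(fileId);
            // The reaper does not interrupt the thread, so check whether the
            // transaction was marked rollback-only while we were downloading.
            success = !ctx.getRollbackOnly();
        } finally {
            service.fileDownloadedFinishedNotification(fileId, success);
        }
    }
}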
There are ways of achieving what you want, but a proper implementation would need more information about how much of the application you are able to change.
There are many places where transaction propagation is explained, but given that you are running your app in an EJB container I would start from here:
https://docs.oracle.com/javaee/6/tutorial/doc/bncih.html
I would read the whole chapter, but most specific to your case is the part on container-managed transactions here:
https://docs.oracle.com/javaee/6/tutorial/doc/bncij.html
Now, assuming you have full access and can change your database structure, the way I would implement this is:
You are running your service in a parent transaction T1.
Before invoking the download method, call another service to record that the download started, together with the maximum expected time to finish. Do this in a REQUIRES_NEW transaction (see the sketch after this list). This quick database interaction will run in an autonomous transaction T2.
Once that T2 transaction commits, your "download started" record is committed and available to query.
Once back in the parent T1, start your download.
If the download finishes successfully, record the success in the same record you persisted in T2.
If you hit the timeout, that success will never be recorded, and the database will still show the download as started along with its maximum expected time to finish.
Define a monitoring process that kicks off at regular intervals and checks the download status. If the expected time to finish has passed, have that monitoring process alert, record the failure, trigger a retry, or whatever your business rules require.
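A rough sketch of the recording service only, assuming JPA and a hypothetical DownloadAudit entity (id, fileId, startedAt, expectedFinishBy, finishedAt); the names are made up for illustration:

import java.util.Date;
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class DownloadAuditService {

    @PersistenceContext
    private EntityManager em;

    // T2: commits immediately, independently of the parent transaction T1.
    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public Long recordStarted(Long fileId, Date expectedFinishBy) {
        DownloadAudit audit = new DownloadAudit(fileId, new Date(), expectedFinishBy);
        em.persist(audit);
        return audit.getId();
    }

    // Also REQUIRES_NEW, so the success mark survives even if T1 is doomed.
    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void recordFinished(Long auditId) {
        em.find(DownloadAudit.class, auditId).setFinished(new Date());
    }
}

The monitoring job then simply queries for records whose expectedFinishBy has passed and whose finishedAt is still null.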
Hope it helps. Sorry for not providing complete code examples, but I think you will have enough to start with.
Cheers
Since you guys have been very helpful in my early steps with the Play Framework (thanks for that), here goes another question:
We have a working registration controller that POSTs all credentials to the database.
But then we want the user to be logged in immediately afterwards. Below is the code that makes this work:
public static void doRegistration(@Valid User user) {
    // registering the user
    try {
        SecureController.authenticate(user.username, user.password, false, "MainController.index");
    } catch (Throwable ex) {
        MainController.index();
    }
}
This works fine, but it is not very safe because it sends all the credentials to the server via GET. I know I have to edit my routes file somehow, but I can't see how.
The routes file:
* /account SecureController.login
POST /account/register RegistrationController.doRegistration
GET /account/register SecureController.login
Somewhere there should be the action SecureController.authenticate, but what do I have to put in the column after POST... It can't be /account/register, because that fails...
Thank you in advance!
I am not sure I understand your issue. The routes file is just a way to configure your URLs to be pretty URLs. If you don't specify them, it falls back on the default {controller}/{method} syntax.
The issue you are having is that when you call another controller, Play performs a redirect to that controller's method, which involves sending a response back to your browser telling it to redirect (this ensures that the state of the application is reflected in the URL within the browser). A redirect therefore results in a GET request, and your parameters will be included in that GET request.
What you are trying to do, as you said, is not safe. What you could do instead (not the only option, just one possibility) is:
Maintain your current doRegistration action for the user
Create a service class (one that does not extend Controller). It can use static methods or require instantiation (static methods should be enough, though).
Add a @Before method to a common controller that will always be executed. One way is to create a controller with a @Before method and add this controller to all other controllers via the @With annotation, so that the @Before will be executed for all controllers. It requires you to add @With to each new controller, but I believe it keeps the code quite clean.
The idea would be that the controller calls the authenticate method of the service class. It's a simple static method that checks the user (whether it's enabled, has a proper license, whatever) and sets some parameters in the session (via the Session object).
To help with this, you may want to create another authenticate method on the User that returns the attributes to set (for example in a Map; if it contains an "error" key, the user can't be authenticated for some reason). How to do this step can change according to your requirements.
Once the session has been set, you redirect to the page of your choice (main, profile, etc.). As you have the common @Before method, it will be executed. This method should verify the credentials in the session (user authenticated, license type, etc.) and act accordingly. You have an example in the Secure controller of Play, but you can create your own.
With this, you can use the authenticate method of the service from any controller, allowing authentication via several entry points while using a common point to verify the session.
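A skeleton of that setup in Play 1.x terms; the class and method names are only illustrative, and the actual checks and session attributes depend on your application:

import java.util.HashMap;
import java.util.Map;
import play.mvc.Before;
import play.mvc.Controller;
import play.mvc.With;

// Common controller: its @Before interceptor runs for every controller
// annotated with @With(Security.class). Exclude the login/registration
// actions from the check (e.g. with the annotation's "unless" attribute).
public class Security extends Controller {

    @Before
    static void checkSession() {
        if (session.get("username") == null) {
            SecureController.login();
        }
    }
}

@With(Security.class)
public class MainController extends Controller {

    public static void index() {
        render();
    }
}

// Plain service class, not a controller: checks the user and returns the
// attributes the caller should copy into the session ("error" on failure).
public class AuthService {

    public static Map<String, String> authenticate(String username, String password) {
        Map<String, String> attrs = new HashMap<String, String>();
        // ... look the user up, verify password, licence, enabled flag, etc.
        attrs.put("username", username);
        return attrs;
    }
}

doRegistration would then call AuthService.authenticate(...), copy the returned attributes into the session, and redirect to MainController.index().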
I have the following code in a listener method:
FacesContext.getCurrentInstance().getExternalContext().getRequestMap().put("time", new Date());
When a button is clicked, the following code is executed:
System.out.println(FacesContext.getCurrentInstance().getExternalContext().getRequestMap().get("time"));
One could expect "time" to be null when the listener was not executed while processing the current request, but it seems like the "time" object survives the request processing. So once "time" has been set at some point in the past, it stays there... can anybody explain this? Thanks.
Found the answer here:
http://wiki.icefaces.org/display/ICE/Compatibility
Scopes
By default, ICEfaces 1.x operated under what was referred to as extended request scope. In a nutshell, extended request scope refers to the behaviour that a new request is only associated with a change in view. This means that Ajax requests that occur within an existing view are not treated by ICEfaces as new requests. A request is not considered a new request unless it results in a new view so request-scoped beans would not be recreated until a new view was created. This behaviour was configurable to allow for the more standard definition of request scope but was considered necessary at the time because the existing standard scopes (request, session, application, none) were not granular enough.