I'm wondering if there is a cleaner and more re-usable way to implement the following use case:
I have a search form that sends an RPC to the server with the user-entered values. The callback returns the items that match the search criteria. A search can take several seconds, and we want the user to be able to alter his search criteria even if the previous search hasn't completed yet. If he does this, all previous search results should be ignored and only the results of the latest search should be presented. If we don't take this into account, the newest call will sometimes return before the older ones, and the newest results end up being overridden, which means the presented results won't match the currently entered criteria.
We currently solve this by assigning each callback to a "lastCallback" field. When a callback fires, it checks whether that field still refers to itself. If not, a newer search request has been sent in the meantime, and the results of the old call are ignored.
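For reference, the guard we use currently looks roughly like this (a sketch only; SearchServiceAsync, SearchCriteria, Result and the show* methods are placeholder names, not our real code):

import java.util.List;
import com.google.gwt.user.client.rpc.AsyncCallback;

public class SearchPresenter {

    private final SearchServiceAsync searchService; // hypothetical GWT-RPC async service
    private AsyncCallback<List<Result>> lastCallback;

    public SearchPresenter(SearchServiceAsync searchService) {
        this.searchService = searchService;
    }

    void doSearch(SearchCriteria criteria) {
        AsyncCallback<List<Result>> callback = new AsyncCallback<List<Result>>() {
            @Override
            public void onSuccess(List<Result> results) {
                if (lastCallback != this) {
                    return; // a newer search was started in the meantime; drop these stale results
                }
                showResults(results);
            }

            @Override
            public void onFailure(Throwable caught) {
                if (lastCallback == this) {
                    showError(caught);
                }
            }
        };
        lastCallback = callback;
        searchService.search(criteria, callback);
    }

    private void showResults(List<Result> results) { /* update the UI */ }

    private void showError(Throwable caught) { /* report the failure */ }
}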
Does GWT provide a built-in way to handle this use case?
You can cancel the previous calls. Have your async methods return com.google.gwt.http.client.Request instead of void, so you can call cancel() on it. This makes sure your previous callback will never be called and you only ever have one ongoing request (from the client's point of view at least; you'll still have several being processed on the server, because cancelling a request is a client and network thing and isn't exposed to the servlet on the server side).
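A rough sketch of what that could look like (fragments, not a complete class; SearchCriteria, Result, searchService and the show* methods are placeholders):

import java.util.List;
import com.google.gwt.http.client.Request;
import com.google.gwt.user.client.rpc.AsyncCallback;

// Async interface: declaring Request (instead of void) as the return type lets the caller cancel.
public interface SearchServiceAsync {
    Request search(SearchCriteria criteria, AsyncCallback<List<Result>> callback);
}

// Client side: keep a handle to the pending request and cancel it before starting a new search.
private Request pendingSearch;

void doSearch(SearchCriteria criteria) {
    if (pendingSearch != null) {
        pendingSearch.cancel(); // the old callback will never fire after this
    }
    pendingSearch = searchService.search(criteria, new AsyncCallback<List<Result>>() {
        @Override
        public void onSuccess(List<Result> results) {
            pendingSearch = null;
            showResults(results);
        }

        @Override
        public void onFailure(Throwable caught) {
            pendingSearch = null;
            showError(caught);
        }
    });
}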
I think your approach is correct.
This is not a GWT issue - once a browser makes a call to the server, the server is going to respond, whether you need the response or not. So you have to have a flag somewhere to decide which responses are unwanted.
One optimization could be to return null, or a specific exception, from the server for the first request when a second request arrives while a search is still in progress.
I have to automate certain PUT/POST operations. In my case, those endpoints are already in place and do their part.
My plan is to have another method that drives this whole automation. Think of it as a new POST endpoint that will call the existing POST and PUT endpoints from the same service I already mentioned.
I will call the existing PUT or POST based on the input: if the input is new I will call the existing POST, and if the given input already exists in the database I will call PUT.
So far so good, but one question keeps bugging me: my new endpoint is a POST, yet it calls both PUT and POST. Each method type is supposed to perform only its own kind of operation, but here I am calling PUT as well as POST while the parent calling method is a POST.
I am not sure whether I am working in the right direction to achieve my use case.
Please correct me if there is a better way.
Note - I have a Spring Boot application, which will always need some endpoint to trigger any logic like the one I am talking about.
I have updated my question for better understanding.
I don't really know what you mean exactly. The HTTP methods are meant for specific tasks, but then again it's OK to use POST to update something; it might not be best practice, but it works. If you want to separate the concerns (adding, updating), then just implement two different endpoints, one handling the creation and the other one the update, as sketched below. The client (whether it's a web app, a desktop app or whatever) has to handle this decision.
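A rough sketch of that split in Spring Boot (since the question mentions it); the paths, the Item type and the service methods are assumptions for illustration:

import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.PutMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/items")
public class ItemController {

    private final ItemService service; // hypothetical service

    public ItemController(ItemService service) {
        this.service = service;
    }

    @PostMapping // creation only
    public ResponseEntity<Item> create(@RequestBody Item item) {
        return new ResponseEntity<>(service.create(item), HttpStatus.CREATED);
    }

    @PutMapping("/{id}") // update only
    public ResponseEntity<Item> update(@PathVariable Long id, @RequestBody Item item) {
        return ResponseEntity.ok(service.update(id, item));
    }
}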
We have a REST API that reads and deletes a record from the database and returns the read value back to the client, all in the same call. We have exposed it using HTTP POST. Should this be exposed as HTTP GET? What would be the implications in terms of caching if we exposed it as GET?
First, you should keep in mind that one of the reasons we care whether a request is safe or idempotent is that the network is unreliable. Some non-zero number of responses to the query are going to be lost, and what do you want to do about that?
A protocol where the client uses GET to request the resource, and then DELETE to acknowledge receipt, may be a more reliable choice than burning the resource on a single response.
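In code, that read-then-acknowledge protocol could look something like this on the client side (the URL layout and the use of java.net.http are assumptions, purely for illustration):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ReadThenAcknowledge {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        URI record = URI.create("https://api.example.com/records/42");

        // Step 1: read the record; safe to retry if the response is lost on the network.
        HttpResponse<String> read = client.send(
                HttpRequest.newBuilder(record).GET().build(),
                HttpResponse.BodyHandlers.ofString());

        // Step 2: acknowledge only after the data has safely arrived;
        // this is the point at which the server may delete the record.
        client.send(
                HttpRequest.newBuilder(record).DELETE().build(),
                HttpResponse.BodyHandlers.discarding());
    }
}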
Should this be exposed as HTTP GET?
Perhaps. I would not be overly concerned with the fact that the second GET returns a different response than the first. Safe/idempotent doesn't promise that the response will be the same every time; it just promises that repeating the request doesn't change the effects.
DELETE, for example, is idempotent, because deleting something twice is the same as deleting it once, even though you might return 200 to the first request and 404/410 to the second.
HTTP does not attempt to require the results of a GET to be safe. What it does is require that the semantics of the operation be safe, and therefore it is a fault of the implementation, not the interface or the user of that interface, if anything happens as a result that causes loss of property (money, BTW, is considered property for the sake of this definition).
I think the thing to pay attention to here is "loss of property". What kind of damage is caused if generic components assume that GET means GET, and act accordingly (for example, by pre-fetching the resource, or by crawling the API)?
But you definitely need to be thinking about the semantics: are we reading the document, with the deletion of the database record as a side effect? Or are we deleting the record and receiving a last-known representation as the response?
POST, of course, is also fine -- POST can mean anything.
What will be the implications in terms of Caching in case we expose it as GET.
RFC 7234 - I don't believe there are any particularly unusual implications. You should be able to get the caching behavior you want by specifying the appropriate headers.
If I'm interpreting your use case correctly, then you may want to include a private directive, for example.
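For example, in a Spring-style controller (the framework, the RecordDto type and the service are assumptions; the advice above is framework-agnostic), the private directive could be attached like this:

import org.springframework.http.CacheControl;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class RecordController {

    private final RecordService recordService; // hypothetical service

    public RecordController(RecordService recordService) {
        this.recordService = recordService;
    }

    @GetMapping("/records/{id}")
    public ResponseEntity<RecordDto> readAndRemove(@PathVariable long id) {
        RecordDto record = recordService.readAndRemove(id);
        return ResponseEntity.ok()
                .cacheControl(CacheControl.empty().cachePrivate()) // sends "Cache-Control: private"
                .body(record);
    }
}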
As per the above discussion, this looks like a PUT request. You should not use GET: GET is meant to be safe and idempotent, and here the same data is no longer available on a second call. POST is used to create a new resource. So it would be better to use the PUT HTTP method for this kind of requirement. Refer to the link below for more details.
https://restfulapi.net/http-methods/
I am wondering what the correct way is to handle this type of request. I have delete requests coming from a UI, and each carries a list of IDs, which are integers. So the request can look like:
www.myui.com/delete/1,2,3,4
which is a well-formatted request. But if the request came from curl or Postman etc., it may for whatever reason be formatted like:
www.myui.com/delete/1,,3,4
In this case the second index will contain null, since the controller is expecting Integers. However, if we were expecting a list of Strings, that entry would simply be an empty string, or some number of whitespace characters if it was formatted like /1, ,2,3, 4, so I would have to loop through the request, check whether any string in the list is only whitespace, and throw back a 404.
Should I be doing this in the controller, or let this type of request pass through and eventually have the exception thrown in the DAO, since it is going to try to delete an ID that is either null or just whitespace, which won't exist in the DB?
Below is an example of how I am currently handling the request which is a List of Integers.
#DeleteMapping(value="/delete/{ids}")
public ResponseEntity delete(#PathVariable("ids") List<Integer> ids)
throws DatabaseException {
if (ids.contains(null)) {
return new ResponseEntity<>(HttpStatus.BAD_REQUEST);
}
service.delete(ids);
return new ResponseEntity<>(HttpStatus.NO_CONTENT);
}
In short, fail fast is the better approach, as it helps detect malfunctioning early and quickly. That said, you might also want to consider the business requirements and the general design guidelines of your application; if it should be lenient, then you might go with something like:
service.delete(ids.stream().filter(Objects::nonNull).collect(Collectors.toList()));
and return a response body containing at least the number of items deleted.
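For illustration, a lenient version of the endpoint from the question could look roughly like this (returning the count of deleted items is just one possible choice of response body):

import java.util.List;
import java.util.Objects;
import java.util.stream.Collectors;

@DeleteMapping("/delete/{ids}")
public ResponseEntity<Integer> delete(@PathVariable("ids") List<Integer> ids)
        throws DatabaseException {
    // Silently drop the null entries and delete only the usable IDs.
    List<Integer> validIds = ids.stream()
            .filter(Objects::nonNull)
            .collect(Collectors.toList());
    service.delete(validIds);
    return ResponseEntity.ok(validIds.size()); // report how many items were actually deleted
}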
If your application has to be strict, then a bad request should be returned as soon as possible as you are already doing.
Also, you have to take into consideration that your services and/or DAOs are not called exclusively from controllers, so validations and/or checks have to be implemented there as well. And try not to let malformed requests hit the database if you already know they will lead to errors; that would just be wasted traffic.
Finally, I hope the integer IDs in your case are not DB-generated. That would be a major security issue, since you would be exposing persistence details over your API: an attacker could wipe out your database, or parts of it, just by sending lists of incremented integers. I would suggest exposing some kind of randomly generated unique IDs over the API (this does not mean you should get rid of the integer-based indices internally).
It is correct to handle errors as soon as you detect them. In this case the bad request is detected at the controller level, so that is the best place to handle it.
Though your approach is fine, looking at @ExceptionHandler may be instructive, as it can be used to generalize the handling of known exceptions at the controller level.
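A minimal sketch of what that could look like inside the controller (the exception type and the status mapping are assumptions):

import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.ExceptionHandler;

@ExceptionHandler(DatabaseException.class)
public ResponseEntity<Void> handleDatabaseException(DatabaseException ex) {
    // Any handler method in this controller that throws DatabaseException ends up here.
    return new ResponseEntity<>(HttpStatus.BAD_REQUEST);
}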
I have to make a request with only one parameter, for example:
example.com/confirm/{unique-id-value}
I don't expect to get any data in the body; I am only interested in the response code.
I need advice on which method to use: GET or POST.
I think GET is also OK because the request only uses a path parameter, but on the other hand POST also seems right, because I don't expect to receive any data in the body; I am just making an informative request and am only interested in the status code of the result.
The confirm suggests that a request to this URL will change some state on the server by 'confirming' some 'task' that is identified by a unique ID. So we are talking about the Resource (the R in REST) of a 'task confirmation'. A GET request will get the current state of such a Resource. GET must not have side effects like changing the state of the 'task confirmation' resource. If it is unconfirmed before a GET request, it must still be unconfirmed after such a request.
If you want to change the state of the 'task confirmation' Resource, you must use a different HTTP verb. But since you write that you will not pass any request body, it is hard to recommend a RESTful approach.
One disadvantage of using GET is that its response is often cached, so if you inquire about the same ID repeatedly, you might not get the results you expect, unless you do some shenanigans to prevent caching (such as appending a unique timestamp to the GET URL for every request). POST requests, on the other hand, are generally not cached, so you would get the correct result every time without any additional work.
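If you do go with POST, a minimal mapping could look like this (a fragment in Spring style, purely as an assumption about your stack; the service and status codes are placeholders):

@PostMapping("/confirm/{uniqueId}")
public ResponseEntity<Void> confirm(@PathVariable String uniqueId) {
    boolean confirmed = confirmationService.confirm(uniqueId); // hypothetical service call
    return confirmed
            ? new ResponseEntity<>(HttpStatus.NO_CONTENT) // success, no body
            : new ResponseEntity<>(HttpStatus.NOT_FOUND); // unknown id
}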
I'm writing a Java webservice with CXF. I have the following problem: a client calls a method from the webservice. The webservice has to do two things in parallel and starts two threads. One of the threads needs some additional information from the client. It is not possible to pass this information when calling the webservice method, because it depends on the calculation done in the webservice. I cannot redesign the webservice because it is part of a course assignment and the assignment states that I have to do it this way. I want to pause the thread and notify it when the client delivers the additional information. Unfortunately it is not possible in Java to notify a particular thread. I can't find any other way to solve my problem.
Has anybody a suggestion?
I've edited my answer after thinking about this some more.
You have a fairly complex architecture and if your client requires information from the server in order to complete the request then I think you need to publish one or more 'helper' methods.
For example, you could publish (without all the Web Service annotations):
MyData validateMyData(MyData data);
boolean processMyData(MyData data);
The client would then call validateMyData() as many times as it likes, until it knows it has complete information. The server can modify (through calculation, database look-up, or whatever) the fields in MyData in order to help complete the information and pass it back to the client (for updating the UI, if there is one).
Once the information is complete the client can then call processMyData() to process the complete request.
This has the advantage that the server methods can be implemented without the need for background threads as they should be able to do their thing using the request-thread supplied by the server environment.
The only caveat to this is if MyData can get very large and you don't want to keep passing it back and forth between client and server. In that case you would need to come up with a smaller class that just contains the changes the server wants to make to MyData and exclude data that doesn't need correcting.
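Putting the annotations back in, the helper methods could be published roughly like this, with a corresponding client-side loop (MyData, its isComplete() flag and the build/ask helpers are placeholders, not a real API):

import javax.jws.WebMethod;
import javax.jws.WebService;

@WebService
public interface MyService {
    @WebMethod
    MyData validateMyData(MyData data);

    @WebMethod
    boolean processMyData(MyData data);
}

// Client side: keep validating until the data is complete, then process it.
public class MyClient {
    private final MyService service;

    public MyClient(MyService service) {
        this.service = service;
    }

    public boolean run() {
        MyData data = buildInitialData();          // placeholder: initial user input
        while (!data.isComplete()) {
            data = service.validateMyData(data);   // server fills in or corrects what it can
            data = askUserForMissingFields(data);  // client supplies what only it knows
        }
        return service.processMyData(data);
    }

    private MyData buildInitialData() { return new MyData(); }            // placeholder
    private MyData askUserForMissingFields(MyData data) { return data; }  // placeholder
}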
IMO it's pretty odd for a web service request to effectively be incomplete. Why can't the request pass all the information in one go? I would try to redesign your service like that, and make it fail if you don't pass in all the information required to process the request.
EDIT: Okay, if you really have to do this, I wouldn't actually start a new thread when you receive the first request. I would store the information from the first request (whether in a database, or just in memory if this is just a dummy implementation) and then, when the second request comes in, launch the thread.
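A minimal sketch of that idea (the request types, the correlation ID and the executor are all assumptions for illustration):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class TwoStepService {

    private final ConcurrentMap<String, FirstRequestData> pending = new ConcurrentHashMap<>();
    private final ExecutorService executor = Executors.newCachedThreadPool();

    // First web service call: just remember the data, start nothing yet.
    public void submitFirstPart(String requestId, FirstRequestData data) {
        pending.put(requestId, data);
    }

    // Second call: now that all the information is there, launch the background work.
    public void submitSecondPart(String requestId, SecondRequestData extra) {
        FirstRequestData first = pending.remove(requestId);
        if (first == null) {
            throw new IllegalStateException("No pending request for id " + requestId);
        }
        executor.submit(() -> process(first, extra));
    }

    private void process(FirstRequestData first, SecondRequestData extra) {
        // the actual parallel work goes here
    }
}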