We have a REST API that reads and deletes a record from the database and returns the read value back to the client, all in the same call. We have exposed it using HTTP POST. Should this be exposed as HTTP GET? What would the implications be in terms of caching if we expose it as GET?
First, you should keep in mind that one of the reasons we care whether a request is safe or idempotent is that the network is unreliable. Some non-zero number of responses to the query are going to be lost; what do you want to do about that?
A protocol where the client uses GET to request the resource, and then DELETE to acknowledge receipt, may be a more reliable choice than burning the resource on a single response.
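A minimal client-side sketch of that read-then-acknowledge protocol, assuming a hypothetical /records/{id} endpoint that supports both GET and DELETE:

```python
import requests

BASE = "https://api.example.com"  # hypothetical host

def consume_record(record_id: str) -> dict:
    # Step 1: GET is safe, so if this response is lost on the network
    # the client can simply retry without changing anything.
    response = requests.get(f"{BASE}/records/{record_id}")
    response.raise_for_status()
    record = response.json()

    # Step 2: only once the payload is safely in hand does the client
    # acknowledge receipt. DELETE is idempotent, so a lost response
    # here can also be handled by retrying.
    requests.delete(f"{BASE}/records/{record_id}").raise_for_status()
    return record
```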
Should this be exposed as HTTP GET?
Perhaps. I would not be overly concerned with the fact that the second GET returns a different response than the first. Safe/idempotent doesn't promise that the response will be the same every time; it just promises that the second request doesn't change the effects.
DELETE, for example, is idempotent, because deleting something twice is the same as deleting it once, even though you might return 200 to the first request and 404/410 to the second.
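A hedged illustration of that point, with a hypothetical URL; one delete or two, the record is equally gone, even though the status codes differ:

```python
import requests

# Idempotency is about effects, not identical responses.
first = requests.delete("https://api.example.com/records/7")
print(first.status_code)   # e.g. 200: the record existed and was removed

second = requests.delete("https://api.example.com/records/7")
print(second.status_code)  # e.g. 404 or 410: already gone, same end state
```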
HTTP does not attempt to require the results of a GET to be safe. What it does is require that the semantics of the operation be safe, and therefore it is a fault of the implementation, not the interface or the user of that interface, if anything happens as a result that causes loss of property (money, BTW, is considered property for the sake of this definition).
I think the thing to pay attention to here is "loss of property". What kind of damage does it cause if generic components assume that GET means GET and act accordingly (for example, by pre-fetching the resource, or by crawling the API)?
But you definitely need to be thinking about the semantics: are we reading the document, with the delete of the database record as a side effect? Or are we deleting the record, and receiving a last known representation as the response?
POST, of course, is also fine; POST can mean anything.
What would the implications be in terms of caching if we expose it as GET?
RFC 7234 (HTTP caching) - I don't believe there are any particularly unusual implications. You should be able to get the caching behavior you want by specifying the appropriate headers.
If I'm interpreting your use case correctly, then you may want to include a private directive, for example.
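For instance, a sketch of how the relevant headers might be set, using Flask; the endpoint and payload are hypothetical:

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical read-once endpoint; the point is only the header.
@app.route("/messages/<msg_id>")
def read_message(msg_id):
    response = jsonify({"id": msg_id, "body": "..."})
    # 'private' keeps shared caches out of the picture; 'no-store'
    # forbids storing the response at all, which suits a resource
    # that is destroyed by being read.
    response.headers["Cache-Control"] = "private, no-store"
    return response
```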
As per the above discussion, this looks like a PUT request. You should not use GET, because GET is supposed to be safe and idempotent, and here the same data is not available on a second call. POST is used to create a new resource. So it would be better to use the PUT HTTP method for this kind of requirement. Refer to the link below for more details.
https://restfulapi.net/http-methods/
Related
What is the technical drawback of deleting a row on a GET method call in REST? I know that it is not the standard way of doing it, but will there be any issues?
The technical drawback is that indexers (think Google) come along and GET all of the links that they can find, just to see what's there. General purpose components that see a link to your thing might do a GET on it as a way of pre-caching the results in case the client wants them.
Fielding, writing in 2002
HTTP does not attempt to require the results of a GET to be safe. What it does is require that the semantics of the operation be safe, and therefore it is a fault of the implementation, not the interface or the user of that interface, if anything happens as a result that causes loss of property (money, BTW, is considered property for the sake of this definition).
I have a use case where some context needs to be transferred from the UI to the backend, and the backend needs to decide and send the response based on that context.
This can be achieved by sending the context in the request body; on the server side, the body is parsed and the matching representation is sent in the response body.
My question is: which HTTP method is suitable for this?
GET: If we use GET, we can send a request body, but it is advised that the body should not carry any semantics related to the request.
See this: http-get-with-request-body
So I am left with POST or PUT, but these correspond to updating or creating a resource, and using them might be a little misleading.
So my question is: what is the appropriate HTTP method to use in this scenario that is acceptable from a RESTful design standpoint?
Appreciate the response.
I am thinking of using POST or PUT, as there are no restrictions on consuming the request body on the server side.
EDIT:
I think POST would serve my purpose.
HTTP RFC 7231 says that POST can be used for:
Providing a block of data, such as the fields entered into an HTML form, to a data-handling process
So the data-handling process for me is the backend server, and the HTML form is equivalent to any UI element.
So I can make use of the POST method to send the data to the backend and return the existing resource representation in the response body, with HTTP status code 200.
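A minimal sketch of that approach, assuming a Flask backend; the route and field names are hypothetical:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical route and fields: POST a block of context data, answer
# with an existing representation and a 200.
@app.route("/reports", methods=["POST"])
def report_for_context():
    context = request.get_json()  # the "HTML form" equivalent
    # ... look up the representation that matches this context ...
    representation = {"context": context, "result": "..."}
    return jsonify(representation), 200
```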
Please bear in mind that GET must be used for data retrieval only, without side effects. That is, GET is both safe and idempotent (see more details here).
If the operation is meant to be idempotent, go for PUT:
4.3.4. PUT
The PUT method requests that the state of the target resource be created or replaced with the state defined by the representation enclosed in the request message payload. A successful PUT of a given representation would suggest that a subsequent GET on that same target resource will result in an equivalent representation being sent in a 200 (OK) response. [...]
Otherwise, go for POST, which is a catch-all verb:
4.3.3. POST
The POST method requests that the target resource process the representation enclosed in the request according to the resource's own specific semantics. [...]
I would go for POST, because in REST, PUT is used to create a new resource, like a user.
There is also the PATCH method, which is for changing parts of a resource; maybe that's what you are looking for.
So my question is: what is the appropriate HTTP method to use in this scenario that is acceptable from a RESTful design standpoint?
The world wide web is about as RESTful an example as you are going to find, and HTML forms only support GET (which should not have a request body) and POST. So POST must be fine (and it is).
More generally, POST can be used for anything; the other methods should be used when they are a better semantic fit. For example, you can use POST to make a resource unavailable, but DELETE is more explicit, and generic components can do sensible things because they recognize the semantics. PUT is a better choice than POST when what you are intending is to provide the server with a new representation of a resource, and so on.
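A small illustration of that difference, with hypothetical URLs: both requests might remove the same resource, but only the second tells generic components what is happening:

```python
import requests

# Both calls could make order 42 unavailable, but only DELETE carries
# semantics that caches, proxies, and other generic components can
# recognize and act on.
requests.post("https://api.example.com/orders/42/delete")  # opaque tunneling
requests.delete("https://api.example.com/orders/42")       # explicit semantics
```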
I am not able to understand why the payload of an HTTP GET is forbidden.
The payload of an HTTP GET is forbidden because the standard says "don't do that".
I believe it is written that way to simplify the rules for caching the response. As written, cache implementations only have to worry about header data (including information on the start-line).
But it could be as simple as the fact that the older versions of the standard didn't require that generic components do anything specific with the message-body of a GET request, and therefore modern specifications say "don't do that" in order to maintain backward compatibility. (One of the important constraints in designing long-lived systems is that you don't break older implementations.)
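As a hedged aside: many clients will technically let you attach a body to a GET, but the standard gives it no semantics, so nothing in the chain is obliged to look at it. The URL below is hypothetical:

```python
import requests

# The 'requests' library will happily serialize a body onto a GET, but
# servers, caches, and proxies may ignore or reject it. Don't build an
# API that depends on this.
response = requests.get(
    "https://api.example.com/search",
    json={"q": "rest"},  # body with no standard-defined meaning on GET
)
```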
I am a bit confused about whether I should validate the data returned from a remote call to another microservice, or rely on the contract between these microservices.
I know putting extra checks will not hurt anyone, but I would like to know the right approach.
In theory, you don't even know how the data you get back from a microservice is created, since you only know the interface (API) and what it returns.
By that logic, you should take the data in the API's response as given.
Sure, additional validation does no harm in the first place.
But consider a case where some business logic is changed, which leads to a change in one of the services. It could be something simple, like adapting the definition of a KPI, leading to a different response (data-wise, not structure-wise) from the microservice.
Your validation would then fail as a false positive, and you would need to adapt it for basically nothing.
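A sketch of where that line might be drawn, validating only the structure the contract promises; the field names are hypothetical:

```python
# Structure-wise validation only: check that the interface's promised
# fields are present with the right types, but do not second-guess the
# business values. The field names ('id', 'kpi') are hypothetical.
def check_contract(payload: dict) -> dict:
    required = {"id": str, "kpi": float}
    for field, expected_type in required.items():
        if not isinstance(payload.get(field), expected_type):
            raise ValueError(f"contract violation in field '{field}'")
    # Deliberately no data-wise checks: if the owning service redefines
    # how 'kpi' is computed, this consumer keeps working unchanged.
    return payload
```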
I have to make a request with only one parameter, for example:
example.com/confirm/{unique-id-value}
I don't expect to get any data in the body; I am only interested in the response code.
I need advice on which method to use: GET or POST.
I think GET is OK because I am making the request with a path parameter, but on the other hand POST also seems right to use, because I don't expect to receive any data in the body; I am just making an informative request and am only interested in the status code of the result.
The word confirm suggests that a request to this URL will change some state on the server by 'confirming' some 'task' that is identified by a unique ID. So we are talking about the Resource (the R in REST) of a 'task confirmation'. A GET request will get the current state of such a Resource. GET must not have side effects like changing the state of the 'task confirmation' resource. If it is unconfirmed before a GET request, it must be unconfirmed after such a request.
If you want to change the state of the 'task confirmation' Resource, you must use a different HTTP verb. But since you write that you will not pass any request body, it is hard to recommend a RESTful approach.
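Assuming the confirmation really is a state change, a minimal sketch of a non-GET endpoint that returns only a status code; the route and in-memory store are hypothetical:

```python
from flask import Flask

app = Flask(__name__)
confirmations = {}  # in-memory stand-in for real persistence

# Confirming is a state change, so it should not be a GET.
@app.route("/confirm/<unique_id>", methods=["POST"])
def confirm(unique_id):
    confirmations[unique_id] = True
    return "", 204  # no body; the caller only wants the status code
```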
One disadvantage of using GET is that its response is often cached, so if you inquire about the same ID repeatedly, you might not get the results you expect, unless you do some shenanigans to prevent caching (such as appending a unique timestamp to the GET URL for every request). POST responses, on the other hand, are not cached by default, so you would get the correct result every time without any additional work.
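A hedged sketch of the timestamp trick mentioned above; the URL and ID are hypothetical:

```python
import time
import requests

unique_id = "abc-123"  # hypothetical ID
# A unique query parameter makes each URL distinct, so no cached
# response can match it.
url = f"https://example.com/confirm/{unique_id}?_ts={time.time()}"
print(requests.get(url).status_code)
```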
The HATEOAS principle: "Clients make state transitions only through actions that are dynamically identified within hypermedia by the server."
Now I have a problem with the word dynamically, though I guess it's the single most important word there.
If I change one of my parameters from, say, optional to mandatory in the API, I HAVE to fix my client, or else the request will fail.
In short, all HATEOAS does is give the server-side developer extreme liberty to change the API at will, at the cost of all clients using his/her API.
Am I right in saying this, or am I missing something like versioning or maybe some other media-type than JSON which the server has to adopt?
Any time you change a parameter from optional to mandatory in an API, you will break consumers of that API. That it is a REST API that follows HATEOAS principles does not change this in any way. Instead, if you wish to maintain compatibility you should avoid making such changes; ensure that any call made or message sent by a client written against the old API will continue to function as expected.
On the other hand, it is also a good idea to not write clients to expect the set of returned elements to always be identical. They should be able to ignore additional information given by a server if the server chooses to provide it. Again, this is just good API design.
HATEOAS isn't the problem. Excessively rigid API expectations are the problem. HATEOAS is just part of the solution to the problem (as it potentially relieves clients from having to know vast amounts about the state model of the service, even if it doesn't necessarily make that straightforward).
Donal Fellows has a good answer, but there's another side to the same coin. The HATEOAS principle doesn't have anything to say itself about the format of your messages (other parts of REST do); instead, it means essentially that the client should not try to know which URIs to act upon out of band. Instead, the server should tell the client which URIs are of interest via hyperlinks (or forms/templates which construct hyperlinks). How it works:
1. The client starts at state 0.
2. The client requests a well-known resource.
3. The server's response moves the client to a new state N. There may be multiple states achievable at this point, depending on the response code and payload.
4. The response includes links (or forms/templates) which tell the client, in band, the set of potential next states.
5. The client selects one of the potential next states by issuing a method on a URI.
6. Repeat steps 3 through 5 for states N+1 and beyond until the client's application needs are met.
In this fashion, the server is free to change the URI that moves the client from state N to state N+1 without breaking the client.
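A minimal sketch of such a client, assuming a hypothetical entry point that returns its links as a list of rel/href pairs; the relation names are invented for illustration:

```python
import requests

# Only the entry-point URI is known out of band; every later URI comes
# from a server response. The 'links' structure and the 'approve'
# relation name are hypothetical.
entry = requests.get("https://api.example.com/").json()

# Index the advertised transitions by link relation, not by URI.
links = {link["rel"]: link["href"] for link in entry["links"]}

# The client selects the next state by relation name, so the server can
# move the resource to a new URI without breaking this code.
response = requests.post(links["approve"])
print(response.status_code)
```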
It seems to me that you misunderstood the quoted principle. Your question suggests that you are thinking about the resources and whether they could be "dynamically" defined, like a mandatory property added to a certain resource type at application runtime. This is not what the principle says, and this was correctly pointed out in other answers. The quoted principle says that the actions within the hypermedia should be dynamically identified.
The actions available for a given resource may change over time (e.g. because someone added or removed a relationship in the meantime), and there may be different actions available for the same resource for different users (e.g. because users have different authorization levels). The idea of HATEOAS is that clients should not make any assumptions about the actions available for a certain resource at any given time. The client should identify the available actions each time it reads that resource.
Edit: The below paragraph may be wrong. See the comments for discussion about it.
On the other hand, clients may have expectations about the data available in the resource, like that a book resource must have a title and that there may be links to the book's author or authors. There is no way of avoiding the coupling introduced by these assumptions, but both service providers and clients should use backward-compatibility and versioning techniques to deal with it.