Using GET for delete in REST - Java

What is the technical drawback of deleting a row on a GET method call in REST? I know that it is not the standard way of doing it, but will there be any issues?

The technical drawback is that indexers (think Google) come along and GET all of the links they can find, just to see what's there. General-purpose components that see a link to your thing might do a GET on it as a way of pre-caching the results in case the client wants them.
Fielding, writing in 2002:
HTTP does not attempt to require the results of a GET to be safe. What it does is require that the semantics of the operation be safe, and therefore it is a fault of the implementation, not the interface or the user of that interface, if anything happens as a result that causes loss of property (money, BTW, is considered property for the sake of this definition).
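To make that concrete, here is a minimal sketch assuming Spring MVC (the question doesn't name a framework, and Row/RowRepository are hypothetical). Keeping the destructive operation on DELETE means a crawler following links can only ever trigger the safe read:

import org.springframework.web.bind.annotation.*;

@RestController
@RequestMapping("/rows")
public class RowController {

    private final RowRepository repository; // hypothetical data-access class

    public RowController(RowRepository repository) {
        this.repository = repository;
    }

    // Safe: crawlers and pre-fetchers may follow this link as often as
    // they like without changing anything.
    @GetMapping("/{id}")
    public Row read(@PathVariable long id) {
        return repository.findById(id);
    }

    // The destructive operation lives on DELETE, which generic components
    // will not issue just because they saw a link.
    @DeleteMapping("/{id}")
    public void delete(@PathVariable long id) {
        repository.deleteById(id);
    }
}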

Appropriate HTTP Method for 'Single Read' REST API

We have a REST API that reads and deletes a record from the database and returns the read value back to the client, all in the same call. We have exposed it using HTTP POST. Should this be exposed as HTTP GET? What would the implications be in terms of caching if we expose it as GET?
First, you should keep in mind that one of the reasons we care whether a request is safe or idempotent is that the network is unreliable. Some non-zero number of responses to the query are going to be lost, and what do you want to do about that?
A protocol where the client uses GET to request the resource, and then DELETE to acknowledge receipt, may be a more reliable choice than burning the resource on a single response.
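A minimal client-side sketch of that protocol, assuming the Java 11+ java.net.http.HttpClient (the URL is a placeholder and error handling is elided):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ReadThenAcknowledge {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        URI record = URI.create("https://api.example.com/records/42"); // placeholder

        // 1. Safe read: can be retried freely if the response is lost.
        HttpResponse<String> response = client.send(
                HttpRequest.newBuilder(record).GET().build(),
                HttpResponse.BodyHandlers.ofString());

        process(response.body());

        // 2. Acknowledge receipt: DELETE is idempotent, so it can also be retried.
        client.send(HttpRequest.newBuilder(record).DELETE().build(),
                HttpResponse.BodyHandlers.discarding());
    }

    private static void process(String body) {
        System.out.println("received: " + body);
    }
}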
Should this be exposed as HTTP GET?
Perhaps. I would not be overly concerned with the fact that the second GET returns a different response than the first. Safe/idempotent doesn't promise that the response will be the same every time; it just promises that the second request doesn't change the effects.
DELETE, for example, is idempotent, because deleting something twice is the same as deleting it once, even though you might return 200 to the first request and 404/410 to the second.
HTTP does not attempt to require the results of a GET to be safe. What it does is require that the semantics of the operation be safe, and therefore it is a fault of the implementation, not the interface or the user of that interface, if anything happens as a result that causes loss of property (money, BTW, is considered property for the sake of this definition).
I think the thing to pay attention to here is "loss of property". What kind of damage does it cause if generic components assume that GET means GET and act accordingly (for example, by pre-fetching the resource, or by crawling the API)?
But you definitely need to think about the semantics -- are we reading the document, with the deletion of the database record as a side effect? Or are we deleting the record and receiving a last known representation as the response?
POST, of course, is also fine -- POST can mean anything.
What will be the implications in terms of Caching in case we expose it as GET.
RFC 7234 - I don't believe there are any particularly unusual implications. You should be able to get the caching behavior you want by specifying the appropriate headers.
If I'm interpreting your use case correctly, then you may want to include a private directive, for example.
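For example, a minimal sketch assuming a javax.servlet-based stack (the class and method names are made up); the directives are standard RFC 7234 cache-control values:

import javax.servlet.http.HttpServletResponse;

public final class CacheHeaders {
    // "private": shared caches must not reuse the response for other clients.
    // "no-store": don't keep a copy at all -- sensible when the read destroys
    // the record, since a cached copy could never be revalidated.
    static void markReadOnce(HttpServletResponse response) {
        response.setHeader("Cache-Control", "private, no-store");
    }
}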
As per the above discussion, this looks like a PUT request. You should not use GET, because GET is expected to be safe and idempotent, and here the same data is not available on a second call. POST is used to create a new resource, so it would be better to use the PUT HTTP method for this kind of requirement. Refer to the link below for more details.
https://restfulapi.net/http-methods/

Data Validation from remote call -Microservices

I am a bit confused about whether I should validate the data returned from a remote call to another microservice, or whether I should rely on the contract between these microservices.
I know putting extra checks will not hurt anyone, but I would like to know what the right approach is.
In theory, you don't even know how the data you get back from a microservice is created, since you only know the interface (API) and what it returns.
By that logic, you should take the data response of this API as a given.
Sure, additional validation may not harm in the first place.
But consider a case where some business logic got changed, leading to a change in one of the services. It could be something as simple as adapting the definition of a KPI, resulting in a different response (data-wise, not structure-wise) from the microservice.
Your validation would then fail as a false positive, and you would need to adapt your validation for basically nothing.
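To illustrate, a minimal sketch (all names are made up): check the structural contract of the response, but don't re-encode the other service's business rules, since those are exactly the checks that produce the false positives described above.

public record KpiResponse(String kpiName, Double value) {

    // Structural contract: the fields the interface promises must be present.
    void checkStructure() {
        if (kpiName == null || value == null) {
            throw new IllegalStateException("malformed response from KPI service");
        }
        // Deliberately NO check like `value < 100`: that would duplicate the
        // other service's business logic here and break when that logic changes.
    }
}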

In Java streams, .peek() is regarded as being for debugging purposes only; would logging be considered debugging? [duplicate]

This question already has answers here: In Java streams is peek really only for debugging? (10 answers)
Closed 4 years ago.
So I have a list of objects, part or all of which I want to be processed, and I would want to log those objects that were processed.
Consider a fictional example:
List<ClassInSchool> classes;
classes
    .stream()
    .filter(verifyClassInSixthGrade())
    .filter(classHasNoClassRoom())
    .peek(classInSchool -> log.debug("Processing classroom {} in sixth grade without classroom.", classInSchool))
    .forEach(findMatchingClassRoomIfAvailable());
Would using .peek() in this instance be regarded as unintended use of the API?
To further explain, the key takeaway in that question is: "Don't use the API in an unintended way, even if it accomplishes your immediate goal." My question is whether every use of peek, apart from debugging your stream until you have verified the whole chain works as designed (and then removing the .peek() again), is unintended use -- that is, whether using it as a means to log every object actually processed by the stream is considered unintended use.
The documentation of peek describes the intent as
This method exists mainly to support debugging, where you want to see the elements as they flow past a certain point in a pipeline.
An expression of the form .peek(classInSchool -> log.debug("Processing classroom {} in sixth grade without classroom.", classInSchool)) fulfills this intent, as it is about reporting the processing of an element. It doesn't matter whether you use the logging framework or just print statements, as in the documentation's example, .peek(e -> System.out.println("Filtered value: " + e)). In either case, the intent matters, not the technical approach. If someone used peek with the intent to print all elements, it would be wrong, even if it used the same technical approach as the documentation's example (System.out.println).
The documentation doesn’t mandate that you have to distinguish between production environment or debugging environment, to remove the peek usage for the former. Actually, your use would even fulfill that, as the logging framework allows you to mute that action via the configurable logging level.
I would still suggest keeping in mind that for some pipelines, inserting a peek operation may add more overhead than the actual operation (or hinder the JVM's loop optimizations to such a degree). But if you do not experience performance problems, you may follow the old advice not to optimize unless you have a real reason to.
peek should be avoided because, for certain terminal operations, it may not be called; see this answer. In that example it would probably be better to do the logging inside the action of forEach rather than using peek, as sketched below. Debugging in this situation means temporary code used for fixing a bug or diagnosing an issue.
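For instance, the question's fictional pipeline could carry the logging in the terminal operation itself (this assumes, as in the question, that findMatchingClassRoomIfAvailable() returns a Consumer):

classes.stream()
    .filter(verifyClassInSixthGrade())
    .filter(classHasNoClassRoom())
    .forEach(classInSchool -> {
        // runs exactly once per element that reaches the terminal operation
        log.debug("Processing classroom {} in sixth grade without classroom.", classInSchool);
        findMatchingClassRoomIfAvailable().accept(classInSchool);
    });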
In Java streams, .peek() is regarded as being for debugging purposes only; would logging be considered debugging?
It depends on whether your logging code is going to be a permanent fixture of your code, or not.
Only you can really know the real purpose of your logging ...
Also note that the javadoc says:
In cases where the stream implementation is able to optimize away the production of some or all the elements (such as with short-circuiting operations like findFirst, or in the example described in count()), the action will not be invoked for those elements.
So, you are liable to find that in some circumstances peek won't be a reliable way to log (or debug) your pipeline.
In general, adding peek is liable to change the behavior of the pipeline and / or the JVM's ability to optimize it ... in a current or future generation JVM.
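Both caveats are easy to reproduce in a small sketch; note that the behavior is explicitly implementation-dependent, so the peek action may or may not run in the count() case:

import java.util.List;

public class PeekElision {
    public static void main(String[] args) {
        List<Integer> list = List.of(1, 2, 3);

        // count() may be computed from the source size alone,
        // in which case the peek action never runs:
        long count = list.stream()
                .peek(e -> System.out.println("counted " + e))
                .count();
        System.out.println("count = " + count);

        // findFirst() short-circuits: peek runs only for the elements
        // actually pulled through the pipeline (here 1 and 2, never 3):
        list.stream()
                .peek(e -> System.out.println("peeked " + e))
                .filter(e -> e > 1)
                .findFirst();
    }
}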
Eh, it's somewhat open to interpretation. Intent is something that's not always easy to determine.
I think the API note was mostly added to discourage an overzealous usage of peek when almost all desirable behaviour can be accomplished without it. It was just too useful for the developers to exclude it completely but they wanted to be clear that its inclusion was not to be taken as an unqualified endorsement; they saw the potential for misuse and they tried to address it.
I suspect - though I'm only speculating - that there were mixed opinions on whether to include it at all, and that including a version with a caveat in the JavaDoc was the compromise.
With that in mind, I think my suggestion for deciding when to use peek would simply be: don't use it unless you have a very good reason to.
In your case, you definitely don't have a good reason to. You're iterating over everything and passing the result to the method findMatchingClassRoomIfAvailable (well, presumably -- your example wasn't very good). If you want to log something for each item in the stream, then just log it at the top of that method.
Is it misuse? I don't think so. Would I write it this way? No.

combined vs. separate backend calls

I'm trying to figure out the best solution for a use case I'm working on, and I'd appreciate some architectural advice from you guys.
I have a use case where the frontend should display a list of users assigned to a task and a list of users who are not assigned but able to be assigned to the same task.
I don't know which is the better solution:
1. Have one backend call which collects both lists of users and sends them back to the frontend within a new data class containing both lists.
2. Have two backend calls which each collect one of the two lists and send them back separately.
The first solution's pro is the single backend call whereas the second solution's pro is the reusability of the separate methods in the backend.
Any advice on which solution to prefer and why?
Is there any pattern or standard I should get familiar with?
When I stumble across the requirement to get data from a server, I start with doing just a single call for, more or less (depending on the problem domain), a single feature (which I would call your task-user list).
This approach saves implementation complexity on the client side and saves protocol overhead per transaction (TCP headers, etc.).
If performance analysis shows that the call is too slow because it requests too much data (user experience suffers), then I would go with your 2nd solution.
Summed up: I would start with the 1st approach and optimize (go with the more complex solution) when it becomes necessary.
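For the single-call variant, the "new data class containing both lists" can stay trivial (a sketch; User and all names are placeholders), and the backend can still assemble it from two reusable internal methods, which partly reconciles the two answers:

import java.util.List;

public class TaskUsersResponse {
    private final List<User> assignedUsers;    // users already assigned to the task
    private final List<User> assignableUsers;  // users who could still be assigned

    public TaskUsersResponse(List<User> assignedUsers, List<User> assignableUsers) {
        this.assignedUsers = assignedUsers;
        this.assignableUsers = assignableUsers;
    }

    public List<User> getAssignedUsers() { return assignedUsers; }
    public List<User> getAssignableUsers() { return assignableUsers; }
}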
I'd prefer the two calls because of the reusability. Maybe one day you need to add a third list of users for one case; if you had only one method, you'd need to change it. But there may be other use cases which require only the two lists and not the third, so you would need to change code there as well, and you would also need to change all your testing methods. If your project gets bigger, this makes it hard to update or fix, and all the modifications increase the chance of introducing new bugs.
It helps to see the backend methods callable by the frontend as an interface.
In general, an interface should be open for extension but closed regarding what its methods return and require, as otherwise a slight modification leads to many more modifications.

Implementing retractions in google dataflow

I read the "The Dataflow Model: A Practical Approach to Balancing Correctness, Latency, and Cost in MassiveScale, Unbounded, Out of Order Data Processing" paper. Alas, the SDK does not yet expose the accumulating & retracting triggering mode (section 2.3).
I was wondering if there was a workaround for getting similar semantics?
I have been reading the source and have figured out that StateTag or StateNamespace may be the way I can store the "last emitted value of the window", which could then be used to compute the retraction message down the pipeline. Is this the correct path, or are there other classes/ways I can/should look at?
The upcoming state API is indeed your best bet for emulating retractions. Those classes you mentioned are part of the state API, but everything in the com.google.cloud.dataflow.sdk.util package is for internal use only; we technically make no guarantees that those APIs won't change drastically, or even that they'll be released at all. That said, releasing that API is on our roadmap, and I'm hopeful we'll get it released relatively soon.
One thing to keep in mind: all the code downstream of your custom retractions will need to be able to differentiate them from normal records. This is something we'll do automatically for you once bona fide retraction support is ready, but in the meantime, you'll just need to make sure all the code you write that might receive a retraction knows how to recognize and handle it as such.
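A framework-free sketch of that idea (all names are hypothetical; in a real pipeline the last emitted value would live in per-key, per-window state rather than a plain map): tag every element so downstream code can distinguish retractions from normal records, and retract the previous value before emitting an update.

import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

public class RetractingEmitter<K, V> {

    /** A pipeline element tagged so consumers can recognize retractions. */
    public record Update<T>(T value, boolean isRetraction) {}

    private final Map<K, V> lastEmitted = new HashMap<>();

    /** Retracts the previously emitted value (if any), then emits the new one. */
    public void emit(K key, V newValue, Consumer<Update<V>> downstream) {
        V previous = lastEmitted.put(key, newValue);
        if (previous != null) {
            downstream.accept(new Update<>(previous, true));  // retraction
        }
        downstream.accept(new Update<>(newValue, false));     // normal record
    }
}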
