How could the PUT method be idempotent but not safe? Can someone explain this?
HTTP Method | Idempotent | Safe
OPTIONS     | yes        | yes
GET         | yes        | yes
HEAD        | yes        | yes
PUT         | yes        | no
POST        | no         | no
DELETE      | yes        | no
PATCH       | no         | no
A safe method doesn't change anything on the server (no effect on resources).
Safe methods are methods whose responses can be cached or prefetched without any repercussions to the resource.
An idempotent method has the same effect on the server whether it is called once or many times.
An idempotent HTTP method is one that can be called many times without different outcomes on the server (though the responses may differ).
It's all in the specification:
4.2.2. Idempotent Methods
A request method is considered "idempotent" if the intended effect on
the server of multiple identical requests with that method is the same
as the effect for a single such request. Of the request methods
defined by this specification, PUT, DELETE, and safe request methods
are idempotent.
Like the definition of safe, the idempotent property only applies to
what has been requested by the user; a server is free to log each
request separately, retain a revision control history, or implement
other non-idempotent side effects for each idempotent request.
Idempotent methods are distinguished because the request can be
repeated automatically if a communication failure occurs before the
client is able to read the server's response. For example, if a client
sends a PUT request and the underlying connection is closed before any
response is received, then the client can establish a new connection
and retry the idempotent request. It knows that repeating the request
will have the same intended effect, even if the original request
succeeded, though the response might differ.
(https://greenbytes.de/tech/webdav/rfc7231.html#idempotent.methods)
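To make the distinction concrete, here is a small sketch (plain Java, with an in-memory map standing in for the server's resource store; all names are illustrative) showing why PUT is idempotent but not safe, while POST is neither:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy "server" state: a map of resource id -> value for PUT,
// and an append-only list for POST (each POST creates a new entry).
public class IdempotencyDemo {
    static Map<String, String> resources = new HashMap<>();
    static List<String> created = new ArrayList<>();

    // PUT replaces the resource at a known id: not safe (it changes
    // server state), but repeating it leaves the server in the same state.
    static void put(String id, String value) {
        resources.put(id, value);
    }

    // POST creates a new resource each time: repeating it changes the
    // outcome (two entries instead of one), so it is not idempotent.
    static void post(String value) {
        created.add(value);
    }

    public static void main(String[] args) {
        put("42", "hello");
        put("42", "hello");                   // retrying the PUT is harmless
        System.out.println(resources.size()); // 1 -- same state as one PUT

        post("hello");
        post("hello");                        // retrying the POST duplicates
        System.out.println(created.size());   // 2 -- a different outcome
    }
}
```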
Our project consists of multiple microservices. These microservices form a bounded context whose entry point is not strictly defined: each microservice can be requested directly and can request other services.
The situation we need to handle in this bounded microservice context is the following:
a client (another application) makes a request to perform some logic and change data (PATCH),
the request times out,
while the request is still being processed, the client fires the same request to repeat the operation,
the first operation completes successfully,
the second request is processed the same way, completes within its time limit, and the client gets a response.
What happened is that the same request was processed twice because of the first timeout.
We need to make sure the same request won't get processed again and that the application will respond with the former response and status code.
The subsequent request is identified by the same uuid.
Now, I understand that the client should be more careful with its retries, or that we should have a single request entry point into our microservices bounded context, but in enterprise projects the team doesn't own the whole system, so we are somewhat constrained in the solutions we can propose. With this in mind, and trying not to reinvent the wheel, this comes to mind:
The microservices should utilize some kind of session sharing (spring-session?) with the ability to look up a request by its id before it gets processed. In the described case, when the first request is being processed and the second arrives, the second should wait for the completion of the first and be answered with the data of the first, which timed out for the client.
What I am struggling with is handling the asynchronicity of replying to the second request, and how to listen for the session state of the first.
If spring-session were used (for example with Hazelcast), I'm lacking some kind of concrete session state handler that would fire when a request ends. Is there something like this to listen for?
No code is written yet; this is an architectural thought experiment I want to discuss.
If anything is unclear, please read it a second time; I'm happy to expand.
EDIT: first idea:
the process would be as follows (numbering matches the image):
(1) first request fired
(3) processing started; (2) request timed out meanwhile;
(4) the client repeats the same request; the program knows it has received the same request before because it knows the request id.
the program checks the cache and sees the state of that request id is 'pending', so it WAITS (asynchronously).
the computed result of the first request is saved into the cache - orange square
(5) the program responds to the repeated request with the data that was computed for the first one
The idea is that result checking and responding to the repeated request would be done in the filter chain, so the second request won't actually hit the controller while it asynchronously waits for the operation triggered by the first request to finish (I see Hazelcast has events for when entries are added/updated/evicted from the cache - I don't know yet if that works here). When the operation completes, the filter would just respond (somehow write to the HttpServletResponse). The result would be saved into the cache in a post-handling filter.
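A minimal sketch of that waiting idea, assuming each request carries a unique id; a plain ConcurrentHashMap of CompletableFutures stands in here for the shared session store (Hazelcast would play that role across JVMs), and the class and method names are made up for illustration:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.function.Supplier;

// Sketch: the first request with a given id registers a pending future
// and runs the real work; a duplicate finds the future already present
// and simply waits for (and returns) the first request's result.
public class RequestDeduplicator {
    private final ConcurrentMap<String, CompletableFuture<String>> pending =
            new ConcurrentHashMap<>();

    public String handle(String requestId, Supplier<String> work) {
        CompletableFuture<String> fresh = new CompletableFuture<>();
        CompletableFuture<String> existing = pending.putIfAbsent(requestId, fresh);
        if (existing != null) {
            return existing.join();  // duplicate: wait for the original's result
        }
        try {
            String result = work.get();  // only the first request computes
            fresh.complete(result);
            return result;
        } catch (RuntimeException e) {
            fresh.completeExceptionally(e);
            throw e;
        }
    }
}
```

In a real filter, doFilter would derive requestId from a header, write the joined result to the HttpServletResponse for duplicates, and evict completed entries after a TTL so the map doesn't grow without bound.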
Thanks for insights.
I'd consider this more of a caching paradigm. Stick your request/responses into an external cache provider (Redis or similar), indexed by uuid. Having a TTL will allow responses to be cleaned up automatically for requests that are never coming back, and the high-speed O(1) lookups should allow this to scale nicely. It will also give you an asynchronous model out of the box (not a stated goal, but always a nice option).
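A minimal sketch of that idea; an in-memory map with per-entry expiry stands in for Redis here (Redis would give you the TTL and O(1) lookup natively via SETEX/GET), and all names are illustrative:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Idempotency cache keyed by request uuid: responses expire after a TTL,
// so entries for requests that are never retried clean themselves up.
public class ResponseCache {
    private static class Entry {
        final String response;
        final long expiresAtMillis;
        Entry(String response, long expiresAtMillis) {
            this.response = response;
            this.expiresAtMillis = expiresAtMillis;
        }
    }

    private final Map<String, Entry> cache = new ConcurrentHashMap<>();
    private final long ttlMillis;

    public ResponseCache(long ttlMillis) {
        this.ttlMillis = ttlMillis;
    }

    public void put(String uuid, String response) {
        cache.put(uuid, new Entry(response, System.currentTimeMillis() + ttlMillis));
    }

    // Returns the cached response, or null if absent or expired.
    public String get(String uuid) {
        Entry e = cache.get(uuid);
        if (e == null) return null;
        if (System.currentTimeMillis() > e.expiresAtMillis) {
            cache.remove(uuid);  // lazy eviction on read
            return null;
        }
        return e.response;
    }
}
```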
I have a queue and a consumer written in Java for this queue. After consuming an item, we execute an HTTP call to a downstream partner; this is a one-way asynchronous call. After this request, the downstream partner will send an HTTP request back to our system with the response to the initial asynchronous call. This response is needed by the same thread that executed the initial asynchronous call, which means we need to expose an endpoint the downstream system can call to send the response back. I would like to know how I can implement a requirement like this.
PS: We could also receive the same response via a different web service and update a database row with it. But I'm not sure how to pause the main thread and listen on that database row until the response is needed.
Hope you understand what I want with this requirement.
My response is based on some assumptions. (I didn't wait for you to respond to my comment, since I found the problem had some other interesting features anyhow.)
the downstream partner will send an HTTP request back to our system
This necessitates that you have a listening port (i.e., a server) running on your side. This server could be in the same JVM or a different one. But...
This response is needed for the same thread
This is a little confusing because, at a high level, our interest is usually not in programmatically reusing the thread itself but in reusing the object (no matter on which thread). To reuse threads, you may consider using an ExecutorService. What you might try to do, I have tried to depict in this diagram.
Here are the steps:
"Queue Item Consumer" consumes item from the queue and sends the request to the downstream system.
This instance of the "Queue Item Consumer" is cached for handling the request from the downstream system.
There is a listener running at some port within the same JVM to which the downstream system sends its request.
The listener forwards this request to the "right" cached instance of the "Queue Item Consumer" (you have to figure out a way to do this based on your caching mechanism). Perhaps a header has to be present in the request from the downstream system to identify the right handler on this side.
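The steps above might be sketched roughly like this; a map from correlation id to CompletableFuture plays the role of the cache in step 2, and the listener in step 3 would call completeResponse when the downstream request arrives (all names here are illustrative):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Correlates the outgoing downstream call with the response that later
// arrives on our own HTTP listener, using an id both sides share
// (e.g. sent as a header alongside the downstream request).
public class CorrelationRegistry {
    private final ConcurrentMap<String, CompletableFuture<String>> waiting =
            new ConcurrentHashMap<>();

    // Called by the queue item consumer after it fires the downstream call.
    public CompletableFuture<String> awaitResponse(String correlationId) {
        return waiting.computeIfAbsent(correlationId, id -> new CompletableFuture<>());
    }

    // Called by the HTTP listener when the downstream system calls back.
    public void completeResponse(String correlationId, String body) {
        waiting.computeIfAbsent(correlationId, id -> new CompletableFuture<>())
               .complete(body);
    }
}
```

The consumer would then block on awaitResponse(id).get(timeout, unit) rather than holding its thread open indefinitely; completing before awaiting also works, which covers a callback that races ahead of the consumer.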
Hope this works for you.
From JavaScript, I am calling a REST method which is computationally intensive. Would it be possible to stop that REST call if I am no longer interested in what it returns?
I understand it is possible to abort a request in JS, but that won't stop the thread that gets triggered on the server by the REST call. This is how I am aborting the ajax call in JS:
Abort Ajax requests using jQuery
The REST interface is written in Java, and internally this thread may create multiple threads as well.
I would like to stop the Java thread, but from the caller: from the JS where I triggered it.
How to properly stop the Thread in Java?
As Chris mentioned in the comments above, REST calls should be quick, definitely not an hour long. If the server needs to do a lot of work that takes a considerable amount of time, you should change your design to be asynchronous: either provide a callback the server will use once it's done (the push approach), or poll every few minutes by sending a new request to the server to see if it's done.
To implement this you'll need the server to return a unique id for each request, so that the callback/status check can identify which request it refers to.
The unique id should be generated on the server side, to avoid two clients sending the same ID and overriding each other.
In the link I posted above you can see an example of how to implement a "stop thread" mechanism, which can be implemented on the server side and called by the client whenever needed.
You could send a unique identifier along with your request, and then make another request that instructs the server to abort the operation started for that ID.
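That suggestion could be sketched like this; a registry of running tasks keyed by the client-supplied id lets a second "abort" endpoint cancel the work. Note the interrupt only helps if the long-running code blocks interruptibly or checks Thread.interrupted(); all names are illustrative:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// The REST handler submits the heavy work here under the request id;
// a separate "abort" endpoint calls cancel(id) with the same id.
public class CancellableTasks {
    private final ExecutorService pool = Executors.newFixedThreadPool(4);
    private final ConcurrentMap<String, Future<?>> running = new ConcurrentHashMap<>();

    public Future<?> submit(String requestId, Runnable work) {
        Future<?> f = pool.submit(work);
        running.put(requestId, f);
        return f;
    }

    // Interrupts the worker thread; the computation must poll
    // Thread.interrupted() (or block interruptibly) for this to stop it.
    public boolean cancel(String requestId) {
        Future<?> f = running.remove(requestId);
        return f != null && f.cancel(true);
    }

    public void shutdown() {
        pool.shutdownNow();
    }
}
```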
I'd like to repeat an HTTP request automatically if a database deadlock occurs; however, FilterChain.doFilter() is defined as a unidirectional chain (so I cannot reset its state).
In cases where it's safe to do so, is it possible to repeat an HTTP request without having the client re-submit the request?
UPDATE: I just discovered a problem with this approach. Even if you repeat the request, you will need to buffer the request InputStream. This means that if a user uploads 100MB of data, you'll be forced to buffer that data regardless of whether a deadlock occurs.
I am exploring the idea of getting the client to repeat the request here: Is it appropriate to return HTTP 503 in response to a database deadlock?
Answering my own question:
Don't attempt to repeat an HTTP request. In order to do so you are going to be forced to buffer the InputStream for all requests, even if a deadlock never occurs. This opens you up to denial-of-service attacks if you are forced to accept large uploads.
I recommend this approach instead: Is it appropriate to return HTTP 503 in response to a database deadlock?
You can then break down large uploads into multiple requests stitched together using AJAX. Not pretty but it works and on the whole your design should be easier to implement.
UPDATE: According to Brett Wooldridge:
You want a small pool of a few dozen connections at most, and you want the rest of the application threads blocked on the pool awaiting connections.
Just as Hikari recommends a small number of threads with a long queue of requests, I believe the same holds true for the web server. By limiting the number of active threads, we limit the number of InputStreams we need to buffer (the remaining requests get blocked before sending the HTTP body).
To further reinforce this point, Craig Ringer recommends recovering from failures on the server side where possible.
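A sketch of that throttling idea, under the stated assumption that all buffering happens inside the guarded section; a plain semaphore stands in here for the small worker pool:

```java
import java.util.concurrent.Semaphore;
import java.util.function.Supplier;

// At most 'limit' requests buffer their bodies and run concurrently;
// the rest block here before any buffering happens, which caps the
// memory a retry-with-buffering scheme could consume.
public class BoundedIntake {
    private final Semaphore permits;

    public BoundedIntake(int limit) {
        this.permits = new Semaphore(limit);
    }

    public <T> T process(Supplier<T> bufferAndHandle) {
        permits.acquireUninterruptibly();  // blocks when 'limit' are in flight
        try {
            return bufferAndHandle.get();  // buffer the body, then handle/retry
        } finally {
            permits.release();
        }
    }

    public int available() {
        return permits.availablePermits();
    }
}
```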
You can do a 'forward' of the original request as shown below:
RequestDispatcher rd = request.getRequestDispatcher("/originalUrl");
rd.forward(request, response);
Here request and response represent HttpServletRequest and HttpServletResponse respectively. Refer to
http://docs.oracle.com/javaee/5/api/index.html?javax/servlet/RequestDispatcher.html
Alternatively, you can do a redirect on the response. This, however, will send a response to the browser asking it to issue a new request for the provided URL. This is shown below:
response.sendRedirect("originalUrl?neededParam=abc");
Per the javadoc:
Indicates that the given @WebMethod has only an input message and no output. Typically, a one-way method returns the thread of control to the calling application prior to executing the actual business method. A JSR-181 processor should report an error if an operation marked @Oneway has a return value or Holder parameters, or declares any checked exceptions.
Can I assume, then, that if I need exception handling (checked or unchecked), this annotation is not recommended? I don't return anything from the business logic; however, I still have an interest in being aware of timeouts and other errors specific to the act of calling a SOAP method. Does this annotation mean I don't have access to HTTP return codes or thrown exceptions?
Question: Am I better off threading this out on my own to get a truly asynchronous call, and removing the @Oneway annotation?
@Oneway means nothing will ever escape your method, neither a response nor an exception. This is for two reasons:
technically, an exception is just another type of response (a SOAP fault), thus it cannot be returned from a one-way method (which can't return anything)
often, one-way methods are executed asynchronously by the web service framework (I know Apache CXF does that). The framework returns immediately, so your client might have received an empty response even before the handling of the one-way method started. When the exception is thrown, the original HTTP connection is long gone.
So if you want to propagate exceptions or timeouts, use a standard SOAP method with an empty response* and a few faults declared explicitly. If you want to time out your call after some period, you'll need a separate thread pool and a blocking wait for the response for a given amount of time.
* Please do not confuse an empty SOAP response (an XML document with no content, just a root tag, wrapped in a SOAP envelope) with an empty HTTP response (nothing sent back). Remember that SOAP is not limited to HTTP. For example, if you use JMS or e-mail transport, the empty response (or fault) of an ordinary two-way operation is yet another message sent from server to client; a one-way method is just one request message with nothing sent back.
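The "separate thread pool and blocking wait" suggestion might look roughly like this; the Callable passed in stands for whatever generated two-way client stub you invoke:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Runs a blocking (two-way) web service call on its own pool and gives
// the caller a timeout, instead of relying on @Oneway fire-and-forget.
public class TimedSoapCall {
    private final ExecutorService pool = Executors.newFixedThreadPool(4);

    public <T> T callWithTimeout(Callable<T> stubCall,
                                 long timeout, TimeUnit unit) throws Exception {
        Future<T> future = pool.submit(stubCall);
        try {
            return future.get(timeout, unit);  // blocks up to the timeout
        } catch (TimeoutException e) {
            future.cancel(true);               // give up on the slow call
            throw e;
        }
    }

    public void shutdown() {
        pool.shutdownNow();
    }
}
```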