Java SOAP/REST web services: client timeout but server does not rollback

I have a Java client app and a Java server app.
My client can experience network slowdowns.
My client performs SOAP web service calls to my server app. The problem is that sometimes the client reaches its timeout (40 sec) because the network is really, really bad.
For the client app this request is a failure, and it retries the same call a bit later. But the server has already integrated the data from the client, and I get key violation errors from my ORM.
I do not want to extend the timeout on the client side.
My question is: when the client times out, is there a way to roll back everything on the server side?
Thanks

One of the options to solve this is to set a flag/status in the database when the request is accepted by the server, something like inProcessing, and change this flag to Complete after the data has been processed successfully.
When the client retries the same call later, you can check this flag, and if flag = inProcessing or Complete, don't do any data processing.
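A minimal sketch of that flag approach, assuming the client can send a unique request id with each call; the request_log table, the column names and the RequestGuard class are illustrative, not something from the original question (a unique constraint on request_id would also close the race between two concurrent retries):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Hypothetical sketch: table and column names are illustrative.
public class RequestGuard {

    private final Connection connection;

    public RequestGuard(Connection connection) {
        this.connection = connection;
    }

    /**
     * Returns true if this request should be processed, false if a previous
     * attempt is already inProcessing or Complete.
     */
    public boolean tryAcquire(String requestId) throws SQLException {
        try (PreparedStatement check = connection.prepareStatement(
                "SELECT status FROM request_log WHERE request_id = ?")) {
            check.setString(1, requestId);
            try (ResultSet rs = check.executeQuery()) {
                if (rs.next()) {
                    return false; // already inProcessing or Complete: skip the data processing
                }
            }
        }
        try (PreparedStatement insert = connection.prepareStatement(
                "INSERT INTO request_log (request_id, status) VALUES (?, 'inProcessing')")) {
            insert.setString(1, requestId);
            insert.executeUpdate();
        }
        return true;
    }

    /** Marks the request as fully processed. */
    public void markComplete(String requestId) throws SQLException {
        try (PreparedStatement update = connection.prepareStatement(
                "UPDATE request_log SET status = 'Complete' WHERE request_id = ?")) {
            update.setString(1, requestId);
            update.executeUpdate();
        }
    }
}

The service would call tryAcquire() before integrating the data and markComplete() in the same transaction as the last write, so a retry after a client timeout is recognized instead of producing key violations.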

Related

Simulate an Http client disconnection before the server reply

The problem:
I am seeing some strange behaviour from a Jetty server (REST over HTTPS) when some client connections are closed (client-side) before the server has had time to reply. Normally this is well managed and expected by a web server/application server, but in one specific instance something breaks and the server stops replying.
I am trying to reproduce the issue programmatically and locally, opening a client connection and closing it before the server has had time to reply, but I do not have much experience with a situation like this; normally the clients I write are not expected to die immediately.
I am not picky about the language/application I use to replicate my case; it can be a Java program, a netcat command, telnet, dotnetcore... The only constraint is that it should run on a Kubernetes pod, if possible.
I am trying to use Java to open a socket and then close it immediately, or to create an HTTP client and stop it immediately after sending a request, but with no luck so far.
At the same time I am looking at netcat, but I fear it's too low-level for a REST request.
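One way to reproduce this is to open a plain TCP socket, send the request bytes and close the connection before reading the reply. A rough sketch, assuming a plain-HTTP endpoint on localhost:8080; the host, port and path are placeholders, and for an HTTPS endpoint you would create the socket via SSLSocketFactory.getDefault() instead:

import java.net.Socket;
import java.nio.charset.StandardCharsets;

// Rough sketch: host, port and path are placeholders.
public class AbortingClient {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("localhost", 8080)) {
            // SO_LINGER with timeout 0 makes close() send an RST instead of a normal FIN,
            // which is closer to a client process that dies abruptly.
            socket.setSoLinger(true, 0);

            String request = "GET /some/endpoint HTTP/1.1\r\n"
                    + "Host: localhost\r\n"
                    + "\r\n";
            socket.getOutputStream().write(request.getBytes(StandardCharsets.UTF_8));
            socket.getOutputStream().flush();
            // Close immediately, before the server has had time to reply.
        }
    }
}

netcat can do something similar (pipe the request in and kill the process right away), but the raw socket gives you control over whether the close is an orderly FIN or an RST, which can matter for reproducing the server-side behaviour.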

While doing gRPC client-side streaming, how will the server behave when it does not receive all of the requests

I'm wondering about a scenario in which the client does data streaming. During that process it sends three requests. Let's assume the server receives only two.
How will the server react in this situation? I guess the server will never notify the client about a finished request (it knows the number of requests that are expected), and the call will get terminated as long as a deadline has been defined. Are my assumptions valid?
I'm working with the Java implementation of gRPC.
That's correct. If the server is waiting for the 3rd request which it never receives, the call will be terminated by the deadline.
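For reference, in grpc-java the deadline is set on the client stub. A rough sketch, where UploadService, Chunk and UploadSummary are hypothetical names standing in for whatever the real proto defines:

import java.util.concurrent.TimeUnit;

import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import io.grpc.stub.StreamObserver;

// Hypothetical proto: UploadServiceGrpc / Chunk / UploadSummary are placeholders.
public class StreamingClient {
    public static void main(String[] args) {
        ManagedChannel channel = ManagedChannelBuilder
                .forAddress("localhost", 9090)
                .usePlaintext()
                .build();

        UploadServiceGrpc.UploadServiceStub stub = UploadServiceGrpc.newStub(channel)
                .withDeadlineAfter(10, TimeUnit.SECONDS); // call fails with DEADLINE_EXCEEDED after 10s

        StreamObserver<Chunk> requestObserver = stub.upload(new StreamObserver<UploadSummary>() {
            @Override public void onNext(UploadSummary summary) { /* the server's single response */ }
            @Override public void onError(Throwable t) {
                // If the third chunk never arrives, this fires with DEADLINE_EXCEEDED
                // once the deadline passes; the server side is cancelled as well.
            }
            @Override public void onCompleted() { }
        });

        requestObserver.onNext(Chunk.newBuilder().build());
        requestObserver.onNext(Chunk.newBuilder().build());
        // The third onNext() is never sent; the deadline eventually terminates the call.
    }
}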

Jersey/Servlet server-side handling of network failures

-- EDIT: --
To rephrase the question.
Does HTTP know anything about the status of the underlying TCP connection?
TCP is a reliable protocol: when the server sends data to the client, it expects an acknowledgment (ACK) from that client. What happens in HTTP when the server-side TCP connection never receives that ACK?
-- ORIGINAL Question: --
I am trying to solve a design issue on our HTTP client/server app.
Here is the situation:
The server runs on Tomcat, and we are somewhat limited to using Jersey or Servlets for the server side implementation.
The client requests data from the server; once the data has been read, it is deleted.
Data must not be deleted if the client has not received it.
There is no confirmation from the client as to whether the data was received or not.
The client implementation cannot be changed in any way.
The network connection is unstable and can be interrupted often and for long periods of time (e.g. 30 sec).
The problem: if the client makes a request and shortly afterwards loses its connection to the server, the server will not recognize this and will delete the data and send it to the client over the dead connection.
Ideally, we want to get an IOException when flushing the data stream to the client and handle it accordingly:
try (ServletOutputStream outputStream = httpServletResponse.getOutputStream()) {
    outputStream.write(bytes);
    outputStream.flush(); // ideally this fails if the connection is already dead
} catch (Exception e) {
    // TODO: do something (e.g. keep the data instead of deleting it)
}
I simulated this locally by killing the client shortly after sending the request, or by setting a very low client read timeout. In both cases I got a server-side exception (with both Jersey and Servlets).
The last test was sending a request over the network and pulling the network cable in the process.
Unfortunately I did not get the expected result: the server streamed the data back without recognizing the interrupted connection.
So, does anyone have an idea how to force a Server side exception when the connection to the client is broken?
Any other ideas that don't involve using Sockets or confirmation calls from the client?
Thanks in advance!
Instead of deleting the file in real time, you can write a message to a queue in order to delete it later. The delete job would then check a database where you record whether the client received the file completely.
I don't think there's a way to know for certain whether the data arrived at the client unless the client sends an acknowledgement message.
The only solution seems to be not actually deleting the data, but keeping it and setting a 'deleted' flag. But since I don't know the particular use case, I'm not sure if this helps...
TCP is a two-way protocol.
If you set up an input stream and call InputStream.read(), this should return -1 if the client has disconnected.
More detail here:
Java Sockets: check if client is able to receive message from server
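As an illustration of that check at the socket level (a servlet container does not hand you the Socket directly, so this only applies to code that owns the connection itself):

import java.io.InputStream;
import java.net.Socket;
import java.net.SocketTimeoutException;

// Illustrative only: applies to code that owns the Socket.
public class DisconnectCheck {

    /** Returns true if the peer has closed its side of the connection. */
    static boolean peerClosed(Socket socket) throws Exception {
        socket.setSoTimeout(1); // don't block forever if the client is simply quiet
        InputStream in = socket.getInputStream();
        try {
            // -1 means the client sent a FIN (orderly close).
            // Note: if the client did send data, this consumes one byte.
            return in.read() == -1;
        } catch (SocketTimeoutException e) {
            return false; // no data available, but the connection is still up
        }
    }
}

Note that a pulled network cable is different from a closed socket: with no FIN or RST ever reaching the server, even this check won't notice the failure until TCP retransmission timeouts kick in, which is consistent with what you observed in the cable test.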

REST Server to client communication

I'm developing a Java API in Spring for an Android app. Right now my API is 100% REST and stateless. For the client to receive data, it must send a request first.
However, what I need is for the server to send data to the client (not the client to the server first) whenever it is ready with its task.
I think that some kind of session must be created between the two parties.
My question is: how can I achieve this functionality of the SERVER sending data to the CLIENT when it's ready with its task? (It is unknown how long the task will take.)
What kind of API should I develop for this purpose?
One idiotic workaround is sending a request to the server every n seconds, but I'm looking for a more intelligent approach.
There are multiple options available. You can choose what suits you best.
HTTP long polling - the server holds the request until it's ready with its task (in your case). Here you don't have to make a new request every few seconds (which would be plain HTTP polling).
Server-sent events - the server pushes updates to the client without long polling. It is a standardized part of HTML5 - https://www.w3.org/TR/eventsource/ (see the Spring sketch below).
WebSockets - WebSockets work in duplex mode over a persistent TCP connection; once the connection is established, both server and client can send data to and fro. Supported by most modern browsers. You can look at Android WebSocket libraries such as autobahn and Java-WebSocket.
SockJS - I would recommend going with this instead of plain WebSocket. http://docs.spring.io/spring/docs/current/spring-framework-reference/htmlsingle/#websocket-fallback-sockjs-enable
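To make the server-sent-events option concrete, here is a rough Spring MVC sketch; the /task endpoint and the placeholder long-running work are assumptions, not part of the question:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.servlet.mvc.method.annotation.SseEmitter;

// Rough sketch: the /task endpoint and the fake long-running work are assumptions.
@RestController
public class TaskController {

    private final ExecutorService executor = Executors.newCachedThreadPool();

    @GetMapping("/task")
    public SseEmitter startTask() {
        SseEmitter emitter = new SseEmitter(0L); // no timeout here; tune for production
        executor.submit(() -> {
            try {
                Object result = runLongTask(); // however long the task takes
                emitter.send(result);          // push the result to the waiting client
                emitter.complete();
            } catch (Exception e) {
                emitter.completeWithError(e);
            }
        });
        return emitter;
    }

    private Object runLongTask() throws InterruptedException {
        Thread.sleep(5000);                    // placeholder for the real work
        return "done";
    }
}

The Android client keeps one open GET to /task and receives the event whenever the task finishes; with SockJS/WebSocket the shape is similar but the channel is bidirectional.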

How to use Netty clients within Netty server

I'm going to create an authentication server which itself interacts with a set of different OAuth 2.0 servers.
Netty seems to be a good candidate for implementing the network part here, but before starting I need to clear up some details about Netty, as I'm new to it.
The routine will be as follows:
The server accepts an HTTPS connection from a client.
Then, without closing this first connection, it makes another HTTPS connection to a remote OAuth 2.0 server and gets data.
After that, the server sends the result back to the client, which is supposed to keep the connection alive.
How do I implement this scenario with Netty?
Do I have to create a new Netty client and/or reconnect it each time I need to connect to a remote OAuth 2.0 server? If so, I'll have to create a separate thread for every outgoing connection, which would drastically reduce performance.
Another scenario is to create a sufficient number of Netty clients within the server at startup and keep them constantly connected to the OAuth 2.0 servers via HTTPS.
That's easily done with Netty. First you set up your Netty server using a ServerBootstrap, and then, in the ChannelHandler that handles the connection from the client, you can use e.g. a client Bootstrap to connect to the OAuth server and fetch the data. You don't need to worry about creating threads or similar; you can do it all in a non-blocking fashion. Take a look at this example and try to understand how it works:
https://github.com/netty/netty/blob/master/example/src/main/java/io/netty/example/proxy/HexDumpProxyFrontendHandler.java#L44
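The key point in that example is that the outbound client Bootstrap reuses the inbound channel's event loop, so no extra threads are created per connection. A stripped-down sketch along those lines (the OAuth host/port and the handler wiring are placeholders; for HTTPS you would also add an SslHandler and an HttpClientCodec to the outbound pipeline, and a real implementation would buffer inbound data until the outbound connection is ready):

import io.netty.bootstrap.Bootstrap;
import io.netty.channel.Channel;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.socket.nio.NioSocketChannel;

// Stripped-down sketch: OAUTH_HOST/OAUTH_PORT and the handlers are placeholders.
public class FrontendHandler extends ChannelInboundHandlerAdapter {

    private static final String OAUTH_HOST = "oauth.example.com";
    private static final int OAUTH_PORT = 443;

    private Channel outboundChannel;

    @Override
    public void channelActive(ChannelHandlerContext ctx) {
        final Channel inboundChannel = ctx.channel();

        Bootstrap b = new Bootstrap();
        b.group(inboundChannel.eventLoop())            // reuse the same event loop: no new threads
         .channel(NioSocketChannel.class)
         .handler(new BackendHandler(inboundChannel)); // relays the OAuth reply back to the client

        ChannelFuture f = b.connect(OAUTH_HOST, OAUTH_PORT);
        outboundChannel = f.channel();
        f.addListener((ChannelFutureListener) future -> {
            if (!future.isSuccess()) {
                inboundChannel.close();                // could not reach the OAuth server
            }
        });
    }

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        if (outboundChannel != null && outboundChannel.isActive()) {
            outboundChannel.writeAndFlush(msg);        // forward the client's request upstream
        }
    }

    /** Writes whatever the OAuth server sends back to the original client connection. */
    static final class BackendHandler extends ChannelInboundHandlerAdapter {
        private final Channel inboundChannel;

        BackendHandler(Channel inboundChannel) {
            this.inboundChannel = inboundChannel;
        }

        @Override
        public void channelRead(ChannelHandlerContext ctx, Object msg) {
            inboundChannel.writeAndFlush(msg);
        }
    }
}

If you prefer the second scenario, a pool of pre-connected client channels works too, but the per-request connect shown here already runs on the existing event loop threads, so it does not cost a thread per outgoing connection.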
