gRPC server getting client cancellation - Java

I have created a microservice for a webcam stream with gRPC. The streaming works fine, but cancellation of the stream only works on the client side.
When the client calls CancellableContext.cancel, the streaming of video stops, but the server keeps streaming video from the cam. When the cancellation happens, the server throws a "Transport failed" exception.
Can this exception be caught on the server side so it can stop streaming or run other clean-up operations?
ClientCall<KameraStreamRequest, KameraStreamResponse> call =
        (ClientCall) imageStreamBlockingStub.getChannel().newCall(
                ImageStreamServiceGrpc.METHOD_IMAGE_DATA_STREAM,
                imageStreamBlockingStub.getCallOptions());
call.sendMessage(KameraStreamRequest.newBuilder().setStreamState(StreamState.STOP).build());
The streaming is started with a simple request that carries an enum set to START. If I call the code above to change the state to STOP, I get an exception:
Exception in thread "main" java.lang.IllegalStateException: Not started
at com.google.common.base.Preconditions.checkState(Preconditions.java:174)
at io.grpc.internal.ClientCallImpl.sendMessage(ClientCallImpl.java:388)
at org.cpm42.grpcservice.ImageStreamClient.cancelStream(ImageStreamClient.java:70)
at org.cpm42.main.StreamClientMainClass.main(StreamClientMainClass.java:21)
I have read that this could be a bug.
Is it possible to get sessions or connections or something similar on the server side?
Thanks
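For reference, the "Not started" IllegalStateException comes from calling sendMessage() on a ClientCall that was never started: the raw ClientCall API requires start() (and usually request()) before any message can be sent. A minimal sketch of that sequence, reusing the generated types from the question (the empty listener and the metadata are placeholders):

ClientCall<KameraStreamRequest, KameraStreamResponse> call =
        imageStreamBlockingStub.getChannel().newCall(
                ImageStreamServiceGrpc.METHOD_IMAGE_DATA_STREAM,
                imageStreamBlockingStub.getCallOptions());
call.start(new ClientCall.Listener<KameraStreamResponse>() { }, new io.grpc.Metadata());
call.request(1);                              // ask for one response message
call.sendMessage(KameraStreamRequest.newBuilder().setStreamState(StreamState.STOP).build());
call.halfClose();                             // no further messages from the client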

This may be a slightly different problem, but for a simple client/server project I needed to add channel.shutdown().awaitTermination(5, SECONDS); to the client method making requests in order to get rid of this exception:
io.grpc.netty.NettyServerTransport notifyTerminated
SEVERE: Transport failed
java.io.IOException: An existing connection was forcibly closed by the remote host.
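In code, that clean shutdown looks roughly like this (host, port and the five-second wait are only examples; awaitTermination throws InterruptedException, so declare or handle it):

import java.util.concurrent.TimeUnit;
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

ManagedChannel channel = ManagedChannelBuilder
        .forAddress("localhost", 50051)       // assumed host/port
        .usePlaintext()
        .build();
try {
    // ... build a stub on this channel and make the calls ...
} finally {
    // closing the channel cleanly keeps the server from logging "Transport failed"
    channel.shutdown().awaitTermination(5, TimeUnit.SECONDS);
}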
What does the rest of your stacktrace say?

Not sure if it helps, but on the server side you could poll the gRPC Context to see whether the call has been cancelled, and act accordingly:
import io.grpc.Context;

// inside the server-side streaming loop
if (Context.current().isCancelled()) {
    // the client cancelled the call: stop streaming, release the camera, etc.
    return;
}
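As a rough sketch of where that check could live, the streaming method of the service implementation could poll the context between frames. The method name is assumed from the METHOD_IMAGE_DATA_STREAM descriptor in the question, and the frame-grabbing and camera-cleanup helpers are made up:

import io.grpc.Context;
import io.grpc.stub.StreamObserver;

@Override
public void imageDataStream(KameraStreamRequest request,
                            StreamObserver<KameraStreamResponse> responseObserver) {
    while (!Context.current().isCancelled()) {
        responseObserver.onNext(grabNextFrame());  // assumed helper producing one KameraStreamResponse
    }
    releaseCamera();  // assumed cleanup; the call is already cancelled, so no onCompleted() is sent
}

An alternative to polling is casting responseObserver to io.grpc.stub.ServerCallStreamObserver and registering a callback with setOnCancelHandler(...).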

Related

Simulate an HTTP client disconnection before the server reply

The problem:
I am seeing some strange behaviour from a Jetty server (REST over HTTPS) when some client connections are closed (client-side) before the server has had time to reply. Normally this is well managed and expected by a web server/application server, but in one specific instance something breaks and the server stops replying.
I am trying to reproduce the issue programmatically and locally, by opening a client connection and closing it before the server has had time to reply, but I do not have much experience with this kind of situation; normally the clients I write are not expected to die immediately.
I am not particular about the language/tool used to replicate the case: it can be a Java program, a netcat command, telnet, dotnetcore... The only limit is that it should run on a Kubernetes pod, if possible.
I have tried using Java to open a socket and close it immediately, or to create an HTTP client and stop it right after sending a request, but with no luck so far.
At the same time I am looking at netcat, but I fear it is too low-level for a REST request.
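One approach that may be worth trying is a bare socket: write the request bytes and close the connection without ever reading the reply. A minimal sketch (host, port and path are placeholders; for HTTPS the socket would need to be wrapped with an SSLSocketFactory):

import java.io.OutputStream;
import java.net.Socket;

public class AbortingClient {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("server-under-test", 8080)) {
            socket.setSoLinger(true, 0);  // optional: close with a TCP RST instead of a normal FIN
            OutputStream out = socket.getOutputStream();
            out.write(("GET /some/endpoint HTTP/1.1\r\n"
                     + "Host: server-under-test\r\n"
                     + "Connection: close\r\n\r\n").getBytes("US-ASCII"));
            out.flush();
            // return without reading the response: the server finds the
            // connection gone while (or before) it writes the reply
        }
    }
}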

JMeter 5.4.1: java.net.SocketException: Socket Closed at the end of test

No matter how many threads I use, JMeter shows errors at the end of the test. Just at the end - until that moment there are no errors. When threads are being closed, the last few of them fail with:
Non HTTP response code: javax.net.ssl.SSLException message:Non HTTP response message: java.net.SocketException: Socket Closed
or
Non HTTP response code: javax.net.ssl.SSLException message:Non HTTP response message: java.net.SocketException: Socket Closed
or
Non HTTP response code: java.lang.IllegalStateException message:Non HTTP response message: Connection pool shut down
Three of them can be found in some of the failed threads most of the time.
I've tried almost every solution I've found on the net (including those on Stack Overflow), but none of them fixed the problem. Below are links to examples I tried:
https://cwiki.apache.org/confluence/display/jmeter/JMeterSocketClosed
https://www.xtivia.com/blog/fixing-jmeter-socket-errors
The setup of the script:
bzm - Concurrency Thread Group
User Defined Variables
CSV Data Set Config
HTTP Cache Manager
HTTP Cookie Manager
HTTP Request Defaults
One of HTTP Requests
Can anybody help me?
The issue seems to be connected with abnormal thread termination when your "Hold target rate time" ends.
Are you sure you're running the test in non-GUI mode and following the other JMeter Best Practices?
The options are:
Ignore the errors as they are client-side errors
Introduce ramp-down so the threads will be terminated gradually, it can be done using Throughput Shaping Timer
Remove last requests which are failing from the .jtl results file using Filter Results Tool
Reach out to the plugin developers and/or maintainers and report the issue there
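Regarding non-GUI mode: the usual way to run the plan from the command line and collect results into a .jtl file is along these lines (the file names are examples):

jmeter -n -t test-plan.jmx -l results.jtl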

Java SOAP/REST web services: client timeout but server does not rollback

I have a java client app and a java server app.
My client can experience network slowdowns.
My client performs SOAP web service calls to my server app. The problem is that sometimes the client reaches its timeout (40 sec) because the network is really, really bad.
For the client app this request is a failure, so it retries the same call a bit later. But the server has already integrated the data from the client, and I get violated-key errors from my ORM.
I do not want to extend the timeout on the client side.
My question is: when the client times out, is there a way to roll back everything on the server side?
Thanks
One of the options to solve it is to set a flag/status in the database when a request is accepted by the server, something like inProcessing, and change this flag to Complete after the data has been processed successfully.
When the client retries the same call later, you can check this flag and, if it is inProcessing or Complete, skip the data processing.
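A rough sketch of that idea, with made-up DAO and status names (the client would also need to send a stable request identifier so the server can recognise a retry):

// server-side handler; statusDao and integrateData(...) are assumed application code
public void handleClientRequest(String requestId, String payload) {
    String status = statusDao.findStatus(requestId);
    if ("inProcessing".equals(status) || "Complete".equals(status)) {
        return;  // retry of a request the server already accepted - skip reprocessing
    }
    statusDao.saveStatus(requestId, "inProcessing");
    integrateData(payload);                       // the existing business logic
    statusDao.saveStatus(requestId, "Complete");
}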

How to handle occasional SocketException: Unexpected end of file from server?

I have a REST service that calls another remote service.
Most of the time the communication works fine, but occasionally, I encounter
org.apache.cxf.jaxrs.client.ClientWebApplicationException:
org.apache.cxf.interceptor.Fault: Could not send Message.
org.apache.cxf.interceptor.Fault: Could not send Message.
SocketException invoking https://url: Unexpected end of file from server
I did some research and found that it is the remote server shutting down the connection unexpectedly.
It really puzzles me, because everything (input, headers, etc.) is the same, and I was only testing with a small number of requests (50-100). I have tried both in sequence and in parallel; only a few of them hit this issue.
Why would this happen? Do I have to ask for the remote server to be reconfigured, or do I have to implement a retry pattern in my service?
Any hint?
Thanks
P.S. I am using org.apache.cxf.jaxrs.client.WebClient to invoke the remote service. Would it make any difference if I switched to HttpClient?
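Switching the HTTP client alone is unlikely to make occasional dropped connections disappear; if the failures are transient, a small retry wrapper around the call is often the simpler fix. A minimal sketch (the attempt count and pause are arbitrary):

import java.util.function.Supplier;

public final class Retry {

    // Re-runs the call up to maxAttempts times, pausing between attempts.
    public static <T> T withRetries(Supplier<T> call, int maxAttempts, long pauseMillis) {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return call.get();
            } catch (RuntimeException e) {  // e.g. CXF's ClientWebApplicationException
                last = e;
            }
            if (attempt < maxAttempts) {
                try {
                    Thread.sleep(pauseMillis);
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    break;
                }
            }
        }
        throw last;
    }
}

// usage, assuming an existing CXF WebClient instance:
// Response response = Retry.withRetries(() -> webClient.get(), 3, 500L);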

JAX-WS web service gets a 400 Bad Request error on client and Broken Pipe error on server for long operations

I have a Java-based client that receives data from a Tomcat 6.0.24 server webapp via JAX-WS. I recently upgraded the server with new functionality that can take a long time (over 30 seconds) to run for certain inputs.
It turns out that for these long operations, some kind of timeout is occurring. The client gets an HTTP 400 Bad Request error, and shortly afterwards (according to my log timestamps, at least) the server reports a Broken Pipe.
Here's the client's error message:
com.sun.xml.internal.ws.client.ClientTransportException: The server sent HTTP status code 400: Bad Request
And the server's:
javax.xml.ws.WebServiceException: javax.xml.stream.XMLStreamException: ClientAbortException: java.net.SocketException: Broken pipe
I've tried experimenting with adding timeout settings on the service's BindingProvider, but that doesn't seem to change anything. The default timeout is supposed to be infinite, right?
I don't know if it's relevant, but it might be worth noting that the client is an OSGI bundle running in a Karaf OSGI framework.
Bottom line, I have no idea what's going on here. Note that the new functionality does work when it doesn't have to run for too long. Also note that the size of the new functionality's response is not any larger than usual - it just takes longer to calculate.
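For completeness, client-side timeouts on a JAX-WS proxy are normally set through the BindingProvider request context. The property names below are for the JDK-bundled JAX-WS RI (which the com.sun.xml.internal stack trace suggests); standalone Metro uses the com.sun.xml.ws.* variants, and the millisecond values are only examples:

import java.util.Map;
import javax.xml.ws.BindingProvider;

Map<String, Object> requestContext = ((BindingProvider) port).getRequestContext();  // port = generated proxy
requestContext.put("com.sun.xml.internal.ws.connect.timeout", 30000);   // connect timeout, ms
requestContext.put("com.sun.xml.internal.ws.request.timeout", 120000);  // request/read timeout, ms

Note that these only affect the client; as it turned out below, they could not have prevented a 400 injected by a gateway in front of the server.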
In the end, the problem was caused by some sort of anti-DoS measure on the server's public gateway. Unfortunately, the IT department refused to fix it, forcing me to switch to polling-based communication. Oh well.
