I have to implement a web service client for a given WSDL file.
I used the JDK's 'wsimport' tool to create Java classes from the WSDL, as well as a class that wraps the web service's only method (enhanceAddress(auth, param, address)) in a simple Java method. So far, so good. The web service is functional and returns results correctly. The code looks like this:
try {
    // getWebservicePort() returns the HTTP SOAP 1.2 endpoint
    EnhancedAddressList uniservResponse = getWebservicePort().enhanceAddress(m_auth, m_param, uniservAddress);
} catch (Throwable e) {
    throw new AddressValidationException("Error during uniserv webservice request.", e);
}
The problem now: I need information about the connection and any error that might occur in order to populate various JMX values (such as COUNT_READ_TIMEOUT, COUNT_CONNECT_TIMEOUT, ...).
Unfortunately, the method does not officially throw any exceptions, so in order to get details about a ConnectException I need to call getCause() on the ClientTransportException that is thrown.
Even worse: I tried to test the read timeout, but there is none. I changed the service's location in the WSDL file to post the request to a PHP script that simply waits forever and never returns. Guess what: the web service client does not time out but waits forever as well (I killed the app after 30+ minutes of waiting). That is not an option for my application, as I will eventually run out of TCP connections if some of them get 'stuck'.
The enhanceAddress(auth, param, address) method is not implemented but annotated with javax.jws.* annotations, meaning that I cannot see/change/inspect the code that is actually executed.
Do I have any option but to throw the whole wsimport/javax.jws stuff away and implement my own SOAP client?
To set up read and connect timeouts you can configure the binding parameters when you set up your Service and Port instances:
Service service = new Service();   // the wsimport-generated service class
Port port = service.getPort();     // the wsimport-generated port (SEI) instance

((BindingProvider) port).getRequestContext().put(
        BindingProvider.ENDPOINT_ADDRESS_PROPERTY,
        "http://localhost:8080/service");

// BindingProviderProperties is the JAX-WS RI class com.sun.xml.ws.client.BindingProviderProperties;
// both timeout values are given in milliseconds.
((BindingProvider) port).getRequestContext().put(
        BindingProviderProperties.CONNECT_TIMEOUT,
        30000);
((BindingProvider) port).getRequestContext().put(
        BindingProviderProperties.REQUEST_TIMEOUT,
        30000);
Now, whenever you execute a call via the port, you will get response timeouts and/or connection timeouts if the backend is slow to respond. The values follow the timeout values of the java.net.Socket class, i.e. they are given in milliseconds.
When these timeouts are exceeded you will get a timeout exception or a connection exception, and you can add counter code to keep track of how many you get, as sketched below.
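For the JMX counters, a minimal sketch (assuming the JAX-WS RI, where the ClientTransportException mentioned in the question extends javax.xml.ws.WebServiceException, and assuming COUNT_READ_TIMEOUT / COUNT_CONNECT_TIMEOUT are AtomicLong-backed counters) could look like this:

try {
    EnhancedAddressList uniservResponse = getWebservicePort().enhanceAddress(m_auth, m_param, uniservAddress);
    // process uniservResponse here
} catch (javax.xml.ws.WebServiceException e) {
    Throwable cause = e.getCause();
    if (cause instanceof java.net.SocketTimeoutException) {
        COUNT_READ_TIMEOUT.incrementAndGet();      // read (request) timeout exceeded
    } else if (cause instanceof java.net.ConnectException) {
        COUNT_CONNECT_TIMEOUT.incrementAndGet();   // connection refused / connect failed
    }
    throw new AddressValidationException("Error during uniserv webservice request.", e);
}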
I'm facing strange behavior when I request my own ping server with the Java HttpComponents client.
Sometimes an UnknownHostException is thrown for no apparent reason.
This exception is mainly thrown after switching networks, for example when I change the default network route (from eth0 to wifi, or from wifi to another mobile NIC).
For information, my wifi connection is enabled through a mobile access point. (Could this be the reason for my issue?)
I'm running on a Linux OS, in an embedded context with a limited set of Linux commands.
Below is a code example:
HttpGet get = new HttpGet("someUri");
if (networkInterfaceName != null) { // used to ping through a specific route
    get.setConfig(RequestConfig.custom()
            .setLocalAddress(networkInterfaceToInetAdress(networkInterfaceName))
            .setConnectionRequestTimeout(15000)
            .setConnectTimeout(15000)
            .setSocketTimeout(15000).build());
}
HttpResponse response = HttpClientBuilder.create().useSystemProperties().build().execute(get);
So, if a specific interface is provided, the local address is set in the RequestConfig; otherwise we use the default route.
I'm currently not able to identify the root cause; I have tried several things to track it down.
First, on each HTTP request I checked the routes and /etc/resolv.conf, and everything seems OK. The ping command also succeeds when the UnknownHostException is thrown.
I checked the HttpClient code, and it seems that a new client is created on each call, so there is no HttpClient cache on my side. I used the HttpClientBuilder class, and the create() method always returns a new builder. Also, the CloseableHttpClient is not closed explicitly; could that impact the next call?
I checked the HttpClient methods; they allow setting a DnsResolver, but in my case I have no control over the remote/targeted IP.
I set the JVM system properties in code (and not as Java args directly):
java.security.Security.setProperty("networkaddress.cache.ttl", "0");
java.security.Security.setProperty("networkaddress.cache.negative.ttl", "0");
System.setProperty("java.net.preferIPv4Stack", "true");
System.setProperty("sun.net.inetaddr.ttl", "0");
Also, I'm going to use the dnsjava library when an UnknownHostException is thrown, since dig/nslookup are not available on my limited OS; a rough sketch of that check is below.
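For illustration, a rough sketch of what such a dnsjava cross-check might look like (the helper name and its usage are assumptions):

// Hypothetical helper: cross-check DNS resolution with dnsjava when
// java.net name resolution throws UnknownHostException.
static boolean resolvesWithDnsjava(String host) {
    try {
        org.xbill.DNS.Record[] records = new org.xbill.DNS.Lookup(host, org.xbill.DNS.Type.A).run();
        return records != null && records.length > 0;
    } catch (org.xbill.DNS.TextParseException e) {
        return false; // malformed host name
    }
}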
Any hint about this issue? Or anything else I should check? Maybe it's not a DNS issue but something in the Java socket or connection layer that I have overlooked?
UPDATE
I tried to run the same logic in another JVM and there the HttpGet requests succeed, while they fail in my original program running in its own JVM.
Thanks for your help.
I have a controller in Spring which receives a POST request that is handled asynchronously (using a DeferredResult object as the return value).
The response for this request is written as bytes directly to the HTTP stream (HttpServletResponse.getWriter().print()), and when it's done writing it sets a result on the DeferredResult object to close the connection.
I'm writing my response in chunks.
I have an issue with this request handling because the client closes the connection if I don't write to it for 1 minute. (I may write some chunks and then stop writing for 1 minute, so the connection gets closed in the middle of my procedure.)
I want to control the connection-closing behaviour: I want to send a keep-alive when I'm not writing any data to the stream, so that the connection won't be closed until I decide to close it from the server side.
I haven't found out how to get control of the connection from the controller on the server.
Please assist.
Thanks.
There is no such thing as a "keep alive" during an ongoing HTTP request or response that could help with idle timeouts while a request or response is being received.
HTTP keep alive is only about keeping the TCP connection open after a response in order to process more requests on the same connection. TCP keep alive is instead used to detect connection loss without TCP shutdown and can also be used to prevent idle timeouts in stateful packet filters (as used in firewalls or NAT routers) in between client and server. It does not prevent idle timeouts at the application level though since it does not transport any data visible to the application level.
Note that the way you want to use HTTP is contrary to how HTTP was originally designed. It was designed for a client sending a full request and the server immediately sending a full response, not for the server sending some parts of the response, idling for some time and then sending some more. The proper way to implement such behavior would be WebSockets: with WebSockets both client and server can send new messages at any time (i.e. no request-response schema), and keep-alive messages are supported. If WebSockets are not an option, you can instead implement a polling client which regularly polls the server for new data with a new request; a minimal sketch of such a client follows.
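A self-contained sketch of such a polling client, using plain HttpURLConnection (the URL, the timeouts and the 10-second interval are assumptions made for illustration, not part of the answer above):

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class PollingClient {
    public static void main(String[] args) throws Exception {
        // Poll the server at a fixed interval instead of holding one long-lived
        // response open; each iteration is a complete request/response cycle.
        while (true) {
            HttpURLConnection conn = (HttpURLConnection) new URL("http://example.com/api/status").openConnection();
            conn.setConnectTimeout(5000);
            conn.setReadTimeout(5000);
            try (InputStream in = conn.getInputStream()) {
                // read and process whatever new data the server returned
            }
            conn.disconnect();
            Thread.sleep(10000); // wait 10 seconds before the next poll
        }
    }
}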
I ran into a similar need just recently. The server code executes a long-running operation that can take as long as 30 minutes to return, and the client times out long before that. The solution was to have the long-running operation send periodic "keep alive" packets of data to the client via a "callback" argument provided by the request handler method. The callback is nothing more than a function (think of a lambda in Java) that takes as parameter the "keep alive" data packet to send to the client, and then writes that packet to the client via the java.io.PrintWriter reference that you can get from javax.servlet.http.HttpServletResponse. The code below is the handler method that does this. I had to refactor the code in the call hierarchy to accept this new "callback" parameter until it reaches the method that performs the long-running operation, and inside that code I invoke the "callback" every so often, for example every time 10 records are processed. Note that the code below is Groovy (scripting code on top of Java that runs on the JVM) and the server-side framework is Spring.
...
@Autowired
DataImporter dataImporter

@PostMapping("/my/endpoint")
void importData(@RequestBody MyDto myDto, HttpServletResponse response) {
    // Callback to allow servant code deep in the call hierarchy to report back
    // to the client any arbitrary message
    Closure<Void> callback = { String str ->
        response.writer.print str
        response.writer.flush()
    }

    // This leads to the code that is performing a long running operation. Using
    // this "hook" that code has a direct connection to the client whereby
    // it can send packets of data to keep the connection from timing out.
    dataImporter.importData(myDto, callback)
}
} // closing brace of the (elided) controller class
I have developed a standalone Java SE client which performs an EJB lookup to a remote server and executes its method. The server application is EJB 3.0.
Under some strange, magical, but rare circumstances my program hangs indefinitely. Looking into the issue, it seems that while looking up the EJB on the server I never get a response, and the lookup also never times out.
I would like to know if there is a property or any other way to configure the lookup timeout on the client or on the server side.
There is a very nice article that discusses ORB configuration best practices at DeveloperWorks here. I'm quoting the three different settings that can be configured on the client (that is, by you, while doing a lookup and executing a method on a remote server):
Connect timeout: Before the client ORB can even send a request to a server, it needs to establish an IIOP connection (or re-use an existing one). Under normal circumstances, the IIOP and underlying TCP connect operations should complete very fast. However, contention on the network or another unforeseen factor could slow this down. The default connect timeout is indefinite, but the ORB custom property com.ibm.CORBA.ConnectTimeout (in seconds) can be used to change the timeout.

Locate request timeout: Once a connection has been established and a client sends an RMI request to the server, then LocateRequestTimeout can be used to limit the time for the CORBA LocateRequest (a CORBA "ping") for the object. As a result, the LocateRequestTimeout should be less than or equal to the RequestTimeout because it is a much shorter operation in terms of data sent back and forth. Like the RequestTimeout, the LocateRequestTimeout defaults to 180 seconds.

Request timeout: Once the client ORB has an established TCP connection to the server, it will send the request across. However, it will not wait indefinitely for a response; by default it will wait for 180 seconds. This is the ORB request timeout interval. This can typically be lowered, but it should be in line with the expected application response times from the server.
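As a rough, hedged sketch (the class, JNDI name and timeout values below are assumptions; whether System.setProperty is honored depends on the ORB being initialized afterwards, so passing the properties as -D JVM arguments is the safer route):

import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class TimedLookupExample {
    public static void main(String[] args) throws NamingException {
        // Set the ORB custom properties quoted above before the client ORB is
        // initialized, i.e. before the first InitialContext/lookup. Values are seconds.
        System.setProperty("com.ibm.CORBA.ConnectTimeout", "10");
        System.setProperty("com.ibm.CORBA.LocateRequestTimeout", "30");
        System.setProperty("com.ibm.CORBA.RequestTimeout", "60");

        // Equivalent JVM arguments:
        //   -Dcom.ibm.CORBA.ConnectTimeout=10
        //   -Dcom.ibm.CORBA.LocateRequestTimeout=30
        //   -Dcom.ibm.CORBA.RequestTimeout=60

        Context ctx = new InitialContext();
        Object remoteHome = ctx.lookup("ejb/MyRemoteBeanJndiName"); // placeholder JNDI name
    }
}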
You can try the following code, which performs the task and then waits at most the specified time:
ExecutorService executorService = Executors.newSingleThreadExecutor();
Future<Object> future = executorService.submit(new Callable<Object>() {
    public Object call() throws Exception {
        return lookup(JNDI_URL);
    }
});
try {
    Object result = future.get(20L, TimeUnit.SECONDS); // wait at most 20 seconds
} catch (TimeoutException ex) {
    future.cancel(true); // interrupt the lookup if it is still running
    logger.log(LogLevel.ERROR, "Lookup timed out");
    return;
} catch (InterruptedException | ExecutionException ex) {
    logger.log(LogLevel.ERROR, ex.getMessage());
    return;
}
Also, the task can be cancelled by future.cancel(true).
Remote JNDI uses the ORB, so the only option available is com.ibm.CORBA.RequestTimeout, but that will have an effect on all remote calls. As described in the 7.0 InfoCenter, the default value is 180 (3 minutes).
Every five seconds this function is called:
private boolean ping() {
    try {
        URL pingServerUrl = new URL(serverResourceLocator);
        HttpURLConnection connection = (HttpURLConnection) pingServerUrl.openConnection();
        if (connection.getResponseCode() == 200) {
            lastPingSuccessful = true;
        }
        System.out.println("pinged");
    } catch (Exception exc) {
        exc.printStackTrace();
        return lastPingSuccessful;
    }
    return lastPingSuccessful;
}
It is a kind of ping function: it tries to connect to a servlet on the server and sends some credentials along with the URL serverResourceLocator. The thing that bothers me is that a new connection is opened every 5 seconds.
How can I avoid that?
You can't recycle an HTTP connection. HTTP is a stateless protocol. The best you can do is to not close the connection once it is opened, keep it alive, and send a heartbeat message from the server down to the client.
If the purpose of the function is what I am guessing it to be, i.e. to test the servlet, it is best to leave the connection part in; you get more coverage out of your test script. A connection every 5 seconds is hardly anything; the server would not feel it.
Otherwise, just store the connection in a global variable and reuse it every time you make a request.
I don't think there is any alternative to this. You will have to make an HTTP connection to get the status of the URL.
Below are two lightweight alternatives to Java:
the Unix curl utility
an Ajax call from your JavaScript
Let's say we want to remove a client's data after EXPIRY_TIME.
I think one of two methods can be followed:
Check the session's last-modified time (which indicates the last access time) with a cron job (scheduler).
For each client request, write an entry to an access log in a flat file and have a cron job check that access log.
In either case, if the time since the last access is greater than EXPIRY_TIME, remove the client's data.
This approach saves round trips and helps to reduce traffic; a minimal sketch of the scheduler-based variant follows.
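As a rough sketch only (the map-based storage, the one-minute check interval and the 30-minute EXPIRY_TIME are assumptions made for illustration):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ClientDataExpiry {
    private static final long EXPIRY_TIME = TimeUnit.MINUTES.toMillis(30); // assumed value

    // clientId -> last access time in millis; updated on every client request
    private final Map<String, Long> lastAccess = new ConcurrentHashMap<>();

    public void start() {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // Runs every minute and evicts clients whose last access is older than EXPIRY_TIME
        scheduler.scheduleAtFixedRate(() -> {
            long now = System.currentTimeMillis();
            lastAccess.entrySet().removeIf(e -> now - e.getValue() > EXPIRY_TIME);
        }, 1, 1, TimeUnit.MINUTES);
    }

    public void touch(String clientId) {
        lastAccess.put(clientId, System.currentTimeMillis());
    }
}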
You can release the connection once the job is done. Anyhow, a Java web server is designed to handle multiple requests; even if a request comes in every 5 seconds, it's not a problem.
I have a web service which accepts a POST method with XML. It works fine, but on some random occasions it fails to communicate with the server, throwing an IOException with the message "The target server failed to respond". The subsequent calls work fine.
It happens mostly when I make some calls and then leave my application idle for 10-15 minutes; the first call I make after that returns this error.
I tried a couple of things...
I set up a retry handler like this:
HttpRequestRetryHandler retryHandler = new HttpRequestRetryHandler() {
    public boolean retryRequest(IOException e, int retryCount, HttpContext httpCtx) {
        if (retryCount >= 3) {
            Logger.warn(CALLER, "Maximum tries reached, exception would be thrown to outer block");
            return false;
        }
        if (e instanceof org.apache.http.NoHttpResponseException) {
            Logger.warn(CALLER, "No response from server on " + retryCount + " call");
            return true;
        }
        return false;
    }
};

httpPost.getParams().setParameter(HttpMethodParams.RETRY_HANDLER, retryHandler);
but this retry handler never gets called (yes, I am using the right instanceof clause). While debugging, this class is never invoked.
I even tried setting HttpProtocolParams.setUseExpectContinue(httpClient.getParams(), false); but to no avail. Can someone suggest what I can do now?
IMPORTANT
Besides figuring out why I am getting the exception, one of my important concerns is why the retry handler isn't working here.
Most likely persistent connections that are kept alive by the connection manager become stale. That is, the target server shuts down the connection on its end without HttpClient being able to react to that event, while the connection is being idle, thus rendering the connection half-closed or 'stale'. Usually this is not a problem. HttpClient employs several techniques to verify connection validity upon its lease from the pool. Even if the stale connection check is disabled and a stale connection is used to transmit a request message the request execution usually fails in the write operation with SocketException and gets automatically retried. However under some circumstances the write operation can terminate without an exception and the subsequent read operation returns -1 (end of stream). In this case HttpClient has no other choice but to assume the request succeeded but the server failed to respond most likely due to an unexpected error on the server side.
The simplest way to remedy the situation is to evict expired connections, and connections that have been idle longer than, say, 1 minute, from the pool. For details please see section 2.5, "Connection eviction policy", of the HttpClient 4.5 tutorial.
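For illustration only, a hedged sketch of one way to do this with HttpClient 4.5's builder-level eviction (the one-minute idle limit comes from the answer above, everything else is an assumption):

import java.util.concurrent.TimeUnit;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

public class EvictingClientFactory {
    public static CloseableHttpClient create() {
        return HttpClients.custom()
                .evictExpiredConnections()                  // drop connections past their TTL / Keep-Alive
                .evictIdleConnections(1, TimeUnit.MINUTES)  // drop connections idle for more than 1 minute
                .build();
    }
}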
The accepted answer is right but lacks a solution. To avoid this error, you can add a setHttpRequestRetryHandler (or setRetryHandler for Apache HttpComponents 4.4) to your HTTP client, like in this answer; a hedged example follows.
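A minimal sketch using the 4.4+ builder method setRetryHandler with the stock DefaultHttpRequestRetryHandler (the retry count of 3 is an assumption; passing true also retries requests that were already sent, which you may not want for non-idempotent calls):

import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.DefaultHttpRequestRetryHandler;
import org.apache.http.impl.client.HttpClients;

public class RetryingClientFactory {
    public static CloseableHttpClient create() {
        return HttpClients.custom()
                // Retry up to 3 times; 'true' also retries requests that were already sent,
                // which covers the stale-connection / NoHttpResponseException case
                .setRetryHandler(new DefaultHttpRequestRetryHandler(3, true))
                .build();
    }
}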
HttpClient 4.4 suffered from a bug in this area relating to validating possibly stale connections before returning them to the requestor. It didn't validate whether a connection was stale, and this resulted in an immediate NoHttpResponseException.
This issue was resolved in HttpClient 4.4.1. See this JIRA and the release notes
Solution: change the ReuseStrategy to never reuse connections
Since this problem is very complex and there are so many different factors that can fail, I was happy to find this solution in another post: How to solve org.apache.http.NoHttpResponseException
Never reuse connections:
Configure it on an org.apache.http.impl.client.AbstractHttpClient:
httpClient.setReuseStrategy(new NoConnectionReuseStrategy());
The same can be configured on an org.apache.http.impl.client.HttpClientBuilder:
builder.setConnectionReuseStrategy(new NoConnectionReuseStrategy());
Although the accepted answer is right, IMHO it is just a workaround.
To be clear: it's a perfectly normal situation that a persistent connection may become stale. But unfortunately it's very bad when the HTTP client library cannot handle it properly.
Since this faulty behavior in Apache HttpClient was not fixed for many years, I definitely prefer to switch to a library that can easily recover from the stale connection problem, e.g. OkHttp.
Why?
OkHttp pools HTTP connections by default.
It gracefully recovers from situations where an HTTP connection becomes stale and the request cannot be retried due to being non-idempotent (e.g. POST). I cannot say the same about Apache HttpClient (see the NoHttpResponseException mentioned above).
Supports HTTP/2.0 from early drafts and beta versions.
When I switched to OkHttp, my problems with NoHttpResponseException disappeared forever.
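For reference, a minimal hedged sketch of an OkHttp client with its default connection pooling and retry-on-connection-failure enabled (the URL and the timeout values are placeholders, not part of the answer above):

import java.util.concurrent.TimeUnit;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.Response;

public class OkHttpExample {
    public static void main(String[] args) throws Exception {
        OkHttpClient client = new OkHttpClient.Builder()
                .connectTimeout(15, TimeUnit.SECONDS)
                .readTimeout(15, TimeUnit.SECONDS)
                .retryOnConnectionFailure(true)   // transparently recovers from stale pooled connections
                .build();

        Request request = new Request.Builder()
                .url("https://example.com/api")   // placeholder URL
                .build();

        try (Response response = client.newCall(request).execute()) {
            System.out.println(response.code());
        }
    }
}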
Nowadays, most HTTP connections are considered persistent unless declared otherwise. However, to save server resources the connection is rarely kept open forever; the default keep-alive timeout of many servers is rather short, for example 5 seconds for Apache httpd 2.2 and above.
The org.apache.http.NoHttpResponseException error most likely comes from a persistent connection that was closed by the server.
It's possible to set the maximum time to keep unused connections open in the Apache HttpClient pool, in milliseconds.
With Spring Boot, one way to achieve this:
public class RestTemplateCustomizers {
    static public class MaxConnectionTimeCustomizer implements RestTemplateCustomizer {

        @Override
        public void customize(RestTemplate restTemplate) {
            HttpClient httpClient = HttpClientBuilder
                .create()
                .setConnectionTimeToLive(1000, TimeUnit.MILLISECONDS)
                .build();
            restTemplate.setRequestFactory(
                new HttpComponentsClientHttpRequestFactory(httpClient));
        }
    }
}
// In your service that uses a RestTemplate
public MyRestService(RestTemplateBuilder builder) {
    restTemplate = builder
        .customizers(new RestTemplateCustomizers.MaxConnectionTimeCustomizer())
        .build();
}
This can happen if disableContentCompression() was set on the HttpClientBuilder of a client that uses a pooling connection manager, and the target server is trying to use gzip compression.
Same problem for me on Apache HttpClient 4.5.5.
Adding the default header
Connection: close
resolved the problem; a sketch of how to add it follows.
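A hedged sketch of how such a default header can be added via the 4.5 builder (everything beyond the Connection: close header itself is an assumption):

import java.util.Collections;
import org.apache.http.HttpHeaders;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.message.BasicHeader;

public class ConnectionCloseClientFactory {
    public static CloseableHttpClient create() {
        return HttpClients.custom()
                // Send "Connection: close" on every request so no half-closed idle
                // connections are reused from the pool
                .setDefaultHeaders(Collections.singletonList(
                        new BasicHeader(HttpHeaders.CONNECTION, "close")))
                .build();
    }
}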
Use PoolingHttpClientConnectionManager instead of BasicHttpClientConnectionManager.
BasicHttpClientConnectionManager will make an effort to reuse the connection for subsequent requests with the same route. It will, however, close the existing connection and re-open it for the given route if the route of the existing connection does not match that of the connection request. A minimal pooling setup is sketched below.
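A hedged sketch of the pooling setup (the pool sizes are arbitrary assumptions):

import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;

public class PoolingClientFactory {
    public static CloseableHttpClient create() {
        PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
        cm.setMaxTotal(50);            // total connections across all routes (assumed value)
        cm.setDefaultMaxPerRoute(10);  // connections per route (assumed value)
        return HttpClients.custom()
                .setConnectionManager(cm)
                .build();
    }
}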
I faced the same issue and resolved it by adding "Connection: close" via a WireMock extension.
Step 1: create a new class ConnectionCloseExtension:
import com.github.tomakehurst.wiremock.common.FileSource;
import com.github.tomakehurst.wiremock.extension.Parameters;
import com.github.tomakehurst.wiremock.extension.ResponseTransformer;
import com.github.tomakehurst.wiremock.http.HttpHeader;
import com.github.tomakehurst.wiremock.http.HttpHeaders;
import com.github.tomakehurst.wiremock.http.Request;
import com.github.tomakehurst.wiremock.http.Response;

public class ConnectionCloseExtension extends ResponseTransformer {

    @Override
    public Response transform(Request request, Response response, FileSource files, Parameters parameters) {
        return Response.Builder
                .like(response)
                .headers(HttpHeaders.copyOf(response.getHeaders())
                        .plus(new HttpHeader("Connection", "Close")))
                .build();
    }

    @Override
    public String getName() {
        return "ConnectionCloseExtension";
    }
}
Step 2: set the extension class on the WireMockServer like below:
final WireMockServer wireMockServer = new WireMockServer(options()
.extensions(ConnectionCloseExtension.class)
.port(httpPort));