I'm building an application that makes an HTTP request, so I have two different situations: one while I'm programming at home and another while I'm at work. At work there is a proxy server; at home there is not. At home my request always succeeds; at work I get random failures.
In my console, the stack trace shows:
HTTP error fetching URL. Status=503, URL=http://www.google.com/sorry/?continue=http://google.com/search%3Fq%3Dmake%2BSearch
Which means:
503 Service Unavailable. The server is currently unavailable (because it is overloaded or down for maintenance). Generally, this is a temporary state.
So the stack trace blames the Google server, as if it were unavailable, but since I don't get this error at home, I'm almost sure the problem is with the proxy server. Pasting the URL from the error into a browser, I could see that Google is blocking the request because it looks like an automated request. Anyway, I'm using:
System.setProperty("http.proxyHost", "");
System.setProperty("http.proxyPort", "");
to handle the proxy connection, even though it's not working fully correctly: sometimes the connection works, and sometimes it fails with the error above. Is there anything I can do to solve this situation, such as refreshing my IP?
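For completeness, here is roughly how I toggle the settings between the two environments (a sketch; the atWork flag and the proxy address are placeholders for my real configuration):

public final class ProxyConfig {
    // Sketch: the flag and proxy address stand in for my real setup
    static void configure(boolean atWork) {
        if (atWork) {
            System.setProperty("http.proxyHost", "proxy.example.com");
            System.setProperty("http.proxyPort", "8080");
        } else {
            // At home, remove the properties entirely instead of
            // setting them to empty strings
            System.clearProperty("http.proxyHost");
            System.clearProperty("http.proxyPort");
        }
    }
}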
Related
I'm trying to configure the WSO2 API Manager (version v4.0.0).
When I try to create a REST API and point it to the endpoints, I'm getting a connection error message for the given endpoints. I have hosted the API Manager and the backend services on the same server (the backend services are running on Tomcat on the same server, on port 8080).
The API Manager log produces the following message:
ERROR {org.wso2.carbon.apimgt.rest.api.publisher.v1.impl.ApisApiServiceImpl} - Error occurred while sending the HEAD request to the given endpoint url: org.apache.commons.httpclient.ConnectTimeoutException: The host did not accept the connection within timeout of 4000 ms
I would really like to know what has caused the issue.
P.S: I can access the backend services directly without any connection issues using a REST client.
It's difficult to answer the question without knowing the exact details of your deployment and the backend, but let me try. Here is what I think is happening. As you can clearly see, the error is a connection timeout: The host did not accept the connection within timeout of 4000 ms.
Let me explain what happens when you click the Check Endpoint Status button. The browser does not send a request directly to the backend to validate it. The backend URL is passed to the APIM server, and the server performs the validation by sending an HTTP HEAD request to the backend service.
So there can be two causes. The first is that your backend doesn't know how to handle a HEAD request, which would prevent it from accepting the request. But given that the error indicates a network issue, I doubt the request even reached the backend.
The second is that your backend is not accessible from the place the API Manager is running. Suppose you are running the API Manager on Server A and accessing it via a browser from Server B (your local machine). Even though you can reach the backend from Server B, it may not be reachable from Server A. When I say the backend is not accessible from the API Manager server, I mean it's not accessible with the same URL that was used in the API Manager. It doesn't really matter that both run on the same server if you are using a DNS name other than localhost to access it. So log in to the server the API Manager is running on, send a request using the same URL that was configured in the API Manager, and see whether it's accessible from there.
First, try a curl request after logging into the server where APIM is running (not from your local machine). Firewall rules within the server may make the hostname given in the URL unreachable. Also try sending a HEAD request, as sketched below. You might get some idea of why this is happening.
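If curl isn't available on that server, here is a minimal Java sketch that mimics the HEAD check APIM performs (the URL is a placeholder for whatever you configured in API Manager):

import java.net.HttpURLConnection;
import java.net.URL;

public class HeadCheck {
    public static void main(String[] args) throws Exception {
        // Use the exact endpoint URL you entered in API Manager
        URL url = new URL("http://localhost:8080/myservice/resource");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("HEAD");
        conn.setConnectTimeout(4000); // same 4000 ms window the APIM validation uses
        System.out.println("HTTP " + conn.getResponseCode());
        conn.disconnect();
    }
}

Run it on Server A; if it times out there too, the problem is network reachability rather than API Manager itself.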
I'm using tyrus-standalone-client-1.12.jar to maintain a connection to a WebSocket server (or set of servers) that I have no control over. I'm creating a ClientManager instance that I configure and then use via clientManager.asyncConnectToServer(this, new URI(server)), where this is an instance of a class with annotated methods like @OnOpen, @OnMessage, and so on.
I also have a ClientManager.ReconnectHandler registered that handles onDisconnect and onConnectFailure and of course outputs debug messages.
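For reference, the wiring looks roughly like this (simplified; the endpoint class and debug output are stand-ins for my real code):

import java.net.URI;
import javax.websocket.ClientEndpoint;
import javax.websocket.CloseReason;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import org.glassfish.tyrus.client.ClientManager;
import org.glassfish.tyrus.client.ClientProperties;

@ClientEndpoint
public class WsClient {
    private final ClientManager clientManager = ClientManager.createClient();

    public void start(String server) throws Exception {
        clientManager.getProperties().put(ClientProperties.RECONNECT_HANDLER,
                new ClientManager.ReconnectHandler() {
                    @Override
                    public boolean onDisconnect(CloseReason closeReason) {
                        System.out.println("Disconnected: " + closeReason);
                        return true; // ask Tyrus to reconnect
                    }

                    @Override
                    public boolean onConnectFailure(Exception exception) {
                        System.out.println("Connect failed: " + exception);
                        return true; // retry the connection
                    }
                });
        clientManager.asyncConnectToServer(this, new URI(server));
    }

    @OnOpen
    public void onOpen(Session session) {
        System.out.println("Connected: " + session.getId());
    }
}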
Most of the time it connects just fine, but especially when the server has issues and I lose the connection, reconnecting sometimes doesn't work.
I first noticed it when I simply returned true in onDisconnect and it sometimes just wouldn't reconnect (which the ReconnectHandler should have done for me in that case, and usually did). The rest of the program keeps running just fine, but my WebSocket client just wouldn't do anything after the debug message in onDisconnect.
Since then, I have changed it to use the ReconnectHandler only to connect again via asyncConnectToServer (on a delay), in order to be able to switch to another server (I couldn't find a way to do that with just the ReconnectHandler). Even then, asyncConnectToServer sometimes just seems to do nothing. I'm sure it does something, but it doesn't output the debug message in onConnectFailure and also doesn't call onOpen, even after hours, so the client ends up just being stuck there.
Again, this isn't always the case. It can reconnect just fine several times, both triggered by onDisconnect and by onConnectFailure, and then on one attempt suddenly just hang. When I had two instances of the program running at the same time, both reconnected a few times and then both hung on asyncConnectToServer at the same reconnect attempt, which seems to indicate that it is caused by some state of the server or connection.
One time it even failed the initial connection (not a reconnect), during the same period when the server seemed to be having issues.
Does anyone have an idea what could cause this?
Am I missing some property to set a connection attempt timeout?
Am I missing some way to retrieve connection failure info other than ReconnectHandler.onConnectFailure?
Am I even allowed to reuse the same ClientManager to connect several times (after the previous connection closed)?
Could there be something in my client endpoint implementation that somehow prevents onOpen or onConnectFailure from being called?
It's kind of hard to tell what's going on without getting any error message and without being able to reproduce it. I used JConsole's Detect Deadlock button on the program with a client hanging like this and it didn't detect anything (not sure if it would, but I thought I'd mention it).
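Regarding the timeout question above, the only candidate property I've found so far is Tyrus's handshake timeout. I haven't confirmed that it covers the hanging asyncConnectToServer case, but for reference, this is what I mean:

import org.glassfish.tyrus.client.ClientManager;
import org.glassfish.tyrus.client.ClientProperties;

public class TimeoutConfig {
    public static ClientManager createClientWithTimeout() {
        ClientManager clientManager = ClientManager.createClient();
        // Handshake timeout in milliseconds (Tyrus documents 30 s as the default)
        clientManager.getProperties().put(ClientProperties.HANDSHAKE_TIMEOUT, 10000);
        return clientManager;
    }
}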
I have a REST service that calls another remote service.
Most of the time the communication works fine, but occasionally, I encounter
org.apache.cxf.jaxrs.client.ClientWebApplicationException:
org.apache.cxf.interceptor.Fault: Could not send Message.
org.apache.cxf.interceptor.Fault: Could not send Message.
SocketException invoking https://url: Unexpected end of file from server
I did some research and found that it means the remote server shut down the connection unexpectedly.
It really puzzles me, because everything (input, headers, etc.) is the same, and I was testing with only a small number of requests (50-100). I have tried both in sequence and in parallel; only a few requests encounter this issue.
Why would this happen? Do I have to ask the remote server to reconfigure, or do I have to implement a retry pattern in my service in this case?
Any hint?
Thanks
P.S.
I am using
org.apache.cxf.jaxrs.client.WebClient
to invoke the remote service. Will it make any difference if I switch to HttpClient?
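If a retry pattern turns out to be the answer, this is the kind of thing I have in mind (a minimal sketch, assuming the remote call is idempotent; the URL is a placeholder):

import javax.ws.rs.core.Response;
import org.apache.cxf.jaxrs.client.WebClient;

public class RetryingCall {
    public static Response getWithRetry(String url, int maxAttempts) {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return WebClient.create(url).get();
            } catch (RuntimeException e) { // e.g. ClientWebApplicationException
                last = e;
                try {
                    Thread.sleep(500L * attempt); // simple linear backoff
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    throw last;
                }
            }
        }
        throw last;
    }
}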
I'm creating a small utility which receives a lot of HTTP requests. It is written in Java and uses embedded Jetty to handle requests via HTTPS.
I have a load-testing tool for it, but after it has been running for some time, it starts to throw exceptions:
java.net.BindException: Address already in use: connect
(note: this is on the sender's side, not in my project)
As I understand it, this means no free sockets were left in the system when another connect was called. Throughput is about 1000 requests per second, and failures start to appear somewhere after 20,000 to 50,000 requests.
However, when I use the same load-testing tool with another program (a kind of simple consumer written in Scala using Netty by a colleague; it simply receives all requests and returns an empty OK response), there is no such problem with sockets (though its typical speed is 1.5-2 times slower).
I wonder whether this could be fixed by somehow telling Jetty to close connections immediately after the response is sent; each new request is sent over a new connection anyway. I tried playing with Connector#setIdleTimeout (it seems to be 30000 by default) but had no success.
What can I do to fix this, or at least to investigate the matter more deeply and find its cause (in case my assumptions are wrong)?
UPD: Thanks for the suggestions. I don't think I'm allowed to post the source, but I get the idea that I should study the client's code (this will keep me busy for some time, since it is written in Scala).
I found that there really was a problem with the client: it sends requests with Connection: Keep-Alive in the header, yet creates a new HttpURLConnection for each request and calls its disconnect() method afterwards.
To solve this on the server side, it was sufficient to send Connection: close in the response header, since I'm not allowed to change the testing utility.
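For the record, this is roughly what the server-side change looked like in my embedded Jetty handler (simplified sketch; the handler name is a placeholder):

import java.io.IOException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.eclipse.jetty.server.Request;
import org.eclipse.jetty.server.handler.AbstractHandler;

public class ClosingHandler extends AbstractHandler {
    @Override
    public void handle(String target, Request baseRequest,
                       HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        // Ask Jetty to close the connection after each response, so the
        // client's keep-alive sockets don't pile up
        response.setHeader("Connection", "close");
        response.setStatus(HttpServletResponse.SC_OK);
        baseRequest.setHandled(true);
    }
}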
I have a Java-based client that receives data from a Tomcat 6.0.24 server webapp via JAX-WS. I recently upgraded the server with new functionality that can take a long time (over 30 seconds) to run for certain inputs.
It turns out that for these long operations, some kind of timeout is occurring. The client gets an HTTP 400 Bad Request error, and shortly afterwards (according to my log timestamps, at least) the server reports a Broken Pipe.
Here's the client's error message:
com.sun.xml.internal.ws.client.ClientTransportException: The server sent HTTP status code 400: Bad Request
And the server's:
javax.xml.ws.WebServiceException: javax.xml.stream.XMLStreamException: ClientAbortException: java.net.SocketException: Broken pipe
I've tried experimenting with adding timeout settings on the service's BindingProvider, but that doesn't seem to change anything. The default timeout is supposed to be infinite, right?
I don't know if it's relevant, but it might be worth noting that the client is an OSGi bundle running in the Karaf OSGi framework.
Bottom line, I have no idea what's going on here. Note that the new functionality does work when it doesn't have to run for too long. Also note that the size of the new functionality's response is not any larger than usual - it just takes longer to calculate.
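For reference, this is how I experimented with the timeouts (a sketch; I'm assuming the JDK-bundled JAX-WS RI property names apply, since the stack trace shows com.sun.xml.internal classes, and port stands for my generated service proxy):

import java.util.Map;
import javax.xml.ws.BindingProvider;

public final class WsTimeouts {
    // 'port' is the generated JAX-WS service proxy
    static void setTimeouts(Object port, int connectMillis, int requestMillis) {
        Map<String, Object> ctx = ((BindingProvider) port).getRequestContext();
        ctx.put("com.sun.xml.internal.ws.connect.timeout", connectMillis);
        ctx.put("com.sun.xml.internal.ws.request.timeout", requestMillis);
    }
}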
In the end, the problem was caused by some sort of anti-DoS measure on the server's public gateway. Unfortunately, the IT department refused to fix it, forcing me to switch to polling-based communication. Oh well.