Reads through a proxy hang my program. How can I kill them? - java

Sometimes, when working through a proxy server and reading page content from the buffer, my program hangs for a long time... until I close it. How can I write the code so that if there is no answer from the server within a few seconds, it switches to another server?
URL url = new URL(linkCar);
String your_proxy_host = proxys.getValueAt(xProxy, 1).toString();
int your_proxy_port = Integer.parseInt(proxys.getValueAt(xProxy, 2).toString());
Proxy proxy = null;
// System.out.println(proxys.getValueAt(xProxy, 3).toString());
// if (proxys.getValueAt(xProxy, 3).toString().indexOf("HTTP") > 0)
// {
proxy = new Proxy(Proxy.Type.HTTP, new InetSocketAddress(your_proxy_host, your_proxy_port));
// } else {
//     proxy = new Proxy(Proxy.Type.SOCKS, new InetSocketAddress(your_proxy_host, your_proxy_port));
// }
HttpURLConnection connection = (HttpURLConnection) url.openConnection(proxy);
connection.setConnectTimeout(1000);
connection.connect();
String line = null;
StringBuffer buffer_page = new StringBuffer();
BufferedReader buffer_input = new BufferedReader(
        new InputStreamReader(connection.getInputStream(), "cp1251"));
int cc = 0;
// this is where it hangs!!!
while ((line = buffer_input.readLine()) != null && cc < 7000) {
    buffer_page.append(line);
    cc++;
}
doc = Jsoup.parse(buffer_page.toString());
connection.disconnect();
I tried to use a counter, but it does not work... What exception can I catch to take control of this situation?

You need to use URLConnection.setReadTimeout. From the specification:
Sets the read timeout to a specified timeout, in milliseconds. A non-zero value specifies the timeout when reading from Input stream when a connection is established to a resource. If the timeout expires before there is data available for read, a java.net.SocketTimeoutException is raised. A timeout of zero is interpreted as an infinite timeout.
As you can see, reads that time-out will throw SocketTimeoutException, which you can catch appropriately, e.g.
try (BufferedReader buffer_input = new BufferedReader(
        new InputStreamReader(connection.getInputStream(), "cp1251"))) {
    String line;
    while ((line = buffer_input.readLine()) != null) {
        buffer_page.append(line);
    }
} catch (SocketTimeoutException ex) {
    /* handle time-out */
}
Note that you need to be careful when using readLine as above -- this will strip all \r and \n from the input.
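Putting both timeouts together with the asker's goal of switching to another server: below is a minimal failover sketch, assuming the proxies are available as a list of InetSocketAddress (the class and method names are hypothetical; the cp1251 charset is carried over from the question). Any SocketTimeoutException, being an IOException, simply advances the loop to the next proxy:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.Proxy;
import java.net.URL;
import java.util.List;

public class ProxyFailover {
    /**
     * Try each proxy in turn; a connect timeout, read timeout, or any other
     * I/O error moves on to the next one. Returns the page body, or null if
     * every proxy failed.
     */
    public static String fetchViaFirstWorkingProxy(URL url, List<InetSocketAddress> proxies,
                                                   int timeoutMillis) {
        for (InetSocketAddress proxyAddr : proxies) {
            HttpURLConnection connection = null;
            try {
                Proxy proxy = new Proxy(Proxy.Type.HTTP, proxyAddr);
                connection = (HttpURLConnection) url.openConnection(proxy);
                connection.setConnectTimeout(timeoutMillis); // give up if connect stalls
                connection.setReadTimeout(timeoutMillis);    // give up if a single read stalls
                StringBuilder page = new StringBuilder();
                try (BufferedReader in = new BufferedReader(
                        new InputStreamReader(connection.getInputStream(), "cp1251"))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        page.append(line).append('\n'); // readLine strips the separators
                    }
                }
                return page.toString();
            } catch (IOException e) {
                // SocketTimeoutException is an IOException: fall through to the next proxy
            } finally {
                if (connection != null) {
                    connection.disconnect();
                }
            }
        }
        return null; // every proxy timed out or failed
    }
}
```

With this shape the "thinking place" can never block longer than the configured timeout per proxy before the next candidate is tried.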

Related

How to stop a BufferedReader after a certain amount of time in Android

I've got a thread that opens a socket that sends data to a server. The server may send data back, depending on whether someone is at the workstation. However, I'm trying to put a timeout on the BufferedReader so that after 30 seconds it gives up and treats the response as null.
// Receive response
bufferedReader = new BufferedReader(new InputStreamReader(socket.getInputStream()));
response = new StringBuilder();
while ((line = bufferedReader.readLine()) != null) {
    response.append(line);
}
This is my BufferedReader, pretty standard, I've looked at a bunch of timers and other posts but haven't found a working solution.
You could call setSoTimeout() on your socket instance.
It will raise a SocketTimeoutException when the timeout is reached. Wrap your reading logic in a try-catch block, catch that exception, and handle it as you wish.
Something like this:
try {
    socket.setSoTimeout(30000);
    bufferedReader = new BufferedReader(new InputStreamReader(socket.getInputStream()));
    while ((line = bufferedReader.readLine()) != null) {
        response.append(line);
    }
} catch (SocketTimeoutException ste) {
    // timeout reached
} catch (Exception e) {
    // something else happened
} finally {
    // some general processing
}

Effective measurement of dns lookup and site content download duration

I am implementing a Java method that measures a number of metrics while loading a webpage. The metrics include: the resolve time, the connect time, and the download time.
The challenge seems to be the name resolution, since the code should never trigger two NS look-ups by any means (even when DNS caching is disabled).
My first thought was to trigger the name resolution before connecting to the server, and then prevent java from running a second one upon connect.
Using InetAddress.getByName() for the name lookup, and then HttpURLConnection and its setRequestProperty method to set a Host header, seemed to do the trick.
So here is my question: Do those two snippets below have the same effect? Do they always give the exact same result for all possible hosts? If not, what other options do I have?
VERSION 1: Implicit name resolution
/**
 * Site content download test
 *
 * @throws IOException
 */
public static void testMethod() throws IOException {
    String protocol = "http";
    String host = "stackoverflow.com";
    String file = "/";
    // create a URL object
    URL url = new URL(protocol, host, file);
    // create the connection object
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    // connect
    conn.connect();
    // create a stream reader
    BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
    String inputLine;
    // read contents and print on std out
    while ((inputLine = in.readLine()) != null) {
        System.out.println(inputLine);
    }
    // close the stream
    in.close();
}
VERSION 2: Explicit name resolution
/**
 * Enhanced site content download test
 *
 * @throws IOException
 */
public static void testMethod2() throws IOException {
    String protocol = "http";
    String host = "stackoverflow.com";
    String file = "/";
    // Do a name lookup.
    // If a literal IP address is supplied, only the validity of the address format is checked.
    InetAddress address = InetAddress.getByName(host);
    // create a URL object
    URL url = new URL(protocol, address.getHostAddress(), file);
    // create the connection object
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    // allow overriding Host and other restricted headers
    System.setProperty("sun.net.http.allowRestrictedHeaders", "true");
    // set the host header
    conn.setRequestProperty("Host", host);
    // connect
    conn.connect();
    // create a stream reader
    BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
    String inputLine;
    // read contents and print on std out
    while ((inputLine = in.readLine()) != null) {
        System.out.println(inputLine);
    }
    // close the stream
    in.close();
}
TIA for the help.
-Dimi
I've browsed through Java's source code to see what happens when you pass a domain name to HttpURLConnection, and it eventually ends up in NetworkClient.doConnect:
if (connectTimeout >= 0) {
    s.connect(new InetSocketAddress(server, port), connectTimeout);
} else {
    if (defaultConnectTimeout > 0) {
        s.connect(new InetSocketAddress(server, port), defaultConnectTimeout);
    } else {
        s.connect(new InetSocketAddress(server, port));
    }
}
As you see, the domain resolution is always handled by InetSocketAddress:
public InetSocketAddress(String hostname, int port) {
    if (port < 0 || port > 0xFFFF) {
        throw new IllegalArgumentException("port out of range:" + port);
    }
    if (hostname == null) {
        throw new IllegalArgumentException("hostname can't be null");
    }
    try {
        addr = InetAddress.getByName(hostname);
    } catch (UnknownHostException e) {
        this.hostname = hostname;
        addr = null;
    }
    this.port = port;
}
As you can see, InetAddress.getByName is called every time. I think your method is safe.
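To time the phases separately without risking a second lookup, one option is a pair of small helpers (a sketch; the class and method names are hypothetical): time InetAddress.getByName on its own, then connect a socket using the already-resolved InetAddress, which the InetSocketAddress(InetAddress, int) constructor accepts without resolving again:

```java
import java.io.IOException;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.net.UnknownHostException;

public class ConnectionTimer {
    /** Time the DNS lookup alone, in nanoseconds. */
    public static long timeResolve(String host) throws UnknownHostException {
        long start = System.nanoTime();
        InetAddress.getByName(host); // exactly one lookup happens here
        return System.nanoTime() - start;
    }

    /** Time the TCP connect alone, using an already-resolved address. */
    public static long timeConnect(InetAddress addr, int port, int timeoutMs) throws IOException {
        long start = System.nanoTime();
        try (Socket s = new Socket()) {
            // InetSocketAddress(InetAddress, int) does no lookup, unlike the String overload
            s.connect(new InetSocketAddress(addr, port), timeoutMs);
        }
        return System.nanoTime() - start;
    }
}
```

The download phase can then be timed on an HttpURLConnection as in the question's version 2, with the resolved address passed to the URL and the original name restored via the Host header.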

Establish connection by using multiple thread

I am writing a Java program to compute the HTTP connection time for (let's say) 5 HTTP connections (to different IPs).
The first scenario is without threading: the program connects to and tests the servers one by one, i.e. it finishes testing one server before proceeding to the next. In this scenario, the time taken is very long. Moreover, the timeout is not working properly; for example, I have set
setConnectTimeout(5 * 1000);
setReadTimeout(5 * 1000);
but the time returned by
long starTime = System.currentTimeMillis();
c.connect();
String line;
BufferedReader in = new BufferedReader(new InputStreamReader(uc.getInputStream()));
while ((line = in.readLine()) != null) {
    page.append(line);
    elapseTime = System.currentTimeMillis() - starTime;
}
can be more than 5 seconds, some even going up to 30 seconds (but I set only 5 seconds as the timeout).
So I made the implementation multithreaded. But the result is even more ridiculous: now I can't even get a single successful connection.
Now my question is: can we establish multiple connections by using multiple threads? If the answer is yes, what do I have to watch out for to avoid the issues above?
Thanks.
*Extra info*
1) I am computing the proxy connection speed, so, yes, the connection is a proxy connection.
2) I created around 100 threads. I think that should be fine, right?
How are you setting up your connections? Are you using a socket connection? If so, depending on how you set up your socket, you may find that the connection timeout value is ignored:
Socket sock = new Socket("hostname", port);
sock.setSoTimeout(5000);
sock.connect();
This will actually not set the connect timeout value, as the constructor has already attempted to connect.
SocketAddress sockaddr = new InetSocketAddress(host, port);
Socket sock = new Socket();
sock.connect(sockaddr, 5000);
This will connect with a proper timeout value, and may explain why your socket timeouts are not working.
public float getConnectionTime() {
    long elapseTime = 0;
    Proxy proxy = new Proxy(Proxy.Type.HTTP, new InetSocketAddress(ipAdd, portNum));
    URL url;
    StringBuilder page = new StringBuilder();
    HttpURLConnection uc = null;
    try {
        uc = (HttpURLConnection) Main.targetMachine.openConnection(proxy);
        // uc = (HttpURLConnection) Main.targetMachine.openConnection();
        uc.setConnectTimeout(Main.timeOut);
        uc.setReadTimeout(Main.timeOut);
        long starTime = System.currentTimeMillis();
        uc.connect();
        // if (uc.getResponseCode() == HttpURLConnection.HTTP_OK) {
        //     System.out.println("55555");
        // } else System.out.println("88888");
        String line;
        BufferedReader in = new BufferedReader(new InputStreamReader(uc.getInputStream()));
        while ((line = in.readLine()) != null) {
            page.append(line);
            elapseTime = System.currentTimeMillis() - starTime;
        }
    } catch (SocketTimeoutException e) {
        System.out.println("time out lo");
        // e.printStackTrace();
        return 9999; // if timed out, use 9999 as a signal
    } catch (IOException e) {
        System.out.println("open connection error, connect error or inputstream error");
        // e.printStackTrace();
        return 9999;
    } finally {
        if (uc != null)
            uc.disconnect();
    }
    // System.out.println(page);
    return (float) elapseTime / 1000;
}
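On the multithreading half of the question: yes, concurrent connections work, but ~100 unbounded threads sharing one uplink will distort each other's timings and can hit proxy-side connection limits, which would explain the sudden total failure. A sketch using a small fixed pool, assuming each proxy check is wrapped as a Callable<Float> that returns elapsed seconds (the class and method names are hypothetical; the 9999 failure sentinel follows the question's code):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.CancellationException;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class ConcurrentProxyCheck {
    /**
     * Run one timing task per proxy on a bounded pool; invokeAll cancels
     * whatever has not finished when the overall deadline expires.
     */
    public static List<Float> timeAll(List<Callable<Float>> tasks, int poolSize, long timeoutSec)
            throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(poolSize);
        try {
            List<Future<Float>> futures = pool.invokeAll(tasks, timeoutSec, TimeUnit.SECONDS);
            List<Float> results = new ArrayList<>();
            for (Future<Float> f : futures) {
                try {
                    results.add(f.get()); // each future is done or cancelled after invokeAll
                } catch (CancellationException | ExecutionException e) {
                    results.add(9999f);   // same failure sentinel the question uses
                }
            }
            return results;
        } finally {
            pool.shutdownNow();
        }
    }
}
```

A pool of perhaps 5-10 threads keeps the measurements from skewing each other while still being far faster than the sequential version.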

Read Data From Http Response rarely throws BindException: Address already in use

I use the following code to read data from an HTTP request.
In general it works well, but sometimes "httpURLConnection.getResponseCode()" throws java.net.BindException: Address already in use: connect
............
URL url = new URL(strUrl);
httpURLConnection = (HttpURLConnection) url.openConnection();
int responseCode = httpURLConnection.getResponseCode();
char charData[] = new char[HTTP_READ_BLOCK_SIZE];
isrData = new InputStreamReader(httpURLConnection.getInputStream(), strCharset);
int iSize = isrData.read(charData, 0, HTTP_READ_BLOCK_SIZE);
while (iSize > 0) {
    sbData.append(charData, 0, iSize);
    iSize = isrData.read(charData, 0, HTTP_READ_BLOCK_SIZE);
}
.................
finally {
    try {
        if (null != isrData) {
            isrData.close();
            isrData = null;
        }
        if (null != httpURLConnection) {
            httpURLConnection.disconnect();
            httpURLConnection = null;
        }
        strData = sbData.toString();
    } catch (Exception e2) {
    }
}
The code runs on Java 1.6, Tomcat 6.
Thank you
Get rid of the disconnect() and close the Reader instead. You are running out of local ports, and calling disconnect() disables HTTP connection pooling, which is the solution to that.
You need to close() the Reader after completely reading the stream. This will free up underlying resources (sockets, etc) for future reuse. Otherwise the system will run out of resources.
The basic Java IO idiom for your case is the following:
Reader reader = null;
try {
    reader = new InputStreamReader(connection.getInputStream(), charset);
    // ...
} finally {
    if (reader != null) try { reader.close(); } catch (IOException logOrIgnore) {}
}
See also:
Java IO tutorial
How to use URLConnection?

HttpURLConnection.getResponseCode() returns -1 on second invocation

I seem to be running into a peculiar problem on Android 1.5: when a library I'm using (Signpost 1.1-SNAPSHOT) makes two consecutive connections to a remote server, the second connection always fails with an HttpURLConnection.getResponseCode() of -1.
Here's a testcase that exposes the problem:
// BROKEN
public void testDefaultOAuthConsumerAndroidBug() throws Exception {
    for (int i = 0; i < 2; ++i) {
        final HttpURLConnection c = (HttpURLConnection) new URL("https://api.tripit.com/oauth/request_token").openConnection();
        final DefaultOAuthConsumer consumer = new DefaultOAuthConsumer(api_key, api_secret, SignatureMethod.HMAC_SHA1);
        consumer.sign(c); // This line...
        final InputStream is = c.getInputStream();
        while (is.read() >= 0) ; // ... in combination with this line causes responseCode -1 for i==1 when using api.tripit.com but not mail.google.com
        assertTrue(c.getResponseCode() > 0);
    }
}
Basically, if I sign the request and then consume the entire input stream, the next request will fail with a response code of -1. The failure doesn't seem to happen if I just read one character from the input stream.
Note that this doesn't happen for any url -- just specific urls such as the one above.
Also, if I switch to using HttpClient instead of HttpURLConnection, everything works fine:
// WORKS
public void testCommonsHttpOAuthConsumerAndroidBug() throws Exception {
    for (int i = 0; i < 2; ++i) {
        final HttpGet c = new HttpGet("https://api.tripit.com/oauth/request_token");
        final CommonsHttpOAuthConsumer consumer = new CommonsHttpOAuthConsumer(api_key, api_secret, SignatureMethod.HMAC_SHA1);
        consumer.sign(c);
        final HttpResponse response = new DefaultHttpClient().execute(c);
        final InputStream is = response.getEntity().getContent();
        while (is.read() >= 0) ;
        assertTrue(response.getStatusLine().getStatusCode() == 200);
    }
}
I've found references to what seems to be a similar problem elsewhere, but so far no solutions. If they're truly the same problem, then the problem probably isn't with signpost since the other references make no reference to it.
Any ideas?
Try setting this property to see if it helps:
http.keepAlive=false
I have seen similar problems when the server response is not understood by UrlConnection and client and server get out of sync.
If this solves your problem, you have to get a HTTP trace to see exactly what's special about the response.
EDIT: This change just confirms my suspicion. It doesn't solve your problem. It just hides the symptom.
If the response from first request is 200, we need a trace. I normally use Ethereal/Wireshark to get the TCP trace.
If your first response is not 200, I do see a problem in your code. With OAuth, the error response (401) actually returns data, which includes ProblemAdvice, Signature Base String etc. to help you debug. You need to read everything from the error stream. Otherwise, it's going to confuse the next connection, and that's the cause of the -1. The following example shows you how to handle errors correctly:
public static String get(String url) throws IOException {
    ByteArrayOutputStream os = new ByteArrayOutputStream();
    URLConnection conn = null;
    byte[] buf = new byte[4096];
    try {
        URL a = new URL(url);
        conn = a.openConnection();
        InputStream is = conn.getInputStream();
        int ret = 0;
        while ((ret = is.read(buf)) > 0) {
            os.write(buf, 0, ret);
        }
        // close the inputstream
        is.close();
        return new String(os.toByteArray());
    } catch (IOException e) {
        try {
            int respCode = ((HttpURLConnection) conn).getResponseCode();
            InputStream es = ((HttpURLConnection) conn).getErrorStream();
            int ret = 0;
            // read the response body
            while ((ret = es.read(buf)) > 0) {
                os.write(buf, 0, ret);
            }
            // close the errorstream
            es.close();
            return "Error response " + respCode + ": " + new String(os.toByteArray());
        } catch (IOException ex) {
            throw ex;
        }
    }
}
I've encountered the same problem when I did not read all the data from the InputStream before closing it and opening a second connection. It was fixed either with System.setProperty("http.keepAlive", "false"); or simply by looping until I had read the rest of the InputStream.
Not completely related to your issue, but hope this helps anyone else with a similar problem.
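That "loop until the rest of the InputStream is read" fix can be factored into a small helper (the class and method names are hypothetical): drain to EOF, then close, which is what lets the connection go back into the keep-alive pool instead of confusing the next request:

```java
import java.io.IOException;
import java.io.InputStream;

public class StreamDrain {
    /**
     * Read a stream to EOF and close it, so the underlying socket can be
     * returned to the keep-alive pool instead of being torn down.
     * Returns the number of bytes drained.
     */
    public static long drainAndClose(InputStream in) throws IOException {
        if (in == null) {
            return 0; // e.g. getErrorStream() returns null when there is no error body
        }
        long total = 0;
        byte[] buf = new byte[4096];
        try {
            int n;
            while ((n = in.read(buf)) != -1) {
                total += n;
            }
        } finally {
            in.close();
        }
        return total;
    }
}
```

Calling this on both the input stream and the error stream after every request is a cheap way to avoid the half-consumed-response state described above.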
Google provided an elegant workaround since it's only happening prior to Froyo:
private void disableConnectionReuseIfNecessary() {
    // HTTP connection reuse was buggy pre-Froyo
    if (Integer.parseInt(Build.VERSION.SDK) < Build.VERSION_CODES.FROYO) {
        System.setProperty("http.keepAlive", "false");
    }
}
Cf. http://android-developers.blogspot.ca/2011/09/androids-http-clients.html
Or you can set an HTTP header on the connection (HttpUrlConnection):
conn.setRequestProperty("Connection", "close");
Can you verify that the connection is not getting closed before you finish reading the response? Maybe HttpClient parses the response code right away and saves it for future queries, whereas HttpURLConnection might return -1 once the connection is closed?
