Processes locking on multicore machine - java

I recently started running my Java program on my new multicore machine. I am suddenly seeing a problem which never occurred on my old single-core Pentium. I suspect that the issue has to do with some sort of contention between my program and the various browsers I am running at the same time. When the processes get into this state, no amount of killing processes seems to help (there's always some residual Firefox or Chrome process), so I end up restarting the machine. My program does a lot of opening and reading of URLs, essentially using the following lines:
URL url = new URL(urlString);
URLConnection yc = url.openConnection();
BufferedReader in = new BufferedReader(new InputStreamReader(yc.getInputStream()));
........
String inputLine;
while ((inputLine = in.readLine()) != null) { ... }
Every so often the URL my program tries to hit does not exist. In these cases, the call that creates the BufferedReader eventually times out. I am going to modify the program to use a shorter timeout, but I suspect that this in itself is not going to fix the problem.
Any suggestions would be appreciated.

I think the system change is a red herring. When you work with a raw URLConnection in the JDK, you are on your own: there is no built-in retry mechanism, and you have to write all of that code yourself. Try the HTTP client library from Apache. That should solve more or less any problem you face with URLConnection - http://hc.apache.org/httpclient-3.x/
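For illustration, here is a minimal sketch with HttpClient 3.x, reusing the question's urlString; the timeout and retry values are made up:
import java.io.BufferedReader;
import java.io.InputStreamReader;
import org.apache.commons.httpclient.DefaultHttpMethodRetryHandler;
import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.methods.GetMethod;
import org.apache.commons.httpclient.params.HttpMethodParams;

HttpClient client = new HttpClient();
// Fail fast on dead hosts instead of hanging.
client.getHttpConnectionManager().getParams().setConnectionTimeout(5000);
client.getHttpConnectionManager().getParams().setSoTimeout(10000);

GetMethod method = new GetMethod(urlString);
// Retry recoverable I/O failures up to 3 times.
method.getParams().setParameter(HttpMethodParams.RETRY_HANDLER,
        new DefaultHttpMethodRetryHandler(3, false));
try {
    client.executeMethod(method);
    BufferedReader in = new BufferedReader(
            new InputStreamReader(method.getResponseBodyAsStream()));
    String inputLine;
    while ((inputLine = in.readLine()) != null) { /* process the line */ }
} finally {
    method.releaseConnection();
}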

Java Can't Connect To PHP Web Service

Edit:
As I've just seen, it happens even with the simplest setup:
InputStream stream = new URL("http://xx.xx.xxx.xxx/GetAll.php").openStream();
Gives the same timeout error. I think I'm missing some basic configuration.
I used HttpGet to connect to a PHP web service I have.
I saw it's deprecated, so I've been trying to switch to the recommended HttpURLConnection, but with no success.
The HttpURLConnection does not seem to be able to connect to the service, even though I can connect from my web browser without any problem.
My connection code:
URL myUrl = new URL("http://xx.xx.xxx.xxx/GetAll.php");
HttpURLConnection request = (HttpURLConnection)myUrl.openConnection();
request.setRequestProperty("Content-Type","text/xml;charset=UTF-8");
InputStream stream = request.getInputStream();
The GetAll.php file:
<?php
require_once('MysqliDb.php'); //Helper class
$db = new MysqliDb();
//All closest events by date
$All = $db->query("SELECT * FROM Event;");
//Return in JSON
echo json_encode($All);
The result I am getting from the file:
[{"EventID":1,"StartTime":1300,"Duration":1,"EventDate":"2015-05-17","EventOrder":1,"Type":0,"Name":"\u05e2\u05d1\u05e8\u05d9\u05ea AND ENGLISH","Organiser":"Neta","Phone":"012345678","Location":"Loc","Description":"Desc"}]
Thank you,
Neta
I want to share my solution, as this cost me hours of hair-tearing.
As it turns out, the "Timed out" exception had nothing to do with the code; it was a network connectivity issue. The phone I used to debug the app sometimes appears to be connected to Wi-Fi even though it really isn't.
Anyway, if you hit this exception, try checking your network connection first; on Android you can even do that programmatically, as sketched below.
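A minimal sketch of such a check (assuming an Android context is available and the manifest declares ACCESS_NETWORK_STATE):
import android.content.Context;
import android.net.ConnectivityManager;
import android.net.NetworkInfo;

// True only when the device has a network that is actually connected,
// not merely a Wi-Fi icon in the status bar.
ConnectivityManager cm =
        (ConnectivityManager) context.getSystemService(Context.CONNECTIVITY_SERVICE);
NetworkInfo active = cm.getActiveNetworkInfo();
boolean online = (active != null && active.isConnected());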
Good luck!

HttpURLConnection slow to disconnect - Java / Android

I want to get the size of a file on a remote server without actually downloading the (large) file. I am using the "Content-Length" header. The relevant code is:
URL obj = new URL(FILES_URL + fileName);
String contentLength = "";
HttpURLConnection conn = null;
try {
    conn = (HttpURLConnection) obj.openConnection();
    conn.setConnectTimeout(3000);
    conn.setReadTimeout(3000);
    contentLength = conn.getHeaderField("Content-Length");
    int responseCode = conn.getResponseCode();
    Log.d(TAG, "responseCode: " + responseCode);
} finally {
    Log.d(TAG, "pre-disconnect");
    if (conn != null) conn.disconnect();
    Log.d(TAG, "post-disconnect");
}
return contentLength;
The command "conn.disconnect();" sometimes seems to take forever. I have seen 23 seconds! Admittedly, this is connecting to a secondary local device which is running a web server, but the WiFi signal is strong, relatively fast, and I have never had any such problems using "curl" from my laptop. I do not have control over the web server I am connecting too.
The problem possibly is enhanced when making multiple similar connections to different files one after another, not sure. This is, however, creating entirely new HttpURLConnection's and not reusing the old one. Could reusing the connection help?
I never actually download the file or access the inputstream.
I could just not call disconnect, but I understand it is not recommended because resources would not be released. Is this not correct? I notice URLConnection doesn't have a disconnect. It is just suggested to close any streams you open.
This code runs in an AsyncTask. I guess I could try moving the disconnect call itself into a further AsyncTask, because I don't do anything afterwards. I am not sure that is even possible.
Do you have any suggestions? Should I try something other than HttpURLConnection to get the file size without downloading the file?
Thanks to EJP in the comments. Changing the request method to "HEAD" made the disconnect almost instantaneous:
conn.setRequestMethod("HEAD");
From what I have read, HttpURLConnection.disconnect() will skip through the entire response body if it hasn't been read. For very large files, that takes a long time. Using the request method "HEAD" forces the response body to be empty and solves the issue.
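Put together, the fix looks roughly like this (a sketch based on the question's own code; FILES_URL, fileName, and TAG come from the surrounding class):
URL obj = new URL(FILES_URL + fileName);
String contentLength = "";
HttpURLConnection conn = null;
try {
    conn = (HttpURLConnection) obj.openConnection();
    conn.setRequestMethod("HEAD"); // headers only: no body for disconnect() to drain
    conn.setConnectTimeout(3000);
    conn.setReadTimeout(3000);
    contentLength = conn.getHeaderField("Content-Length");
    Log.d(TAG, "responseCode: " + conn.getResponseCode());
} finally {
    if (conn != null) conn.disconnect(); // now returns almost immediately
}
return contentLength;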
I suggest you use either Volley or OkHttp for faster networking, depending on your requirements. Go through a comparison of Volley, OkHttp, and Retrofit and decide which library to use.
As a side note, if you are putting this code inside an AsyncTask, read up on the dark side of AsyncTask.
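For instance, a HEAD request with OkHttp might look like this (a sketch, assuming the okhttp3 dependency and the question's FILES_URL and fileName):
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.Response;

OkHttpClient client = new OkHttpClient();
Request request = new Request.Builder()
        .url(FILES_URL + fileName)
        .head()  // headers only, no body to download
        .build();
try (Response response = client.newCall(request).execute()) {
    String contentLength = response.header("Content-Length");
}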

Who is tampering with my data stream?

The piece of code below downloads a file from some URL and saves it to a local file. Piece of cake. What could possibly be wrong here?
protected long download(ProgressMonitor monitor) throws Exception {
    long size = 0;
    DataInputStream dis = new DataInputStream(is);
    int read = 0;
    byte[] chunk = new byte[chunkSize];
    while ((read = dis.read(chunk)) != -1) {
        os.write(chunk, 0, read);
        size += read;
        if (monitor != null)
            monitor.worked(read);
    }
    dis.close();
    os.flush();
    os.close();
    return size;
}
The reason I am posting a question here is that it works 99.999% of the time, but doesn't work as expected whenever there is an antivirus or some other protection software installed on the computer running this code. I am blindly pointing a finger that way because whenever I stop (or disable) it, the code works perfectly again. The end result of such interference is that the MD5 of the downloaded file doesn't match the expected one, and a whole new saga begins.
So, the question is: is it really possible that some smart "protection" software would alter the actual stream coming from the URL without me knowing about it? And if yes, how do you deal with this? (Verified with Kaspersky and Norton products.)
EDIT-1:
Apparently I've got a hold on the problem, and it's got nothing to do with antiviruses. The download takes place from an FTP server (FileZilla in particular), and we use Apache Commons FTP on the client side. What I did was go to the FTP server and terminate the connection (kicked it out) in the middle of the download. I expected that is.read(..) would throw an IOException on the client side, but this never happened. Instead, is.read(..) returned -1, meaning that there is no more data coming from the stream. This is definitely unexpected and explains why I sometimes get partial files. It doesn't explain, however, why the data sometimes gets altered as well.
Yeah, this happens to me all the time. In my case it's caused by transparent HTTP proxying by Websense on my corporate network. The worst problems are caused by the block page being returned with 200 OK.
Do you get the same or similar corruption every time? E.g., do you get some HTML explaining why the request was blocked? The best you can probably do is compare the first few bytes of the downloaded data to some text in the block page, and throw an exception in this case.
Edit: based on your update, have you got the FTP client set to image/binary mode?
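With Apache Commons Net, that looks roughly like this (a sketch; host, credentials, and path are placeholders). As a bonus, calling completePendingCommand() after reading lets you detect the silently truncated transfers described in the edit, because it returns false when the transfer did not complete:
import org.apache.commons.net.ftp.FTP;
import org.apache.commons.net.ftp.FTPClient;

FTPClient ftp = new FTPClient();
ftp.connect(host);
ftp.login(user, password);
// The default is ASCII mode, which rewrites line endings and corrupts binary data.
ftp.setFileType(FTP.BINARY_FILE_TYPE);

InputStream is = ftp.retrieveFileStream(remotePath);
// ... copy the stream as in download(...) above ...
is.close();
if (!ftp.completePendingCommand()) {
    throw new IOException("FTP transfer did not complete");
}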

Cannot getInputStream() in applet

Everyone,
I am trying to code an applet in Java which will access a video game's Application Programming Interface (API), and while I can successfully run the applet via the Eclipse IDE, it consistently hangs when run from the browser.
I've narrowed down where the bug must be by scattering debug messages around until I found the last line run.
I am attempting to parse the output from a parameter-filled URL. Figuring browsers must pass this information differently than an IDE, I've tried many different methods of doing this, including POSTing the parameters via an HTTP socket (although I am unfamiliar with this method and could easily have implemented it incorrectly). Below is my current version, irrelevant parts omitted (if you deduce the bug might be in an omitted area, these are easily revealed):
...
URL apiCharList = null;
...
try {
    ...
    apiCharList = new URL("https:// ... ");
    URLConnection connection = apiCharList.openConnection();
    connection.connect();
    DataInputStream in = new DataInputStream(
            new BufferedInputStream(connection.getInputStream()));
    BufferedReader br = new BufferedReader(new InputStreamReader(in));
    String line = br.readLine();
    while (line != null) {
        ...
        line = br.readLine();
    }
    ...
} catch (MalformedURLException e) {
    current.setText("Malformed URL");
} catch (IOException e) {
    current.setText("InputStream fail.");
}
My debugging dropped me at:
connection.connect();
And without that line, at:
DataInputStream in = new DataInputStream( new BufferedInputStream( connection.getInputStream() ) );
Any insight into this problem is most appreciated. Again, simply let me know if/which omitted areas may be necessary to see.
Respectfully,
Inquiring
UPDATE
Thank you all for the replies. My BufferedReader now initializes as:
= new BufferedReader( new InputStreamReader( connection.getInputStream() ), 1 );
That part had been getting jumbled up as a combination of all the various methods I had tried; thank you for ironing it out for me. From what I am seeing, it seems the issue is that my applet needs to be digitally signed in order to make the connections it requires when run via a browser. I've been looking into that and have been having problems with keytool and jarsigner after downloading the latest JDK. I have only been at it for a short while, and have never had an applet digitally signed before, but at least I have a new avenue to pursue. If anyone could provide a good (up-to-date) walkthrough on how to digitally sign an applet, that would be most appreciated. Various things I've read say it can cost me anywhere from 40 USD to 20 USD to nothing?
Anyways, thanks for the lift over this hurdle.
Respectfully,
Inquiring
Browsers will not allow applets to make network connections anywhere but their own server of origin, by default. You would need to digitally sign the applet, then ask the user for permission to make the connection. An alternative is to run a simple proxy servlet on your web server which the applet can use to talk to the third-party server.
As an aside, why are you wrapping so many streams around each other -- two of them buffered, which is a huge no-no -- just to read from the network? Those two lines could be replaced by:
BufferedReader br = new BufferedReader( new InputStreamReader( connection.getInputStream() ), 1 );
Do note that ",1" I stuck in at the end, which turns off buffering for the BufferedReader. You do not want to read ahead on a network connection, or you're inviting your code to hang.

How can I set a timeout against a BufferedReader based upon a URLConnection in Java?

I want to read the contents of a URL but don't want to "hang" if the URL is unresponsive. I've created a BufferedReader using the URL...
URL theURL = new URL(url);
URLConnection urlConn = theURL.openConnection();
urlConn.setDoOutput(true);
BufferedReader urlReader = new BufferedReader(new InputStreamReader(urlConn.getInputStream()));
...and then begun the loop to read the contents...
do {
    buf = urlReader.readLine();
    if (buf != null) {
        resultBuffer.append(buf);
        resultBuffer.append("\n");
    }
} while (buf != null);
...but if the read hangs then the application hangs.
Is there a way, without grinding the code down to the socket level, to "time out" the read if necessary?
I think URLConnection.setReadTimeout is what you are looking for.
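Applied to the question's code, that looks something like this (a sketch; the 5-second values are arbitrary):
URL theURL = new URL(url);
URLConnection urlConn = theURL.openConnection();
urlConn.setConnectTimeout(5000); // give up if the connection isn't established in 5 s
urlConn.setReadTimeout(5000);    // give up if any single read blocks for more than 5 s
BufferedReader urlReader = new BufferedReader(new InputStreamReader(urlConn.getInputStream()));
// a stalled readLine() now throws java.net.SocketTimeoutException instead of hanging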
If you have Java 1.4:
I assume the connection timeout (URLConnection.setConnectTimeout(int timeout)) is of no use because you are doing some kind of streaming.
Do not kill the thread; it may cause unknown problems, open descriptors, etc.
Instead, spawn a java.util.TimerTask in which you check whether you have finished the process; if not, close the BufferedReader and the OutputStream of the URLConnection.
Insert a boolean flag isFinished; set it to false before the loop and to true at the end of the loop:
TimerTask ft = new TimerTask() {
    public void run() {
        if (!isFinished) {
            try {
                urlConn.getInputStream().close();
                urlConn.getOutputStream().close();
            } catch (IOException e) {
                // stream already closed or connection broken; nothing more to do
            }
        }
    }
};
new Timer().schedule(ft, timeout);
Closing the streams will probably cause an IOException in the thread doing the reading, so you have to catch it there. The exception is not a bad thing in itself.
I'm omitting some declarations (e.g. finals) so the anonymous class can access your variables. If that doesn't work for you, create a POJO that maintains a reference and pass that to the TimerTask.
Since Java 1.5, it is possible to set the read timeout in milliseconds on the underlying socket via the 'setReadTimeout(int timeout)' method on the URLConnection class.
Note that there is also the 'setConnectTimeout(int timeout)' which will do the same thing for the initial connection to the remote server, so it is important to set that as well.
I have been working on this issue in a JVM 1.4 environment just recently. The stock answer is to use the system properties sun.net.client.defaultReadTimeout (read timeout) and/or sun.net.client.defaultConnectTimeout. These are documented at Networking Properties and can be set via the -D argument on the Java command line or via a System.setProperty method call.
Supposedly these are cached by the implementation, so you can't change them back and forth; once they are used, the values are retained.
Also they don't really work for SSL connections ala HttpsURLConnection. There are other ways to deal with that using a custom SSLSocketFactory.
Again, all this applies to JVM 1.4.x. At 1.5 and above you have more methods available to you in the API (as noted by the other responders above).
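For completeness, the property-based approach described above looks like this (values are milliseconds and purely illustrative):
// Must be set before the first connection is made; the values are then cached.
System.setProperty("sun.net.client.defaultConnectTimeout", "5000");
System.setProperty("sun.net.client.defaultReadTimeout", "5000");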
For Java 1.4, you may use SimpleHttpConnectionManager.getConnectionWithTimeout(hostConf, CONNECTION_TIMEOUT) from Apache HttpClient.
