BlackBerry HttpConnection times out on image download. Why?

My application loops through about 200 urls that are all jpg images.
In the simulator it reads ok, then stores the byte array in persistentStore with no problems.
On the device, it gives java.io.IOException: TCP read timed out on basically every image.
Every now and then one gets through; I'm not even sure how. The image sizes don't offer any insight either: some are 6 KB, some are 11 KB, and size doesn't seem to matter for timing out.
I'll try to post what I believe to be the relevant code, but I am not really an expert here, so if I left something out, please say so.
Call http connection through loop and join thread:
for (int i = 0; i < images.size(); i++) {
    try {
        String url = images.elementAt(i).toString();
        HttpRequest data3 = new HttpRequest(url, "GET", false);
        data3.start();
        data3.join();
    } catch (IOException e) {
        Dialog.inform("wtf " + e);
    }
}
Make the actual HttpConnection with the proper connection suffix:
try
{
    HttpConnection connection = (HttpConnection) Connector.open(url + updateConnectionSuffix());
    int responseCode = connection.getResponseCode();
    if (responseCode != HttpConnection.HTTP_OK)
    {
        connection.close();
        return;
    }
    String contentType = connection.getHeaderField("Content-type");
    long length = connection.getLength();
    InputStream responseData = connection.openInputStream();
    connection.close();
    outputFinal(responseData, contentType, length);
}
catch (IOException ex)
{
} catch (SAXException ex) {
} catch (ParserConfigurationException ex) {
}
Finally, read the stream and write the bytes to a byte array:
else if (contentType.equals("image/png") || contentType.equals("image/jpeg") || contentType.equals("image/gif"))
{
    try
    {
        if ((int) length < 1)
            length = 15000;
        byte[] responseData = new byte[(int) length];
        int offset = 0;
        int numRead = 0;
        StringBuffer rawResponse = new StringBuffer();
        int chunk = responseData.length - offset;
        if (chunk < 1)
            chunk = 1024;
        while (offset < length && (numRead = result.read(responseData, offset, chunk)) >= 0) {
            rawResponse.append(new String(responseData, offset, numRead));
            offset += numRead;
        }
        String resultString = rawResponse.toString();
        byte[] dataArray = resultString.getBytes();
        result.close();
        database db = new database();
        db.storeImage(venue_id, dataArray);
    }
    catch (Exception e)
    {
        System.out.println(">>>>>>>----------------> total image fail: " + e);
    }
}
Things to consider:
The length is always the correct byte length in the simulator; on the device it is always -1.
The chunk variable is a test: since new byte[-1] gave an out-of-bounds exception, I force a 15 KB byte array to see whether it will read as expected. The results are the same: sometimes it writes, but mostly it times out.
Any help is appreciated.

You can adjust the length of TCP timeouts on BlackBerry using the 'ConnectionTimeout' connection parameter.
In your code here:
HttpConnection connection = (HttpConnection)Connector.open(url + updateConnectionSuffix());
You'll want to append ConnectionTimeout. You might write it into updateConnectionSuffix() or just append it.
HttpConnection connection = (HttpConnection)Connector.open(url + updateConnectionSuffix() + ";ConnectionTimeout=54321");
This sets the timeout to 54321 milliseconds.
Timeouts occur when the client is waiting for the server to send an ack and it doesn't get one in a specified amount of time.
Edit: also, are you able to use the browser and other network apps on the device? You may also want to experiment with the deviceside parameter.
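For instance, a combined suffix might look like the sketch below. This is illustrative only: ;deviceside=true forces a direct TCP connection instead of going through BES/MDS, and may not be appropriate for your transport.
// Illustrative: direct TCP connection with a 30-second timeout
HttpConnection connection = (HttpConnection) Connector.open(
        url + ";deviceside=true;ConnectionTimeout=30000");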

I think the problem may be that you're closing the connection before reading the bytes from the input stream. Try moving the connection.close() after the bytes have been read in.
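For example, a minimal reordering of the snippet from the question (keeping its variable names, and assuming outputFinal() fully consumes the stream) could look like this:
InputStream responseData = connection.openInputStream();
try {
    outputFinal(responseData, contentType, length); // read the body first
} finally {
    responseData.close();
    connection.close(); // close only after the stream has been consumed
}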

Related

File transferred via TFTP has different size than on host

For quite a long time now I've been struggling with handling the TFTP protocol in my Android app. Its main feature is downloading files from a custom-designed device which hosts a TFTP server.
I was browsing the internet hoping to find a good, already-written implementation. First I tried the TFTP library that is part of Apache Commons. Unfortunately, no luck: constant timeouts or even a complete freeze. After some further research I found some code on GitHub - please take a look. I adapted the code to Android and, after some tweaking, I finally managed to receive some files.
The creator of the device stated that the block size should be exactly 1015 bytes, so I increased the packet size to 1015 and updated the method that creates the read request packet:
DatagramPacket createReadRequestPacket(String strFileName) {
    byte[] filename = strFileName.getBytes();
    byte[] mode = currentMode.getBytes();
    int len = rOpCode.length + filename.length + mode.length + 2;
    ByteArrayOutputStream outputStream = new ByteArrayOutputStream(len);
    try {
        outputStream.write(rOpCode);               // RRQ opcode
        outputStream.write(filename);
        byte term = 0;
        outputStream.write(term);
        outputStream.write(mode);                  // "octet"
        outputStream.write(term);
        outputStream.write("blksize".getBytes());  // blksize option (RFC 2348)
        outputStream.write(term);
        outputStream.write("1015".getBytes());
        outputStream.write(term);
    } catch (IOException e) {
        e.printStackTrace();
    }
    byte[] readPacketArray = outputStream.toByteArray();
    return new DatagramPacket(readPacketArray, readPacketArray.length, serverAddr, port);
}
Chunks are being downloaded, but there is one major issue: the files I'm downloading come in parts of 512 kB each (except the last one), and each part I receive on the Android device is around 0.5 kB larger. It seems like there is one extra byte each time, or one whole extra append. Apparently I don't understand it completely and I'm missing something.
This is my method for file receiving:
byte previousBlockNumber = (byte) -1;
try {
    PktFactory pktFactory;
    DatagramSocket clientSocket;
    byte[] buf;
    DatagramPacket sendingPkt;
    DatagramPacket receivedPkt;
    System.out.print(ftpHandle);
    if (isConnected) {
        System.out.println("You're already connected to " + hostname.getCanonicalHostName());
    }
    try {
        hostname = InetAddress.getByName(host);
        if (!hostname.isReachable(4000)) {
            System.out.println("Hostname you provided is not responding. Try again.");
            return false;
        }
    } catch (UnknownHostException e) {
        System.out.println("tftp: nodename nor servname provided, or not known");
        return false;
    }
    clientSocket = new DatagramSocket();
    pktFactory = new PktFactory(PKT_LENGTH + 4, hostname, TFTP_PORT);
    System.out.println("Connecting " +
            hostname.getCanonicalHostName() + " at the port number " + TFTP_PORT);
    isConnected = true;
    ftpHandle = "tftp#" + hostname.getCanonicalHostName() + "> ";
    System.out.println("mode " + PktFactory.currentMode);
    if (!isConnected) {
        System.out.println("You must be connected first!");
    }
    ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
    buf = new byte[PKT_LENGTH + 4];
    /* Sending the read request with the filename to the server. */
    try {
        /* Sending a RRQ with the filename. */
        System.out.println("Sending request to server.");
        sendingPkt = pktFactory.createReadRequestPacket(filename);
        clientSocket.setSoTimeout(4500);
        clientSocket.send(sendingPkt);
    } catch (Exception e) {
        e.printStackTrace();
        System.out.println("Connection with server failed");
    }
    boolean receivingMessage = true;
    while (true) {
        try {
            receivedPkt = new DatagramPacket(buf, buf.length);
            clientSocket.setSoTimeout(10000);
            clientSocket.receive(receivedPkt);
            byte[] dPkt = receivedPkt.getData();
            byte[] ropCode = pktFactory.getOpCode(dPkt);
            /* The received packet is either a DATA or an ERROR pkt. If it's an error,
             * print the error message and terminate the get command. */
            if (ropCode[1] == 5) {
                String errorMsg = pktFactory.getErrorMessage(dPkt);
                System.out.println(errorMsg);
                return false;
            }
            if (receivedPkt.getLength() < PKT_LENGTH + 4 && ropCode[1] == 3) {
                byte[] fileDataBytes = pktFactory.getDataBytes(dPkt);
                outputStream.write(fileDataBytes);
                if (isListFile) {
                    listBytes = outputStream.toByteArray();
                } else {
                    FileOutputStream fstream = new FileOutputStream(Constants.EEG_DATA_PATH.concat("file.bin"), true);
                    // Let's get the last data pkt for the currently transferring file.
                    fstream.write(outputStream.toByteArray());
                    fstream.close();
                }
                // It's time to send the last ACK message before normal termination.
                byte[] bNum = pktFactory.getBlockNum(dPkt);
                DatagramPacket sPkt = pktFactory.createAckPacket(bNum, receivedPkt.getPort());
                clientSocket.send(sPkt);
                disconnect();
                return true;
            }
            if (ropCode[1] == 3) {
                if (receivingMessage) {
                    System.out.println("Receiving the file now..");
                    receivingMessage = false;
                }
                byte[] bNum = pktFactory.getBlockNum(dPkt);
                // I've added this if and it reduces file size a little (it was more than 0.5 kB bigger)
                if (previousBlockNumber != bNum[1]) {
                    byte[] fileDataBytes = pktFactory.getDataBytes(dPkt);
                    previousBlockNumber = bNum[1];
                    outputStream.write(fileDataBytes);
                }
                /* For each received DATA pkt we need to send an ACK pkt back. */
                DatagramPacket sPkt = pktFactory.createAckPacket(bNum, receivedPkt.getPort());
                clientSocket.send(sPkt);
            }
        } catch (SocketTimeoutException e) {
            disconnect();
            System.out.println("Server didn't respond and a timeout occurred.");
            return false;
        }
    }
} catch (Exception e) {
    System.out.println(e.getMessage());
    return false;
}
I figured out what was wrong. That strange behavior was the result of this line when the last packet was received:
byte[] fileDataBytes = pktFactory.getDataBytes(dPkt);
The returned array size was always equal to the specified packet length, even if the received data was smaller. In my case the last packet carried 0 bytes of data (+4 bytes for the TFTP header), but even then an extra 512 bytes was added to the output stream.
To resolve this I overloaded the mentioned method with an extra parameter: the actual size of the received packet, used when the received data size is greater than 4 bytes and smaller than the specified packet size (512 bytes). This change produces a correctly sized array for the last packet, so the received file has the correct size at the end of the operation.
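The overloaded helper isn't shown in the question; a rough sketch of what it might look like (hypothetical signature, stripping the 4-byte TFTP DATA header and copying only the bytes that actually arrived) is:
byte[] getDataBytes(byte[] dPkt, int receivedLength) {
    byte[] data = new byte[receivedLength - 4];   // 2 bytes opcode + 2 bytes block number
    System.arraycopy(dPkt, 4, data, 0, data.length);
    return data;
}
The call site for the final, short packet would then pass receivedPkt.getLength() as the second argument.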

Reading Socket InputStream in a loop when writing byte arrays

This is what you usually do when sending text data
// Receiver code
while (mRun && (response = in.readLine()) != null && socket.isConnected()) {
// Do stuff
}
// Sender code
printWriter.println(mMessage);
printWriter.flush();
but when working with DataOutputStream#write(byte[]) to send a byte[], how do you write a while loop to receive the sent data?
All I have found is this, but it doesn't loop, so I'm guessing this will just run on the first sent message:
int length = in.readInt();
byte[] data = new byte[length];
in.readFully(data);
How can I achieve this?
PS: yep, I'm new to socket programming.
EDIT: I'm sending a byte array every 3 to 5 seconds. This is what I've got so far.
// On the client side, in order to send a byte[]. This is executed every 3 seconds.
if (out != null) {
    try {
        out.writeInt(encrypted.length);
        out.write(encrypted);
        out.writeInt(0);
        out.flush();
        return true;
    } catch (IOException e) {
        e.printStackTrace();
        return false;
    }
}
// On the server side, in order to receive the byte[] sent from the client (also executed
// every 3 to 5 seconds, since that's the rate at which bytes are sent). "client" is the Socket instance.
while (true && client.isConnected()) {
    byte[] data = null;
    while (true) {
        int length = in.readInt();
        if (length == 0)
            break;
        data = new byte[length];
        in.readFully(data);
    }
    if (data != null) {
        String response = new String(data);
        if (listener != null) {
            listener.onMessageReceived(response);
        }
    }
}
Assuming you're trying to handle a stream of messages, sounds like what you're missing is a way of specifying (in the stream) how big your messages are (or where they end).
I suggest you just write a prefix before each message, specifying the length:
output.writeInt(data.length);
output.write(data);
Then when reading:
while (true)
{
    int length = input.readInt();
    byte[] buffer = new byte[length];
    input.readFully(buffer, 0, length);
    // Process buffer
}
You'll also need to work out a way of detecting the end of input. DataInputStream doesn't have a clean way of detecting that as far as I can tell. There are various options - the simplest may well be to write out a message of length 0, and break out of the loop if you read a length of 0.
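A sketch of that convention, assuming the sender writes a 0 length after the last message (as in the question's edit):
while (true) {
    int length = input.readInt();
    if (length == 0) {
        break;                       // sender signalled end of input
    }
    byte[] buffer = new byte[length];
    input.readFully(buffer, 0, length);
    // Process buffer
}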

Receiving FFT data over a socket in Java

I am trying to write an applet that establishes a socket connection with a server, receives FFT data from that server, computes a spectrogram and displays it. Currently, this is what I have in C.
int getData() {
    int i;
    int constant;
    // get as many bytes from the socket as needed to fill up the buffer
    n = recv(sockfd, tempBuf + readCount, length - readCount, MSG_DONTWAIT);
    if (n > 0)
        readCount += n;
    if (readCount == length) // when we have enough data
    {
        // check header constant
        constant = ((int*)(tempBuf))[0];
        fprintf(stderr, "\nReading header... ");
        printf("header.constSync is %X\n", constant);
        if (constant != 0xACFDFFBC)
            error1("ERROR reading from socket, incorrect header placement\n");
        // put data into a buffer
        for (i = 0; i < samp_rate; i++)
            buffer[i] = ((double*)(tempBuf + sizeof(struct fft_header)))[i];
        fprintf(stderr, "Reading data... ");
        // shift
        shift();
        readCount = 0;
    }
    return 1;
}
However I also wrote a similar method in Java that I am hoping will accomplish the same thing. Is this right?
public int getData() throws IOException {
    int constant;
    BufferedInputStream data = null;
    try {
        data = new BufferedInputStream(socket.getInputStream());
    } catch (UnknownHostException e) {
        System.err.println("Invalid Host");
    } catch (IOException e) {
        System.err.println("Couldn't get the I/O for the connection to the host");
    }
    int numBytes = data.available();
    if (numBytes > 0) {
        readCount += numBytes;
    }
    if (readCount == length) {
        constant = tempBuff[0];
        System.out.println("Reading Header");
        System.out.println(constant);
        if (constant != 0xACFDFFBC) {
            System.err.println("Error reading from Socket. Incorrect Header Placement");
        }
        for (int i = 0; i < samp_rate; i++) {
            buffer[i] = tempBuff[i];
            System.out.println("Reading data...");
        }
    }
    return 1;
}
Edit: Sorry, I forgot to post the actual question. What I am trying to ask is: am I using BufferedInputStream correctly, or should I use DataInputStream? Also, I understand that available() is used to determine how many bytes to read. Am I using it right?
You should know perfectly well that it doesn't work, unless you haven't even bothered to try it, in which case you have no business posting here at all yet. There are:
a misuse of available()
an assignment to constant from an undeclared array variable that could be anything
no actual reading going on at all.
You should be using the facilities of DataInputStream for this: readInt(), readDouble(), readFully(), etc. Wrap the BufferedInputStream in a DataInputStream and start calling those methods.
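A rough sketch of that approach, assuming the layout implied by the C code (a 4-byte sync constant at the start of fft_header, followed by samp_rate doubles after the header). HEADER_SIZE is a hypothetical constant standing in for sizeof(struct fft_header). Note that readInt()/readDouble() assume big-endian (network) byte order, so if the C server writes native little-endian values the bytes will need to be swapped:
DataInputStream data = new DataInputStream(
        new BufferedInputStream(socket.getInputStream()));
int constant = data.readInt();        // blocks until 4 bytes have actually arrived
if (constant != 0xACFDFFBC) {
    System.err.println("Error reading from Socket. Incorrect Header Placement");
}
data.skipBytes(HEADER_SIZE - 4);      // skip the rest of the header, if there is any
for (int i = 0; i < samp_rate; i++) {
    buffer[i] = data.readDouble();    // 8 bytes per sample
}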

Is there a difference in Java's writeInt when executed on Windows vs an Intel based Mac

I'm currently writing a Java TCP server to handle communication with a client (which I didn't write). When the server, hosted on Windows, responds to the client with the number of records received, the client doesn't read the integer correctly and instead reads it as an empty packet. When the same server code, hosted on my Mac, responds to the client with the number of records received, the client reads the packet and responds correctly. Through my research I haven't found an explanation that seems to solve the issue. I have tried reversing the bytes (Integer.reverseBytes) before calling the writeInt method and that didn't seem to resolve the issue. Any ideas are appreciated.
Brian
After comparing the pcap files there are no obvious differences in how they are sent. The first byte is sent followed by the last 3. Both systems send the correct number of records.
Yes I'm referring to the DataOutputStream.writeInt() method. //Code added
public void run() {
    try {
        InputStream in = socket.getInputStream();
        DataOutputStream datOut = new DataOutputStream(socket.getOutputStream());
        datOut.writeByte(1); // sends correctly and is read correctly by the client
        datOut.flush();
        // below is used to read bytes to determine the length of the message
        int bytesRead = 0;
        int bytesToRead = 25;
        byte[] input = new byte[bytesToRead];
        while (bytesRead < bytesToRead) {
            int result = in.read(input, bytesRead, bytesToRead - bytesRead);
            if (result == -1) break;
            bytesRead += result;
        }
        try {
            inputLine = getHexString(input);
            String hexLength = inputLine.substring(46, 50);
            System.out.println("hexLength: " + hexLength);
            System.out.println(inputLine);
            // used to read the entire sent message
            bytesRead = 0;
            bytesToRead = Integer.parseInt(hexLength, 16);
            System.out.println("bytes to read " + bytesToRead);
            byte[] dataInput = new byte[bytesToRead];
            while (bytesRead < bytesToRead) {
                int result = in.read(dataInput, bytesRead, bytesToRead - bytesRead);
                if (result == -1) break;
                bytesRead += result;
            }
            String data = getHexString(dataInput);
            System.out.println(data);
            // Sends received data to class to process
            ProcessTel dataValues = new ProcessTel(data);
            String[] dataArray = new String[10];
            dataArray = dataValues.dataArray();
            // assigns returned number of records to be written to client
            int towrite = Integer.parseInt(dataArray[0].trim());
            // Same write method on Windows & Mac... works on Mac but not Windows
            datOut.writeInt(towrite);
            System.out.println("Returned number of records: " + Integer.parseInt(dataArray[0].trim()));
            datOut.flush();
        } catch (Exception ex) {
            Logger.getLogger(ServerThread.class.getName()).log(Level.SEVERE, null, ex);
        }
        datOut.close();
        in.close();
        socket.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
As described in its Javadoc, DataOutputStream.writeInt() uses network byte order as per the TCP/IP RFCs. Is that the method you are referring to?
No, x86 processors only support little-endian byte order, it doesn't vary with the OS. Something else is wrong.
I suggest using wireshark to capture the stream from a working Mac server and a non-working Windows server and compare.
Some general comments on your code:
int bytesRead = 0;
int bytesToRead = 25;
byte[] input = new byte[bytesToRead];
while (bytesRead < bytesToRead) {
    int result = in.read(input, bytesRead, bytesToRead - bytesRead);
    if (result == -1) break;
    bytesRead += result;
}
This EOF handling is hokey. It means that you don't know whether or not you've actually read the full 25 bytes. And if you don't, you'll assume that the bytes-to-send is 0.
Worse, you copy-and-paste this code lower down, relying on proper initialization of the same variables. If there's a typo, you'll never know it. You could refactor it into its own method (with tests), or you could call DataInputStream.readFully().
inputLine = getHexString(input);
String hexLength = inputLine.substring(46, 50);
You're converting to hex in order to extract an integer? Why? And more importantly, if you have any endianness issues, this is probably the reason.
I was originally going to recommend using a ByteBuffer to extract values, but on a second look I think you should wrap your input stream with a DataInputStream. That would allow you to read complete byte[] buffers without the need for a loop, and it would let you get rid of the byte-to-hex-to-integer conversions: you'd simply call readInt().
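A rough sketch of that, assuming the two length bytes really do sit at offsets 23-24 of the 25-byte header (which is what substring(46, 50) on the hex string implies):
DataInputStream dataIn = new DataInputStream(in);
byte[] header = new byte[25];
dataIn.readFully(header);                        // throws EOFException instead of silently under-reading
int bytesToRead = ((header[23] & 0xFF) << 8)     // big-endian 2-byte length field,
                | (header[24] & 0xFF);           // no hex round-trip needed
byte[] dataInput = new byte[bytesToRead];
dataIn.readFully(dataInput);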
But, continuing on:
String[] dataArray = new String[10];
dataArray = dataValues.dataArray();
Do you realize that the new String[10] is being thrown away by the very next line? Is that what you want?
int towrite = Integer.parseInt(dataArray[0].trim());
datOut.writeInt(towrite);
System.out.println("Returned number of records: " + Integer.parseInt(dataArray[0].trim()) );
If you're using logging statements, print what you're actually using (towrite). Don't recalculate it. There's too much of a chance to make a mistake.
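For example:
System.out.println("Returned number of records: " + towrite);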
} catch (Exception ex) {
    Logger.getLogger(ServerThread.class.getName()).log(Level.SEVERE, null, ex);
}
// ...
} catch (IOException e) {
    e.printStackTrace();
}
Do either or both of these catch blocks get invoked? And why do they send their output to different places? For that matter, if you have a logger, why are you inserting System.out.println() statements?

File transfer from C++ client to Java server

I have a C++ client which needs to send a file to a Java server. I'm splitting the file into chunks of PACKET_SIZE (=1024) bytes and sending them over a TCP socket. On the server side I read at most PACKET_SIZE bytes into a buffer. When the client sends files that are smaller than PACKET_SIZE, the server receives more bytes than were sent. Even when I limit the number of bytes to be exactly the size of the file, the files differ. I know the problem is not with the client, because I've tested it with a C++ server and it works flawlessly.
Thanks.
Server:
public void run() {
    DataInputStream input = null;
    PrintWriter output = null;
    try {
        input = new DataInputStream(_client.getInputStream());
    }
    catch (Exception e) {/* Error handling code */}
    FileHeader fh = recvHeader(input);
    size = fh._size;
    filename = fh._name;
    try {
        output = new PrintWriter(_client.getOutputStream(), true);
    }
    catch (Exception e) {/* Error handling code */}
    output.write(HEADER_ACK);
    output.flush();
    FileOutputStream file = null;
    try {
        file = new FileOutputStream(filename);
    }
    catch (FileNotFoundException fnfe) {/* Error handling code */}
    int total_bytes_rcvd = 0, bytes_rcvd = 0, packets_rcvd = 0;
    byte[] buf = new byte[PACKET_DATA_SIZE];
    try {
        int max = (size > PACKET_DATA_SIZE) ? PACKET_DATA_SIZE : size;
        bytes_rcvd = input.read(buf, 0, max);
        while (total_bytes_rcvd < size) {
            if (-1 == bytes_rcvd) {...}
            ++packets_rcvd;
            total_bytes_rcvd += bytes_rcvd;
            file.write(buf, 0, bytes_rcvd);
            if (total_bytes_rcvd < size)
                bytes_rcvd = input.read(buf);
        }
        file.close();
    }
    catch (Exception e) {/* Error handling code */}
}
Client:
char packet[PACKET_SIZE];
file.open(filename, ios::in | ios::binary); // fopen(file_path, "rb");
int max = 0;
if (file.is_open()) {
    if (size > PACKET_SIZE)
        max = PACKET_SIZE;
    else
        max = size;
    file.read(packet, max);
}
else {...}
int sent_packets = 0;
while (sent_packets < (int) ceil(((float)size) / PACKET_SIZE)) {
    _write = send(_sd, packet, max, 0);
    if (_write < 0) {...}
    else {
        ++sent_packets;
        if (size > PACKET_SIZE * sent_packets) {
            if (size - PACKET_SIZE * sent_packets >= PACKET_SIZE)
                max = PACKET_SIZE;
            else
                max = size - PACKET_SIZE * sent_packets;
            file.read(packet, max);
        }
    }
}
Is the sending socket closed at the end of the file, or is the next file streamed over the same socket? If more than one file is streamed, you could pick up data from the next file if you have the endianness wrong for the file size in recvHeader(), i.e. you send a file of length 0x0102 and try to read one of length 0x0201.
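As a hedged illustration of that point: if recvHeader() happens to read the 4-byte size with DataInputStream.readInt() (which is big-endian) while the C++ client wrote it in native little-endian order, the value needs a byte swap:
// Only needed if the client actually writes the size in little-endian order.
int size = Integer.reverseBytes(input.readInt());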
Other question, why do you provide a max for the first read, but not for the following reads on the same file?
One issue I see is that you appear to assume that if send returns without an error, it sent the entire chunk you asked it to send. This is not necessarily true, especially with stream sockets. How large are the packets you are sending, and how many? The most likely reason this could occur is if the sndbuf for the socket filled up and your socket _sd is set to non-blocking. I'm not positive (it depends on the stack implementation), but I believe it could also occur if the TCP transmit window for your connection was full and TCP couldn't enqueue your entire packet.
You should probably loop on the send until max is sent.
Thusly:
int send_ct = 0;
while ((_write = send(_sd, packet + send_ct, max - send_ct, 0)) > 0) {
    send_ct += _write;
    if (send_ct >= max) {
        break;
    } else {
        // Had to do another send
    }
}
The code is not complete. For example, you have omitted the sending of the filename and the file size, as well as the parsing of those values. Are those values correct? If not, first ensure that these values are right before investigating further.
