Ensuring no packet loss between TCP client and server - java

I am writing a Java TCP client which sends chunks of data to a C server. The client and server worked very well on my development PC, but after deployment on a hardware board the transfer showed data loss. I only have the logs, and they show that the server did not receive all the data.
I do not have the hardware to test on. Therefore, as a first step, I want to be sure the client code sends all the required data.
Here is my code (the client part, in Java). How do I make sure all the data is sent? Are there resend commands, timeouts, or similar mechanisms?
Socket mySocket = new Socket("10.0.0.2", 2800);
OutputStream os = mySocket.getOutputStream();
System.out.println(" Sending 8 byte Header Msg with length of following data to Server");
os.write(hdr, 0, 8);
os.flush();
System.out.println(" Sending Data ");
start = 0;
for (int index = 0; index < ((rbuffer.length / chucksize) + 1); index++) {
    if (start + chucksize > rbuffer.length) {
        System.arraycopy(rbuffer, start, val, 0, rbuffer.length - start);
    } else {
        System.arraycopy(rbuffer, start, val, 0, chucksize);
    }
    start += chucksize;
    os.write(val, 0, chucksize); // note: writes chucksize bytes even on the final, partial chunk
    os.flush();
}
Here is the C snippet which receives this data:
while ((bytes_received = recv(connected, rMsg, sizeof(rMsg), 0)) > 0) {
    if (bytes_received > 0) /* zero indicates end of transmission */
    {
        /* get length of message (2 bytes) */
        tmpVal = 0;
        tmpVal |= rMsg[idx++];
        tmpVal = tmpVal << 8;
        tmpVal |= rMsg[idx++];
        msg_len = tmpVal;
        len = msg_len;
        printf("length of following message from header message : %d\n", len);

        char echoBuffer[RCVBUFSIZE];
        memset(echoBuffer, 0, RCVBUFSIZE);
        int recvMsgSize = 0;
        plain = (char *)malloc(len + 1);
        if (!plain)
        {
            fprintf(stderr, "Memory error!");
        }
        for (i = RCVBUFSIZE; i < (len + RCVBUFSIZE); i = i + RCVBUFSIZE) {
            if (i >= len) {
                recvMsgSize = recv(connected, echoBuffer, (len - (i - RCVBUFSIZE)), 0);
                memcpy(&plain[k], echoBuffer, recvMsgSize);
                k = k + recvMsgSize;
            }
            else {
                recvMsgSize = recv(connected, echoBuffer, RCVBUFSIZE, 0);
                memcpy(&plain[k], echoBuffer, recvMsgSize);
                k = k + recvMsgSize;
            }
        }
    } /* closing if */
} /* closing while */

First of all, there is no such thing as packet loss at the application level in TCP/IP: the protocol was designed to deliver a stream of bytes reliably and in the correct order, retransmitting lost segments on its own. So the problem must be in your application or on the other side.
I am not really in the mood to analyze this whole arraycopy() madness (C, anyone?), but why aren't you just sending the whole rbuffer in one go through a BufferedOutputStream?
OutputStream os = new BufferedOutputStream(mySocket.getOutputStream());
and then:
os.write(rbuffer);
Believe me, BufferedOutputStream does exactly the same thing (collecting bytes into chunks and sending them in one go). Or maybe I am missing something?
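For completeness, here is a minimal, self-contained sketch of that suggestion, assuming the hdr and rbuffer arrays from the question (sendAll is a hypothetical helper, not code from the original post):

import java.io.BufferedOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.net.Socket;

// Sends the 8-byte header and the whole payload through one buffered stream.
static void sendAll(String host, int port, byte[] hdr, byte[] rbuffer) throws IOException {
    try (Socket socket = new Socket(host, port);
         OutputStream os = new BufferedOutputStream(socket.getOutputStream())) {
        os.write(hdr, 0, 8);  // length header first
        os.write(rbuffer);    // then the entire payload; no manual chunking
        os.flush();           // push out anything still sitting in the buffer
    }
}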

I changed the C side program in the following way and it now works:
printf("length of following message from header message : %d\n", len);
plain=(char *)malloc(len+1);
if (!plain)
{
fprintf(stderr, "Memory error!");
}
memset(plain, 0, len+1);
int remain = len;
k= 0;
while (remain){
int toGet = remain > RCVBUFSIZE ? RCVBUFSIZE : remain;
remain -= toGet;
int recvd = 0;
while(recvd < toGet) {
if((recvMsgSize = recv(connected, echoBuffer, toGet-recvd, 0)) < 0){
printf("error receiving data\n");
}
memcpy(&plain[k], echoBuffer, recvMsgSize);
k += recvMsgSize;
printf("Total data accumulated after recv input %d\n", k);
recvd += recvMsgSize;
}
}
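For reference, the same read-until-count idea is built into Java's DataInputStream.readFully(), so a Java receiver for this framing could be sketched like this (the two-byte big-endian length inside an 8-byte header mirrors what the C code above extracts; receiveMessage is a hypothetical name):

import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

// Reads one length-prefixed message; readFully() loops internally until
// exactly the requested number of bytes has arrived, or throws EOFException.
static byte[] receiveMessage(InputStream in) throws IOException {
    DataInputStream din = new DataInputStream(in);
    byte[] hdr = new byte[8];
    din.readFully(hdr);                                 // 8-byte header
    int len = ((hdr[0] & 0xFF) << 8) | (hdr[1] & 0xFF); // length in the first two bytes
    byte[] payload = new byte[len];
    din.readFully(payload);                             // blocks until all len bytes are in
    return payload;
}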

Related

Why is the socket connection dropped between Android and Linux?

In my case, the Android app should be regarded as the server and the Linux machine as the client.
I just send data from Linux to Android, 4096 bytes per send.
The log shows that Linux sends all the data successfully.
Now to the server side, i.e. Android...
The server receives data 4096 bytes per receive, but a socket error occurred because the read(...) function returned -1.
Here is my code:
On Linux, with C++:
auto size = static_cast<int>(buffer.size()); // buffer holds the data to send
auto bytes_send = 0, bytes = 0;
printf("target. data size needed to send: %d\n", size);
int single = 4096;
while (bytes_send < size) {
    int remain = size - bytes_send;
    if (remain < single) {
        bytes = send(socket_client_fd, &buffer[bytes_send], remain, 0);
    } else {
        bytes = send(socket_client_fd, &buffer[bytes_send], single, 0);
    }
    if (bytes < 0) {
        std::cerr << "Failed to send data" << std::endl;
        return;
    }
    bytes_send += bytes;
    printf("This remain: %d; We send %d bytes; Totally %d bytes sent;\n", remain, bytes, bytes_send);
}
And the server, in Java:
// the size to receive was obtained earlier
byte[] bytes = new byte[size];
int bytes_recved = 0;
int single = 4096;
while (bytes_recved < size) {
    int remain = size - bytes_recved;
    int read = -1;
    if (remain < single)
        read = inputStream.read(bytes, bytes_recved, remain);
    else
        read = inputStream.read(bytes, bytes_recved, single);
    if (read < 0) {
        Log.i(TAG, "received failed, less than 0 bytes " + read);
        break;
    }
    bytes_recved += read;
    Log.i(TAG, "received: " + bytes_recved + " ; receive this time: " + read);
}
The client sends all the data completely, but the server fails to receive it; judging by the server log it seems the data was just lost! I think my code is right, so why does it fail?
Here is the log of the server:
[Screenshot of the Android server log omitted.]
The logical mistake is this:
if(readed < 0)
It should be:
if(read < 0)
I advise you to use better variable names; the past participle of read is read, not readed.

C++ socket send to Java server hangs after some hours

My C++ program, the client, connects to the Java server. From the time of connection establishment, the C++ client sends a block of data of size ~1 MB to 3 MB at a fixed frequency (say, every 10 seconds).
My Java server opens a socket
Socket client = new ServerSocket(14001, 10).accept(); // blocking
ReceiveThread st = new ReceiveThread(client);
and receives the data from the client as below.
private String getDataFromSocket(BufferedReader reader) throws IOException
{
    int byteLimit = 1024 * 1024 * 2; // 2 MB
    String output = "";
    char[] charArray = null;
    int availableSize = is.available(); // "is" is presumably the socket's InputStream (a field)
    if (availableSize < 1) // if available size is 0, just return empty
    {
        return output;
    }
    while (availableSize > byteLimit) // reads 2 MB max if available size is more than 2 MB
    {
        charArray = new char[byteLimit];
        reader.read(charArray, 0, charArray.length);
        output += new String(charArray);
        availableSize = is.available();
    }
    charArray = new char[availableSize];
    reader.read(charArray, 0, charArray.length);
    output = output + new String(charArray);
    return output;
}
The getDataFromSocket method above keeps checking for available data until the socket is closed gracefully.
The C++ client connects to the Java server:
void CreateSocket()
{
    int err, nRet = 0;
    sockfd = 0;
    WORD wVersionRequested;
    WSADATA wsaData;
    //WSACleanup(); // Is this needed?
    wVersionRequested = MAKEWORD(1, 1);
    while (1)
    {
        err = WSAStartup(wVersionRequested, &wsaData);
        if (err != 0)
        {
            Sleep(50);
            continue;
        }
        else
        {
            break;
        }
    }
    while (1)
    {
        sockfd = socket(AF_INET, SOCK_STREAM, 0);
        if (sockfd == -1 || sockfd == INVALID_SOCKET)
        {
            Sleep(50);
            continue;
        }
        else
        {
            nRet = 1;
            break;
        }
    }
}
void ConnectWithServer()
{
    int nRet = 0;
    char myname[256] = { 0 };
    int wsaErr = 0, portNum = 0, retryCount = 0;
    struct hostent *h = NULL;
    struct sockaddr_in server_addr;
    gethostname(myname, 256);
    portNum = 1401;
    while (1)
    {
        if ((h = gethostbyname(myname)) != NULL)
        {
            memset(&server_addr, 0, sizeof(struct sockaddr_in));
            memcpy((char *)&server_addr.sin_addr, h->h_addr, h->h_length);
            server_addr.sin_family = AF_INET;
            server_addr.sin_port = htons(portNum);
            server_addr.sin_addr = *((struct in_addr *) h->h_addr);
        }
        if (0 == connect(sockfd, (struct sockaddr *)&server_addr, sizeof(server_addr)))
        {
            nRet = 1;
            break;
        }
        Sleep(50);
    }
}
The connection to the server is established by the two functions above, and they return success. After these steps I send the data buffer to the Java server once every 10 seconds.
while(index<retryCount)
{
string toSend = wstring_to_utf8(sRequestData);
nRet = send(sockfd, toSend.c_str(), toSend.length(), 0);
if (nRet == SOCKET_ERROR)
{
wsaErr = WSAGetLastError();
Sleep(DEFAULT);
index++;
}
else if(nRet == toSend.length())
{
break;
}
else
{
index = 0;
}
}
The problem: after some hours of this send/receive traffic, send() on the C++ side hangs indefinitely; execution never returns from the send() call. If, while it is hung, I abruptly close the Java server, send() returns a socket error and everything works well again for some hours until the next hang.
As mentioned, I keep sending data of 1 MB to 3 MB to the server once every ten seconds. What could be the issue here? How can I sort it out?

Write int from Java client to C server over socket

I thought it might be byte ordering but it doesn't look like it.
I am not sure what else it could be.
Java client, on Linux:
private static final int CODE = 0;
Socket socket = new Socket("10.10.10.10", 50505);
DataOutputStream output = new DataOutputStream(socket.getOutputStream());
output.writeInt(CODE);
C server, also on Linux:
int sd = createSocket();
int code = -1;
int bytesRead = 0;
int result;
while (bytesRead < sizeof(int))
{
    result = read(sd, &code + bytesRead, sizeof(int) - bytesRead);
    bytesRead += result;
}
int ntolCode = ntohl(code); // test for byte order issue
printf("\n%i\n%i\n%i\n", code, ntolCode, bytesRead);
Which prints out:
-256
16777215
4
Not sure what else to try.
Solution
This solution is not intuitive in the least for me, but thanks for the downvotes anyway!
Java side
Socket socket = new Socket("10.10.10.10", 50505);
DataOutputStream out = new DataOutputStream(socket.getOutputStream());
int x = 123456;
ByteBuffer buff = ByteBuffer.allocate(4);
byte[] b = buff.order(ByteOrder.LITTLE_ENDIAN).putInt(x).array();
out.write(b);
C side
int sd = createSocket();
char buff[4];
int bytesRead = 0;
int result;
while (bytesRead < 4) {
    result = read(sd, buff + bytesRead, sizeof(buff) - bytesRead);
    if (result < 1) {
        return -1;
    }
    bytesRead += result;
}
/* mask each byte to avoid sign extension of char before shifting */
int answer = ((buff[3] & 0xFF) << 24 | (buff[2] & 0xFF) << 16 | (buff[1] & 0xFF) << 8 | (buff[0] & 0xFF));
I am still interested in a simpler solution if anyone has anything, preferably using BufferedWriter if that is possible.
The problem is here:
&code + bytesRead
This will increment the address of code in steps of 4 (sizeof code), not 1. You need a byte array, or some typecasting.
You forgot to design and implement a protocol! You wrote one piece of code that sends data in one format and another piece of code that receives data in an entirely different format. Decide on a format, document that format, then write code that sends in that format, then write code that receives in that format.
Do not skip the documentation step. That is the most important one. Document precisely what bytes will be used to communicate the information.
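For what it's worth, a simpler variant exists because DataOutputStream.writeInt() already writes in big-endian (network) byte order, so the C side only has to read four bytes and apply ntohl(). A minimal sketch of the Java side (sendCode, host, and port are illustrative names, not from the original post):

import java.io.DataOutputStream;
import java.io.IOException;
import java.net.Socket;

// Sends one int in network byte order; the C side can read the 4 bytes
// into an int-sized buffer and then call ntohl() on the assembled value.
static void sendCode(String host, int port, int code) throws IOException {
    try (Socket socket = new Socket(host, port);
         DataOutputStream out = new DataOutputStream(socket.getOutputStream())) {
        out.writeInt(code); // big-endian on the wire, matching ntohl()'s expectation
        out.flush();
    }
}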

Is it possible that when inputStream.available() != 0 the complete data has not been received?

I am using a BluetoothSocket on Android (in SPP mode). I send data like this:
Packet sent﹕ 0xAA 0x00 0x00 0x01 0x01 0x14 0x00 0x00 0xB6 0x34
and I get the response:
Packet received﹕ 0xAA 0x01 0x00 0x01 0x81 0x14 0x00 0x00 0x8F 0x34
But when I try to get a large response, I get the following error:
09-25 11:13:26.583 6442-6495E/AndroidRuntime﹕ FATAL EXCEPTION: Thread-1258
Process: es.xxx.xxxx, PID: 6442
java.lang.ArrayIndexOutOfBoundsException: length=178; index=178
The error is in:
public void receive(int command, byte[] data) {
    if (data.length != 0) {
        int device = data[1];
        int par = data[5];
        short sizeData = (short)(((data[6] & 0xFF) << 8) | (data[7] & 0xFF));
        byte[] datos = new byte[sizeData];
        for (int i = 0; i < sizeData; i++) {
            datos[i] = data[8 + i]; // the error occurs here
        }
        switch (command) {
            case RETURN_PING:
                break;
            case RETURN_MOUNT:
                ...
}
My method for reading the input data from Bluetooth is below (I implemented a manual timeout, following an answer on Stack Overflow):
public byte[] read() {
    try {
        int timeout = 0;
        int maxTimeout = 10; // 10 polls x 50 ms sleep = 0.5 s before giving up
        int available = 0;
        while ((available = in.available()) == 0 && timeout < maxTimeout) {
            timeout++;
            Thread.sleep(50);
        }
        receive = new byte[available];
        in.read(receive);
        return receive.clone();
    } catch (IOException e) {
        e.printStackTrace();
        if (socket != null) {
            close();
        }
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
    return null;
}
So, my question is: is it possible that when in.available() != 0 the complete data has not yet been received? (That would explain it: the receive method reads bytes 6 and 7, where the packet length is stored, but when it iterates over all the items it throws ArrayIndexOutOfBoundsException.)
The major problem in your read() is the incorrect computation of how many bytes must be read to get the whole packet. There are a few common ways to frame a data packet on a stream:
a. each packet carries a header with its length
b. each packet ends with a predefined delimiter, a kind of magic value like 0x00 (which then cannot appear in the data)
c. some other, more exotic schemes
As I see it, you use (a). Then you may use something like this:
/**
 * equivalent to java.io.DataInputStream.readFully
 */
public static void readFully(InputStream in, byte b[], int off, int len) throws IOException {
    if (len < 0) {
        throw new IndexOutOfBoundsException();
    }
    int n = 0;
    while (n < len) {
        final int count = in.read(b, off + n, len - n);
        if (count < 0) {
            throw new EOFException();
        }
        n += count;
    }
}

public static int readByte(byte[] bytes, int offset) {
    return ((int) bytes[offset]) & 0xFF;
}

public static short readShort(byte[] bytes, int offset) {
    return (short)
        (readByte(bytes, offset) << 8 |
         readByte(bytes, offset + 1));
}
I see your header consists of 8 bytes. Then I'd suggest doing the following:
byte[] header = new byte[8];
readFully(in, header, 0, header.length);
int device = readByte(header, 1);
int par = readByte(header, 5);
int sizeData = readShort(header, 6);
byte[] data = new byte[sizeData];
readFully(in, data, 0, sizeData);
// now we have the whole data
After years of development I still have no good idea what InputStream.available() is actually for :) To close a connection on a data-transmission timeout you could use
http://docs.oracle.com/javase/7/docs/api/java/net/Socket.html#setSoTimeout(int)
or, if that is not available, as in your case, a kind of timer:
http://developer.android.com/reference/java/util/Timer.html
(update a last-receive timestamp after each call to readFully and have the timer check the difference between the current time and that timestamp).
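A minimal sketch of that timer idea, assuming a lastReceived timestamp that the reading code updates after every successful readFully call (the field name, the close() call, and the timeout value are placeholders):

import java.util.Timer;
import java.util.TimerTask;

// Watchdog: closes the connection if no data has arrived for too long.
// The reading thread must do "lastReceived = System.currentTimeMillis();"
// after every successful readFully call.
volatile long lastReceived = System.currentTimeMillis();

void startWatchdog(final long timeoutMillis) {
    Timer timer = new Timer(true); // daemon thread, dies with the app
    timer.scheduleAtFixedRate(new TimerTask() {
        @Override
        public void run() {
            if (System.currentTimeMillis() - lastReceived > timeoutMillis) {
                close();  // the question's own close() method shuts the socket
                cancel(); // stop this watchdog task
            }
        }
    }, timeoutMillis, timeoutMillis);
}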
Is it possible that when inputStream.available() != 0 the complete data has not been received?
There is nothing in its Javadoc that says anything about 'complete data'. The Javadoc correctly states that it is a measure of how much data may be read without blocking.
It isn't:
a measure of the total length of the input stream
an indicator of message boundaries
an indicator of end of stream.
The Javadoc contains a specific warning about using its value to allocate a buffer ...
If you want a read timeout, use Socket.setSoTimeout().
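On a plain TCP socket that looks roughly like this (the 2000 ms value is only an example; note that BluetoothSocket offers no such option, which is why the timer approach above is needed there):

import java.io.InputStream;
import java.net.Socket;
import java.net.SocketTimeoutException;

Socket socket = new Socket("10.0.0.2", 2800); // example endpoint
socket.setSoTimeout(2000);                    // a blocked read() now waits at most 2 s
InputStream in = socket.getInputStream();
try {
    int n = in.read(new byte[1024]);
} catch (SocketTimeoutException e) {
    // no data arrived within 2 s; the socket itself is still usable
}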

Java SocketChannel Read Entire String

In my current project, I am trying to transmit a string from one computer to another, and after finding and learning from numerous examples I have managed to get a basic form of communication working.
The issue I am having is that if one computer sends a message that is too long, it seems to get broken up into multiple parts (roughly 3700 characters each), and my parsing method fails.
I am using a Selector to iterate through all of the channels. Here is the relevant code:
if (key.isReadable()) {
    // Get the channel and read in the data
    SocketChannel keyChannel = (SocketChannel) key.channel();
    ByteBuffer buffer = buffers.get(keyChannel);
    int length = 0;
    try {
        length = keyChannel.read(buffer);
    } catch (IOException ioe) {
        closeChannel(keyChannel);
    }
    if (length > 0) {
        buffer.flip();
        // Gather the entire message before processing
        while (buffer.remaining() > 0) {
            byte[] data = new byte[buffer.remaining()];
            buffer.get(data);
            fireReceiveEvent(keyChannel, data); // send the data on for processing
        }
        buffer.compact();
    } else if (length < 0) {
        closeChannel(keyChannel);
    }
}
How can I guarantee that the entire message (regardless of length) is read at once before passing it along?
After talking to numerous people who know more about this than I do: the issue turns out to be that with TCP it is impossible to know when an entire "message" has arrived, because there is no such thing as a message; TCP provides a two-way byte stream. The solution is to create your own protocol with its own definition of "message".
For my project, every message either starts with [ or { and ends with ] or }, depending on the starting character. I search through the received data, and if there is a complete message I grab it and pass it along to the handler; otherwise I skip the channel and wait for more data to arrive.
Here is the final version of my code that handles the message receiving.
if (key.isReadable()) {
    // Get the channel and read in the data
    SocketChannel keyChannel = (SocketChannel) key.channel();
    ByteBuffer buffer = buffers.get(keyChannel);
    int length = 0;
    try {
        length = keyChannel.read(buffer);
    } catch (IOException ioe) {
        key.cancel();
        closeChannel(keyChannel);
    }
    if (length > 0) {
        buffer.flip();
        // Gather the entire message before processing
        if (buffer.remaining() > 0) {
            byte[] data = new byte[buffer.remaining()];
            buffer.get(data);
            buffer.rewind();
            int index = 0;
            int i = 0;
            // Check for the beginning of a packet
            // [ = 91
            // ] = 93
            // { = 123
            // } = 125
            if (data[0] == 91 || data[0] == 123) {
                // The closing character we are looking for
                byte targetByte = (byte) (data[0] + 2);
                for (byte b : data) {
                    i += 1;
                    if (b == targetByte) {
                        index = i;
                        break;
                    }
                }
                if (index > 0) {
                    data = new byte[index];
                    buffer.get(data, 0, index);
                    fireReceiveEvent(keyChannel, data);
                }
            } else {
                for (byte b : data) {
                    i += 1;
                    if (b == 91 || b == 123) {
                        index = i;
                        break;
                    }
                }
                if (index > 0) {
                    data = new byte[index];
                    buffer.get(data, 0, index); // Drain the data that we don't want
                }
            }
        }
        buffer.compact();
    } else if (length < 0) {
        key.cancel();
        closeChannel(keyChannel);
    }
}
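As an aside, the same "define your own message boundary" idea is often simpler with a length prefix than with delimiters, because the payload then stays unrestricted. A hedged sketch of that variant for blocking streams (writeMessage/readMessage and the 4-byte big-endian length are assumptions, not part of the code above):

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Length-prefixed framing: each message is a 4-byte big-endian length
// followed by exactly that many payload bytes.
static void writeMessage(DataOutputStream out, byte[] msg) throws IOException {
    out.writeInt(msg.length); // frame header
    out.write(msg);           // frame body
    out.flush();
}

static byte[] readMessage(DataInputStream in) throws IOException {
    int len = in.readInt();   // blocks until the 4-byte header arrives
    byte[] msg = new byte[len];
    in.readFully(msg);        // blocks until the whole body arrives
    return msg;
}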
