C++ TCP Winsock Server receiving the same packet many times

I have written a very simple C++ server which I am connecting to from a Java application. The C++ server uses winsock2. I am sending UTF-8 encoded numbers to the server from my client, and on receipt of these numbers I would like the server to perform an action. However, my server seems to be receiving a series of numbers as one. At the moment the server polls for a new message every millisecond.
This is my C++ server code which receives the message:
bool receive()
{
    char buffer[1024];
    int inDataLength = recv(Socket, buffer, sizeof(buffer), 0);
    if (buffer[0] != '\0')
    {
        std::cout << "Client: ";
        std::cout << buffer;
        sendKey(string(buffer));
    }
    else if (inDataLength == 0) // Properly closed connection
    {
        std::cout << "Connection lost..\r\n";
        return false;
    }
    return true;
}
This is called within a loop like so:
while (receive())
{
    Sleep(1);
}
This is my Java client code to send a message, where out is an OutputStream obtained from socket.getOutputStream():
public void send(String msg)
{
    try {
        out.write(msg.getBytes("UTF8"));
        out.flush();
        Thread.sleep(100);
    } catch (SocketException e) {
        Global.error("Connection error..");
        //e.printStackTrace();
    } catch (UnsupportedEncodingException e) {
    } catch (IOException e) {
        Global.error("Never connected..\r\n");
    } catch (Exception e) {
        Global.error("Sending failed..\r\n");
    }
}
What I am getting is the server receiving, for example, the number 1, then 2, then 12, then 121, and so on, in no specific pattern, except that once the server starts receiving two numbers at once it never goes back to receiving only one. This is the only place in my Java code where anything is sent to the server, and I flush the stream after each message, so I think the issue is on my server, but I'm at a loss as to the problem.
Any help would be much appreciated.
Thanks.

You are forgetting the most important check:
int inDataLength = recv(Socket, buffer, sizeof(buffer), 0);
if (inDataLength == -1) // SOCKET_ERROR
{
    std::cerr << "receive error: " << WSAGetLastError() << std::endl;
    return false;
}
...
This actually might be the reason your loop took so much CPU time.

Given the sleeps, it seems the recv is not blocking. Take a look at Winsock recv() does not block. You need to check the return value for errors.
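Separately, TCP is a byte stream, not a message protocol, so several client writes can arrive in a single recv call; that alone explains seeing "12" or "121" at the server. The messages need framing. A minimal sketch of delimiter framing on the Java side, assuming the server splits its buffer on '\n' (the class and names here are illustrative, not from the question):

import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

// Appends '\n' after every message so the receiver can split the
// byte stream back into individual messages, however they coalesce.
final class FramedSender {
    private final OutputStream out;

    FramedSender(OutputStream out) {
        this.out = out;
    }

    void send(String msg) throws IOException {
        out.write((msg + "\n").getBytes(StandardCharsets.UTF_8));
        out.flush();
    }
}

On the C++ side the server would accumulate received bytes and only act on each complete line.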

Related

How to properly parse JSON objects in a Java+C++ TCP connection?

So I want to have a TCP connection between a Java client and a C++ server. Think of the client as an input device; the C++ server should receive JSON objects, parse them, and use them in a game.
It seems like the connection is established successfully, but 1) there is an error ("parse error - unexpected ''") when I try to parse the JSON objects (I'm using nlohmann's json), and 2) when I don't even call doStuff, i.e. just print out the buffer, only weird characters are printed.
I assume I messed up something in the sending/receiving of data (this is the first time I use C++), but I've lost two days and really can't figure it out!
In the Java client I have:
private void connect() {
    try {
        hostname = conn.getHostname();
        portnumber = conn.getPortNr();
        socket = new Socket(hostname, portnumber);
        out = new OutputStreamWriter(socket.getOutputStream());
        in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
    } catch (Exception e) {
        e.printStackTrace();
        Log.e(debugString, e.getMessage());
    }
}

public void sendMessage(String json) {
    try {
        //connect();
        out.write(json.length());
        Log.d(debugString, String.valueOf(json.length()));
        out.flush();
        out.write(json);
        out.flush();
        Log.d(debugString, json);
        in.read();
        this.close();
    } catch (Exception e) {
        e.printStackTrace();
        Log.e(debugString, e.getMessage());
    }
}
And in the C++ server:
void Server::startConnection() {
    if (listen(s, 1) != 0) {
        perror("Error on listen");
        exit(EXIT_FAILURE);
    }
    listen(s, 1);
    clilen = sizeof(cli_addr);
    newsockfd = accept(s, (struct sockaddr *) &cli_addr, &clilen);
    if (newsockfd < 0) {
        close(newsockfd);
        perror("Server: ERROR on accept");
        exit(EXIT_FAILURE);
    }
    puts("Connection accepted");
    int numbytes;
    char buffer[MAXDATASIZE];
    while (1)
    {
        numbytes = recv(s, buffer, MAXDATASIZE - 1, 0);
        buffer[numbytes] = '\0';
        //Here's where the weird stuff happens
        //cout << buffer;
        //doStuff(numbytes,buffer);
        if (numbytes == 0)
        {
            cout << "Connection closed" << endl;
            break;
        }
    }
}

bool Server::sendData(char *msg) {
    int len = strlen(msg);
    int bytes_sent = send(s, msg, len, 0);
    if (bytes_sent == 0) {
        return false;
    } else {
        return true;
    }
}

void Server::doStuff(int numbytes, char *buf) {
    json jdata;
    try {
        jdata.clear();
        jdata = nlohmann::json::parse(buf);
        if (jdata["type"] == "life") {
            life = jdata["value"];
            puts("json parsed");
        }
    } catch (const std::exception& e) {
        cerr << "Unable to parse json: " << e.what() << std::endl;
    }
}
Since your char buffer is showing weird characters after recv() on the C++ server, it seems to me the issue is a character-encoding mismatch between the Java client and the C++ server. To verify, check the numbytes returned by recv() on the C++ server; it should be greater than the number of characters in the JSON string on the Java client.
You are sending the low-order bits of the JSON length as a single character, but you never do anything with it at the receiver. This is almost certainly a mistake anyway: you shouldn't need to send the length, as JSON is self-describing.
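A minimal sketch of the client's sendMessage without the stray length write, assuming each connection carries exactly one JSON document so the half-close marks the end of the message (socket, out, in, debugString, and close() are the question's own fields and helpers):

public void sendMessage(String json) {
    try {
        out.write(json);
        out.flush();
        socket.shutdownOutput(); // half-close: the server sees EOF (recv returns 0) after the JSON
        in.read();               // wait for the server's reply
        this.close();
    } catch (Exception e) {
        e.printStackTrace();
        Log.e(debugString, e.getMessage());
    }
}

The server would then accumulate recv data in a loop until recv returns 0 and parse the buffer once. Note also that the server code shown calls recv(s, ...) on the listening socket rather than recv(newsockfd, ...) on the accepted one, which by itself would produce the weird output.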

Why TCP client can't detect server closed using write?

I am building an IM application. On the client side, my code looks like this (I use SocketChannel in blocking mode for historical reasons; I think that is not related to this problem):
try {
    LogUtil.info(TAG, this.label + " tryConnect, attempt = " + (3 - retry));
    clientChannel = SocketChannel.open();
    clientChannel.configureBlocking(true);
    clientChannel.socket().setSoTimeout(100);
    clientChannel.socket().setTrafficClass(0x10);
    clientChannel.socket().setTcpNoDelay(true);
    clientChannel.socket().setPerformancePreferences(3, 3, 1);
    clientChannel.socket().connect(address, 10000);
    LogUtil.info(TAG, this.label + " socket connected successfully");
    break;
} catch (AlreadyConnectedException ace) {
    LogUtil.info(TAG, label + " AlreadyConnectedException");
    break;
} catch (NotYetConnectedException ace) {
    LogUtil.info(TAG, label + " NotYetConnectedException");
    break;
} catch (SocketTimeoutException e) {
    LogUtil.info(TAG, label + " SocketTimeoutException");
    break;
} catch (Exception e) {
    clientChannel = null;
    throw new SocketConnectionException(label + ", exception = " + ThrowableUtil.stackTraceToString(e));
}
The problem is that sometimes, when I shut down the server, the client side keeps writing successfully (small chunks of data, less than 50 bytes in total). Only after about 3 minutes does the client side hit the write-fail exception.
Why doesn't the client side fail immediately after the server has been closed? How do I fix this problem? Maybe reduce the send buffer to 10 bytes?
EDIT
Here's how I actually write data:
public void writeXML(ByteBuffer buffer, int retry) {
    synchronized (writeLock) {
        if (retry < 0) {
            throw new SocketConnectionException(label + "Write Exception");
        }
        tryConnect(false);
        try {
            int written = 0;
            while (buffer.hasRemaining()) {
                // I think it should be an exception here after I closed the server
                written += clientChannel.write(buffer);
            }
            if (LogUtil.debug) {
                LogUtil.info(TAG, "\t successfully written = " + written);
            }
        } catch (Exception e) {
            e.printStackTrace();
            tryConnect(true);
            writeXML(buffer, --retry);
        }
    }
}
Because in between you and the peer application there are:
a socket send buffer
a TCP implementation
another TCP implementation
a socket receive buffer.
Normally when you write, the data just gets transferred into your socket send buffer and is sent on the wire asynchronously, so if there is going to be an error sending it, you won't find out straight away. You will only find out after TCP's retransmissions have failed for long enough (its internal send-timeout period) for it to decide that an error condition exists. The next write (or read) after that will get the error. That can be some minutes later.
It turns out that a read operation can detect an orderly close immediately (via @EJP's reply; it is different from a lost connection).
In my reading thread, I have this line:
int read = clientChannel.read(buffer);
When it returns -1, the server has shut down (a deliberate shutdown is different from the network being unreachable). I guess a write only needs to fill the TCP send buffer, so there's no way to detect a lost connection quickly from the writing side.
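For reference, a minimal sketch of a reader loop built on that observation, assuming clientChannel is a connected, blocking SocketChannel:

import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

// read() blocks until data arrives and returns -1 once the peer has
// closed its side of the connection in an orderly way.
static void readLoop(SocketChannel clientChannel) throws Exception {
    ByteBuffer buffer = ByteBuffer.allocate(4096);
    while (clientChannel.read(buffer) != -1) {
        buffer.flip();
        // ... hand the received bytes to the parser ...
        buffer.clear();
    }
    clientChannel.close(); // orderly shutdown detected
}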

My SerialPortEvent does not receive data using jSSC in a continuous loop

I have been trying to use serial communication with my Arduino Uno, using the library jSSC-2.6.0. I am using a SerialPortEvent listener to receive bytes from the serial port (Arduino) and store them in a linked list.
public synchronized void serialEvent(SerialPortEvent serialPortEvent) {
    if (serialPortEvent.isRXCHAR()) { // if we receive data
        if (serialPortEvent.getEventValue() > 0) { // if there is some existent data
            try {
                byte[] bytes = this.serialPort.readBytes(); // reading the bytes received on serial port
                if (bytes != null) {
                    for (byte b : bytes) {
                        this.serialInput.add(b); // adding the bytes to the linked list
                        // *** DEBUGGING *** //
                        System.out.print(String.format("%X ", b));
                    }
                }
            } catch (SerialPortException e) {
                System.out.println(e);
                e.printStackTrace();
            }
        }
    }
}
Now if I send individual data in a loop and don't wait for any response, the serialEvent usually prints the received bytes to the console.
But if I try to wait until there is some data in the linked list, the program just keeps on looping; the serialEvent never adds bytes to the LinkedList, and it doesn't even register any bytes being received.
This works, and the correct bytes are sent by the Arduino, received by serialEvent, and printed to the console:
while (true) {
    t.write((byte) 0x41);
}
But this method just gets stuck at this.available(), which returns the size of the LinkedList, as if no data is actually received from the Arduino or registered by the serialEvent:
public boolean testComm() throws SerialPortException {
    if (!this.serialPort.isOpened()) // if port is not open return false
        return false;
    this.write(SerialCOM.TEST); // SerialCOM.TEST = 0x41
    while (this.available() < 1)
        ; // we wait for a response
    if (this.read() == SerialCOM.SUCCESS)
        return true;
    return false;
}
I have debugged the program, and sometimes under the debugger it does work, but not always. Also, the program only gets stuck when I try to check whether there are bytes in the linked list, i.e. while (available() < 1). Otherwise, if I don't check, I eventually receive the correct response bytes from the Arduino.
Found the answer myself after wasting four hours. I was better off using the readBytes() method with a byteCount of 1 and a timeout of 100 ms, just to be on the safe side. The read method now looks like this:
private byte read() throws SerialPortException {
    try {
        // blocks until one byte arrives, or throws after 100 ms
        byte[] temp = this.serialPort.readBytes(1, 100);
        return temp[0];
    } catch (SerialPortTimeoutException e) {
        e.printStackTrace();
        throw new SerialPortException(this.serialPort.getPortName(),
                "SerialCOM : read()", "Can't read from Serial Port");
    }
}
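If the event listener is kept instead, the busy-wait in testComm() can be replaced by a blocking hand-off between the listener thread and the reader. A minimal sketch, assuming a LinkedBlockingQueue is shared between them (the class and method names here are illustrative):

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

final class SerialBuffer {
    // The jSSC event thread adds bytes; the reader blocks on poll()
    // instead of spinning on available().
    private final LinkedBlockingQueue<Byte> serialInput = new LinkedBlockingQueue<>();

    void onByteReceived(byte b) {        // called from serialEvent()
        serialInput.add(b);
    }

    Byte read(long timeoutMs) throws InterruptedException {
        return serialInput.poll(timeoutMs, TimeUnit.MILLISECONDS); // null on timeout
    }
}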

Android, detect socket write failure

Basically, the server side sends a keep-alive message every 8 minutes; if the write fails, it disconnects the client and closes the socket connection. If my Android device is awake and the server closes the connection, the read operation on the device throws an exception as it should, and I disconnect from the server. If the device is asleep, it doesn't read data at all, even with a partial wake lock and a WifiLock; I have already given up on that. My actual problem: when my device comes back from sleep (if I turn the screen on, for example), I send a message to the server so I can refresh the data, but if the server has already closed the socket, my write operation should throw an IOException, yet for some reason it doesn't. Even the blocking read I have doesn't throw any exception or return -1.
Here is my write operation:
public boolean sendData(byte[] data)
{
    boolean sent = false;
    if (connectedToServer)
    {
        try
        {
            myOutputStream.write(data, 0, data.length);
            sent = true;
        }
        catch (IOException e)
        {
            e.printStackTrace();
            unexpectedDisconnectionFromServer();
        }
    }
    return sent;
}
and here is my read operation:
public void startReadingInBackground()
{
    while (connectedToServer)
    {
        try
        {
            if (myWifiLock != null && !myWifiLock.isHeld())
                myWifiLock.acquire();
            // read as an int first so EOF (-1) is not confused with the byte 0xFF
            int first = myInputStream.read();
            myWakeLock.acquire();
            if (first == -1)
            {
                unexpectedDisconnectionFromServer();
                return;
            }
            byte val = (byte) first;
            int bytesRead = myInputStream.read(myBuffer, 0, bufferSize);
            if (bytesRead < 1)
            {
                unexpectedDisconnectionFromServer();
                return;
            }
            byte[] dataArray = Arrays.copyOfRange(myBuffer, 0, bytesRead);
            ByteBuffer data = ByteBuffer.allocate(bytesRead + 1).put(val).put(dataArray);
            myParent.invokeReceiveAction(data, bytesRead + 1);
        }
        catch (IOException e)
        {
            if (!myWakeLock.isHeld())
                myWakeLock.acquire();
            unexpectedDisconnectionFromServer();
            e.printStackTrace();
        }
        finally
        {
            if (myWifiLock != null && myWifiLock.isHeld())
                myWifiLock.release();
            if (myWakeLock != null && myWakeLock.isHeld())
                myWakeLock.release();
        }
    }
}
and I get the output stream like so:
Socket mySocket = new Socket(SERVER_IP, SERVER_PORT_TCP );
myOutputStream=mySocket.getOutputStream();
Your write will throw an IOException, eventually. Your mistake is in assuming it is bound to happen on the first write after the disconnect. It won't, for all sorts of reasons including buffering and retries. TCP has to determine that the connection is really dead before it will reject a new write, and it certainly won't do that on the first write after the disconnect.
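Since the server already sends a keep-alive every 8 minutes, one way to bound the detection time on the client is a read timeout slightly longer than the keep-alive interval, treating silence as a dead connection. A minimal sketch (the timeout value and method name are illustrative):

import java.io.IOException;
import java.io.InputStream;
import java.net.Socket;
import java.net.SocketTimeoutException;

// Treats prolonged silence as a dead link instead of waiting
// for a write to fail.
static void watchConnection(Socket socket) throws IOException {
    socket.setSoTimeout(9 * 60 * 1000); // a bit longer than the 8-minute keep-alive
    InputStream in = socket.getInputStream();
    try {
        int b;
        while ((b = in.read()) != -1) {
            // ... handle incoming data, including keep-alive bytes ...
        }
        // read() returned -1: the server closed the connection cleanly
    } catch (SocketTimeoutException e) {
        // no keep-alive arrived in time: assume the connection is dead
    } finally {
        socket.close();
    }
}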

How to manage lots of incoming packets

I have a socket server set up with a remote client, and it is functional. Upon opening the client and logging in, I noticed that sometimes there is an error that seems to be due to the client reading an int when it shouldn't.
Upon logging on, the server sends a series of messages/packets to the client, and these are anything from string messages to information used to load variables on the client's side.
Occasionally, while logging in, an error gets thrown showing that the client has read a packet of size 0 or of a very large size. Upon converting the large size to ASCII, I once found that it was a fragment of a string, "sk." (I located this string in my code, so it's not entirely random).
Looking at my code, I'm not sure why this is happening. Is it possible that the client is reading an int at the wrong time? If so, how can I fix this?
InetAddress address = InetAddress.getByName(host);
connection = new Socket(address, port);
in = new DataInputStream(connection.getInputStream());
out = new DataOutputStream(connection.getOutputStream());
String process;
System.out.println("Connecting to server on " + host + " port " + port + " at " + timestamp);
process = "Connection: " + host + "," + port + "," + timestamp + ". Version: " + version;
write(0, process);
out.flush();

while (true) {
    int len = in.readInt();
    if (len < 2 || len > 2000) {
        throw new Exception("Invalid Packet, length: " + len + ".");
    }
    byte[] data = new byte[len];
    in.readFully(data);
    for (Byte b : data) {
        System.out.printf("0x%02X ", b);
    }
    try {
        reader.handlePackets(data);
    } catch (Exception e) {
        e.printStackTrace();
        //connection.close();
        //System.exit(0);
        //System.out.println("Exiting");
    }
}
Here is the code for my write function (server side):

public static void write(Client c, Packet pkt) {
    for (Client client : clients) {
        if (c.equals(client)) {
            try {
                out.writeInt(pkt.size());
                out.write(pkt.getBytes());
                out.flush();
            } catch (IOException ex) {
                ex.printStackTrace();
            }
        }
    }
}
So looking at the write function, I don't really see how it could be confusing the client and making it read the size of the packet twice for one packet (at least, that's what I think is happening).
If you need more information, please ask.
The client side code looks fine, and the server side code looks fine too.
The most likely issue is that this is some kind of issue with multi-threading and (improper) synchronization. For example, maybe two server-side threads are trying to write a packet to the same client at the same time.
It is also possible that your Packet class has inconsistent implementations of size() and getBytes(), or that one thread is modifying a Packet object while a second one is sending it.
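If concurrent writers are the cause, serializing writes per client keeps each length prefix contiguous with its payload on the stream. A minimal sketch, assuming each Client exposes its own DataOutputStream (the getOutputStream() accessor is hypothetical):

import java.io.DataOutputStream;
import java.io.IOException;

// Locking on the client ensures the 4-byte length and the payload
// are written as one uninterrupted unit, even with multiple threads.
public static void write(Client c, Packet pkt) {
    synchronized (c) {
        try {
            DataOutputStream out = c.getOutputStream(); // hypothetical accessor
            out.writeInt(pkt.size());
            out.write(pkt.getBytes());
            out.flush();
        } catch (IOException ex) {
            ex.printStackTrace();
        }
    }
}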
