I'm writing a server that is supposed to communicate with some embedded devices. The communication protocol is based on a fixed length header. The problem is I can't get my server to handle sudden disconnects of the devices properly (by "sudden" I mean situations when I just turn the device off). Here is the code for the client thread main loop:
while(!terminate) {
try {
// Receive the header
while(totalBytesRead < ServerCommon.HEADER_SIZE) {
bytesRead = dis.read(headerBuffer, bytesRead, ServerCommon.HEADER_SIZE - bytesRead);
if(bytesRead == -1) {
// Can't get here!
}
else {
totalBytesRead += bytesRead;
}
}
totalBytesRead = 0;
bytesRead = 0;
type = Conversion.byteArrayToShortOrder(headerBuffer, 0);
length = Conversion.byteArrayToShortOrder(headerBuffer, 2);
// Receive the payload
while(totalBytesRead < length) {
bytesRead = dis.read(receiveBuffer, bytesRead, length - bytesRead);
if(bytesRead == -1) {
// Can't get here!
}
else {
totalBytesRead += bytesRead;
}
}
totalBytesRead = 0;
bytesRead = 0;
// Pass received frame to FrameDispatcher
Even when I turn the device off, the read method keeps returning 0, not -1. How can this be?
When you close a socket normally, there's a sequence of messages exchanged between client and server to coordinate the shutdown (starting with a FIN from the closing end). In this instance that isn't happening (since you simply turn the device off), so the server is left wondering what has happened.
You may want to investigate configuring timeouts, or some sort of timed protocol that identifies a disconnect through absence of response (perhaps out-of-band heartbeats using ICMP/UDP?). Or would a connectionless protocol like UDP be of use for your communication?
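As a minimal sketch of the timeout idea (the 30-second window and the clean-up are illustrative assumptions, not code from the question): a read timeout turns a silently dead peer into an exception the server can act on.
import java.io.DataInputStream;
import java.io.IOException;
import java.net.Socket;
import java.net.SocketTimeoutException;
void readHeaderWithTimeout(Socket socket, byte[] headerBuffer) throws IOException {
    // Assumption: 30 s without data means the device is gone. Pick a window that matches
    // your protocol, or have the devices send periodic heartbeats so silence means "dead".
    socket.setSoTimeout(30_000);
    DataInputStream dis = new DataInputStream(socket.getInputStream());
    try {
        dis.readFully(headerBuffer); // each blocking read now fails after 30 s of silence
    } catch (SocketTimeoutException e) {
        socket.close(); // treat the device as disconnected and clean up the client thread
    }
}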
read() is supposed to return 0 only if the supplied length is 0. At end of stream it should return -1, and on error an exception should be thrown.
I suggest that you debug your server program first. Create a Java client application (it should be easy to do), kill the client, and see what happens. Even better, use two PCs and suddenly unplug one of them; this will simulate your situation more closely.
TCP has its own timeout for communication partners that are no longer reachable, but it can take considerably longer than 30 seconds. If you wait long enough, the blocked read should eventually fail (typically with an IOException rather than -1).
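One related option, offered here only as an aside: TCP keep-alive makes the OS probe otherwise idle connections, although the default probe timing is configured in the OS (often on the order of hours), so it rarely replaces an application-level timeout or heartbeat.
Socket client = serverSocket.accept();
client.setKeepAlive(true); // OS-level keep-alive probes; the interval is an OS setting, not a Java one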
I am working on a client application that sends transactions to a server over a TLS connection. The application sends a given number of bytes and receives 1182 bytes as a response. It had been working fine until I started increasing the number of transactions per second. After that, some response packets started arriving broken up and could not be fully received by the client in a single read. When I try to unwrap the packet content, it raises an exception and terminates the TLS session.
javax.net.ssl.SSLException: Unrecognized record version (D)TLS-0.0 , plaintext connection?
at java.base/sun.security.ssl.SSLEngineInputRecord.bytesInCompletePacket(SSLEngineInputRecord.java:98)
at java.base/sun.security.ssl.SSLEngineInputRecord.bytesInCompletePacket(SSLEngineInputRecord.java:64)
at java.base/sun.security.ssl.SSLEngineImpl.readRecord(SSLEngineImpl.java:544)
at java.base/sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:441)
at java.base/sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:420)
at java.base/javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:634)
at MyClass.handleEncryptedTransaction(MyClass.java:297)
I tried to use a buffer to accumulate the possibly fragmented packets; however, I cannot see the packet content before decrypting it, and can only estimate whether it is complete based on its size.
this.peerNetData.clear();
int bytesRead = socketChannel.read(this.peerNetData);
if (bytesRead == DEFAULT_TLS_HANDSHAKE_SIZE) {
handleEncryptedTransaction(socketChannel, engine);
} else if (bytesRead > 0) {
// TLS packet buffering
byte[] justRead = new byte[this.peerNetData.position()];
this.peerNetData.flip();
this.peerNetData.get(justRead);
this.incompleteTransactionsBuffer.put(justRead);
// DEFAULT_TLS_TRANSACTION_SIZE == 1182
if (this.incompleteTransactionsBuffer.position() >= DEFAULT_TLS_TRANSACTION_SIZE) {
this.incompleteTransactionsBuffer.flip(); // flipping to read mode
while (this.incompleteTransactionsBuffer.remaining() >= DEFAULT_TLS_TRANSACTION_SIZE) {
byte[] fullTransaction = new byte[DEFAULT_TLS_TRANSACTION_SIZE];
this.incompleteTransactionsBuffer.get(fullTransaction, 0, fullTransaction.length);
this.peerNetData.clear();
this.peerNetData.put(fullTransaction);
// This method uses this.peerNetData to unwrap the data
handleEncryptedTransaction(socketChannel, engine);
}
this.incompleteTransactionsBuffer.compact(); // wipe out bytes that had been read and free space of the buffer
}
}
Is there any way to check whether a TCP packet carrying TLS is complete? I tried to read the first 1182 bytes, but that doesn't seem to work. Interestingly, this code works when I get multiple packets in the response, such as N * 1182 bytes, where N varies from 2 to 7. Maybe I should wait for another socket read and get another piece of the information?
I suppose this problem occurs because of packet retransmissions caused by heavy traffic. Is there any other way to deal with TLS packet retransmissions in low-level socket connections in Java?
After getting comments and a better understanding of the TLS protocol, I was able to work out a solution by implementing a buffer and reading the exact size of each TLS record, so the client knows whether to wait for further TCP reads.
A TLS record "might be split into multiple TCP fragments or TCP fragments might also contain multiple TLS records in full, in half or whatever. These TCP fragments then might even be cut into multiple IP packets although TCP tries hard to avoid this." (from Determine packet size of TLS packet Java/Android). However, that post mentions the first 2 bytes, and that is not right. According to https://hpbn.co/transport-layer-security-tls/#tls-record-diagram:
Maximum TLS record size is 16 KB. Each record contains a 5-byte header, a MAC (up to 20 bytes for SSLv3, TLS 1.0, TLS 1.1, and up to 32 bytes for TLS 1.2), and padding if a block cipher is used. To decrypt and verify the record, the entire record must be available.
The TLS record length lies in the 3rd and 4th bytes of the header (counting from zero, right after the content type and protocol version):
The code ended up being like this:
protected synchronized void read(SocketChannel socketChannel, SSLEngine engine) throws IOException {
this.peerNetData.clear();
int bytesRead = socketChannel.read(this.peerNetData);
if (bytesRead > 0) {
// TLS records buffering
this.peerNetData.flip();
byte[] justRead = new byte[this.peerNetData.limit()];
this.peerNetData.get(justRead);
this.tlsRecordsReadBuffer.put(justRead);
byte[] fullTlsRecord;
// Process every TLS record available until buffer is empty or the last record is yet not complete
while ( (fullTlsRecord = this.getFullTlsRecordFromBufferAndDeleteIt()) != null ) {
this.peerNetData.clear();
this.peerNetData.put(fullTlsRecord);
handleEncryptedTransaction(socketChannel, engine);
}
} else if (bytesRead < 0) {
handleEndOfStream(socketChannel, engine);
}
}
private synchronized byte[] getFullTlsRecordFromBufferAndDeleteIt() {
byte[] result = null;
this.tlsRecordsReadBuffer.flip();
if (this.tlsRecordsReadBuffer.limit() > DEFAULT_TLS_HEADER_SIZE) {
// Read only the first 5 bytes (5 = DEFAULT_TLS_HEADER_SIZE) which contains TLS record length
byte[] tlsHeader = new byte[DEFAULT_TLS_HEADER_SIZE];
this.tlsRecordsReadBuffer.get(tlsHeader);
// read the 3rd and 4th bytes to get the TLS record length in big-endian notation
int tlsRecordSize = ((tlsHeader[3] & 0xff) << 8) | (tlsHeader[4] & 0xff);
// the & 0xff is necessary to treat each signed byte as an unsigned value
// Set position back to the beginning
this.tlsRecordsReadBuffer.position(0);
if (this.tlsRecordsReadBuffer.limit() >= (tlsRecordSize + DEFAULT_TLS_HEADER_SIZE)) {
// Then we have a complete TLS record
result = new byte[tlsRecordSize + DEFAULT_TLS_HEADER_SIZE];
this.tlsRecordsReadBuffer.get(result);
}
}
// remove record and get back to write mode
this.tlsRecordsReadBuffer.compact();
return result;
}
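As an aside (not part of the solution above): SSLEngine can itself signal an incomplete record. When unwrap() is given a partial record it returns a result with status BUFFER_UNDERFLOW, which can be used instead of parsing the length out of the header manually. A rough sketch, where peerAppData is an assumed destination buffer for the decrypted bytes:
peerNetData.flip(); // peerNetData holds whatever has been read from the channel so far
SSLEngineResult result = engine.unwrap(peerNetData, peerAppData);
peerNetData.compact(); // keep any unconsumed bytes for the next read
if (result.getStatus() == SSLEngineResult.Status.BUFFER_UNDERFLOW) {
    // not a complete TLS record yet: read more from the socket channel and try again
}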
I've written client/server code using a Java server socket (TCP).
The server works as a radio: it listens to the mic and sends the bytes to connected clients.
When I run the code using "localhost" as the server name, it works very well, and I can hear the voice in the speakers without any issues.
Now, when I want to expose localhost to the internet using ngrok:
Forwarding tcp://0.tcp.ngrok.io:11049 -> localhost:5000
I start to get the exception below on the client side:
java.lang.IllegalArgumentException: illegal request to write non-integral number of frames (1411 bytes, frameSize = 2 bytes)
at com.sun.media.sound.DirectAudioDevice$DirectDL.write(Unknown Source)
at client.Client.Start(Client.java:79)
at client.Receiver.main(Receiver.java:17)
Does anyone know why this happens, and how I can fix it?
I tried to change the byte array length.
//server code
byte _buffer[] = new byte[(int) (_mic.getFormat().getSampleRate() *0.4)];
// byte _buffer[] = new byte[1024];
_mic.start();
while (_running) {
// returns the length of data copied in buffer
int count = _mic.read(_buffer, 0, _buffer.length);
//if data is available
if (count > 0) {
server.SendToAll(_buffer, 0, count);
}
}
// client code where exception happens:
_streamIn = _server.getInputStream();
_speaker.start();
byte[] data = new byte[8000];
System.out.println("Waiting for data...");
while (_running) {
// checking if the data is available to speak
if (_streamIn.available() <= 0)
continue; // data not available so continue back to start of loop
// count of the data bytes read
int readCount= _streamIn.read(data, 0, data.length);
if(readCount>0){
_speaker.write(data, 0, readCount); // here throws exception
}
}
It should play the sound through the speaker.
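The exception says the write must cover a whole number of frames, and the stack trace shows a 1411-byte write against a 2-byte frame size. A minimal sketch of frame-aligned writes (an illustration, not a confirmed fix: the frame size is taken from the speaker's AudioFormat, and any partial frame is carried into the next read):
int frameSize = _speaker.getFormat().getFrameSize(); // e.g. 2 bytes per frame
byte[] data = new byte[8000];
int leftover = 0; // bytes held back from the previous read (a partial frame)
while (_running) {
    int readCount = _streamIn.read(data, leftover, data.length - leftover);
    if (readCount < 0) break;
    int available = leftover + readCount;
    int writable = (available / frameSize) * frameSize; // round down to whole frames
    if (writable > 0) {
        _speaker.write(data, 0, writable);
    }
    leftover = available - writable;
    System.arraycopy(data, writable, data, 0, leftover); // keep the tail for the next pass
}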
I am implementing a simple Java UDP socket program. Here are the details:
Server side: suppose I create 2500 packets on the server side. I first inform the client that I'm going to send 2500 packets and that each packet is packetSize bytes; then, in a loop, each packet is created and sent.
Client side: after being informed of the number of packets, I wait in a for (or while) loop for 2500 packets to be received.
Here is the problem:
The loop on the client side never ends! That means the 2500 packets are never all received, although I checked the server side and it has sent them all.
I tried setting the socket's receive buffer size to 10 * packetSize using this:
socket.setReceiveBufferSize(10 * packetSize)
but it does not work.
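(As an aside, not from the original code: setReceiveBufferSize is only a hint to the OS, which may silently cap the value, so it is worth checking what was actually granted.)
socket.setReceiveBufferSize(10 * packetSize);
// the value actually applied can be smaller than requested; print it to verify
System.out.println("Receive buffer granted: " + socket.getReceiveBufferSize() + " bytes");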
How do you think I could solve this problem? I know UDP is not reliable, but the client and server are running on different ports of the same computer!
Here is the code for server side:
for (int i = 0; i < packets; i++) {
byte[] currentPacket = new byte[size];
byte[] seqnum = intToByteArray(i);
currentPacket[0] = seqnum[0];
currentPacket[1] = seqnum[1];
currentPacket[2] = seqnum[2];
currentPacket[3] = seqnum[3];
for (int j = 0; j < size-4; j++) {
currentPacket[j+4] = finFile[i][j];
}
sendPacket = new DatagramPacket(currentPacket, currentPacket.length, receiverIP, receiverPort);
socket.send(sendPacket);
try {
Thread.sleep(2);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
and the client side:
int k = 0;
while (true) {
receivedBytes = new byte[size];
receivePacket = new DatagramPacket(receivedBytes, size);
socket.receive(receivePacket);
allBytes.add(receivePacket.getData());
k++;
if (k == packets)
break;
}
allBytes is just a linked list containing the received byte arrays; I use it to reassemble the final file.
P.S. This code works perfectly for files under 100 MB.
Thanks
Update:
tl;dr summary: Either packets is not properly initialized, or you should use TCP, or you should add a sequence number to your UDP packets so that your client knows when it has dropped a packet and you can write code to handle that (request a rebroadcast). That essentially makes it a rudimentary TCP anyway.
I have a suspicion that you never initialized packets, so you never hit your break. Rewriting your while loop as a for loop makes it easy to check whether this is true. Assuming the first packet you send tells the client how many packets it will be receiving, and you initialize packets correctly, then if packets are being lost your client-side program will never end, since receive() is a blocking method.
If you strongly suspect that your packets are being lost, then debug your client side and see how many received packets are in your LinkedList and compare that against how many are sent on the server side.
for(int i = 0; i < packets; i++) {
receivedBytes = new byte[size];
receivePacket = new DatagramPacket(receivedBytes, size);
socket.receive(receivePacket);
allBytes.add(receivePacket.getData());
}
System.out.println("All " + packet + " received.");
Switching to the code above will tell you that if you never reach the print statement, you are losing packets: receive() is a blocking method, so your client side is stuck in the for loop. The loop can't finish because, if the server sends 2500 packets but the client only receives 2300, the client will still be sitting at the receive() line waiting for packet 2301, 2302, and so on.
Since you have a file upwards of 100 MB that needs to be reassembled, I assume you can't tolerate loss, so either use TCP, which fulfills that requirement, or handle the possibility in your own code by adding a header to each packet. This header can be as simple as an incrementing sequence number that the client reads; if it skips a number from the previous packet, the client knows a packet was lost and can ask the server to rebroadcast that specific packet. But at that point you have just implemented your own crude TCP.
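A minimal sketch of the client side of that idea (it relies on the 4-byte sequence number the server loop above already writes; the big-endian decoding is an assumption about what intToByteArray produces):
int expectedSeq = 0;
for (int i = 0; i < packets; i++) {
    byte[] receivedBytes = new byte[size];
    DatagramPacket receivePacket = new DatagramPacket(receivedBytes, size);
    socket.receive(receivePacket);
    // decode the 4-byte sequence number written by the server (assumed big-endian)
    int seq = ((receivedBytes[0] & 0xff) << 24) | ((receivedBytes[1] & 0xff) << 16)
            | ((receivedBytes[2] & 0xff) << 8) | (receivedBytes[3] & 0xff);
    if (seq != expectedSeq) {
        // a gap: packets expectedSeq .. seq-1 were lost; ask the server to resend them
    }
    expectedSeq = seq + 1;
    allBytes.add(receivedBytes);
}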
Hi, I have created a server socket for reading a byte array from a socket using getInputStream(), but read() is not returning after the end of the data is reached. Below is my code.
class imageReciver extends Thread {
private ServerSocket serverSocket;
InputStream in;
public imageReciver(int port) throws IOException
{
serverSocket = new ServerSocket(port);
}
public void run()
{
    try {
        Socket server = serverSocket.accept();
        in = server.getInputStream();
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        byte buffer[] = new byte[1024];
        while (true) {
            int s = in.read(buffer); // Not exiting from here
            if (s < 0) break;
            baos.write(buffer, 0, s);
        }
        server.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
}
From the client, if I send 2048 bytes, the line in.read(buffer) should return -1 after reading two times, but it waits there for a third read. How can I solve this?
Thanks in advance....
The client will need to close the connection, basically. If you're trying to send multiple "messages" over the same connection, you'll need some way to indicate the size/end of a message - e.g. length-prefixing or using a message delimiter. Remember that you're using a stream protocol - the abstraction is just that this is a stream of data; it's up to you to break it up as you see fit.
See the "network packets" section of Marc Gravell's IO blog post for more information.
EDIT: Now that we know that you have an expected length, you probably want something like this:
int remainingBytes = expectedBytes;
while (remainingBytes > 0) {
int bytesRead = in.read(buffer, 0, Math.min(buffer.length, remainingBytes));
if (bytesRead < 0) {
throw new IOException("Unexpected end of data");
}
baos.write(buffer, 0, bytesRead);
remainingBytes -= bytesRead;
}
Note that this will also avoid overreading, i.e. if the server starts sending the next bit of data, we won't read into that.
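For completeness, a minimal sketch of the length-prefixing idea mentioned above (names are illustrative, and the original code does not show where expectedBytes comes from): the sender writes a 4-byte length before the payload, and the receiver reads that length first.
// sender side
DataOutputStream out = new DataOutputStream(socket.getOutputStream());
out.writeInt(payload.length); // 4-byte length prefix
out.write(payload);
out.flush();
// receiver side
DataInputStream in = new DataInputStream(server.getInputStream());
int expectedBytes = in.readInt(); // read the 4-byte length prefix
byte[] message = new byte[expectedBytes];
in.readFully(message); // blocks until the whole message has arrived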
If I send 2048 bytes, the line 'in.read(buffer)' should return -1 after reading two times.
You are mistaken on at least two counts here. If you send 2048 bytes, the line 'in.read(buffer)' should execute an indeterminate number of times, to read a total of 2048 bytes, and then block. It should only return -1 when the peer has closed the connection.
I am sending data to a server in two steps:
1) The length of what I will send, in a byte[4]
2) The data.
The server reads exactly that many bytes of data (the length is shipped first) and then replies.
So I listen to the InputStream and try to get the data.
My Problem:
Whatever I do, I only receive the data I sent, but the server definitely sends back a new string.
It seems I cannot wait for a -1 (end of stream), as the program would time out, and I am sure the server does not send anything of the kind.
Therefore I am using inputStream.available() to find out how many bytes are left in the buffer.
Once I call inputStream.read() after reading all the available data, it times out with "Network idle timeout".
But I need to listen to the inputStream to make sure I am not missing information.
Why am I only receiving the information I send and not what is sent by the server?
How can I listen to the connection for new items coming in?
Here is my code:
private void sendData (byte[] sendBytes){
try {
os.write(sendBytes);
os.flush();
} catch (IOException ex) {
}
}
Please help
THD
This is how you normally read all data from a reader (until the other end closes):
//BufferedReader is
StringBuilder data = new StringBuilder();
char[] buffer = new char[1024 * 32];
int len = 0;
while ((len = is.read(buffer)) != -1) {
data.append(buffer, 0, len);
}
//at this point, data contains everything received from the server
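If the server frames its reply the same way the question's protocol frames requests (a 4-byte length prefix followed by the data), the client can instead read exactly that many bytes, with no polling of available() and no waiting for -1. A sketch under that assumption, where socket stands for the connected Socket used by sendData():
DataInputStream in = new DataInputStream(socket.getInputStream());
int replyLength = in.readInt(); // 4-byte length prefix (readInt assumes big-endian)
byte[] reply = new byte[replyLength];
in.readFully(reply); // blocks until the full reply has arrived, no busy-waiting on available()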