I am implementing a simple Java UDP socket program. Here are the details:
Server side: suppose I create 2500 packets on the server side, then inform the client that I am going to send 2500 packets, each packetSize bytes long. Then, in a loop, each packet is created and sent.
Client side: after being informed of the number of packets, I wait in a for (or while) loop until all 2500 packets have been received.
Here is the problem:
The loop on the client side never ends! That means the 2500 packets are never all received, although I checked the server side and it has sent them all.
I tried setting the socket's receive buffer size to 10 * packetSize using this:
socket.setReceiveBufferSize(10 * packetSize)
but it does not work.
How do you think I could solve this problem? I know UDP is not reliable, but both the client and the server are running on different ports of the same computer!
Here is the code for server side:
for (int i = 0; i < packets; i++) {
    byte[] currentPacket = new byte[size];
    byte[] seqnum = intToByteArray(i);
    currentPacket[0] = seqnum[0];
    currentPacket[1] = seqnum[1];
    currentPacket[2] = seqnum[2];
    currentPacket[3] = seqnum[3];
    for (int j = 0; j < size - 4; j++) {
        currentPacket[j + 4] = finFile[i][j];
    }
    sendPacket = new DatagramPacket(currentPacket, currentPacket.length, receiverIP, receiverPort);
    socket.send(sendPacket);
    try {
        Thread.sleep(2);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
}
and the client side:
int k = 0;
while (true) {
    receivedBytes = new byte[size];
    receivePacket = new DatagramPacket(receivedBytes, size);
    socket.receive(receivePacket);
    allBytes.add(receivePacket.getData());
    k++;
    if (k == packets)
        break;
}
allBytes is just a linked list containing the received bytes; I use it to reassemble the final file.
P.S. This code works perfectly for files under 100 MB.
Thanks
Update:
tl;dr summary: either packets is not properly initialized, or you should use TCP, or add a sequence number to each UDP packet so the client knows when it has dropped one and you can write code to handle that (request a rebroadcast). That essentially makes it a rudimentary TCP anyway.
I have a suspicion that you never initialized packets, so you never hit your break. Rewriting your while into a for loop makes it easy to check whether that is true. Assuming the first packet you send contains how many packets the client will receive, and you initialize packets correctly, then if packets are being lost your client-side program will never end, since receive() is a blocking method.
If you strongly suspect that packets are being lost, debug your client side and compare how many received packets are in your LinkedList against how many were sent on the server side.
for (int i = 0; i < packets; i++) {
    receivedBytes = new byte[size];
    receivePacket = new DatagramPacket(receivedBytes, size);
    socket.receive(receivePacket);
    allBytes.add(receivePacket.getData());
}
System.out.println("All " + packets + " received.");
Switching to the code above tells you that if you never reach the print statement, you are losing packets: receive() is a blocking method, so the client stays stuck inside the for loop. The loop can never finish, because if the server sends 2500 packets but the client only receives 2300, it will still be sitting at the receive() line waiting for packet 2301, 2302, and so on.
Since you have a file of 100 MB or more that needs to be reassembled, I assume you can't tolerate loss, so either use TCP, which guarantees delivery, or handle the possibility in your own code by adding a header to each packet. The header can be as simple as an incrementing sequence number that the client reads; if the number skips past the previous packet's, the client knows a packet was lost. At that point you can have the client ask the server to rebroadcast that specific packet. But at that point you have implemented your own crude TCP.
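To illustrate the sequence-number idea, here is a minimal sketch (not the original code). It assumes the java.net and java.util imports, the socket, packets, and size variables from the question, and that the server's intToByteArray writes the sequence number in big-endian order; a receive timeout keeps the client from blocking forever when packets are lost.
socket.setSoTimeout(2000);                        // stop waiting after 2 s of silence
Map<Integer, byte[]> chunks = new HashMap<>();    // sequence number -> payload
try {
    while (chunks.size() < packets) {
        byte[] buf = new byte[size];
        DatagramPacket p = new DatagramPacket(buf, buf.length);
        socket.receive(p);
        // first 4 bytes carry the sequence number written by the server
        int seq = ((buf[0] & 0xff) << 24) | ((buf[1] & 0xff) << 16)
                | ((buf[2] & 0xff) << 8) | (buf[3] & 0xff);
        chunks.put(seq, Arrays.copyOfRange(buf, 4, p.getLength()));
    }
} catch (SocketTimeoutException e) {
    // the stream went quiet before every packet arrived
}
for (int seq = 0; seq < packets; seq++) {
    if (!chunks.containsKey(seq)) {
        System.out.println("Missing packet " + seq + ", ask the server to resend it");
    }
}
The sequence numbers reported as missing are exactly what the client would send back to the server to request a rebroadcast.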
Related
I am working on a client application that sends transactions to a server over a TLS connection. The application sends a given number of bytes and receives 1182 bytes as a response. It worked fine until I started increasing the number of transactions per second. After that, some response packets arrive broken and cannot be fully received by the client in a single read. When I try to unwrap the packet content, it raises an exception and terminates the TLS session.
javax.net.ssl.SSLException: Unrecognized record version (D)TLS-0.0 , plaintext connection?
at java.base/sun.security.ssl.SSLEngineInputRecord.bytesInCompletePacket(SSLEngineInputRecord.java:98)
at java.base/sun.security.ssl.SSLEngineInputRecord.bytesInCompletePacket(SSLEngineInputRecord.java:64)
at java.base/sun.security.ssl.SSLEngineImpl.readRecord(SSLEngineImpl.java:544)
at java.base/sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:441)
at java.base/sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:420)
at java.base/javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:634)
at MyClass.handleEncryptedTransaction(MyClass.java:297)
I tried to use a buffer to accumulate possibly broken packets, but I cannot see the packet content before decrypting it and can only estimate whether it is complete based on its size.
this.peerNetData.clear();
int bytesRead = socketChannel.read(this.peerNetData);
if (bytesRead == DEFAULT_TLS_HANDSHAKE_SIZE) {
    handleEncryptedTransaction(socketChannel, engine);
} else if (bytesRead > 0) {
    // TLS packet buffering
    byte[] justRead = new byte[this.peerNetData.position()];
    this.peerNetData.flip();
    this.peerNetData.get(justRead);
    this.incompleteTransactionsBuffer.put(justRead);
    // DEFAULT_TLS_TRANSACTION_SIZE == 1182
    if (this.incompleteTransactionsBuffer.position() >= DEFAULT_TLS_TRANSACTION_SIZE) {
        this.incompleteTransactionsBuffer.flip(); // flip to read mode
        while (this.incompleteTransactionsBuffer.remaining() >= DEFAULT_TLS_TRANSACTION_SIZE) {
            byte[] fullTransaction = new byte[DEFAULT_TLS_TRANSACTION_SIZE];
            this.incompleteTransactionsBuffer.get(fullTransaction, 0, fullTransaction.length);
            this.peerNetData.clear();
            this.peerNetData.put(fullTransaction);
            // This method uses this.peerNetData to unwrap the data
            handleEncryptedTransaction(socketChannel, engine);
        }
        this.incompleteTransactionsBuffer.compact(); // drop the bytes already read and free space in the buffer
    }
}
Is there any way to check whether a TLS packet arriving over TCP is complete? I tried to read the first 1182 bytes, but it doesn't seem to work. Interestingly, this code works when I get multiple packets in the response, such as N * 1182 bytes, where N varies from 2 to 7. Maybe I should wait for another socket read to get the remaining data?
I suppose this problem occurs because of packet retransmissions caused by heavy traffic. Is there any other way to deal with TLS packet retransmissions in low-level socket connections in Java?
After getting comments and understanding the TLS protocol better, I worked out a solution by implementing a buffer and using the exact size of a TLS record to decide whether to wait for further TCP reads.
A TLS record "might be split into multiple TCP fragments or TCP fragments might also contain multiple TLS records in full, in half or whatever. These TCP fragments then might even be cut into multiple IP packets although TCP tries hard to avoid this." (from Determine packet size of TLS packet Java/Android). However, that post mentions the first 2 bytes, which is not right. According to https://hpbn.co/transport-layer-security-tls/#tls-record-diagram:
Maximum TLS record size is 16 KB.
Each record contains a 5-byte header, a MAC (up to 20 bytes for SSLv3, TLS 1.0, and TLS 1.1, and up to 32 bytes for TLS 1.2), and padding if a block cipher is used.
To decrypt and verify the record, the entire record must be available.
The TLS record length is carried in the bytes at indexes 3 and 4 of the header:
The code ended up looking like this:
protected synchronized void read(SocketChannel socketChannel, SSLEngine engine) throws IOException {
    this.peerNetData.clear();
    int bytesRead = socketChannel.read(this.peerNetData);
    if (bytesRead > 0) {
        // TLS records buffering
        this.peerNetData.flip();
        byte[] justRead = new byte[this.peerNetData.limit()];
        this.peerNetData.get(justRead);
        this.tlsRecordsReadBuffer.put(justRead);
        byte[] fullTlsRecord;
        // Process every TLS record available until the buffer is empty or the last record is not yet complete
        while ((fullTlsRecord = this.getFullTlsRecordFromBufferAndDeleteIt()) != null) {
            this.peerNetData.clear();
            this.peerNetData.put(fullTlsRecord);
            handleEncryptedTransaction(socketChannel, engine);
        }
    } else if (bytesRead < 0) {
        handleEndOfStream(socketChannel, engine);
    }
}
private synchronized byte[] getFullTlsRecordFromBufferAndDeleteIt() {
    byte[] result = null;
    this.tlsRecordsReadBuffer.flip();
    if (this.tlsRecordsReadBuffer.limit() > DEFAULT_TLS_HEADER_SIZE) {
        // Read only the first 5 bytes (DEFAULT_TLS_HEADER_SIZE), which contain the TLS record length
        byte[] tlsHeader = new byte[DEFAULT_TLS_HEADER_SIZE];
        this.tlsRecordsReadBuffer.get(tlsHeader);
        // Read the bytes at indexes 3 and 4 to get the TLS record length (big-endian);
        // masking with 0xff is necessary to treat each byte as unsigned
        int tlsRecordSize = ((tlsHeader[3] & 0xff) << 8) | (tlsHeader[4] & 0xff);
        // Set position back to the beginning
        this.tlsRecordsReadBuffer.position(0);
        if (this.tlsRecordsReadBuffer.limit() >= (tlsRecordSize + DEFAULT_TLS_HEADER_SIZE)) {
            // Then we have a complete TLS record
            result = new byte[tlsRecordSize + DEFAULT_TLS_HEADER_SIZE];
            this.tlsRecordsReadBuffer.get(result);
        }
    }
    // remove the record that was read and switch back to write mode
    this.tlsRecordsReadBuffer.compact();
    return result;
}
I'm building a chat application with C# clients and a Java server.
When a client connects, I need to send it a lot of messages from the server: the logs of the day. All the logs are in a file.txt, and I send them to the newly connected client.
To send them, I have a for loop that runs until all the logs are sent. Here is the loop:
for (String item : Logs) {
    client.send("log:" + item);
}
And for the send method:
public void send(String text) {
    // 'os' is the Socket.getOutputStream()
    // What the server will send to the client
    PrintWriter Out = new PrintWriter(os);
    // 0 is the offset, not needed
    Out.write(text, 0, text.length());
    Out.flush();
    System.out.println(text.length());
}
So far, everything works well.
Now my problem: the output stream sends messages whose lengths are text.length() (e.g. 30, 100, 399 bytes), and the C# client receives everything, but pastes 2 or 3 messages together into a single read.
E.g., if I send these as separate writes (each line is an Out.write() plus Out.flush(), because I call the send method for each line):
(Server-side)
log:abcdefghijklmnopqrstuvwxyz123456789101112131415
log:abcdefghijklmnopqrstuvwxyz
log:abcdefghijklmnopqrstuvwxyz123456789101
log:abcdefghijklmnopqrstuvwxyz1234567891011121314151617
log:abcdefghijklmnopqrst
log:abcdefghijklmnopqrstuvwxyzyxwvu
The sockets will be at the end:
(Client-side)
log:abcdefghijklmnopqrstuvwxyz123456789101112131415log:abcdefghijklmnopqrstuvwxyzlog:abcdefghijklmnopqrstuvwxyz123456789101
log:abcdefghijklmnopqrstuvwxyz1234567891011121314151617log:abcdefghijklmnopqrst
log:abcdefghijklmnopqrstuvwxyzyxwvu
And if I check the message lengths on the server side, I get something like:
20
12
15
17
20
But on the client side:
32
15
37
That is, the sums of several messages put together (sometimes 3 messages together, sometimes 2, sometimes 4...). I can't understand why.
Here's my async method for receiving data from the server:
private void callBack(IAsyncResult aResult)
{
    String message = "";
    try
    {
        int size = sck.EndReceiveFrom(aResult, ref ip);
        if (size > 0)
        {
            byte[] receive = new byte[size];
            receive = (byte[])aResult.AsyncState;
            message = Encoding.Default.GetString(receive, 0, size);
            Debug.WriteLine(message.Length);
        }
        byte[] buffer = new byte[1024];
        // restart the async task
        sck.BeginReceiveFrom(buffer, 0, buffer.Length, SocketFlags.None, ref ip, new AsyncCallback(callBack), buffer);
    }
    catch (Exception) { }
}
The int 'size' contains the number of bytes received, and here is the problem: how can I get the messages exactly as I sent them from the server?
If I add a delay (like 15 ms) between sends on the server side, the client gets the messages one by one, but only over a good connection. If the connection has something like 200 ms of latency, the messages still arrive grouped... So the problem is on the client side (I think). The Java server side works correctly; the flush method always sends the data!
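For background (an addition, not from the original post): TCP is a byte stream with no message boundaries, so separate write()/flush() calls may legitimately arrive glued together in a single read, and delays only make this less likely. A common remedy is to frame each message, for example with a newline delimiter. A minimal server-side sketch under that assumption, using the same os stream as above and the java.io and java.nio.charset imports (the C# client would then buffer incoming bytes and split on '\n'):
// Sketch only: terminate every message with '\n' so the receiver can split
// the TCP byte stream back into individual log lines.
public void send(String text) {
    PrintWriter out = new PrintWriter(new OutputStreamWriter(os, StandardCharsets.UTF_8));
    out.print("log:" + text + "\n");   // explicit delimiter marks the end of one message
    out.flush();
}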
UPDATE:
Here is how my socket is set up:
// Global variables
EndPoint ip;
public Socket sck;
// How I connect my socket
private void connect()
{
    ip = new IPEndPoint(IPAddress.Parse("127.0.0.1"), mysql.selectPort());
    sck = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
    try
    {
        sck.Connect(ip);
    }
    catch (Exception e)
    {
        Debug.WriteLine(e.Message);
    }
}
I am seeing very strange behaviour when writing to an NIO socket.
In my mobile client I'm using an NIO SocketChannel, configured as follows:
InetSocketAddress address = new InetSocketAddress(host, port);
socketChannel = SocketChannel.open();
socketChannel.socket().connect(address, 10000);
socketChannel.configureBlocking(false);
Then, periodically (every 60 seconds), I write some data to this SocketChannel (the code here is slightly simplified):
ByteBuffer readBuffer = ByteBuffer.allocate(DEFAULT_BUFFER_SIZE);
readBuffer.clear();
int numRead = -1;
numRead = socketChannel.read(readBuffer);
Log.write("Read " + numRead + " bytes");
if (numRead > 0)
{
    processServerResponse(readBuffer);
}
else if (numRead < 0)
{
    // ... re-connect etc.
}
// ...
byte[] msg = getNextMessage();
ByteBuffer buffer = ByteBuffer.wrap(msg);
int numOfBytes = 0;
while (buffer.hasRemaining())
{
    numOfBytes += socketChannel.write(buffer);
}
Log.write("Written " + numOfBytes + " bytes of " + msg.length);
And it works. But... sometimes (very infrequently, maybe once or twice a day) my server does not receive the data. My client log looks like:
Written 10 bytes of 10
Written 20 bytes of 20
Written 30 bytes of 30
Written 40 bytes of 40
Written 50 bytes of 50
and so on. But on the server side it looks like:
Received 10 bytes of 10
Received 50 bytes of 50
The 20-, 30-, and 40-byte records were not received, despite the fact that on the client side it looks as if all data was sent without any exception! (In reality the server log is a bit more detailed than this simplified version, so I can see exactly which data was sent; my records contain timestamps etc.)
Such gaps can be small (2-3 minutes), which is not too bad, but sometimes they are very big (1-2 hours = 60-120 cycles), and that is a real problem for my customers.
I really have no idea what could be wrong. The data seems to be sent by the client, but it never arrives at the server. I've also checked it with a proxy.
I would be very grateful for any ideas and tips.
P.S. Maybe this plays a role: the client code runs on an Android mobile device that is moving (it is in a car). The internet connection is established over GPRS.
I'm writing a UDP client to transfer a file to a UDP server. First I measure the length of the file and divide it by the buffer length used for each UDP packet, to get the number of packets that need to be sent. I send this number to the server first so it knows what to expect. But on the server side, converting the byte array of the received packet back into the original number fails. Can anyone help me out? Here is my code on the client side:
DatagramSocket socket=new DatagramSocket();
File f = new File(filename);
long fileSize = f.length();
byte[] buffer = new byte[16384];
long packetNumber = (fileSize/(buffer.length))+1;
DatagramPacket sendPacket=new DatagramPacket(buffer,buffer.length,addr,srvPort);
String str=Long.toString(packetNumber);
buffer = str.getBytes();
socket.send(sendPacket);
And here is the code on the server side:
DatagramSocket socket = new DatagramSocket(port);
byte[] buffer = new byte[16384];
DatagramPacket receivePacket = new DatagramPacket(buffer, buffer.length);
while (true)
{
    socket.receive(receivePacket);
    if (receivePacket.getData().toString().trim() != null)
    {
        String str = receivePacket.getData().toString();
        System.out.println(str);
        long pcount = Long.parseLong(str);
        System.out.println(pcount + " packets to be received.");
        break;
    }
}
But on the server side the variable pcount can never be resolved, and when I try to print out str it shows something weird like "[B@60991f".
This code doesn't make any sense.
Most networks won't let you send a datagram over 534 bytes reliably.
At present you are sending 16384 bytes of zero value, because you aren't putting anything into the buffer: instead you are creating a new buffer after creating the DatagramPacket. So you aren't sending anything yet.
And you aren't receiving anything yet either. The result of String.trim() cannot be null. You must reset the byte array in a DatagramPacket before every receive(), because it shrinks to the size of the actual received packet, so unless you reset it it keeps getting smaller and smaller. The result of toString() on a byte array does not include its contents, so parsing it is futile.
You need to study several basic Java programming concepts: too many to answer here.
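A minimal client-side sketch of the points above (an interpretation, not the original code; it assumes the filename, addr, and srvPort variables from the question and the java.net, java.io, and java.nio.charset imports): fill the buffer before wrapping it in the DatagramPacket.
DatagramSocket socket = new DatagramSocket();
File f = new File(filename);
long fileSize = f.length();
long packetNumber = (fileSize / 16384) + 1;
// convert the number to bytes FIRST, then build the packet around those bytes
byte[] buffer = Long.toString(packetNumber).getBytes(StandardCharsets.UTF_8);
DatagramPacket sendPacket = new DatagramPacket(buffer, buffer.length, addr, srvPort);
socket.send(sendPacket);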
You're receiving a byte array; calling toString() on it won't give you its contents.
You should reconstruct the String from the byte array using new String(...).
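A short sketch of that suggestion, building the text only from the bytes actually received:
// rebuild the text from the received bytes, using the packet's actual length
String str = new String(receivePacket.getData(), 0, receivePacket.getLength()).trim();
long pcount = Long.parseLong(str);
System.out.println(pcount + " packets to be received.");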
I'm trying to send a packet back to the user, informing them of all the people currently on the server, when they send the server a message that contains the word "who".
Here is my code:
else if (response.contains("who"))
{
    System.out.println("Size of names collection: " + names.size());
    buf = null;
    buf = names.toString().getBytes();
    int thisPort = packet.getPort();
    packet = new DatagramPacket(buf, buf.length, packet.getAddress(), thisPort);
    socket.send(packet);
}
The output of the print statement above is 2, indicating that there are two people on, for example Andrew and James. Now when I package it up and send it, I would expect the client to output this:
[Andrew, James]
But instead the client gets:
[Andrew,
And that's it. What's the problem? BTW, I have to use UDP for this and can't switch to TCP.
UPDATE
Here is the code in the client class that receives the packets:
while (true)
{
    try
    {
        // Set the buf to 256 to receive data back from same address and port
        buf = null;
        buf = new byte[256];
        packet = new DatagramPacket(buf, buf.length, address, 4445);
        socket.receive(packet);
        String response = new String(packet.getData());
        // Receive the packet back
        System.out.println(response);
    }
    catch (IOException e)
    {
    }
}
Your datagram is being truncated to 256 bytes because that's the size of the buffer you declared for the receiving DatagramPacket. If your datagrams can be longer, make the buffer bigger.
Best practice is to make it one byte bigger than the largest datagram you are expecting to receive. Then, if you ever receive one of that size, you know you have an application protocol error.
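A minimal sketch of that advice (the 2049-byte buffer is an assumption; pick one byte more than the largest datagram your protocol allows):
byte[] buf = new byte[2049];                    // one byte larger than the expected maximum
DatagramPacket packet = new DatagramPacket(buf, buf.length);
socket.receive(packet);
if (packet.getLength() == buf.length) {
    // a datagram this big should never occur, so treat it as a protocol error
    System.err.println("Oversized datagram received");
}
// build the String only from the bytes actually received
String response = new String(packet.getData(), 0, packet.getLength());
System.out.println(response);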
You should check the length of the DatagramPacket on both the client and the server after the send/receive operation respectively (with the getLength method) to make sure it's the same; that would be the first hint. What Collection are you using for names?
Your question is incomplete. However...
UDP loses packets. That's why it's not reliable to use UDP for file transfer. Adobe RTMFP uses UDP to transfer audio and video data, and many packets are lost, but audio/video streaming is much faster than over TCP. I don't know if this answers your question; I just want to say that UDP does lose packets.