I am using C# to create a server software for Windows and Java to create the client software.
It works fine most of the time, except for a few cases that I don't understand.
I am generally using .ReadLine() and .WriteLine() on both ends to communicate, unless I try to send binary data. That's when I write and read the bytes directly.
This is how the software is supposed to work:
Client requests the binary data
Server responds with the length of the binary data as a string
Client receives the length and converts it into an integer and starts reading (length) bytes
Server starts writing (length) bytes
It works in most cases, but sometimes the client app doesn't receive the full data and blocks. The server always immediately flushes after writing data, so flushing is not the problem.
Furthermore, I've noticed this usually happens with larger files; small files (up to ~1 MB) are usually not a problem.
NOTE: It seems like the C# server does send the data completely, so the problem is most likely somewhere in the Java code.
EDIT - Here are some logs from the client side
Working download: pastebin.com/hFd5TvrF
Failing download: pastebin.com/Q3zFWRLB
It seems like the client is waiting for 2048 bytes at the end (as it should be, since length - processed = 2048 in this case), but for some reason the client blocks.
Any ideas what I'm doing wrong? Below is the source code of both server and client:
C# Server:
public void Write(BinaryWriter str, byte[] data)
{
    int BUFFER = 2048;
    int PROCESSED = 0;
    // WriteString sends the String using a StreamWriter (+ flushing)
    WriteString(data.Length.ToString());
    while (PROCESSED < data.Length)
    {
        if (PROCESSED + BUFFER > data.Length)
            BUFFER = data.Length - PROCESSED;
        str.Write(data, PROCESSED, BUFFER);
        str.Flush();
        PROCESSED += BUFFER;
    }
}
Java Client:
public byte[] ReadBytes(int length){
    byte[] buffer = new byte[length];
    int PROCESSED = 0;
    int READBUF = 2048;
    TOTAL = length;
    progress.setMax(TOTAL);
    InputStream m;
    try {
        m = clientSocket.getInputStream();
        while(PROCESSED < length){
            if(PROCESSED + READBUF > length)
                READBUF = length - PROCESSED;
            try {
                PROCESSED += m.read(buffer, PROCESSED, READBUF);
            } catch (IOException e) {
            }
            XPROCESSED = PROCESSED;
        }
    } catch (IOException e1) {
        // Removed because of sensitive data
    }
    return decryptData(buffer);
}
I've found a fix. Previously, the server sent the length and immediately followed it with the byte array; for some reason this did not work.
So what I've changed is:
Send the length and wait for the client to respond with "OK"
Start writing bytes
Not sure why, but it works. I ran it in a while(true) loop and it sent data 1000 times in 4 minutes straight with no problems, so I guess it's fixed.
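Independent of the OK handshake, the read loop above has two hazards worth noting: read() returns -1 at end-of-stream, which the loop would silently add to PROCESSED, and the inner catch swallows IOExceptions, so the loop can spin or block forever after an error. A minimal sketch of a more defensive version, assuming the clientSocket field and decryptData helper from the question, and that no other reader is buffering ahead on the same socket:

public byte[] readBytes(int length) throws IOException {
    byte[] buffer = new byte[length];
    DataInputStream in = new DataInputStream(clientSocket.getInputStream());
    // readFully blocks until exactly 'length' bytes have arrived,
    // or throws EOFException if the stream ends first.
    in.readFully(buffer, 0, length);
    return decryptData(buffer);
}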
Related
I've read many tutorials and posts about the Java InputStream and reading data. I've established a client and server implementation, but I'm having weird issues where reading a variable-length "payload" from the client is not consistent.
What I'm trying to do is transfer up to 100 kB max in one single logical payload. I have verified that the TCP stack is not sending one massive 100 kB packet from the client. I have played about with different read forms as per previous questions about InputStream reading, but I've nearly torn my hair out trying to get it to dump the correct data.
Let's say, for example, the client is sending a 70 kB payload.
The first observation I've made is that if I step through the code line by line from a breakpoint, it works fine: I get the exact same count in the outbound byte[]. When free-running, the byte[] will be a different size every time I run the code with practically the same payload.
Timing problems?
The second observation is that this odd behaviour occurs when the "inbuffer" size is set to 4096, for example. Setting the "inbuffer" size to 1 produces the correct behaviour, i.e. I get the correct payload size.
Please understand that I don't like the way I've had to get this to work, and I'm not happy with the solution.
What experiences or problems have you had or seen that might help me fix this code to be more reliable and easier to read?
public void listenForResponses() {
    isActive = true;
    try {
        // apparently read() doesn't return -1 on socket-based streams
        // if big stuff comes through, TCP packets are segmented, but the InputStream
        // does something odd and doesn't return the correct raw data.
        // this is a workaround to accept vari-length payloads into one byte[] buffer
        byte[] inBuffer = new byte[1];
        byte[] buffer = null;
        int bytesRead = 0;
        byte[] finalbuffer = new byte[0];
        boolean isMultichunk = false;
        InputStream istrm = currentSession.getInputStream();
        while ((bytesRead = istrm.read(inBuffer)) > -1 && isActive) {
            buffer = new byte[bytesRead];
            buffer = Arrays.copyOfRange(inBuffer, 0, bytesRead);
            int available = istrm.available();
            if(available < 1) {
                if(!isMultichunk) {
                    finalbuffer = buffer;
                }
                else {
                    finalbuffer = ConcatTools.ByteArrayConcat(finalbuffer, buffer);
                }
                notifyOfResponse(deserializePayload(finalbuffer));
                finalbuffer = new byte[0];
                isMultichunk = false;
            }
            else {
                if(!isMultichunk) {
                    isMultichunk = true;
                    finalbuffer = new byte[0];
                }
                finalbuffer = ConcatTools.ByteArrayConcat(finalbuffer, buffer);
            }
        }
    } catch (IOException e) {
        Logger.consoleOut("PayloadReadThread: " + e.getMessage());
        currentSession = null;
    }
}
InputStream is working as designed.
if I flow through the code line by line initiated from a break point, it will work fine, i get the exact same count in the outbound byte[].
That's because stepping through the code is slower, so more data arrives between reads, enough to fill your buffer.
When free running the byte[] will be different sizes every time i run the code with practically the same payload.
That's because InputStream.read() is contracted to block until at least one byte has been transferred, or EOS or an exception occurs. See the Javadoc. There's nothing in there about filling the buffer.
second observation is that when the "inbuffer" size is set to 4096 for example this odd behaviour occurs. setting the "inbuffer" size to 1 presents the correct behaviour i.e. i get the correct payload size.
That's the correct behaviour in the case of a 1 byte buffer for exactly the same reason given above. It's not the 'correct behaviour' for any other size.
NB Your copy loop is nonsense. available() has few correct uses, and this isn't one of them.
while ((count = in.read(buffer)) > 0)
{
out.write(buffer, 0, count);
}
NB (2) read() does indeed return -1 on socket-based streams, but only when the peer has shut down or closed the connection.
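If each logical payload needs to be recovered intact, the usual approach (not shown in the question) is for the sender to prefix every payload with its length and for the receiver to read exactly that many bytes. A minimal sketch, assuming the sender is changed to write a 4-byte length prefix (e.g. with DataOutputStream.writeInt) and reusing the currentSession, notifyOfResponse and deserializePayload names from the question:

DataInputStream in = new DataInputStream(currentSession.getInputStream());
while (isActive) {
    int len = in.readInt();        // 4-byte length prefix written by the sender
    byte[] payload = new byte[len];
    in.readFully(payload);         // blocks until exactly len bytes have arrived
    notifyOfResponse(deserializePayload(payload));
}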
I'm writing a C++ server that has a feature of receiving a file via socket from a Java client.
The file I'm trying to send is 510 KB, but the server only receives 46 KB and then gets stuck waiting for more bytes.
This is the server code. (I tried to receive all 510 KB at once, but the server got stuck, so I also tried receiving 64 KB per iteration; again, the server got stuck.)
int bytes_to_receive;
size_t length;
while(file_length > 0)
{
    if(file_length >= 65536)
        bytes_to_receive = 65536;
    else
        bytes_to_receive = file_length;
    char buf5[bytes_to_receive];
    length = 0;
    while( length < bytes_to_receive )
        length += socket.read_some(boost::asio::buffer(&buf5[length], bytes_to_receive - length), error);
    // Note: the length must be passed explicitly; string(buf5) would stop at the
    // first NUL byte (or run past the array), corrupting binary data.
    string temp(buf5, bytes_to_receive);
    file_parts.push_back(temp);
    file_length -= bytes_to_receive;
}
string file("");
vector<string>::const_iterator it;
for(it = file_parts.begin(); it != file_parts.end(); ++it)
{
    file += *it;
}
The Java client loads the file into a string and then sends the string using writeBytes.
Note: When sending a 1KB file everything works.
Why does this happen and how can I fix it? Any help would be highly appreciated.
Edit: Any other way of receiving large data using boost would also be appreciated.
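One likely culprit is on the client side: loading a binary file into a String and sending it with writeBytes is lossy, because DataOutputStream.writeBytes writes only the low eight bits of each character. A sketch of a Java client that streams the raw bytes instead (the socket and file variable names are illustrative):

// Stream the file as raw bytes; no String conversion involved.
try (FileInputStream fis = new FileInputStream(file)) {
    OutputStream out = socket.getOutputStream();
    byte[] chunk = new byte[8192];
    int n;
    while ((n = fis.read(chunk)) != -1) {
        out.write(chunk, 0, n);   // write exactly the bytes read
    }
    out.flush();                  // push any buffered bytes; the socket stays open
}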
I'm trying to write an upload system for a fairly complex Java server. I have reproduced the error in the two small programs listed below. Basically, I am using an ObjectOutputStream/ObjectInputStream to communicate via the client/server. This is a requirement; I have thousands of lines of code working perfectly fine around this ObjectOutputStream/ObjectInputStream setup, so I must be able to still use these streams after an upload is complete.
To access the files (the one being read on the client and the one being written on the server), FileInputStream and FileOutputStream are used. My client appears to be functioning perfectly; it reads in the file and sends a different byte array each iteration (it reads in 1 MB at a time, so large files can be handled without overflowing the heap). However, on the server it appears as though the byte array is ALWAYS just the first array sent (the first 1 MB of the file). This does not conform to my understanding of ObjectInputStream/ObjectOutputStream. I am seeking either a working solution to this issue or enough education on the matter to form my own solution.
Below is the client code:
import java.net.*;
import java.io.*;

public class stupidClient
{
    public static void main(String[] args)
    {
        new stupidClient();
    }

    public stupidClient()
    {
        try
        {
            Socket s = new Socket("127.0.0.1", 2013);//connect
            ObjectOutputStream output = new ObjectOutputStream(s.getOutputStream());//init stream

            //file to be uploaded
            File file = new File("C:\\Work\\radio\\upload\\(Op. 9) Nocturne No. 1 in Bb Minor.mp3");
            long fileSize = file.length();
            output.writeObject(file.getName() + "|" + fileSize);//send name and size to server

            FileInputStream fis = new FileInputStream(file);//open file
            byte[] buffer = new byte[1024*1024];//prepare 1MB buffer
            int retVal = fis.read(buffer);//grab first MB of file
            int counter = 0;//used to track progress through upload
            while (retVal != -1)//until EOF is reached
            {
                System.out.println(Math.round(100*counter/fileSize)+"%");//show current progress to System.out
                counter += retVal;//track progress
                output.writeObject("UPACK "+retVal);//alert server upload packet is incoming, with size of packet read
                System.out.println(""+buffer[0]+" "+buffer[1]+" "+buffer[2]);//preview first 3 bytes being sent
                output.writeObject(buffer);//send bytes
                output.flush();//make sure all bytes read are gone
                retVal = fis.read(buffer);//get next MB of file
            }
            System.out.println(Math.round(100*counter/fileSize)+"%");//show progress at end of file
            output.writeObject("UPLOAD_COMPLETE");//let server know protocol is finished
            output.close();
        }
        catch (Exception e)
        {
            e.printStackTrace();
        }
    }
}
The following is my server code:
import java.net.*;
import java.io.*;

public class stupidServer
{
    Socket s;
    ServerSocket server;

    public static void main(String[] args)
    {
        new stupidServer();
    }

    public stupidServer()
    {
        try
        {
            //establish connection and stream
            server = new ServerSocket(2013);
            s = server.accept();
            ObjectInputStream input = new ObjectInputStream(s.getInputStream());

            String[] args = ((String)input.readObject()).split("\\|");//args[0] will be file name, args[1] will be file size
            String fileName = args[0];
            long filesize = Long.parseLong(args[1]);

            String upack = (String)input.readObject();//get upload packet (string reading UPACK [bytes read])
            FileOutputStream outStream = new FileOutputStream("C:\\"+fileName.trim());
            while (!upack.equalsIgnoreCase("UPLOAD_COMPLETE"))//until protocol is complete
            {
                int bytes = Integer.parseInt(upack.split(" ")[1]);//get number of bytes being written
                byte[] buffer = new byte[bytes];
                buffer = (byte[])input.readObject();//get bytes sent from client
                outStream.write(buffer, 0, bytes);//go ahead and write them bad boys to file
                System.out.println(buffer[0]+" "+buffer[1]+" "+buffer[2]);//peek at first 3 bytes received
                upack = (String)input.readObject();//get next 'packet' - either another UPACK or an UPLOAD_COMPLETE
            }
            outStream.flush();
            outStream.close();//make sure all bytes are in file
            input.close();//sign off
        }
        catch (Exception e)
        {
            e.printStackTrace();
        }
    }
}
As always, many thanks for your time!
Your immediate problem is that ObjectOutputStream uses an ID mechanism to avoid sending the same object over the stream multiple times. The client will send this ID for the second and subsequent writes of buffer, and the server will use its cached value.
The solution to this immediate problem is to add a call to reset():
output.writeObject(buffer);//send bytes
output.reset(); // force buffer to be fully written on next pass through loop
That aside, you're misusing object streams by layering your own protocol on top of them. For example, writing the filename and filesize as a single string delimited by "|"; just write them as two separate values. Ditto for the number of bytes on each write.
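A sketch of what that might look like, keeping the object streams but replacing the ad-hoc string packing with typed writes (names as in the question); the matching server-side reads would be readUTF, readLong, readInt and readFully, which ObjectInputStream also provides:

// Client side: send metadata as typed values, then length-prefixed raw chunks.
// Raw bytes written this way are not subject to the caching problem above.
output.writeUTF(file.getName());
output.writeLong(fileSize);
int retVal;
while ((retVal = fis.read(buffer)) != -1) {
    output.writeInt(retVal);             // how many bytes follow
    output.write(buffer, 0, retVal);     // the bytes themselves
}
output.writeInt(-1);                     // end-of-upload marker
output.flush();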
It would seem that the client-server application I wrote does work; however, not all data is processed every time.
I am testing it on a local machine in the Eclipse environment.
Server:
private void sendData() throws Exception
{
    DatagramPacket data = new DatagramPacket(outgoingData, outgoingData.length, clientAddress, clientPort);
    InputStream fis = new FileInputStream(responseData);
    int a;
    while((a = fis.read(outgoingData,0,512)) != -1)
    {
        serverSocket.send(data);
    }
}
Client:
private void receiveData() throws Exception
{
    DatagramPacket receiveData = new DatagramPacket(incomingData, incomingData.length);
    OutputStream fos = new FileOutputStream(new File("1"+data));
    while(true)
    {
        clientSocket.receive(receiveData);
        fos.write(incomingData);
    }
}
I used to have an if/else in the while(true) loop to check whether the packet length was less than 512 bytes, so it knew when to break.
I was thinking there was a problem with that, but that seems to be OK; for now I just wait a few minutes and then stop the Client.java app.
The file does transfer, but the original file is 852 kB and so far I've got 777, 800, 850 kB... but never all of it.
The fundamental problem with your approach is that UDP does not guarantee delivery. If you have to use UDP (rather than, say, TCP), you have to implement a scheme that would detect and deal with packets that got lost, arrive out of order, or are delivered multiple times.
See When is it appropriate to use UDP instead of TCP?
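For illustration only, a minimal shape such a scheme might take is to number each datagram so the receiver can at least detect loss, reordering, and duplicates; a real implementation also needs acknowledgements and retransmission. The seq counter and 4-byte header here are assumptions for this sketch, and a reuses the bytes-read count from the question's send loop:

// Sender: prepend a 4-byte sequence number to each chunk of file data.
ByteBuffer packet = ByteBuffer.allocate(4 + a);
packet.putInt(seq++);                      // seq is a hypothetical int counter
packet.put(outgoingData, 0, a);
serverSocket.send(new DatagramPacket(packet.array(), 4 + a, clientAddress, clientPort));

// Receiver: a gap in sequence numbers means a lost or reordered packet.
ByteBuffer bb = ByteBuffer.wrap(receiveData.getData(), 0, receiveData.getLength());
int seqNum = bb.getInt();                  // compare against the last number seen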
You should probably use TCP to transfer files. You are probably losing packets because you are sending them so fast in that while loop.
int a;
while((a = fis.read(outgoingData,0,512)) != -1)
{
    serverSocket.send(data);
}
Since you're sending so fast, I highly doubt the packets will have a chance to be received in the right order; some will probably be lost because of it too.
Also, since you're sending a fixed size of 512 bytes, the last packet you send will probably not be exactly that size, so the end of the file will look weird.
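If UDP must be kept, one small improvement the answer hints at is to send only the bytes actually read and to honor each received packet's length; a sketch reusing the field names from the question (this still does nothing about loss or reordering):

// Sender: size each datagram to the bytes actually read from the file.
int a;
while ((a = fis.read(outgoingData, 0, 512)) != -1)
{
    serverSocket.send(new DatagramPacket(outgoingData, a, clientAddress, clientPort));
}

// Receiver: write only the bytes this datagram actually carries.
clientSocket.receive(receiveData);
fos.write(receiveData.getData(), 0, receiveData.getLength());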
For my homework assignment, I have a network of Nodes that are passing messages to each other. Each Node is connected to a set amount of other Nodes (I'm using 4 for testing). Each Link has a weight, and all the Nodes have computed the shortest path for how they want their messages sent. Every Packet that is sent is composed of the message protocol (a hard-coded int), an integer that tells how many messages have passed through the sending Node, and the routing path for the Packet.
Every Node has a Thread for each of its Links. There is an active Socket in each Link. The Packets are sent by adding a 4-byte int to the beginning of the message telling the message's length.
Everything works fine until I stress the network. For my test, there are 10 Nodes, and I get 5 of them to send 10000 packets in a simple while() loop with no Thread.sleep(). Without exception, there is always an error at some point during execution at the if(a!=len) statement.
Please let me know if I can clarify anything. Thanks in advance! Here is the code (from the Link Thread; send() and forward() are called from the Node itself):
protected void listen(){
    byte[] b;
    int len;
    try{
        DataInputStream in = new DataInputStream(sock.getInputStream());
        while(true){
            len = in.readInt();
            b = new byte[len];
            int a = in.read(b,0,len);
            if(a!=len){
                System.out.println("ERROR: " + a + "!=" + len);
                throw new SocketException(); //may have to fix...this will happen when message is corrupt/incomplete
            }
            Message m = new Message(b);
            int p = m.getProtocol();
            switch (p){
                case CDNP.PACKET:
                    owner.incrementTracker();
                    System.out.print("\n# INCOMING TRACKER: " + m.getTracker() + "\n>>> ");
                    owner.forward(m);
            }
        }
    }catch (IOException e){
        e.printStackTrace();
    }
}

public void send(int tracker){
    String[] message = { Conv.is(CDNP.PACKET), Conv.is(tracker), owner.getMST().toString() };
    Message m = new Message(message);
    forward(m);
}

public synchronized void forward(Message m){
    try{
        OutputStream out = sock.getOutputStream();
        //prefix the message with its length as a 4-byte int
        ByteBuffer bb = ByteBuffer.allocate(4+m.getLength());
        bb.putInt(m.getLength());
        bb.put(m.getBytes());
        out.write(bb.array());
        out.flush();
    }catch (UnknownHostException e){
        System.out.println("ERROR: Could not send to Router at " + sock.getRemoteSocketAddress().toString());
        return;
    }catch (IOException e1){
    }
}
int a = in.read(b,0,len);
if(a!=len){
That won't work. The InputStream may not read all the bytes you want, it may read only what is available right now, and return that much without blocking.
To quote the Javadocs (emphasis mine):
Reads up to len bytes of data from the input stream into an array of bytes. An attempt is made to read as many as len bytes, but a smaller number may be read, possibly zero. The number of bytes actually read is returned as an integer.
You need to continue reading in a loop until you have all the data you want (or the stream is finished).
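A sketch of that manual loop, reusing the in, b and len variables from the listen() method above (this is essentially what readFully does internally):

int off = 0;
while (off < len) {
    int n = in.read(b, off, len - off);
    if (n == -1)
        throw new EOFException("stream ended after " + off + " of " + len + " bytes");
    off += n;   // advance past the bytes read so far
}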
Or, since you are using a DataInputStream, you can also use
in.readFully(b, 0, len);
which always reads exactly len bytes (blocking until those have arrived, throwing an exception when there is not enough data).