The client-server application I wrote seems to work, but it appears that not all of the data is processed every time.
I am testing it on a local machine in the Eclipse environment.
Server:
private void sendData() throws Exception
{
    DatagramPacket data = new DatagramPacket(outgoingData, outgoingData.length, clientAddress, clientPort);
    InputStream fis = new FileInputStream(responseData);
    int a;
    while((a = fis.read(outgoingData,0,512)) != -1)
    {
        serverSocket.send(data);
    }
}
Client:
private void receiveData() throws Exception
{
    DatagramPacket receiveData = new DatagramPacket(incomingData, incomingData.length);
    OutputStream fos = new FileOutputStream(new File("1"+data));
    while(true)
    {
        clientSocket.receive(receiveData);
        fos.write(incomingData);
    }
}
I used to have an if/else inside the while(true) loop that checked whether the packet length was less than 512 bytes, so the client knew when to break.
I thought there was a problem with that, but that part seems to be fine; for now I just wait a few minutes and then stop the Client.java app.
The file does transfer, but the original file is 852 kB and so far I have received 777, 800, 850 kB, ... but never all of it.
The fundamental problem with your approach is that UDP does not guarantee delivery. If you have to use UDP (rather than, say, TCP), you have to implement a scheme that would detect and deal with packets that got lost, arrive out of order, or are delivered multiple times.
See When is it appropriate to use UDP instead of TCP?
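If switching to TCP is an option, the sending side can be as simple as the sketch below; TCP then handles loss, ordering, and duplication for you. This is only an illustration: the host, port, and file are placeholders, not names from the code above.
private void sendFileOverTcp(String host, int port, File file) throws IOException
{
    // Minimal sketch, not the original sendData(): copy the file through a TCP socket.
    try (Socket socket = new Socket(host, port);
         InputStream fis = new FileInputStream(file);
         OutputStream out = socket.getOutputStream()) {
        byte[] buf = new byte[8192];
        int n;
        while ((n = fis.read(buf)) != -1) {
            out.write(buf, 0, n); // write only the bytes actually read
        }
    }
}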
You should probably use TCP to transfer files. You are probably losing packets because you are sending them so fast in that while loop.
int a;
while((a = fis.read(outgoingData,0,512)) != -1)
{
    serverSocket.send(data);
}
Since you're sending so fast, I highly doubt the packets will have a chance to be received in the right order, and some packets will probably be lost because of it too.
Also, since you're sending a fixed size of 512 bytes, the last packet you send will probably not be exactly that size, so you will see the end of the file "look weird."
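If the transfer has to stay on UDP, one small improvement on the sending side is to build each DatagramPacket from the number of bytes actually read, so the final packet is not padded with stale buffer contents. This is only a sketch reusing the variable names from the question, and it still does nothing about lost, duplicated, or reordered datagrams:
int a;
while ((a = fis.read(outgoingData, 0, 512)) != -1)
{
    // size the packet to the bytes read, so the last packet is only as long as the remaining data
    DatagramPacket data = new DatagramPacket(outgoingData, a, clientAddress, clientPort);
    serverSocket.send(data);
}
fis.close();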
I am using C# to create server software for Windows and Java to create the client software.
It works fine most of the time, except for those few exceptions that I don't understand.
I am generally using .ReadLine() and .WriteLine() on both ends to communicate, unless I try to send binary data. That's when I write and read the bytes directly.
This is how the software is supposed to work:
Client requests the binary data
Server responds with the length of the binary data as a string
Client receives the length and converts it into an integer and starts reading (length) bytes
Server starts writing (length) bytes
It works in most cases, but sometimes the client app doesn't receive the full data and blocks. The server always immediately flushes after writing data, so flushing is not the problem.
Furthermore I've noticed this usually happens with larger files, small files (up to ~1 MB) usually are not a problem.
NOTE It seems like the C# server does send the data completely, so the problem is most likely somewhere in the Java code.
EDIT - Here are some logs from the client side
Working download: pastebin.com/hFd5TvrF
Failing download: pastebin.com/Q3zFWRLB
It seems like the client is waiting for 2048 bytes at the end (as it should be, as length - processed = 2048 in this case), but for some reason the client blocks.
Any ideas what I'm doing wrong? Below are the source codes of both server and client:
C# Server:
public void Write(BinaryWriter str, byte[] data)
{
    int BUFFER = 2048;
    int PROCESSED = 0;
    // WriteString sends the String using a StreamWriter (+ flushing)
    WriteString(data.Length.ToString());
    while (PROCESSED < data.Length)
    {
        if (PROCESSED + BUFFER > data.Length)
            BUFFER = data.Length - PROCESSED;
        str.Write(data, PROCESSED, BUFFER);
        str.Flush();
        PROCESSED += BUFFER;
    }
}
Java Client:
public byte[] ReadBytes(int length){
    byte[] buffer = new byte[length];
    int PROCESSED = 0;
    int READBUF = 2048;
    TOTAL = length;
    progress.setMax(TOTAL);
    InputStream m;
    try {
        m = clientSocket.getInputStream();
        while(PROCESSED < length){
            if(PROCESSED + READBUF > length)
                READBUF = length - PROCESSED;
            try {
                PROCESSED += m.read(buffer, PROCESSED, READBUF);
            } catch (IOException e) {
            }
            XPROCESSED = PROCESSED;
        }
    } catch (IOException e1) {
        // Removed because of sensitive data
    }
    return decryptData(buffer);
}
I've found a fix. Until now, the server sent the length and immediately afterwards sent the byte array, and for some reason that did not work.
So what I've changed is:
Send length and wait for the client to respond with "OK"
Start writing bytes
Not sure why, but it works. I ran it in a while(true) loop and it has been sending data 1000 times over 4 minutes straight with no problems, so I guess it's fixed.
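For illustration, the client side of that handshake could look roughly like the sketch below. readLine() and writeLine() stand in for whatever string helpers the client already has, and DataInputStream.readFully() is used instead of the manual read loop purely for brevity:
int length = Integer.parseInt(readLine());   // 1. server sends the length as a string
writeLine("OK");                             // 2. acknowledge before the server writes any bytes
DataInputStream in = new DataInputStream(clientSocket.getInputStream());
byte[] buffer = new byte[length];
in.readFully(buffer);                        // 3. blocks until exactly 'length' bytes have arrived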
I've read many tutorials and posts about the Java InputStream and reading data. I've established a client and server implementation, but I'm having weird issues where reading a variable-length "payload" from the client is not consistent.
What I'm trying to do is transfer up to 100 kB in one single logical payload. I have verified that the TCP stack is not sending one massive 100 kB packet from the client. I have played about with different read forms as per previous questions about InputStream reading, but I've nearly torn my hair out trying to get it to dump the correct data.
Let's say, for example, that the client is sending a 70 kB payload.
The first observation is that if I step through the code line by line from a breakpoint, it works fine: I get exactly the same count in the outbound byte[]. When free-running, the byte[] will be a different size every time I run the code with practically the same payload.
Timing problems?
The second observation is that this odd behaviour occurs when the "inBuffer" size is set to, for example, 4096. Setting the "inBuffer" size to 1 produces the correct behaviour, i.e. I get the correct payload size.
Please understand that I don't like the way I've had to get this to work, and I'm not happy with the solution.
What experiences or problems have you had or seen that might help me make this code more reliable and easier to read?
public void listenForResponses() {
    isActive = true;
    try {
        // apparently read() doesn't return -1 on socket-based streams
        // if big stuff comes through, TCP packets are segmented, but the InputStream
        // does something odd and doesn't return the correct raw data.
        // this is a workaround to accept variable-length payloads into one byte[] buffer
        byte[] inBuffer = new byte[1];
        byte[] buffer = null;
        int bytesRead = 0;
        byte[] finalbuffer = new byte[0];
        boolean isMultichunk = false;
        InputStream istrm = currentSession.getInputStream();
        while ((bytesRead = istrm.read(inBuffer)) > -1 && isActive) {
            buffer = new byte[bytesRead];
            buffer = Arrays.copyOfRange(inBuffer, 0, bytesRead);
            int available = istrm.available();
            if(available < 1) {
                if(!isMultichunk) {
                    finalbuffer = buffer;
                }
                else {
                    finalbuffer = ConcatTools.ByteArrayConcat(finalbuffer, buffer);
                }
                notifyOfResponse(deserializePayload(finalbuffer));
                finalbuffer = new byte[0];
                isMultichunk = false;
            }
            else {
                if(!isMultichunk) {
                    isMultichunk = true;
                    finalbuffer = new byte[0];
                }
                finalbuffer = ConcatTools.ByteArrayConcat(finalbuffer, buffer);
            }
        }
    } catch (IOException e) {
        Logger.consoleOut("PayloadReadThread: " + e.getMessage());
        currentSession = null;
    }
}
InputStream is working as designed.
if I step through the code line by line from a breakpoint, it works fine: I get exactly the same count in the outbound byte[].
That's because stepping through the code is slower, so more data arrives between reads, enough to fill your buffer.
When free-running, the byte[] will be a different size every time I run the code with practically the same payload.
That's because InputStream.read() is contracted to block until at least one byte has been transferred, or EOS or an exception occurs. See the Javadoc. There's nothing in there about filling the buffer.
The second observation is that this odd behaviour occurs when the "inBuffer" size is set to, for example, 4096. Setting the "inBuffer" size to 1 produces the correct behaviour, i.e. I get the correct payload size.
That's the correct behaviour in the case of a 1-byte buffer, for exactly the same reason given above. It's not the 'correct behaviour' for any other size.
NB Your copy loop is nonsense. available() has few correct uses, and this isn't one of them.
while ((count = in.read(buffer)) > 0)
{
    out.write(buffer, 0, count);
}
NB (2) read() does indeed return -1 on socket-based streams, but only when the peer has shutdown or closed the connection.
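If each logical payload has to be recovered exactly, the usual approach is to frame it, for example with a length prefix, and then read exactly that many bytes. The sketch below assumes such a 4-byte length prefix, which is not part of the question's protocol, and reuses the names from the question's code:
DataInputStream in = new DataInputStream(currentSession.getInputStream());
while (isActive) {
    int length = in.readInt();      // 4-byte length prefix written by the sender
    byte[] payload = new byte[length];
    in.readFully(payload);          // blocks until the whole payload has arrived
    notifyOfResponse(deserializePayload(payload));
}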
I have a socketserver set up with a remote client, and it is functional. Upon opening the client and logging in, I noticed that sometimes, there is an error that seems to be due to the client reading an int when it shouldn't be.
Upon logging on, the server sends a series of messages/packets to the client, and these are anything from string messages to information used to load variables on the client's side.
Occasionally, while logging in, an error gets thrown showing that the client has read a packet of size 0 or of a very large size. Upon converting the large number into ASCII, I once found that it was a fragment of a string, "sk." (I located this string in my code, so it's not entirely random).
Looking at my code, I'm not sure why this is happening. Is it possible that the client is reading an int at the wrong time? If so, how can I fix this?
InetAddress address = InetAddress.getByName(host);
connection = new Socket(address, port);
in = new DataInputStream(connection.getInputStream());
out = new DataOutputStream(connection.getOutputStream());
String process;
System.out.println("Connecting to server on "+ host + " port " + port +" at " + timestamp);
process = "Connection: "+host + ","+port+","+timestamp + ". Version: "+version;
write(0, process);
out.flush();
while (true) {
    int len = in.readInt();
    if (len < 2 || len > 2000) {
        throw new Exception("Invalid Packet, length: "+len+".");
    }
    byte[] data = new byte[len];
    in.readFully(data);
    for (Byte b : data) {
        System.out.printf("0x%02X ",b);
    }
    try {
        reader.handlePackets(data);
    } catch (Exception e) {
        e.printStackTrace();
        //connection.close();
        //System.exit(0);
        //System.out.println("Exiting");
    }
}
Here is the code for my write function (server side):
public static void write(Client c, Packet pkt) {
    for (Client client : clients) {
        if (c.equals(client)) {
            try {
                out.writeInt(pkt.size());
                out.write(pkt.getBytes());
                out.flush();
            } catch (IOException ex) {
                ex.printStackTrace();
            }
        }
    }
}
So looking at the write function, I don't really see how it could be confusing the client and making it read the size of the packet twice for one packet (at least that's what I think is happening).
If you need more information please ask me.
The client side code looks fine, and the server side code looks fine too.
The most likely issue is that this is some kind of issue with multi-threading and (improper) synchronization. For example, maybe two server-side threads are trying to write a packet to the same client at the same time.
It is also possible that your Packet class has inconsistent implementations of size() and getBytes() ... or that one thread is modifying a Packet object while a second one is sending it.
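If concurrent writes are the cause, one way to rule it out is to make the length prefix and the payload a single atomic write by synchronizing on the stream. This is only a sketch of that idea, assuming out is the stream the server already uses for that client:
public static void write(Client c, Packet pkt) {
    for (Client client : clients) {
        if (c.equals(client)) {
            try {
                synchronized (out) {            // one writer at a time per stream
                    out.writeInt(pkt.size());
                    out.write(pkt.getBytes());
                    out.flush();
                }
            } catch (IOException ex) {
                ex.printStackTrace();
            }
        }
    }
}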
Question edited following first comment.
My problem is mostly with Java socket performance, and especially reading from the target server.
The server is a simple serverSocket.accept() loop that creates a client thread for every connection from Firefox.
The main problem is that reading from the target socket's input stream blocks for enormous amounts of time.
The client thread is as follows:
//Take an HttpRequest (hc.apache.org), the raw string HTTP request, and the Firefox socket OutputStream
private void handle(HttpRequest req, String raw, OutputStream out)
{
    InputStream targetIn = null;
    OutputStream targetOut = null;
    Socket target = null;
    try {
        System.out.println("HANDLE HTTP");
        String host = req.getHeaders("Host")[0].getValue();
        URI uri = new URI(req.getRequestLine().getUri());
        int port = uri.getPort() != -1 ? uri.getPort() : 80;
        target = new Socket(host, port);

        //I have tried to play around with these but cannot seem to get a difference in performance
        target.setTcpNoDelay(true);
        // target.setReceiveBufferSize(1024 * 1024);
        // target.setSendBufferSize(1024 * 1024);

        //Get your plain old in/out streams
        targetIn = target.getInputStream();
        targetOut = target.getOutputStream();

        //Send the request to the target
        System.out.println("---------------Start response---------------");
        targetOut.write(raw.getBytes());
        System.out.println("request sent to target");

        //Same as membrane
        byte[] buffer = new byte[8 * 1024];
        int length = 0;
        try {
            while((length = targetIn.read(buffer)) > 0) {
                out.write(buffer, 0, length);
                out.flush();
            }
        } catch(Exception e) {
            e.printStackTrace();
        }
        System.out.println("closing out + target socket");

        //IOUTILS
        // long count = IOUtils.copyLarge(targetIn, out, 0L, 1048576L);
        // int count = IOUtils.copy(targetIn, out);
        // System.out.println("transfered : " + count );

        //CHANNEL COPY
        // ReadableByteChannel input = Channels.newChannel(targetIn);
        // WritableByteChannel output = Channels.newChannel(out);
        //
        // ChannelTools.fastChannelCopy(input, output);
        //
        // input.close();
        // output.close();

        //CHAR TO CHAR COPY
        // int c;
        // while ((c = targetIn.read()) != -1) {
        //     out.write(c);
        // }

        target.close();
        out.close();
        System.out.println("-------------------- end response ------------------------------");
    }
    catch (Exception e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
}
The main problem lies in finding the appropriate method to copy the target InputStream to the client (Firefox) OutputStream.
The site I am using to test this is http://www.ouest-france.fr (a news site with a load of images that makes loads of requests).
Ping time from workstation to target : 10ms
Normal Loading in iceweasel (debian firefox, firebug time) : 14 secs, 2.5MB
Loading behind this proxy: 14 minutes (the Firebug net panel is full of fake 404s and aborted requests that go back to black after a certain time; loads of requests are in blocking or waiting mode)
When executing, I load up VisualVM and launch profiling with no class filter (to see where the app is really spending its time), and it spends 99% of its time in java.net.SocketInputStream.read(byte[], int, int), which is reading from the target socket input stream.
I think I have done my homework, searching for and testing different solutions just about anywhere I could, but performance never seems to improve.
What I have already tried:
- Wrapping the input and output streams in their buffered versions: no change at all
- int-to-int copy: no change at all
- Classic byte[] array copy with variable-sized arrays: no change at all
- Fiddling around with setTcpNoDelay, setSendBufferSize and setReceiveBufferSize: could not get any change
I was thinking of trying out NIO SocketChannels, but cannot find a way to do the Socket-to-SSLSocket hijacking.
So at the moment I am a bit stuck and searching for solutions.
I have looked at the source code of open-source proxies and cannot seem to find a fundamental difference in logic, so I am completely lost with this.
I tried another test:
export http_proxy="localhost:4242"
wget debiandvd.iso
Throughput gets to 2MB/s.
Threads seem to spend 66% of their time reading from the target and 33% writing to the client.
I was thinking that maybe I have too many threads running, but a test on www.google.com, which pushes far fewer requests through, shows the same problems as www.ouest-france.fr.
With the Debian ISO test I was thinking I had too many threads running (ouest-france is around 270 requests), but the Google test (10 requests) seems to confirm that the number of threads is not the problem.
Any help will be appreciated.
The environment is Debian, Sun Java 1.6, developing with Eclipse and VisualVM.
I can provide the rest of the code as needed.
Thank you
Partial solution found:
Not a very clean solution but works.
I still have a throughput problem.
What I do is set the socket timeout to a normal value (30000 ms).
Once the first read has come through in the loop, I reset the timeout to something a lot lower (1000 ms at the moment).
That lets me wait for the server to start sending data; if 1 second passes without any new data arriving, I consider the transfer finished.
Response times are still quite slow but way better.
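In code, that workaround looks roughly like the sketch below, reusing the variable names from the handler above; the timeout values are the ones mentioned in the post.
target.setSoTimeout(30000);              // wait up to 30 s for the first data from the target
byte[] buffer = new byte[8 * 1024];
int length;
try {
    while ((length = targetIn.read(buffer)) > 0) {
        out.write(buffer, 0, length);
        out.flush();
        target.setSoTimeout(1000);       // once data flows, allow at most 1 s of silence
    }
} catch (SocketTimeoutException e) {
    // no data for 1 s: treat the response as complete
}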
I'm developing Server-Client application and I have a problem with waiting for input data on input stream.
I have a thread dedicated to reading input data. Currently it uses a while loop to hold until data is available. (N.B. the protocol is as follows: send the size of the packet, say N, as an int, then send N bytes.)
public void run(){
    //some initialization
    InputStream inStream = sock.getInputStream();
    byte[] packetData;
    //some more stuff
    while(!interrupted){
        while(inStream.available()==0);
        packetData = new byte[inStream.read()];
        while(inStream.available()<packetData.length);
        inStream.read(packetData,0,packetData.length);
        //send packet for processing in other thread
    }
}
It works, but busy-waiting in a while loop like this is IMO a bad idea. I could use Thread.sleep(X) to prevent resources from being continuously consumed by the loop, but surely there must be a better way.
Also, I cannot rely on InputStream.read to block the thread, as part of the data may be sent by the server with delays. I have tried, but it always resulted in unexpected behaviour.
I'd appreciate any ideas :)
You can use DataInputStream.readFully()
DataInputStream in = new DataInputStream(sock.getInputStream());
//some more stuff
while(!interrupted) {
    // readInt() allows lengths of up to 2 GB instead of being limited to 127 bytes.
    byte[] packetData = new byte[in.readInt()];
    in.readFully(packetData);
    //send packet for processing in other thread
}
I prefer to use blocking NIO which supports re-usable buffers.
SocketChannel sc = ...; // a connected, blocking SocketChannel
ByteBuffer bb = ByteBuffer.allocateDirect(1024 * 1024); // off-heap memory.
while (!Thread.currentThread().isInterrupted()) {
    readLength(sc, bb, 4);
    int length = bb.getInt(0);
    if (length > bb.capacity())
        bb = ByteBuffer.allocateDirect(length);
    readLength(sc, bb, length);
    bb.flip();
    // process buffer.
}

static void readLength(SocketChannel sc, ByteBuffer bb, int length) throws IOException {
    bb.clear();
    bb.limit(length);
    while (bb.remaining() > 0 && sc.read(bb) > 0);
    if (bb.remaining() > 0) throw new EOFException();
}
As UmNyobe said, available() is meant to be used if you don't want to block, since the default behaviour of read() is to block.
Just use the normal read to read whatever is available, but only send the packet for processing in the other thread once you have packetData.length bytes in your buffer...
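A rough sketch of that idea as a helper method, keeping the single-byte length prefix from the original loop; readPacket is a hypothetical name, not something from the question:
// Relies on read() blocking instead of spinning on available();
// throws java.io.EOFException if the stream ends mid-packet.
private byte[] readPacket(InputStream inStream) throws IOException {
    int expected = inStream.read();          // blocks until the length byte arrives
    if (expected == -1) throw new EOFException("stream closed");
    byte[] packetData = new byte[expected];
    int filled = 0;
    while (filled < expected) {
        int n = inStream.read(packetData, filled, expected - filled);
        if (n == -1) throw new EOFException("stream closed before the packet was complete");
        filled += n;
    }
    return packetData;                       // hand the complete packet to the processing thread
}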