I'm creating a server application in Java, but when a client connects to the server and opens a stream, the connection is lost as soon as that stream comes to an end. What I need is to keep the connection alive even after the stream has ended. Here is a code example to better explain what I mean:
diSTR = new DataInputStream(Conexao.getInputStream());
doSTR = new DataOutputStream(Conexao.getOutputStream());
conectado = true;
while (diSTR.available() > 0)
{
    byte[] buffer = new byte[size];
    diSTR.readFully(buffer);
    String str = new String(buffer, "UTF-8");
    log(str);
}
So when diSTR.available() returns 0, the method returns and the connection is over. How can I solve this problem?
The solution is to NOT use available().
That method tells you how many bytes are available to read right now without blocking. If you use this to decide that "the connection is over", then you will get a premature end whenever the other end or the network cannot keep up with the rate at which you can read and process the data. Even if the other end can keep up, a brief network hiccup is all it takes for the reader to catch up with the data that has arrived so far, and the connection is then "over" ... according to your criterion.
The correct way to do this is to just read on the input stream until the read call returns -1. That means "end of stream" and indicates that the other end has closed, and there won't be any more data.
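For example, a minimal sketch of that idea, reusing diSTR, log() and conectado from the question (the per-chunk UTF-8 decoding is an assumption that a multi-byte character is never split across two reads):
byte[] buffer = new byte[4096];
int count;
// read() blocks until some data arrives and returns -1 only once the client
// has closed its end, so this loop keeps the connection alive between messages.
while ((count = diSTR.read(buffer)) != -1) {
    log(new String(buffer, 0, count, java.nio.charset.StandardCharsets.UTF_8));
}
conectado = false; // only now has the connection really ended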
You should probably use the java.net package; here's the documentation for socket connections:
http://docs.oracle.com/javase/tutorial/networking/sockets/
You are misusing InputStream.available(). The available() call only tells you how many bytes you can read without blocking. It doesn't tell you whether you have reached the end of the stream. It is common for an input stream to have 0 bytes available to read immediately and yet still be open.
Your while loop can be reconstructed like this:
int count;
byte[] buffer = new byte[4096];
ByteArrayOutputStream baos = new ByteArrayOutputStream();
while ((count = diSTR.read(buffer)) != -1) {
    baos.write(buffer, 0, count);
}
String str = new String(baos.toByteArray(), "UTF-8");
log(str);
InputStream.read(byte[]) will read bytes and return the number of bytes read or -1 when the end of stream is reached. Each time read() returns, the contents of the buffer are written to a ByteArrayOutputStream. Once all the bytes have been read (read(byte[]) returns -1) the contents of the stream can then be interpreted as a UTF-8 encoded String.
I wrote a piece of Java code to send PDF-turned-PostScript files to a network printer via a Socket.
The files are printed in perfect shape, but every job comes with one or two extra pages with text like ps: stack underflow or error undefined offending command.
At the beginning I thought something was wrong with the PDF-to-PS conversion, so I tried two PS files from this PS Files page. But the problem is still there.
I also verified the PS files with GhostView. Now I think there may be something wrong with the code. The code does not throw any exception.
The printer, a Toshiba e-STUDIO 5005AC, supports PS3 and PCL6.
File file = new File("/path/to/my.ps");
Socket socket = null;
DataOutputStream out = null;
FileInputStream inputStream = null;
try {
    socket = new Socket(printerIP, printerPort);
    out = new DataOutputStream(socket.getOutputStream());
    DataInputStream input = new DataInputStream(socket.getInputStream());
    inputStream = new FileInputStream(file);
    byte[] buffer = new byte[8000];
    while (inputStream.read(buffer) != -1) {
        out.write(buffer);
    }
    out.flush();
} catch (IOException e) {
    e.printStackTrace();
}
You are writing the whole buffer to the output stream regardless of how much actual content there is.
That means that when you write the buffer the last time it will most probably have a bunch of content from the previous iteration at the end of the buffer.
Example
e.g. imagine you have the following file and you use a buffer of size 10:
1234567890ABCDEF
After the first inputStream.read() call, it will return 10 and the buffer will contain:
1234567890
After the second inputStream.read() call, it will return 6 and the buffer will contain:
ABCDEF7890
After the third inputStream.read() call, it will return -1 and you will stop reading.
In the end, the printer socket will receive this data:
1234567890ABCDEF7890
Here the last 7890 is an extra bit that the printer does not understand, but it can successfully interpret the first 1234567890ABCDEF.
Fix
You should consider the length returned by inputStream.read():
byte[] buffer = new byte[8000];
for (int length; (length = inputStream.read(buffer)) != -1; ) {
    out.write(buffer, 0, length);
}
Also consider using try-with-resources to avoid problems with unclosed streams.
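For instance, a sketch of the same copy loop with try-with-resources, reusing the variable names from the question, so the socket and streams are closed even when an exception is thrown:
try (Socket socket = new Socket(printerIP, printerPort);
     OutputStream out = socket.getOutputStream();
     FileInputStream inputStream = new FileInputStream(file)) {
    byte[] buffer = new byte[8000];
    for (int length; (length = inputStream.read(buffer)) != -1; ) {
        out.write(buffer, 0, length);
    }
    out.flush();
    // socket, out and inputStream are closed automatically here.
} catch (IOException e) {
    e.printStackTrace();
}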
I've read many tutorials and posts about the Java InputStream and reading data. I've established a client and server implementation, but I'm having weird issues where reading a variable-length "payload" from the client is not consistent.
What I'm trying to do is transfer up to 100 kB max in one single logical payload. I have verified that the TCP stack is not sending one massive 100 kB packet from the client. I have played about with different read forms as per previous questions about InputStream reading, but I've nearly torn my hair out trying to get it to dump the correct data.
Let's say, for example, that the client is sending a 70 kB payload.
The first observation I've made is that if I step through the code line by line from a breakpoint, it works fine; I get the exact same count in the outbound byte[]. When free-running, the byte[] will be a different size every time I run the code with practically the same payload.
Timing problems?
The second observation is that this odd behaviour occurs when the "inBuffer" size is set to 4096, for example. Setting the "inBuffer" size to 1 produces the correct behaviour, i.e. I get the correct payload size.
Please understand I don't like the way I've had to get this to work, and I'm not happy with the solution.
What experiences or problems have you had or seen that might help me fix this code to be more reliable and easier to read?
public void listenForResponses() {
    isActive = true;
    try {
        // Apparently read() doesn't return -1 on socket-based streams.
        // If big stuff comes through, TCP packets are segmented, but the InputStream
        // does something odd and doesn't return the correct raw data.
        // This is a workaround to accept variable-length payloads into one byte[] buffer.
        byte[] inBuffer = new byte[1];
        byte[] buffer = null;
        int bytesRead = 0;
        byte[] finalbuffer = new byte[0];
        boolean isMultichunk = false;
        InputStream istrm = currentSession.getInputStream();
        while ((bytesRead = istrm.read(inBuffer)) > -1 && isActive) {
            buffer = new byte[bytesRead];
            buffer = Arrays.copyOfRange(inBuffer, 0, bytesRead);
            int available = istrm.available();
            if (available < 1) {
                if (!isMultichunk) {
                    finalbuffer = buffer;
                } else {
                    finalbuffer = ConcatTools.ByteArrayConcat(finalbuffer, buffer);
                }
                notifyOfResponse(deserializePayload(finalbuffer));
                finalbuffer = new byte[0];
                isMultichunk = false;
            } else {
                if (!isMultichunk) {
                    isMultichunk = true;
                    finalbuffer = new byte[0];
                }
                finalbuffer = ConcatTools.ByteArrayConcat(finalbuffer, buffer);
            }
        }
    } catch (IOException e) {
        Logger.consoleOut("PayloadReadThread: " + e.getMessage());
        currentSession = null;
    }
}
InputStream is working as designed.
if I step through the code line by line from a breakpoint, it works fine; I get the exact same count in the outbound byte[].
That's because stepping through the code is slower, so more data arrives between reads, enough to fill your buffer.
When free-running, the byte[] will be a different size every time I run the code with practically the same payload.
That's because InputStream.read() is contracted to block until at least one byte has been transferred, or EOS or an exception occurs. See the Javadoc. There's nothing in there about filling the buffer.
The second observation is that this odd behaviour occurs when the "inBuffer" size is set to 4096, for example. Setting the "inBuffer" size to 1 produces the correct behaviour, i.e. I get the correct payload size.
That's the correct behaviour in the case of a 1 byte buffer for exactly the same reason given above. It's not the 'correct behaviour' for any other size.
NB Your copy loop is nonsense. available() has few correct uses, and this isn't one of them. It should look more like this:
while ((count = in.read(buffer)) > 0)
{
    out.write(buffer, 0, count);
}
NB (2) read() does indeed return -1 on socket-based streams, but only when the peer has shut down or closed the connection.
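The answer above doesn't show how to delimit whole logical payloads on a connection that stays open; one common approach is to length-prefix each payload so the reader knows exactly how many bytes belong to it. A minimal sketch of that idea (not part of the original answer; the class and method names are illustrative):
import java.io.*;
import java.net.Socket;

class FramedPayloads {
    // Sender: write a 4-byte length, then the payload itself.
    static void send(Socket socket, byte[] payload) throws IOException {
        DataOutputStream out = new DataOutputStream(socket.getOutputStream());
        out.writeInt(payload.length);
        out.write(payload);
        out.flush();
    }

    // Receiver: read the 4-byte length, then exactly that many bytes.
    static byte[] receive(Socket socket) throws IOException {
        DataInputStream in = new DataInputStream(socket.getInputStream());
        int length = in.readInt();          // throws EOFException when the peer closes
        byte[] payload = new byte[length];  // consider sanity-checking the length
        in.readFully(payload);              // blocks until the whole payload has arrived
        return payload;
    }
}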
I tried to send an image from one device to another device using Bluetooth. For that I took the Android Bluetooth chat application source code, and it works fine when I send a String. But if I send the image as a byte array, the while loop does not break, and EOF is never reached when reading from the InputStream.
Model 1:
This receives the image properly, but here I need to pass the resultByteArray length, and I don't know that length. How can I know the length of the byte array in the InputStream? inputstream.available() returns 0.
while (true)
{
    byte[] resultByteArray = new byte[150827];
    DataInputStream dataInputStream = new DataInputStream(mmInStream);
    dataInputStream.readFully(resultByteArray);
    mHandler.obtainMessage(AppConstants.MESSAGE_READ, dataInputStream.available(), -1, resultByteArray).sendToTarget();
}
Model 2:
In this code, the while loop does not break:
ByteArrayOutputStream bao = new ByteArrayOutputStream();
byte[] resultByteArray = new byte[1024];
int bytesRead;
while ((bytesRead = mmInStream.read(resultByteArray)) != -1) {
    Log.i("BTTest1", "bytesRead=>" + bytesRead);
    bao.write(resultByteArray, 0, bytesRead);
}
final byte[] data = bao.toByteArray();
I also tried byte[] resultByteArray = IOUtils.toByteArray(mmInStream); but that does not work either. I followed the Bluetooth chat sample.
How to solve this issue?
As noted in the comment, the server needs to put the length of the image in front of the actual image data, and that length field should have a fixed size, e.g. 4 bytes.
Then, in the while loop, you first read 4 bytes to figure out the length of the image. After that, read exactly that many bytes from the input stream. That is the actual image.
The while loop doesn't need to break while the connection is alive; it simply waits for the next image in the same loop. readFully() is a blocking call, so the thread will sleep until enough data has arrived on the input stream.
You can then expect another 4 bytes right after the previous image data as the start of the next image.
DataInputStream in = new DataInputStream(mmInStream);
while (true) {
    try {
        // Get the length first. readFully() blocks until all 4 bytes have arrived.
        byte[] bytesLengthOfImage = new byte[4];
        in.readFully(bytesLengthOfImage);
        int lengthOfImage;
        {
            ByteBuffer buffer = ByteBuffer.wrap(bytesLengthOfImage);
            buffer.order(ByteOrder.BIG_ENDIAN); // Assume it is network byte order.
            lengthOfImage = buffer.getInt();
        }
        // Read exactly that many bytes; a plain read() might return fewer.
        byte[] actualImage = new byte[lengthOfImage]; // Mind the memory allocation.
        in.readFully(actualImage);
        mHandler.obtainMessage(AppConstants.MESSAGE_READ, lengthOfImage, -1, actualImage).sendToTarget();
    } catch (Exception e) {
        if (e instanceof IOException) {
            // If the connection is closed, break the loop.
            break;
        } else {
            // Handle errors
            break;
        }
    }
}
This is a kind of simplified communication protocol. There is an open-source framework for easy protocol implementation, called NFCommunicator.
https://github.com/Neofect/NFCommunicator
It might be over-specified for a simple project, but it is worth a look.
I'm trying to make a simple transfer of a .txt text file from client to server, and no matter how much I think I know and understand what I'm doing and what exactly is happening, I always get it wrong. I could really use some help here, please.
So, this is the code, two functions that transfer a .txt file from one side to the other:
Client side:
private void sendFileToServer(String file_name) throws IOException {
    File file = new File(file_name);
    int file_size = (int) file.length();
    byte[] bytes = new byte[file_size];
    FileInputStream os = null;
    try {
        os = new FileInputStream(file);
    } catch (FileNotFoundException e) {
        System.out.println("The file " + file + " wasn't found");
        return;
    }
    BufferedInputStream bos = new BufferedInputStream(os);
    bos.read(bytes);
    output.write(bytes, 0, bytes.length);
    /* 'output' is a PrintStream object, that holds the output stream
     * for the client's socket, meaning:
     * output=new PrintStream(client_socket.getOutputStream()); */
    output.flush();
    bos.close();
}
This will buffer everything into the BufferedInputStream, copy it to bytes, and then send it to the other side, the server.
Server side:
public static String receiveFileFromClient(Client client) throws IOException {
    int buffer_size = client.getSocket().getReceiveBufferSize();
    byte[] bytes = new byte[buffer_size];
    FileOutputStream fos = new FileOutputStream("transfered_file.txt");
    BufferedOutputStream bos = new BufferedOutputStream(fos);
    DataInputStream in = client.getInputStream();
    int count;
    System.out.println("this will be printed out");
    while ((count = in.read(bytes)) > 0) { // execution is blocked here!
        bos.write(bytes, 0, count);
    }
    System.out.println("this will not be printed");
    bos.flush();
    bos.close();
    return "transfered_file.txt";
}
My intention here is to keep reading bytes from the client (the while loop) until the other side (the client) has no more bytes to send; that is where in.read(bytes) should return 0 and the loop should break, but that never happens. It just gets blocked, even though all the bytes from the client's input stream were successfully transferred!
Why doesn't the loop break?
From the Javadoc:
If no byte is available because the stream is at end of file, the value -1 is returned
Isn't the last byte considered "end of file"? I made sure that the function sendFileToServer properly writes the entire file to the output instance (the PrintStream object) and returns.
Any help would be appreciated.
As I understand it, the read() method will block until either it reads some bytes or the socket is closed. So there is nothing to tell read() that it should stop reading, because it does not "understand" the file; it's just some data.
A solution...
You could determine the number of bytes the client will send (on the client side) and then send that NUMBER over to the server. Now the server can process this number and knows how many bytes to read before the file is complete. So you can break the loop (or not use a loop at all) when the transfer is completed; see the sketch after this answer.
You could also process the data the server receives and let the client send some "flag" after the file is complete, so the server knows when it is done. But this is more difficult, because you have to find something that is not contained in the file's byte data.
The read() method will block waiting for further input if you don't close the stream. So either close the stream, or remove the loop and read only the number of bytes you were told to expect from the client.
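A sketch of the first suggestion, reusing names from the question (client_socket, file, client.getInputStream()); it assumes the client wraps the socket's output stream in a DataOutputStream instead of a PrintStream, and the 8 kB buffer size is illustrative:
// Client side: send the file length first, then the bytes.
DataOutputStream out = new DataOutputStream(client_socket.getOutputStream());
out.writeLong(file.length());
Files.copy(file.toPath(), out);   // java.nio.file.Files
out.flush();

// Server side: read the length, then copy exactly that many bytes into the file.
DataInputStream in = client.getInputStream();
long remaining = in.readLong();
try (FileOutputStream fos = new FileOutputStream("transfered_file.txt")) {
    byte[] buffer = new byte[8192];
    while (remaining > 0) {
        int count = in.read(buffer, 0, (int) Math.min(buffer.length, remaining));
        if (count == -1) {
            throw new EOFException("connection closed before the whole file arrived");
        }
        fos.write(buffer, 0, count);
        remaining -= count;
    }
}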
I am using the Java comm library to try to accomplish a simple read/write to a serial port. I am able to successfully write to the port and catch the return input from the input stream, but when I read from the input stream I am only able to read 1 byte (when I know there should be 11 returned).
I can write to the port successfully using PuTTY and I am receiving the correct return String there. I am pretty new to Java, buffers and serial I/O, and think maybe there is some obvious syntax or understanding of how data is returned to the InputStream that I'm missing. Could someone help me? Thanks!
case SerialPortEvent.DATA_AVAILABLE:
    System.out.println("Data available..");
    byte[] readBuffer = new byte[11];
    try {
        System.out.println("We trying here.");
        while (inputStream.available() > 0) {
            int numBytes = inputStream.read(readBuffer, 1, 11);
            System.out.println("Number of bytes read:" + numBytes);
        }
        System.out.println(new String(readBuffer));
    } catch (IOException e) {
        System.out.println(e);
    }
    break;
}
This code returns the following output:
Data available..
We trying here.
Number of bytes read:1
U
As the documentation states:
Reads up to len bytes of data from the input stream into an array of bytes. An attempt is made to read as many as len bytes, but a smaller number may be read.
This behavior is perfectly legal. I would also expect that a SerialPortEvent.DATA_AVAILABLE does not guarantee that all data is available. It's potentially just 1 byte and you get that event 11 times.
Things you can try:
1) Keep reading until you have all your bytes. E.g. wrap your InputStream in a DataInputStream and use readFully; that's the simplest way around the behavior of the regular read method. It will fail with an EOFException if the stream signals end of stream before the requested number of bytes has been read.
DataInputStream din = new DataInputStream(in);
byte[] buffer = new byte[11];
din.readFully(buffer);
// either results in an exception or 11 bytes read
2) Read the bytes as they come and append them to some buffer. Once you have all of them, take the contents of the buffer as the result.
private StringBuilder readBuffer = new StringBuilder();

public void handleDataAvailable(InputStream in) throws IOException {
    int value;
    // reading just one at a time
    while ((value = in.read()) != -1) {
        readBuffer.append((char) value);
    }
}
Some notes:
inputStream.read(readBuffer, 1, 11)
Indices start at 0 and if you want to read 11 bytes into that buffer you have to specify
inputStream.read(readBuffer, 0, 11)
It would otherwise try to put the 11th byte at the 12th index, which will not work.
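Putting both notes together, a hypothetical handler sketch (the field and method names are mine, not from the original code) that reads at offset 0 and keeps accumulating across DATA_AVAILABLE events until all 11 expected bytes have arrived:
private final byte[] readBuffer = new byte[11];
private int filled = 0; // how many of the 11 expected bytes have arrived so far

// Called from the SerialPortEvent.DATA_AVAILABLE case.
private void handleDataAvailable(InputStream inputStream) throws IOException {
    // Read into the unfilled part of the buffer, starting at offset 'filled'.
    int numBytes = inputStream.read(readBuffer, filled, readBuffer.length - filled);
    if (numBytes > 0) {
        filled += numBytes;
    }
    if (filled == readBuffer.length) {
        System.out.println(new String(readBuffer)); // complete 11-byte response
        filled = 0; // ready for the next response
    }
}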