Env: Windows 7, Java 1.8, default OS encodings
I'm trying to read a byte stream of currency market data from a socket to a file, and then play that file back to simulate the market over a fixed period; however, the file has a few malformed bytes, seemingly at random.
Below, I outline the problem with metacode, where the notation "..." indicates skipped irrelevant or boilerplate code.
Bytes are coming over the socket, and I'm reading them with a non-blocking NIO selector, then writing to disk via a BufferedOutputStream:
class SocketReadDiskWrite implements Runnable{
...
blobWriter = new BufferedOutputStream(new FileOutputStream(blobFileName));
sc = SocketChannel.open(addr);
sc.configureBlocking(false);
And then in the run() method, when the selector deems the socket readable,
public void run(){
...
while(keyIterator.hasNext())
{
SelectionKey key = keyIterator.next();
if (key.isReadable()) {
int bytesRead = sc.read(readBuffer); // read from the socket into the buffer
if (bytesRead == -1)
{
connected = false;
logger.warn("no bytes to read");
break;
}
readBuffer.flip();
// Write bytes from socket to file, then rewind and process data
while (readBuffer.hasRemaining()){
byte[] b = new byte[readBuffer.remaining()];
readBuffer.get(b);
blobWriter.write(b);
}
readBuffer.rewind();
processData(readBuffer); //<-- Further processing
...
}
The processData method works fine when reading from a live stream of the market. For example, maybe processData reads a list of currencies and prints them, and the output is,
`EUR.USD.SPOT, EUR.AUD.SPOT, ..<thousands more>.. AUD.CAD.SPOT`
However, if I instead try to play back the captured byte stream (i.e. read in the contents of the file that was just created), on occasion a corrupt symbol appears,
`EUR.USD.SPOT, EUR.AUD.SPOT, ..<thousands more>.. AUD.C##$###X`
Looking at the file in Notepad++, indeed I find incorrect bytes (blue = correct symbols, red = malformed).
Subsequently, when the application points to the byte-file reader (instead of the live market), the app fails at exactly these lines, throwing errors like Invalid symbol: EUR.-XD##O##$.
For what it's worth, this is how I play back the file by reading it from disk and streaming it to the socket:
class FilePlayer implements Runnable {
FilePlayer(Socket clientSocket) throws IOException {
clientWriter = clientSocket.getOutputStream();
blobReader = new FileInputStream(blobFileName);
byte[] dataArray = new byte[1024]; //<-- Store 1024 bytes data at a time
...
}
public void run() {
while(true){
blobReader.read(dataArray); //<-- Read 1024 bytes of data from disk
clientWriter.write(dataArray); //<-- Write 1024 bytes of data to socket
}
}
Note: I recently opened a similar thread, but that was in regard to FileChannels, which were actually not the culprit. I figured that discussion had deviated enough to warrant a fresh post.
Related
I wrote a piece of Java code to send PostScript files (converted from PDF) to a network printer via a Socket.
The files were printed in perfect shape, but every job comes with one or two extra pages with text like ps: stack underflow or error: undefined, offending command.
At the beginning I thought something was wrong with the PDF2PS process, so I tried two sample PS files from this PS Files page. But the problem was still there.
I also verified the PS files with GhostView. Now I think there may be something wrong with the code. The code does not throw any exception.
The printer, a Toshiba e-STUDIO 5005AC, supports PS3 and PCL6.
File file = new File("/path/to/my.ps");
Socket socket = null;
DataOutputStream out = null;
FileInputStream inputStream = null;
try {
socket = new Socket(printerIP, printerPort);
out = new DataOutputStream(socket.getOutputStream());
DataInputStream input = new DataInputStream(socket.getInputStream());
inputStream = new FileInputStream(file);
byte[] buffer = new byte[8000];
while (inputStream.read(buffer) != -1) {
out.write(buffer);
}
out.flush();
} catch (IOException e) {
e.printStackTrace();
}
You are writing the whole buffer to the output stream regardless of how much actual content there is.
That means that when you write the buffer the last time it will most probably have a bunch of content from the previous iteration at the end of the buffer.
Example
e.g. imagine you have the following file and you use a buffer of size 10:
1234567890ABCDEF
After the first inputStream.read() call, it will return 10 and the buffer will contain:
1234567890
After the second inputStream.read() call, it will return 6 and the buffer will contain:
ABCDEF7890
After the third inputStream.read() call, it will return -1 and you will stop reading.
In the end, the printer socket will have received this data:
1234567890ABCDEF7890
Here the last 7890 is an extra bit that the printer does not understand, but it can successfully interpret the first 1234567890ABCDEF.
Fix
You should consider the length returned by inputStream.read():
byte[] buffer = new byte[8000];
for (int length; (length = inputStream.read(buffer)) != -1; ){
out.write(buffer, 0, length);
}
Also consider using try-with-resources to avoid problems with unclosed streams.
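For example, a minimal sketch of the same send loop using try-with-resources, reusing the question's variable names (the unused DataInputStream is left out); every resource is closed automatically, even if an exception is thrown:
File file = new File("/path/to/my.ps");
try (Socket socket = new Socket(printerIP, printerPort);
     DataOutputStream out = new DataOutputStream(socket.getOutputStream());
     FileInputStream inputStream = new FileInputStream(file)) {
    byte[] buffer = new byte[8000];
    for (int length; (length = inputStream.read(buffer)) != -1; ) {
        out.write(buffer, 0, length); // write only the bytes actually read
    }
    out.flush();
} catch (IOException e) {
    e.printStackTrace();
}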
I've read many tutorials and posts about the Java InputStream and reading data. I've established a client and server implementation, but I'm having weird issues where reading a variable-length "payload" from the client is not consistent.
What I'm trying to do is transfer up to 100 kB max in one single logical payload. Now, I have verified that the TCP stack is not sending one massive 100 kB packet from the client. I have played about with different read forms as per previous questions about InputStream reading, but I've nearly torn my hair out trying to get it to dump the correct data.
Let's say, for example, that the client is sending a 70 kB payload.
Now, the first observation I've noticed is that if I step through the code line by line from a breakpoint, it works fine: I get the exact same count in the outbound byte[]. When free-running, the byte[] will be a different size every time I run the code with practically the same payload.
Timing problems?
The second observation is that this odd behaviour occurs when the "inbuffer" size is set to 4096, for example. Setting the "inbuffer" size to 1 produces the correct behaviour, i.e. I get the correct payload size.
Please understand I don't like the way I've had to get this to work, and I'm not happy with the solution.
What experiences or problems have you had/seen that might help me make this code more reliable and easier to read?
public void listenForResponses() {
isActive = true;
try {
// apparently read() doesn't return -1 on socket-based streams
// if big stuff comes through, TCP packets are segmented, but the InputStream
// does something odd and doesn't return the correct raw data.
// this is a workaround to accept variable-length payloads into one byte[] buffer
byte[] inBuffer = new byte[1];
byte[] buffer = null;
int bytesRead = 0;
byte[] finalbuffer = new byte[0];
boolean isMultichunk = false;
InputStream istrm = currentSession.getInputStream();
while ((bytesRead = istrm.read(inBuffer)) > -1 && isActive) {
buffer = new byte[bytesRead];
buffer = Arrays.copyOfRange(inBuffer, 0, bytesRead);
int available = istrm.available();
if(available < 1) {
if(!isMultichunk) {
finalbuffer = buffer;
}
else {
finalbuffer = ConcatTools.ByteArrayConcat(finalbuffer,buffer);
}
notifyOfResponse(deserializePayload(finalbuffer));
finalbuffer = new byte[0];
isMultichunk = false;
}
else {
if(!isMultichunk) {
isMultichunk = true;
finalbuffer = new byte[0];
}
finalbuffer = ConcatTools.ByteArrayConcat(finalbuffer,buffer);
}
}
} catch (IOException e) {
Logger.consoleOut("PayloadReadThread: " + e.getMessage());
currentSession = null;
}
}
InputStream is working as designed.
if I step through the code line by line from a breakpoint, it works fine: I get the exact same count in the outbound byte[].
That's because stepping through the code is slower, so more data arrives between reads, enough to fill your buffer.
When free-running, the byte[] will be a different size every time I run the code with practically the same payload.
That's because InputStream.read() is contracted to block until at least one byte has been transferred, or EOS or an exception occurs. See the Javadoc. There's nothing in there about filling the buffer.
The second observation is that this odd behaviour occurs when the "inbuffer" size is set to 4096, for example. Setting the "inbuffer" size to 1 produces the correct behaviour, i.e. I get the correct payload size.
That's the correct behaviour in the case of a 1 byte buffer for exactly the same reason given above. It's not the 'correct behaviour' for any other size.
NB Your copy loop is nonsense. available() has few correct uses, and this isn't one of them.
while ((count = in.read(buffer)) > 0)
{
out.write(buffer, 0, count);
}
NB (2) read() does indeed return -1 on socket-based streams, but only when the peer has shutdown or closed the connection.
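If the real requirement is to know where one logical payload ends and the next begins on the same connection, the usual approach is to frame each payload yourself, for example with a length prefix. A minimal sketch under that assumption, reusing names from the question where possible (the sender-side socket and payload array are placeholders):
// Sender: write the payload length first, then the payload bytes
DataOutputStream out = new DataOutputStream(socket.getOutputStream());
out.writeInt(payload.length);
out.write(payload);
out.flush();

// Receiver: read the length, then block until exactly that many bytes have arrived
DataInputStream in = new DataInputStream(currentSession.getInputStream());
int length = in.readInt();
byte[] data = new byte[length];
in.readFully(data); // loops internally over read() until the buffer is full
notifyOfResponse(deserializePayload(data));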
I tried to send an image from one device to another using Bluetooth. For that I took the Android Bluetooth chat application source code, and it works fine when I send a String. But if I send an image as a byte array, the while loop does not break, i.e. EOF is not reached when reading from the InputStream.
Model:1
It receives the image properly, but here I need to pass the resultByteArray length, and I don't know the length. How can I know the length of the byte array in the InputStream? inputStream.available() returns 0.
while(true)
{
byte[] resultByteArray = new byte[150827];
DataInputStream dataInputStream = new DataInputStream(mmInStream);
dataInputStream.readFully(resultByteArray);
mHandler.obtainMessage(AppConstants.MESSAGE_READ, dataInputStream.available(),-1, resultByteArray).sendToTarget();
}
Model:2
In this code the while loop does not break:
ByteArrayOutputStream bao = new ByteArrayOutputStream();
byte[] resultByteArray = new byte[1024];
int bytesRead;
while ((bytesRead = mmInStream.read(resultByteArray)) != -1) {
Log.i("BTTest1", "bytesRead=>"+bytesRead);
bao.write(resultByteArray,0,bytesRead);
}
final byte[] data = bao.toByteArray();
I also tried byte[] resultByteArray = IOUtils.toByteArray(mmInStream); but that does not work either. I followed the Bluetooth chat sample.
How to solve this issue?
As noted in the comment, the server needs to put the length of the image in front of the actual image data, and that length field should have a fixed size, e.g. 4 bytes.
Then in the while loop, you need to read those 4 bytes first to figure out the length of the image. After that, read exactly that many bytes from the input stream; that is the actual image.
The while loop doesn't need to break while the connection is alive. It actually needs to wait for the next image in the same while loop. InputStream.read() is a blocking call, and the thread will sleep until enough data arrives on the input stream.
You can then expect another 4 bytes right after the previous image data as the start of the next image. A sketch of the matching sender side follows the code below.
// Wrap the stream once so readFully() can be used: it blocks until the
// requested number of bytes has arrived, whereas a single read() may return early.
DataInputStream din = new DataInputStream(mmInStream);
while (true) {
    try {
        // Get the length first
        byte[] bytesLengthOfImage = new byte[4];
        din.readFully(bytesLengthOfImage);
        int lengthOfImage = 0;
        {
            ByteBuffer buffer = ByteBuffer.wrap(bytesLengthOfImage);
            buffer.order(ByteOrder.BIG_ENDIAN); // Assume it is network byte order.
            lengthOfImage = buffer.getInt();
        }
        byte[] actualImage = new byte[lengthOfImage]; // Mind the memory allocation.
        din.readFully(actualImage); // read until the whole image has arrived
        mHandler.obtainMessage(AppConstants.MESSAGE_READ, lengthOfImage, -1, actualImage).sendToTarget();
    } catch (Exception e) {
        if (e instanceof IOException) {
            // If the connection is closed (EOFException is an IOException), break the loop.
            break;
        } else {
            // Handle errors
            break;
        }
    }
}
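For reference, a sketch of the matching sender side under the same convention (mmOutStream and imageBytes are assumed names for the sender's Bluetooth OutputStream and the encoded image). DataOutputStream.writeInt() writes 4 bytes in big-endian order, which matches the receiver's ByteBuffer setup above.
DataOutputStream dout = new DataOutputStream(mmOutStream);
dout.writeInt(imageBytes.length); // 4-byte big-endian length prefix
dout.write(imageBytes);           // the image payload itself
dout.flush();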
This is a kind of simplified communication protocol. There is an open source framework for easy protocol implementation, called NFCommunicator.
https://github.com/Neofect/NFCommunicator
It might be an over-specification for a simple project, but it is worth a look.
I'm trying to make a simple transfer of a .txt text file from client to server, and no matter how much I think I know and understand what I'm doing and what exactly is happening, I always get it wrong. I could really use some help here, please.
So, this is the code: two functions that transfer a .txt file from one side to the other.
Client side:
private void sendFileToServer(String file_name) throws IOException {
File file=new File(file_name);
int file_size=(int)file.length();
byte[] bytes=new byte[file_size];
FileInputStream os=null;
try {
os = new FileInputStream(file);
} catch (FileNotFoundException e) {
System.out.println("The file "+file+" wasn't found");
return;
}
BufferedInputStream bos=new BufferedInputStream(os);
bos.read(bytes);
output.write(bytes,0,bytes.length);
/* 'output' is a PrintStream object, that holds the output stream
* for the client's socket, meaning:
* output=new PrintStream(client_socket.getOutputStream()); */
output.flush();
bos.close();
}
This buffers the file through the BufferedInputStream, copies it into bytes, and then sends it to the other side, the server.
Server side:
public static String receiveFileFromClient(Client client) throws IOException {
int buffer_size=client.getSocket().getReceiveBufferSize();
byte[] bytes=new byte[buffer_size];
FileOutputStream fos=new FileOutputStream("transfered_file.txt");
BufferedOutputStream bos=new BufferedOutputStream(fos);
DataInputStream in=client.getInputStream();
int count;
System.out.println("this will be printed out");
while ((count=in.read(bytes))>0) { // execution is blocked here!
bos.write(bytes, 0, count);
}
System.out.println("this will not be printed");
bos.flush();
bos.close();
return "transfered_file.txt";
}
My intention here is to keep reading bytes from the client (the while loop) until the other side (the client) has no more bytes to send, at which point in.read(bytes) should return 0 and the loop should break. But this never happens; it just gets blocked, even though all the bytes from the client's input stream were successfully transferred!
Why doesn't the loop break?
From Javadoc:
If no byte is available because the stream is at end of file, the value -1 is returned
Isn't the last byte considered "end of file"? I made sure that the function sendFileToServer properly writes the entire file to the output instance (the PrintStream object) and returns.
Any help would be appreciated.
As I understand it, the read() method will block until either it reads some bytes OR the socket is closed. So there is nothing to tell read() that it should stop reading, because it does not "understand" the file; it's just some data.
A solution...
You could determine the number of bytes the client will send (on the client side) and then send that NUMBER over to the server. Now the server can process this number and knows how many bytes to read before the file is complete. So you can break the loop (or not even use a loop) when the transfer is completed; see the sketch below.
You could also process the data the server receives and let the client send some "flag" after the file is complete, so the server knows when it is done. But this is more difficult, because you have to find something that is not contained in the file's byte data.
The read() method will block waiting for further input if you don't close the stream. So either close the stream, or remove the loop and read only the number of bytes you receive from the client.
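A minimal sketch of that first approach, where the client writes through a DataOutputStream instead of the PrintStream and the other names follow the question's code:
// Client side: send the file size first, then the file bytes
DataOutputStream out = new DataOutputStream(client_socket.getOutputStream());
out.writeLong(file.length());
out.write(bytes, 0, bytes.length);
out.flush();

// Server side: read the size, then stop after exactly that many bytes
long remaining = in.readLong(); // 'in' is the existing DataInputStream
int count;
while (remaining > 0
        && (count = in.read(bytes, 0, (int) Math.min(bytes.length, remaining))) > 0) {
    bos.write(bytes, 0, count);
    remaining -= count;
}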
I'm trying to write an upload system for a fairly complex Java server. I have reproduced the error in the two small programs listed below. Basically, I am using an ObjectOutputStream/ObjectInputStream to communicate via the client/server. This is a requirement; I have thousands of lines of code working perfectly fine around this ObjectOutputStream/ObjectInputStream setup, so I must be able to still use these streams after an upload is complete.
To access the files (the one being read on the client and the one being written on the server), FileInputStream and FileOutputStream are used. My client appears to be functioning perfectly; it reads in the file and sends a different byte array each iteration (it reads in 1MB at a time, so large files can be handled without overflowing the heap). However, on the server it appears as though the byte array is ALWAYS just the first array sent (the first 1MB of the file). This does not conform to my understanding of ObjectInputStream/ObjectOutputStream. I am seeking either a working solution to this issue or enough education on the matter to form my own solution.
Below is the client code:
import java.net.*;
import java.io.*;
public class stupidClient
{
public static void main(String[] args)
{
new stupidClient();
}
public stupidClient()
{
try
{
Socket s = new Socket("127.0.0.1",2013);//connect
ObjectOutputStream output = new ObjectOutputStream(s.getOutputStream());//init stream
//file to be uploaded
File file = new File("C:\\Work\\radio\\upload\\(Op. 9) Nocturne No. 1 in Bb Minor.mp3");
long fileSize = file.length();
output.writeObject(file.getName() + "|" + fileSize);//send name and size to server
FileInputStream fis = new FileInputStream(file);//open file
byte[] buffer = new byte[1024*1024];//prepare 1MB buffer
int retVal = fis.read(buffer);//grab first MB of file
int counter = 0;//used to track progress through upload
while (retVal!=-1)//until EOF is reached
{
System.out.println(Math.round(100*counter/fileSize)+"%");//show current progress to system.out
counter += retVal;//track progress
output.writeObject("UPACK "+retVal);//alert server upload packet is incoming, with size of packet read
System.out.println(""+buffer[0]+" "+buffer[1]+" "+buffer[2]);//preview first 3 bytes being sent
output.writeObject(buffer);//send bytes
output.flush();//make sure all bytes read are gone
retVal = fis.read(buffer);//get next MB of file
}
System.out.println(Math.round(100*counter/fileSize)+"%");//show progress at end of file
output.writeObject("UPLOAD_COMPLETE");//let server know protocol is finished
output.close();
}
catch (Exception e)
{
e.printStackTrace();
}
}
}
The following is my server code:
import java.net.*;
import java.io.*;
public class stupidServer
{
Socket s;
ServerSocket server;
public static void main(String[] args)
{
new stupidServer();
}
public stupidServer()
{
try
{
//establish connection and stream
server = new ServerSocket(2013);
s = server.accept();
ObjectInputStream input = new ObjectInputStream(s.getInputStream());
String[] args = ((String)input.readObject()).split("\\|");//args[0] will be file name, args[1] will be file size
String fileName = args[0];
long filesize = Long.parseLong(args[1]);
String upack = (String)input.readObject();//get upload packet(string reading UPACK [bytes read])
FileOutputStream outStream = new FileOutputStream("C:\\"+fileName.trim());
while (!upack.equalsIgnoreCase("UPLOAD_COMPLETE"))//until protocol is complete
{
int bytes = Integer.parseInt(upack.split(" ")[1]);//get number of bytes being written
byte[] buffer = new byte[bytes];
buffer = (byte[])input.readObject();//get bytes sent from client
outStream.write(buffer,0,bytes);//go ahead and write them bad boys to file
System.out.println(buffer[0]+" "+buffer[1]+" "+buffer[2]);//peek at first 3 bytes received
upack = (String)input.readObject();//get next 'packet' - either another UPACK or a UPLOAD_COMPLETE
}
outStream.flush();
outStream.close();//make sure all bytes are in file
input.close();//sign off
}
catch (Exception e)
{
e.printStackTrace();
}
}
}
As always, many thanks for your time!
Your immediate problem is that ObjectOutputStream uses an ID mechanism to avoid sending the same object over the stream multiple times. The client will send this ID for the second and subsequent writes of buffer, and the server will use its cached value.
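To see that effect in isolation, here is a tiny standalone sketch (the file name and values are arbitrary, and the snippet assumes it runs inside a method declaring throws IOException, ClassNotFoundException): the second writeObject() of the same array emits only a back-reference, so the reader gets the original contents back even though the array was modified in between.
byte[] buf = {1, 2, 3};
try (ObjectOutputStream oos = new ObjectOutputStream(new FileOutputStream("demo.bin"))) {
    oos.writeObject(buf);  // writes the actual contents {1, 2, 3}
    buf[0] = 99;
    oos.writeObject(buf);  // writes only a handle referring to the first object
}
try (ObjectInputStream ois = new ObjectInputStream(new FileInputStream("demo.bin"))) {
    byte[] first  = (byte[]) ois.readObject();
    byte[] second = (byte[]) ois.readObject();
    System.out.println(second[0]);        // prints 1, not 99 (the cached contents)
    System.out.println(first == second);  // true: both reads resolve to the same array
}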
The solution to this immediate problem is to add a call to reset():
output.writeObject(buffer);//send bytes
output.reset(); // force buffer to be fully written on next pass through loop
That aside, you're misusing object streams by layering your own protocol on top of them. For example, writing the filename and filesize as a single string delimited by "|"; just write them as two separate values. Ditto for the number of bytes on each write.
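As an illustration, a sketch of what the framing could look like without the '|'-delimited strings, using the DataOutput/DataInput methods the object streams already expose (a -1 chunk length marks the end of the upload; variable names follow the question's code):
// Client
output.writeUTF(file.getName());      // file name as its own value
output.writeLong(fileSize);           // file size as its own value
int retVal;
while ((retVal = fis.read(buffer)) != -1) {
    output.writeInt(retVal);          // length of this chunk
    output.write(buffer, 0, retVal);  // raw bytes; no object caching involved
}
output.writeInt(-1);                  // end-of-upload marker
output.flush();

// Server
String fileName = input.readUTF();
long filesize = input.readLong();
int bytes;
while ((bytes = input.readInt()) != -1) {
    byte[] chunk = new byte[bytes];
    input.readFully(chunk);
    outStream.write(chunk, 0, bytes);
}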