I have written a socket listener in Java that listens on two ports and operates on the data it receives. The scenario is this: when both the listener and the device that transmits data are up and running, the listener receives the data one packet at a time (each packet starts with "#S" and ends with a "."). When the listener is down or not listening, the device stores the data in its local memory and, as soon as the listener comes back up, sends all of the stored data appended together, like:
"#S ...DATA...[.]#S...DATA...[.]..."
I have implemented this so that whatever data the listener receives on either port, it converts to hex form and then operates on that hex representation. The hex form of "#S" is "2353" and the hex form of "." is "2e". The code for handling the hex-converted input data is as follows; hexconverted1 is a String containing the hex-converted form of the whole input that arrives on a port.
String[] store = hexconverted1.split("2353");
for (int m = 0; m < store.length; m++) {
    store[m] = "2353" + store[m];
}
PrintWriter out2 = new PrintWriter(new BufferedWriter(
        new FileWriter("C:/Listener/array.bin", true)));
for (int iter = 0; iter < store.length; iter++) {
    out2.println(store[iter]);
}
out2.close();
What I am trying to accomplish with the above code is this: whenever a bunch of data arrives, I scan through it and store every individual packet from the bunch in a string array, so that the operations I want to carry out on the hex-converted data become easier. The problem is that when I write the contents of the array to a BIN file, the output varies for the same input. When I send a bunched transmission of 280 data packets, appended one after the other, the array sometimes contains 180 entries, at other times 270. For smaller bunch sizes I get the desired results and the size of the store array is as expected.
I'm pretty clueless about what's going on, and any pointers would be of great help.
To make matters more lucid, the data I get on the ports is mostly unreadable; often the only readable parts are the starting bytes "#S" and the end byte ".". So I'm using a combination of BufferedInputStream and InputStream to read the incoming data and convert it into hex format, and I'm quite sure that the conversion to hex is coming out all right.
I'm using a combination of BufferedInputStream and InputStream to read the incoming data
Clutching at straws here. If you read from a Stream using both InputStream and BufferedInputStream methods, you'll get into difficulty:
InputStream is = ...
BufferedInputStream bis = new BufferedInputStream(is);
// This is OK
int b = bis.read();
...
// Reading the InputStream directly at this point is liable to
// give unpredictable results. It is likely that some bytes still
// remain in "bis"'s buffer, and a read on "is" will not return them.
int b2 = is.read();
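For the listener in the question, that means wrapping the socket stream once and doing all reads through that single wrapper. A minimal sketch of the safe pattern (the buffer size and the read-until-close detail are assumptions, not taken from the original code):

import java.io.BufferedInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

// Read an entire burst through ONE stream wrapper; never read the
// underlying InputStream directly once it has been wrapped.
static byte[] readBurst(InputStream raw) throws IOException {
    BufferedInputStream bis = new BufferedInputStream(raw);
    ByteArrayOutputStream collected = new ByteArrayOutputStream();
    byte[] chunk = new byte[4096];
    int n;
    while ((n = bis.read(chunk)) != -1) { // -1 means the peer closed the stream
        collected.write(chunk, 0, n);
    }
    return collected.toByteArray();       // hex-convert and split on "2353" afterwards
}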
I am working on some networking with Java and I am having an issue with converting an object to a byte array, splitting that array into 2 parts, sending each over a TCP stream, receiving it, reconstructing the byte array, and then reforming the object.
So far it is all working except for the reconstruction of the object. I get this error when using an ObjectInputStream:
java.io.StreamCorruptedException: invalid stream header: 34323435
This is a common error I see online, and I have tried to fix it. One cause I've read about is that the stream was not flushed after sending the bytes, but my code does flush the stream. My code to send the data is:
public void sendTcp(ObjectOutputStream tcpOut) {
    try {
        synchronized (tcpOut) {
            tcpOut.write(data);
            tcpOut.flush();
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
}
And I am able to successfully read those bytes on the server side. The problem comes when combining the bytes back together. Once that is done I use this to recreate the object:
ByteArrayInputStream in = new ByteArrayInputStream(data);
ObjectInputStream is = new ObjectInputStream(in);
Object object = is.readObject();
is.close();
in.close();
But the error gets thrown on the ObjectInputStream line. I have also looked at the raw data while debugging, and it all matches up: the bytes of the object before it was split and sent match the bytes that were recombined after it was received. I've been stuck on this for a while and it would be very helpful if someone could help.
I am having an issue with converting an object to a byte array, splitting that array into 2 parts, sending each over a TCP stream, receiving it, reconstructing the byte array, and then reforming the object.
Of course you are. It's pointless: you're over-complicating it and making mistakes in the process. TCP already splits the stream into segments; IP already splits segments into packets; and routers already split packets into fragments. You don't need to add another layer of splitting on top of that.
- Get rid of the ByteArrayOutputStream and ByteArrayInputStream.
- Create one ObjectOutputStream and one ObjectInputStream, in that order, wrapped around the socket output and input streams respectively, at both ends, and keep them for the life of the socket.
- Use writeObject() and readObject() directly on these object streams.
- Don't use any other streams, readers, or writers on the same socket (a sketch of this arrangement follows).
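As a rough sketch, assuming a simple request/reply exchange (the method and variable names are illustrative, not from the question):

import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.net.Socket;

// Create the streams once per socket and reuse them. The output stream
// is created (and flushed) first so the peer's ObjectInputStream
// constructor can read the serialization header without deadlocking.
static Object exchange(Socket socket, Serializable message)
        throws IOException, ClassNotFoundException {
    ObjectOutputStream out = new ObjectOutputStream(socket.getOutputStream());
    out.flush();
    ObjectInputStream in = new ObjectInputStream(socket.getInputStream());

    out.writeObject(message); // no manual byte-array splitting needed;
    out.flush();              // TCP/IP segments and reassembles for you
    return in.readObject();
}

In real code the two streams would be kept as fields and reused for every exchange over that socket, rather than recreated per call.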
I am trying to transfer a text file to another server using TCP and it is behaving differently than expected. The code sending the data is:
System.out.println("sending file name...");
String outputFileNameWithDelimiter = outputFileName + "\r\n"; //These 4 lines send the fileName with the delimiter
byte[] fileNameData = outputFileNameWithDelimiter.getBytes("US-ASCII");
outToCompression.write(fileNameData, 0, fileNameData.length);
outToCompression.flush();
System.out.println("sending content...");
System.out.println(new String(buffer, dataBegin, dataEnd-dataBegin));
outToCompression.write(buffer, dataBegin, dataEnd-dataBegin); //send the content
outToCompression.flush();
System.out.println("sending magic String...");
byte[] magicStringData = "--------MagicStringCSE283Miami".getBytes("US-ASCII"); //sends the magic string to tell Compression server the data being sent is done
outToCompression.write(magicStringData, 0, magicStringData.length);
outToCompression.flush();
Because this is TCP, and you can't send discrete packets as in UDP, I expected all of the data to be in the input stream so I could just use delimiters to separate the file name, content, and ending string, with each in.read() giving me the next subsequent chunk of data.
Instead this is the data I am getting on each read:
On the first in.read() byteBuffer appears to only have "fileName\r\n".
On the second in.read() byteBuffer still has the same information.
On the third in.read() byteBuffer now holds the content I sent.
On the fourth in.read() byteBuffer holds the content I sent minus a few letters.
On the fifth in.read() I get the magicString + part of the message.
I am flushing on every send from the web server, but input streams don't implement Flushable.
Can anyone explain why this is happening?
EDIT:
This is how I am reading things in; basically this runs in a loop, writing to a file.
in.read(byteBuffer, 0, BUFSIZE);
If your expectation is that read() will fill the buffer, or receive exactly what was sent by a single write() from the peer, it is your expectation that is at fault here, not read(). It isn't specified to transfer more than one byte at a time, and there is no guarantee that write boundaries are preserved.
It is quite impossible to write correct code without storing the result of read() into a variable.
When you read from an InputStream, you're giving it a byte array to write into (and optionally an offset and a maximum amount to read). InputStream makes no guarantees that the array will be filled with fresh data. The return value is the number of bytes that was actually read into it.
What's happening in your example is this:
The first TCP packet comes in with "fileName\r\n", gets written into your buffer, everything fine so far.
You call read() again, but the next packet hasn't arrived yet. read() will have returned 0 rather than blocking until data arrived, so the buffer still contains "fileName\r\n". Edit: as pointed out, read() actually always blocks until it has read at least one byte, so I don't really know why the buffer didn't change then.
On the third read, the content has arrived.
The first bit of the content gets overwritten with the second part of the message, the last bit still contains part of the old message (I think that's what you meant).
etc., you get the idea
You need to check the return value, wait for the data to arrive, and only use as much of the buffer as was written by the last read().
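As a sketch, a copy loop that respects that contract (the stream names are placeholders):

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Only the first n bytes of the buffer are valid after each read();
// the rest may be stale data left over from a previous iteration.
static void copy(InputStream in, OutputStream out) throws IOException {
    byte[] buffer = new byte[8192];
    int n;
    while ((n = in.read(buffer)) != -1) { // -1 signals end of stream
        out.write(buffer, 0, n);
    }
    out.flush();
}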
The question may be quite vague, so let me expand on it here. I'm developing an application in which I'll be reading data from a file. I have a FileReader class which opens the file in the following fashion:
currentFileStream = new FileInputStream(currentFile);
fileChannel = currentFileStream.getChannel();
Data is read as follows:
bytesRead = fileChannel.read(buffer); // Data is buffered using a ByteBuffer
I process the data in one of two forms: binary or character. If it is processed as characters, I do an additional step of decoding the ByteBuffer into a CharBuffer:
CoderResult result = decoder.decode(byteBuffer, charBuffer, false);
Now my problem: in case of a failure or crash in the application, I need to resume reading during recovery by repositioning the file to some offset. For this I maintain a byteOffset that tracks the number of bytes processed in binary mode, and I persist this variable. If something happens, I reposition the file like this:
fileChannel.position(byteOffset);
which is pretty straightforward.
But if the processing mode is character, I maintain a recordOffset that tracks the character position/offset in the file. During recovery I make internal read() calls until I reach the persisted character offset, recordOffset + 1.
Is there any way to get the number of bytes that were needed to decode the characters? For instance, if recordOffset is 400, its corresponding byteOffset might be 410 or 480 or so, depending on the charset. Then while repositioning I could do this:
fileChannel.position(recordOffset); //recordOffset equivalent value in number of bytes
instead of making repeated calls internally in my application.
Another approach I could think of was using InputStreamReader's skip() method.
If there is a better approach, or a way to get the byte-to-character mapping, please let me know.
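One possible way to build that mapping yourself is to checkpoint the decoder's progress: after each decode() pass, ByteBuffer.position() tells you how many bytes were consumed, and CharBuffer.position() how many characters were produced. A rough sketch under assumed conditions (UTF-8, and that you control the decode loop; this is not the application's actual code):

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.StandardCharsets;

// Decode incrementally, recording how many bytes produced how many chars.
static void checkpointOffsets(FileChannel fileChannel) throws IOException {
    CharsetDecoder decoder = StandardCharsets.UTF_8.newDecoder(); // assumed charset
    ByteBuffer byteBuffer = ByteBuffer.allocate(8192);
    CharBuffer charBuffer = CharBuffer.allocate(8192);
    long bytesConsumed = 0;
    long charsProduced = 0;
    while (fileChannel.read(byteBuffer) != -1) {
        byteBuffer.flip();
        decoder.decode(byteBuffer, charBuffer, false);
        bytesConsumed += byteBuffer.position(); // bytes consumed by this pass
        charsProduced += charBuffer.position();
        charBuffer.clear();   // hand the characters off for processing first
        byteBuffer.compact(); // keep any partial multi-byte sequence
        // persist (charsProduced, bytesConsumed) as a recovery checkpoint;
        // on recovery: fileChannel.position(bytesConsumed) and resume decoding
    }
}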
I have a server-side function that draws an image with the Python Imaging Library. The Java client requests an image, which is returned via socket and converted to a BufferedImage.
I prefix the data with the size of the image to be sent, followed by a CR. I then read this number of bytes from the socket input stream and attempt to use ImageIO to convert to a BufferedImage.
In abbreviated code for the client:
public String writeAndReadSocket(String request) {
    // Write text to the socket
    BufferedWriter bufferedWriter = new BufferedWriter(
            new OutputStreamWriter(socket.getOutputStream()));
    bufferedWriter.write(request);
    bufferedWriter.flush();
    // Read text from the socket
    BufferedReader bufferedReader = new BufferedReader(
            new InputStreamReader(socket.getInputStream()));
    // Read the prefixed size
    int size = Integer.parseInt(bufferedReader.readLine());
    // Get that many bytes from the stream
    char[] buf = new char[size];
    bufferedReader.read(buf, 0, size);
    return new String(buf);
}

public BufferedImage stringToBufferedImage(String imageBytes) {
    return ImageIO.read(new ByteArrayInputStream(imageBytes.getBytes()));
}
and the server:
# Twisted server code here.
# The analog of the following method is called with the proper client
# request and the result is written to the socket.
def worker_thread():
    img = draw_function()
    buf = StringIO.StringIO()
    img.save(buf, format="PNG")
    img_string = buf.getvalue()
    return "%i\r%s" % (sys.getsizeof(img_string), img_string)
This works for sending and receiving Strings, but image conversion (usually) fails. I'm trying to understand why the images are not being read properly. My best guess is that the client is not reading the proper number of bytes, but I honestly don't know why that would be the case.
Side notes:
I realize that the char[]-to-String-to-bytes-to-BufferedImage Java logic is roundabout, but reading the bytestream directly produces the same errors.
I have a version of this working where the client socket isn't persistent, ie. the request is processed and the connection is dropped. That version works fine, as I don't need to care about the image size, but I want to learn why the proposed approach doesn't work.
BufferedReader.read() isn't guaranteed to fill the buffer, and converting the image to a String and back is not only pointless but wrong: String is not a container for binary data, and the round trip isn't guaranteed to preserve it.
It would be better to redesign the protocol so that you can get rid of the readLine(): send the length in binary, and read the entire stream with a DataInputStream.
In general, when dealing with binary protocols, the answer is always DataInputStream and DataOutputStream, unless the byte order isn't the canonical network byte order, which is a protocol design mistake, in which case you need to look into byte-ordered ByteBuffers.
In the server code, your use of sys.getsizeof is wrong: it returns the in-memory size of the bytestring object, whereas what you want is the number of bytes in the bytestring, i.e. len(img_string).
Also, in the client code, the readLine method reads characters until it sees either '\r' (possibly followed by '\n') or '\n' alone, so using '\r' as the terminator will cause a problem if the first byte of the image data happens to be 0x0A, i.e. '\n'.
I expect that the problem is that you are trying to use a Reader and getBytes() to read binary data (the image).
The Reader stack takes the bytes from the underlying socket stream, converts them to characters (using the platform's default character encoding), and returns them in a String. You then convert the String contents back into bytes, using the default encoding again. The initial conversion of bytes to characters is likely to be lossy for binary data.
The fix is not to use a Reader / BufferedReader. Use an InputStream and a BufferedInputStream. You are not making it easy for yourself by sending the image size encoded as text, but you can deal with that by reading bytes one at a time until you get the newline, and converting them "by hand" into an integer.
(If the size was sent as a fixed-sized binary integer in "network order" you could use DataInputStream instead ... )
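A minimal sketch of that length-prefixed binary alternative, assuming the protocol is changed so the server sends a 4-byte big-endian length (e.g. struct.pack(">i", len(img_string)) on the Python side, which is an assumption, not the poster's current code):

import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;
import java.net.Socket;
import javax.imageio.ImageIO;

// Assumes the server writes a 4-byte big-endian length, then the raw
// PNG bytes, and that no Readers are used anywhere on this socket.
static BufferedImage readImage(Socket socket) throws IOException {
    DataInputStream in = new DataInputStream(socket.getInputStream());
    int size = in.readInt();  // network byte order, matching ">i" on the server
    byte[] png = new byte[size];
    in.readFully(png);        // loops internally until all size bytes arrive
    return ImageIO.read(new ByteArrayInputStream(png));
}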
I am working on a project and have a question about Java sockets. The source file can be found here.
After successfully transmitting the file size in plain text, I need to transfer binary data (DVD .VOB files).
I have a loop such as:
// Read this file's size
long fileSize = Integer.parseInt(in.readLine());
// Read the block size they are going to use
int blockSize = Integer.parseInt(in.readLine());
byte[] buffer = new byte[blockSize];
// Bytes "red"
long bytesRead = 0;
int read = 0;
while (bytesRead < fileSize) {
    System.out.println("received " + bytesRead + " bytes of " + fileSize
            + " bytes in file " + fileName);
    read = socket.getInputStream().read(buffer);
    if (read < 0) {
        // Should never get here since we know how many bytes there are
        System.out.println("DANGER WILL ROBINSON");
        break;
    }
    binWriter.write(buffer, 0, read);
    bytesRead += read;
}
I read a random number of bytes, always close to 99% of the file. I am using Socket, which is TCP-based, so I shouldn't have to worry about lower-layer transmission errors. The received number changes, but it is always very near the end:
received 7258144 bytes of 7266304 bytes in file GLADIATOR/VIDEO_TS/VTS_07_1.VOB
The app then hangs there in a blocking read. I am confounded. The server is sending the correct file size and has a successful implementation in Ruby, but I can't get the Java version to work. Why would I read fewer bytes than were sent over a TCP socket?
The above turned out to be because of a bug many of you pointed out below: BufferedReader ate 8KB of my socket's input. The correct implementation can be found here.
If your in is a BufferedReader, then you've run into the common problem of buffering more than is needed. The default buffer size of BufferedReader is 8192 characters, which is approximately the difference between what you expected and what you got. So the data you are missing is inside BufferedReader's internal buffer, converted to characters (I wonder why it didn't break with some kind of conversion error).
The only workaround is to read the first lines byte by byte, without using any buffered reader classes. Java doesn't provide an unbuffered InputStreamReader with readLine() capability as far as I know (with the exception of the deprecated DataInputStream.readLine(), as indicated in the comments below), so you have to do it yourself. I would do it by reading single bytes and putting them into a ByteArrayOutputStream until I encounter an EOL, then converting the resulting byte array into a String using the String constructor with the appropriate encoding.
Note that while you can't use a BufferedReader, nothing stops you from using a BufferedInputStream from the very beginning, which will make the byte-by-byte reads more efficient.
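A sketch of that byte-by-byte line reader (the charset and the CRLF handling are assumptions; adjust them to your protocol):

import java.io.ByteArrayOutputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

// Reads one '\n'-terminated line with no readahead beyond the line, so
// the bytes that follow remain in the stream for later binary reads.
// Safe to call on a BufferedInputStream as long as ALL reads go through it.
static String readLineUnbuffered(InputStream in) throws IOException {
    ByteArrayOutputStream line = new ByteArrayOutputStream();
    int b;
    while ((b = in.read()) != '\n') {
        if (b == -1) {
            throw new EOFException("EOF while reading a line");
        }
        if (b != '\r') { // tolerate CRLF terminators
            line.write(b);
        }
    }
    return new String(line.toByteArray(), StandardCharsets.US_ASCII);
}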
Update
In fact, I am doing something like this right now, only a bit more complicated. It is an application protocol that exchanges data structures that are nicely represented in XML, but they sometimes have binary data attached to them. We implemented this with two attributes on the root XML element: fragmentLength and isLastFragment. The first indicates how many bytes of binary data follow the XML part, and isLastFragment is a boolean attribute marking the last fragment, so the reading side knows there will be no more binary data. The XML is null-terminated, so we don't have to deal with readLine(). The code for reading looks like this:
InputStream ins = new BufferedInputStream(socket.getInputStream());
while (!finished) {
    ByteArrayOutputStream buf = new ByteArrayOutputStream();
    int b;
    while ((b = ins.read()) > 0) {
        buf.write(b);
    }
    if (b == -1)
        throw new EOFException("EOF while reading from socket");
    // b == 0: the null terminator marks the end of the XML part
    Document xml = readXML(new ByteArrayInputStream(buf.toByteArray()));
    processAnswers(xml);
    Element root = xml.getDocumentElement();
    if (root.hasAttribute("fragmentLength")) {
        int length = DatatypeConverter.parseInt(
                root.getAttribute("fragmentLength"));
        boolean last = DatatypeConverter.parseBoolean(
                root.getAttribute("isLastFragment"));
        int read = 0;
        while (read < length) {
            // split incoming fragment into 4Kb blocks so we don't run
            // out of memory if the client sent a really large fragment
            int l = Math.min(length - read, 4096);
            byte[] fragment = new byte[l];
            int pos = 0;
            while (pos < l) {
                int c = ins.read(fragment, pos, l - pos);
                if (c == -1)
                    throw new EOFException(
                            "Premature EOF while reading fragment");
                pos += c;
                read += c;
            }
            // process fragment
        }
        finished = last; // stop after the last fragment
    }
}
Using null-terminated XML for this turned out to be a really great thing as we can add additional attributes and elements without changing the transport protocol. At the transport level we also don't have to worry about handling UTF-8 because XML parser will do it for us. In your case you're probably fine with those two lines, but if you need to add more metadata later you may wish to consider null-terminated XML too.
Here is your problem: in the first few lines your program uses in.readLine(), where in is probably some sort of BufferedReader. A BufferedReader reads data off the socket in 8K chunks, so the first readLine() pulled the first 8K into its buffer. That first 8K contains your two numbers followed by newlines, then some portion of the head of the VOB file (that's the missing chunk). When you then switch to using getInputStream() on the socket, you are already 8K into the transmission, assuming you start counting at zero.
socket.getInputStream().read(buffer); // you can't do this without losing data.
While BufferedReader is nice for reading character data, switching between binary and character data in one stream is not possible with it. You'll have to use an InputStream instead of a Reader and convert the first few portions to character data by hand. If you read the file into a buffered byte array, you can read the first chunk, look for your newlines, and convert everything to the left of them to character data. Then write everything to the right into your file, and continue reading the rest of the file.
This used to be easier with DataInputStream, but it doesn't do a good job of handling character conversion for you (readLine() is deprecated, with BufferedReader being the only replacement; doh). Someone should probably write a DataInputStream replacement that uses Charset under the covers to handle string conversion properly. Then switching between characters and binary would be easier.
Your basic problem is that BufferedReader will read as much data as is available and place it in its buffer, then hand you the data as you ask for it. This is the whole point of buffering, i.e. to reduce the number of calls to the OS. The only safe way to use buffered input is to use the same buffered stream for the life of the connection.
In your case, you only use the buffer to read two lines, but it is highly likely that 8192 bytes have been read into it (the default buffer size). Say the first two lines consist of 32 bytes; this leaves 8160 bytes waiting for you to read. However, you bypass the buffer and perform read() on the socket directly, so those 8160 bytes left in the buffer end up discarded (the amount you are missing).
BTW: You should be able to see this in a debugger if you inspect the contents of your buffered reader.
Sergei may have been right about data being lost inside the buffer, but I'm not sure about his explanation. (BufferedReaders don't usually hold onto data inside their buffers. He may be thinking of a problem with BufferedWriters, which can lose data if the underlying stream is shut down prematurely.) [Never mind; I had misread Sergei's answer. The rest of this is valid AFAIK.]
I think you have a problem that's specific to your application. In your client code, you start reading as follows:
public static void recv(Socket socket) {
    try {
        BufferedReader in = new BufferedReader(
                new InputStreamReader(socket.getInputStream()));
        //...
        int numFiles = Integer.parseInt(in.readLine());
... and you proceed to use in for the start of the exchange. But then you switch to using the raw socket stream:
while (bytesRead < fileSize) {
    read = socket.getInputStream().read(buffer);
Because in is a BufferedReader, it's already going to have filled its buffer with up to 8192 bytes from the socket input stream. Any bytes that are in that buffer, and which you don't read from in, will be lost. Your app is hanging because it believes that the server is holding onto some bytes, but the server doesn't have them.
The solution is not to do byte-by-byte reads from the socket (ouch! your poor CPU!), but to use the BufferedReader consistently. Or, to use buffering with binary data, change the BufferedReader to a BufferedInputStream that wraps the socket's InputStream.
By the way, TCP is not as reliable as many people assume it to be. For example, when the server socket closes, it's possible for it to have written data into the socket which then gets lost as the socket connection is shutdown. Calling Socket.setSoLinger can help to prevent this problem.
EDIT: Also BTW, you're playing with fire by treating byte and character data as if they're interchangeable, as you do below. If the data really is binary, then the conversion to String risks corrupting the data. Perhaps you want to be writing into a BufferedOutputStream?
// Java is retarded and reading and writing operate with
// fundamentally different types. So we write a String of
// binary data.
fileWriter.write(new String(buffer));
bytesRead += read;
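A sketch of the byte-oriented receive loop that avoids the String round trip (the path and names are placeholders, not the poster's actual code):

import java.io.BufferedOutputStream;
import java.io.EOFException;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;

// Write the received bytes as bytes; no String conversion, so the file
// lands on disk exactly as it was sent.
static void receiveFile(InputStream in, String path, long fileSize)
        throws IOException {
    try (BufferedOutputStream fileOut =
             new BufferedOutputStream(new FileOutputStream(path))) {
        byte[] buffer = new byte[8192];
        long bytesRead = 0;
        while (bytesRead < fileSize) {
            int read = in.read(buffer);
            if (read == -1) {
                throw new EOFException("EOF before fileSize bytes arrived");
            }
            fileOut.write(buffer, 0, read);
            bytesRead += read;
        }
    }
}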
EDIT 2: Clarified (or attempted to clarify :-}) the handling of binary vs. String data.