I am working on some networking with Java and I am having an issue with converting an object to a byte array, splitting that array into 2 parts, sending each over a TCP stream, receiving it, reconstructing the byte array, and then reforming the object.
So far it is almost all working; the only part left is the reconstruction of the object. I get this error when using an ObjectInputStream:
java.io.StreamCorruptedException: invalid stream header: 34323435
Which is a common error I see online. I have tried fixing it; one of the causes I've heard of is that the stream was not flushed after sending the bytes, but my code does flush the stream. My code to send the data is:
public void sendTcp(ObjectOutputStream tcpOut) {
    try {
        synchronized (tcpOut) {
            tcpOut.write(data);
            tcpOut.flush();
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
}
And I am able to successfully read those bytes on the server side. The problem comes when combining the bytes back together. Once that is done I use this to recreate the object:
ByteArrayInputStream in = new ByteArrayInputStream(data);
ObjectInputStream is = new ObjectInputStream(in);
Object object = is.readObject();
is.close();
in.close();
But the error gets thrown on the ObjectInputStream line. I have also looked at the raw data while debugging and it all matches up: the bytes of the object before it was split and sent match the bytes that were recombined after it was received. I've been stuck on this for a while, so any help would be appreciated.
I am having an issue with converting an object to a byte array, splitting that array into 2 parts, sending each over a TCP stream, receiving it, reconstructing the byte array, and then reforming the object.
Of course you are. It's pointless: you're over-complicating things and making mistakes in the process. You don't need any of this. TCP already splits data into segments, IP already splits segments into packets, and routers already fragment packets where needed. You don't need to add another layer of that.
Get rid of the ByteArrayOutputStream and ByteArrayInputStream.
Create one ObjectOutputStream and one ObjectInputStream, in that order, wrapped around the socket output and input streams respectively, at both ends, and keep them for the life of the socket.
Use writeObject() and readObject() directly on these object streams (a minimal sketch follows below).
Don't use any other streams, readers, or writers on the same socket.
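A minimal sketch of that setup, with socket creation and exception handling omitted (the payload is assumed to implement Serializable):

// Both ends: construct the ObjectOutputStream first, then the ObjectInputStream,
// and keep both for the life of the socket.
ObjectOutputStream out = new ObjectOutputStream(socket.getOutputStream());
out.flush(); // pushes the serialization header so the peer's ObjectInputStream constructor doesn't block
ObjectInputStream in = new ObjectInputStream(socket.getInputStream());

// Sending:
out.writeObject(payload);
out.flush();

// Receiving:
Object received = in.readObject();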
Related
Right now, I'm trying to write a GUI based Java tic-tac-toe game that functions over a network connection. It essentially works at this point, however I have an intermittent error in which several chars sent over the network connection are lost during gameplay. One case looked like this, when println statements were added to message sends/reads:
Player 1:
Just sent ROW 14 COLUMN 11 GAMEOVER true
Player 2:
Just received ROW 14 COLUMN 11 GAMEOV
I'm pretty sure the error is happening when I read over the network. The read takes place in its own thread, with a BufferedReader wrapped around the socket's InputStream, and looks like this:
try {
    int input;
    while ((input = dataIn.read()) != -1) {
        char msgChar = (char) input;
        String message = msgChar + "";
        while (dataIn.ready()) {
            msgChar = (char) dataIn.read();
            message += msgChar;
        }
        System.out.println("Just received " + message);
        this.processMessage(message);
    }
    this.sock.close();
}
My sendMessage method is pretty simple (just a write over a DataOutputStream wrapped around the socket's OutputStream), so I don't think the problem is happening there:
try {
    dataOut.writeBytes(message);
    System.out.println("Just sent " + message);
}
Any thoughts would be highly appreciated. Thanks!
As it turns out, the ready() method guarantees only that the next read WON'T block. Consequently, !ready() does not guarantee that the next read WILL block. Just that it could.
I believe that the problem here had to do with the TCP stack itself. Being stream-oriented, TCP makes no guarantees about how the bytes written to the socket are grouped when they are sent (it preserves their order, but not message boundaries). I suspect that the TCP stack was breaking up the sent string at whatever point suited it, and that ready() saw the resulting gap in the stream and returned false, in spite of the fact that more data was on its way.
I refactored the code to append a newline character to every message sent, and then simply performed a readLine() instead. This lets my network protocol depend on the newline character as a message delimiter rather than on the ready() method. I'm happy to say this fixed the problem.
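For reference, a rough sketch of that refactoring, reusing the dataOut/dataIn names from the question (dataOut being the DataOutputStream, dataIn the BufferedReader):

// Sender: terminate every message with a newline
dataOut.writeBytes(message + "\n");
System.out.println("Just sent " + message);

// Receiver: one readLine() call per message; readLine() returns null when the peer closes the socket
String message;
while ((message = dataIn.readLine()) != null) {
    System.out.println("Just received " + message);
    this.processMessage(message);
}
this.sock.close();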
Thanks for all your input!
Try flushing the OutputStream on the sender side. The last bytes might remain in some internal buffer.
It really matters which stream types you use to operate on the data. It seems to me that this trouble is caused by the fact that you use a DataOutputStream for sending but something else for receiving. Try to send and receive with a DataOutputStream and a DataInputStream respectively.
As a matter of fact, if you send something by calling dataOut.writeBoolean(b)
but try to receive it by calling, say, dataIn.readUTF(), you will get garbage. DataInputStream and DataOutputStream are type-sensitive; try to refactor your code keeping that in mind.
Moreover, some input streams return a single byte per invocation of read(). Here you convert that one byte straight into a char, while in Java a char consists of two bytes.
msgChar = (char)dataIn.read();
Check whether that is the reason for the data loss.
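For what it's worth, the type-for-type pairing described above looks like this in a small sketch (the field names are made up for illustration; dataOut is a DataOutputStream, dataIn a DataInputStream):

// Sender writes typed values...
dataOut.writeInt(row);
dataOut.writeInt(column);
dataOut.writeBoolean(gameOver);
dataOut.flush();

// ...and the receiver must read back the same types in the same order.
int row = dataIn.readInt();
int column = dataIn.readInt();
boolean gameOver = dataIn.readBoolean();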
I have a server-side function that draws an image with the Python Imaging Library. The Java client requests an image, which is returned via socket and converted to a BufferedImage.
I prefix the data with the size of the image to be sent, followed by a CR. I then read this number of bytes from the socket input stream and attempt to use ImageIO to convert to a BufferedImage.
In abbreviated code for the client:
public String writeAndReadSocket(String request) {
    // Write text to the socket
    BufferedWriter bufferedWriter = new BufferedWriter(new OutputStreamWriter(socket.getOutputStream()));
    bufferedWriter.write(request);
    bufferedWriter.flush();
    // Read text from the socket
    BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(socket.getInputStream()));
    // Read the prefixed size
    int size = Integer.parseInt(bufferedReader.readLine());
    // Get that many bytes from the stream
    char[] buf = new char[size];
    bufferedReader.read(buf, 0, size);
    return new String(buf);
}
public BufferedImage stringToBufferedImage(String imageBytes) {
    return ImageIO.read(new ByteArrayInputStream(imageBytes.getBytes()));
}
and the server:
# Twisted server code here
# The analog of the following method is called with the proper client
# request and the result is written to the socket.
def worker_thread():
    img = draw_function()
    buf = StringIO.StringIO()
    img.save(buf, format="PNG")
    img_string = buf.getvalue()
    return "%i\r%s" % (sys.getsizeof(img_string), img_string)
This works for sending and receiving Strings, but image conversion (usually) fails. I'm trying to understand why the images are not being read properly. My best guess is that the client is not reading the proper number of bytes, but I honestly don't know why that would be the case.
Side notes:
I realize that the char[]-to-String-to-bytes-to-BufferedImage Java logic is roundabout, but reading the bytestream directly produces the same errors.
I have a version of this working where the client socket isn't persistent, i.e. the request is processed and the connection is dropped. That version works fine, as I don't need to care about the image size, but I want to learn why the proposed approach doesn't work.
BufferedReader.read() isn't guaranteed to fill the buffer, and converting the image to String and back is not only pointless but wrong.
String is not a container for binary data, and the round-trip isn't guaranteed to work.
It would be better to redesign the protocol so that you can get rid of the readLine(): send the length in binary, and read the entire payload with a DataInputStream.
In general when dealing with binary protocols, the answer is always DataInputStream and DataOutputStream, unless the byte order isn't the canonical network byte order, which is a protocol design mistake, and in which case you need to look into byte-ordered ByteBuffers.
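For illustration, a length-prefixed exchange along those lines might look like this (sketch only; out and in are a DataOutputStream and a DataInputStream wrapped around the socket streams, and both ends are shown in Java even though the actual server here is Python):

// Sender: binary length in network byte order, then the raw bytes
out.writeInt(imageBytes.length);
out.write(imageBytes);
out.flush();

// Receiver: read the binary length, then exactly that many bytes
int size = in.readInt();
byte[] buf = new byte[size];
in.readFully(buf);
BufferedImage img = ImageIO.read(new ByteArrayInputStream(buf));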
In the server code, your use of sys.getsizeof is wrong. That returns the size of the bytestring object, whereas what you want is the number of bytes in the bytestring, i.e. its length len(img_string).
Also, in the client code the readLine method reads characters until it sees either '\r' (possibly followed by '\n') or a bare '\n', so using '\r' as the terminator will cause a problem if the first byte of the image data happens to be 0x0A, i.e. '\n'.
I expect that the problem is that you are trying to use a Reader and getBytes() to read binary data (the image).
The Reader stack will be taking the bytes from the underlying socket stream, converting them to characters (using the platform's default character encoding), and returning them as a String. Then you convert the String contents back into bytes using the default encoding again. The initial conversion of bytes to characters is likely to be "lossy" for binary data.
The fix is not to use a Reader / BufferedReader. Use an InputStream and a BufferedInputStream. You are not making it easy for yourself by sending the image size encoded as text, but you can deal with that by reading bytes one at a time until you get the newline, and converting them "by hand" into an integer.
(If the size was sent as a fixed-sized binary integer in "network order" you could use DataInputStream instead ... )
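A sketch of that byte-oriented approach under the existing text-length protocol (BufferedInputStream and ImageIO are standard library classes; error handling is kept minimal and the variable names are illustrative):

// Read the ASCII length "by hand" up to the '\r' terminator
BufferedInputStream bin = new BufferedInputStream(socket.getInputStream());
StringBuilder sizeText = new StringBuilder();
int b;
while ((b = bin.read()) != -1 && b != '\r') {
    sizeText.append((char) b);
}
int size = Integer.parseInt(sizeText.toString().trim());

// Then read exactly that many bytes of image data
byte[] imageBytes = new byte[size];
int off = 0;
while (off < size) {
    int n = bin.read(imageBytes, off, size - off);
    if (n == -1) throw new EOFException("stream ended after " + off + " of " + size + " bytes");
    off += n;
}
BufferedImage image = ImageIO.read(new ByteArrayInputStream(imageBytes));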
Java newbie here. Are there any helper functions to serialize data into and out of byte arrays? I am writing a Java package that implements a network protocol, so I have to write some typical fields such as a version (1 byte), a sequence number (long), and binary data (bytes) in a loop. How do I do this in Java? Coming from C, I am thinking of creating a byte array of the required size and then, since there is no memcpy(), converting the long into a temporary byte array and copying it into the actual byte array. That seems inefficient and error-prone. Is there a class I could use to marshal and unmarshal parameters to a byte array?
Also, why do the Socket classes seem to deal only with char[] and not byte[]? A socket by definition has to deal with binary data too. How is this done in Java?
I am sure what I am missing is the Java mindset. I'd appreciate it if someone could point it out to me.
EDIT: I did look at DataOutputStream and DataInputStream, but I cannot convert the bytes to a String nor to a byte[], which means information might be lost in the conversion when writing to a socket.
Pav
Have a look at DataInputStream, DataOutputStream, ObjectInputStream and ObjectOutputStream. Check first if the layout of the data is acceptable to you. Also, Serialization.
Sockets neither deal with char[] nor with byte[] but with InputStream and OutputStream which are used to read and write bytes.
If you are sending the data over a socket, then you don't need a temporary byte array at all; you can wrap the socket's OutputStream with DataOutputStream or ObjectOutputStream and just write what you want to write.
There might be an aspect I've missed that means you do actually need temporary byte arrays. If so, look at ByteArrayOutputStream. Also, there's no memcpy(), sure, but there is System.arraycopy.
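As a rough sketch of how the fields from the question (version byte, sequence number, payload) could be marshalled this way — the length prefix for the payload is my own addition so the reader knows how many bytes to expect:

DataOutputStream out = new DataOutputStream(socket.getOutputStream());
out.writeByte(version);          // 1-byte version
out.writeLong(sequenceNumber);   // 8-byte sequence number, big-endian
out.writeInt(payload.length);    // length prefix (not part of the original question)
out.write(payload);              // the raw bytes
out.flush();

// The receiving side mirrors the writes:
DataInputStream in = new DataInputStream(socket.getInputStream());
byte ver = in.readByte();
long seq = in.readLong();
byte[] data = new byte[in.readInt()];
in.readFully(data);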
As above, DataInputStream and DataOutputStream are exactly what you are looking for. Regarding your comment about String: if you're planning to use Java Strings over the wire, you're not designing a network protocol, you're designing a Java protocol. There are readUTF() and writeUTF() if you're sure the other end is Java, or if you can code the other end to understand those formats. Alternatively, you can send the text as bytes along with the appropriate charset, or predefine the charset for the entire protocol if that makes sense.
I'm trying to have a Java server and C++ clients communicate over TCP under the following conditions: text mode, and binary/encrypted mode. My problem is with the end-of-stream indicator that makes DataInputStream's read(byte[]) return -1. If I send binary data, what's to prevent a random byte sequence happening to represent an EOF and falsely indicating to read() that the stream is ending? It seems I'm limited to text mode. I can live with that until I need to scale, but then I have the problem that I am going to encrypt the text and add message authentication. Even if I were sending from another Java program rather than C++, encrypting a string with AES+MAC would produce binary output, not a normal string. What's to prevent some encrypted sequence from containing a part identical to an EOF?
So, what are the solutions here?
If I send binary data, what's to prevent a random byte sequence happening to represent an EOF and falsely indicating to read() that the stream is ending?
In most cases (including TCP/IP and similar network protocols) there is no specific data representation for an EOF. Rather, EOF is a logical abstraction that means that you have reached the end of the data stream. For example, with a Socket it means that the input side of the socket has been closed and you have read all outstanding bytes. (And for a file, it means that you have read the last bytes of the file.)
Since there is no data representation for the (logical) EOF, you don't need to worry about getting false EOFs. In short, there is no problem to be solved here.
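To make that concrete, a typical read loop looks like the sketch below; -1 appears only when the peer closes the connection, never because of any byte value in the data (handleBytes is a placeholder for your own processing):

byte[] buf = new byte[4096];
int n;
while ((n = in.read(buf)) != -1) {
    handleBytes(buf, 0, n);
}
// reaching this point means the peer has closed its end of the connection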
"end of stream" in TCP is normally signaled by closing the socket -- that is what makes the stream actually end. If you don't really want the stream to end, but just to signal the end of a "packet" (to be followed, quite possibly, by other packets on the same connection), you can start each packet with an unencrypted length indicator (say, 2 or 4 bytes depending on your need). DataInputStream, according to its docs, is suitable only to receive streams sent by a DataOutputStream, which appears to have nothing to do with your use case as you describe it.
Usually when using TCP streams you have a data header format which, at a minimum, has a field holding the length of the data to expect, so that the receiver knows exactly how many bytes to read. A simple example is the TLV format.
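A bare-bones TLV-style framing could look like the following sketch (out and in are a DataOutputStream and DataInputStream on the socket; the type constant is invented for illustration). Because the length field says exactly how many bytes belong to the message, no byte pattern inside the encrypted payload can be mistaken for end of stream:

// Sender: type (1 byte), length (4 bytes), value (length bytes)
out.writeByte(TYPE_ENCRYPTED_MSG);
out.writeInt(ciphertext.length);
out.write(ciphertext);
out.flush();

// Receiver
byte type = in.readByte();
byte[] value = new byte[in.readInt()];
in.readFully(value);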
As Thomas Pornin replied to Alex Martelli, DataInputStream is used even on data not sent by a DataOutputStream or by Java. My question is about the consequence of, as the documentation says, DataInputStream's read() returning when the stream ends: is there some sequence of bytes that read() interprets as end of stream, meaning I couldn't use it if that sequence might occur in the data I'm sending, as it might with generic binary data?
My problem is with the end-of-stream indicator that makes DataInputStream's read(byte[]) return -1.
No it isn't. This problem is imaginary. -1 is the return code of InputStream.read() that indicates that the peer has closed the connection. It has nothing whatsoever to do with the data being sent over the connection.
I have made a socket listener in Java that listens on two ports for data and performs operations on the received data. The scenario is as follows: when both the listener and the device that transmits data are up and running, the listener receives the data one packet at a time (each packet starts with "#S" and ends with a "."). When the listener is down or not listening, the device stores the data in its local memory and, as soon as the listener is up again, sends it all in appended form like:
"#S ...DATA...[.]#S...DATA...[.]..."
Now I have implemented this in such a way that, whatever data the listener gets on either port, it converts it into hex form and then carries out its operations on that hex form of the input data. The hex form of "#S" is "2353" and the hex form of "." is "2e". The code for handling the hex-converted form of the input data is as follows.
hexconverted1 is a string that contains the hex-converted form of the whole input data arriving on either port:
String store[];
store = hexconverted1.split("2353");
for (int m = 0; m < store.length; m++)
    store[m] = "2353" + store[m];

PrintWriter out2 = new PrintWriter(new BufferedWriter(new FileWriter("C:/Listener/array.bin", true)));
for (int iter = 0; iter < store.length; iter++)
    out2.println(store[iter]);
out2.close();
What I am trying to accomplish with the above code is that, whenever a bunch of data arrives, I scan through it, pick out each individual packet from the bunch, and store it in a string array so that the operations I want to carry out on the hex-converted data are easier. The problem: when I write the contents of the array to a BIN file, the output varies for the same input. When I send bunched data of 280 packets appended one after the other, the array sometimes contains 180 entries and at other times 270. For smaller bunch sizes I get the desired results and the size of the 'store' array is as expected.
I'm pretty clueless about whats going on and any pointers would be of great help.
To make matters more lucid: the data I get on the ports is mostly unreadable, and often the only readable parts are the starting "#S" and the terminating ".". I'm using a combination of BufferedInputStream and InputStream to read the incoming data and convert it into hex form, and I'm quite sure the conversion to hex is coming out fine.
I'm using a combination of BufferedInputStream and InputStream to read the incoming data
Clutching at straws here. If you read from a Stream using both InputStream and BufferedInputStream methods, you'll get into difficulty:
InputStream is = ...
BufferedInputStream bis = new BufferedInputStream(is);
// This is OK
int b = bis.read();
...
// Reading the InputStream directly at this point is liable to
// give unpredictable results. It is likely that some bytes still
// remain in "bis"'s buffer, and a read on "is" will not return them.
int b2 = is.read();