Java Spring Integration TCP flush control

I am using Spring Integration, and the ServerSocketFactory is configured with decent receive and send buffers, with TcpNoDelay set to false. I have verified with a debugger that these settings are applied to the socket.
When writing to the OutputStream in the Spring Integration serializer, I see each write call being sent separately with the TCP PSH (push) flag set, i.e. a flush.
Why does this flush occur, and how can I avoid it?

You would need to customize the serializer - they generally flush after all parts have been written (e.g. length header + payload; payload + CRLF; STX + payload + ETX; etc.).
Simply subclass the serializer of your choice, override its serialize() method to remove the flush(), and inject it into the connection factory.
EDIT:
Oh, I see - Nagle's algorithm only applies to subsequent writes (notice the payload and ETX are in a single packet). We need to wrap the stream in a buffered output stream. Please open a JIRA issue.
In the meantime, you can work around it with something like this...
/**
 * Writes the byte[] to the stream, prefixed by an ASCII STX character and
 * terminated with an ASCII ETX character.
 */
@Override
public void serialize(byte[] bytes, OutputStream outputStream) throws IOException {
    BufferedOutputStream bos = new BufferedOutputStream(outputStream);
    bos.write(STX);
    bos.write(bytes);
    bos.write(ETX);
    bos.flush();
}

Related

java - NIO: read until buffer size or delimiter

Let's say I have a client connected via an NIO SocketChannel that will send requests of the form
<command>\r\n
which may or may not be followed by
<value>\r\n
depending on the command. If it is, <command> will include the size (in bytes) of the <value> sent afterwards.
Now I'm new to this NIO stuff, but obviously I need to read the <command> first to prepare a buffer to receive the <value>. How do I do that, though?
<command> can be of varying length, and although I do know its maximum length, I suspect that if I get it wrong (e.g. read the maximum length when the command is shorter) I will end up reading some of the <value> into the same buffer I used to receive the <command>.
Is there a way to cut the read at \r\n?
(I should mention that I am not allowed to use any external libraries.)
EDIT
My latest try,
private void read(SelectionKey k) throws IOException {
    SocketChannel client = (SocketChannel) k.channel();
    buffer.clear();
    BufferedReader is = new BufferedReader(
            new InputStreamReader(client.socket().getInputStream()));
    System.out.println(is.readLine());
}
results (somewhat unsurprisingly) in a java.nio.channels.IllegalBlockingModeException.
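The usual non-blocking approach is to scan the bytes you already have for the delimiter yourself, consuming only up to the \r\n and leaving any <value> bytes untouched in the buffer. Here is a minimal sketch of that scanning logic; the readLine helper is hypothetical, and it is shown against a pre-filled buffer rather than a live channel:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class DelimiterScan {

    /**
     * Scans the buffer (from position to limit) for CRLF. If found, returns
     * the line before it and advances the position past the CRLF, leaving any
     * following bytes (the <value>) in place. Returns null if no complete
     * line has arrived yet, so the caller can wait for the next read.
     */
    static String readLine(ByteBuffer buf) {
        for (int i = buf.position(); i < buf.limit() - 1; i++) {
            if (buf.get(i) == '\r' && buf.get(i + 1) == '\n') {
                byte[] line = new byte[i - buf.position()];
                buf.get(line);                       // consume the line bytes
                buf.position(buf.position() + 2);    // skip the CRLF
                return new String(line, StandardCharsets.US_ASCII);
            }
        }
        return null; // incomplete line
    }

    public static void main(String[] args) {
        // Pretend one channel.read() delivered the command plus part of the value.
        ByteBuffer buf = ByteBuffer.wrap("SET 5\r\nhel".getBytes(StandardCharsets.US_ASCII));
        System.out.println("command=" + readLine(buf));
        byte[] rest = new byte[buf.remaining()];
        buf.get(rest);
        System.out.println("leftover=" + new String(rest, StandardCharsets.US_ASCII));
    }
}
```

Because the scan never reads past the \r\n, the <value> bytes stay in the buffer for the follow-up read.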

How to read a DataInputStream twice or more than twice?

I have a Socket connection to an application hosted elsewhere. Once connected, I created an OutputStream and a DataInputStream.
Once the connection has been made, I use the OutputStream to send out a handshake packet to the application. Once this handshake has been approved, it returns a packet through the DataInputStream (1).
This packet is processed and is returned to the application with the OutputStream.
If this returned data is valid, I get another packet from the DataInputStream (2). However, I have not been able to read this packet through the DataInputStream.
I have tried to use DataInputStream.markSupported() and DataInputStream.mark(), but this gave me nothing (except an empty Exception message).
Is it possible to read the input stream a second time? And if so, can someone please point out what I'm doing wrong here?
EDIT: Here is my solution:
// First define the output and input streams.
OutputStream output = socket.getOutputStream();
BufferedInputStream bis = new BufferedInputStream(socket.getInputStream());

// Send the first packet to the application.
output.write("test".getBytes()); // (not the actual data that I sent)

// Make an empty byte array and fill it with the first response from the application.
byte[] incoming = new byte[200];
bis.read(incoming); // first packet received

// Send a second packet to the application.
output.write("test2".getBytes()); // (not the actual data that I sent)

// Mark the input stream to the length of the first response and reset the stream.
bis.mark(incoming.length);
bis.reset();

// Create a second empty byte array and fill it with the second response.
byte[] incoming2 = new byte[200];
bis.read(incoming2);
I'm not sure if this is the most correct way to do this, but this way it worked for me.
I would use a ByteArrayInputStream or something else that you can reset. That would involve reading the data into another type of input stream and then creating the ByteArrayInputStream from it.
InputStream has a markSupported() method that you can check on both the original stream and the byte-array one to find one that mark() will work with:
https://docs.oracle.com/javase/7/docs/api/java/io/InputStream.html#markSupported()
https://docs.oracle.com/javase/7/docs/api/java/io/ByteArrayInputStream.html
The problem here is not re-reading the input. I don't see anything in the question that requires you to read the input twice. The problem is the BufferedInputStream, which will read everything that is available to be read, including the second message, if it has already arrived.
The solution is not to use a buffered stream until you have completed the handshake. Just issue a read on the socket input stream for exactly the length of the first message, do the handshake, and then proceed to construct and read the buffered stream.
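The "read exactly the length of the first message" step can be sketched with DataInputStream.readFully, which blocks until the requested number of bytes has arrived and consumes nothing beyond it. An in-memory stream stands in for the socket here, and the 13-byte message length is only an assumption for the demo:

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public class ExactRead {

    /** Blocks until exactly len bytes have been read (or throws EOFException). */
    static byte[] readExactly(InputStream in, int len) throws IOException {
        byte[] buf = new byte[len];
        new DataInputStream(in).readFully(buf);
        return buf;
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for socket.getInputStream(): the 13-byte first message
        // followed immediately by the second message.
        InputStream in = new ByteArrayInputStream(
                "FIRSTMESSAGE!secondmessage".getBytes(StandardCharsets.US_ASCII));

        // Unlike a BufferedInputStream, this does not swallow later bytes.
        byte[] first = readExactly(in, 13);
        System.out.println("first=" + new String(first, StandardCharsets.US_ASCII));

        // Only after the handshake is it safe to wrap the stream in a buffered one.
        byte[] second = readExactly(in, 13);
        System.out.println("second=" + new String(second, StandardCharsets.US_ASCII));
    }
}
```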

difference between Java TCP Sockets and C TCP Sockets while trying to connect to JDBC

My problem is that C sockets appear to act differently from Java sockets. I have a C proxy, and I tested it between a workload generator (an OLTP benchmark client written in Java) and the JDBC connector of a Postgres DB.
This works great and forwards data from one side to the other, as it should. We need to make this proxy work in Java, so I used the plain ServerSocket and Socket classes from java.net, but I cannot make it work. Postgres returns an authentication error message, as if the client did not send the correct password.
Here is how the authentication at the JDBC protocol works:
- the client sends a request to connect to a database, specifying the database name and the username
- the server responds with a one-time challenge message (a 13-byte message with random content)
- the client concatenates this message with the user's password and computes an MD5 hash
- the server compares the hash received from the client with the hash it computes itself
[This procedure is performed to avoid replay attacks: if the client sent only the MD5 hash of its password, an attacker could replay that message, pretending to be the client.]
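As a generic sketch of the concatenate-and-hash step described above, using java.security.MessageDigest; the details of the real PostgreSQL MD5 scheme differ (it salts a pre-hashed password), so treat this purely as an illustration of the idea:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class ChallengeHash {

    /** MD5 over (challenge || password), hex-encoded. */
    static String md5Hex(byte[] challenge, String password) throws NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("MD5");
        md.update(challenge);
        md.update(password.getBytes(StandardCharsets.US_ASCII));
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest()) {
            sb.append(String.format("%02x", b)); // hex-encode each digest byte
        }
        return sb.toString();
    }

    public static void main(String[] args) throws NoSuchAlgorithmException {
        byte[] challenge = new byte[13]; // stand-in for the 13-byte random challenge
        System.out.println(md5Hex(challenge, "secret"));
    }
}
```

Note that MessageDigest works on bytes, not characters, which sidesteps the charset issues discussed below: both sides hash exactly the bytes on the wire.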
So I inspected the packets with tcpdump, and they look correct! The size is exactly as it should be, so maybe the content is corrupted (??)
Sometimes, though, the DB server accepts the authentication (depending on the value of the challenge message)!! Then the OLTP client sends a couple of queries, but it crashes after a while...
I guessed it might have to do with the encoding, so I tried the encoding that C uses (US-ASCII), but got the same result.
I send the data using fixed-size character or byte arrays, both in C and in Java!
I really don't have any more ideas, as I have tried so many things...
What is your guess of what would be the problem?
Here is a representative code that may help you have a more clear view:
byte[] msgBuf;
char[] msgBufChars;
while (fromInputReader.ready()) {
    msgBuf = new byte[1024];
    msgBufChars = new char[1024];

    // read data from one party
    int read = fromInputReader.read(msgBufChars, 0, 1024);
    System.out.println("Read returned : " + read);
    for (int i = 0; i < 1024; i++)
        msgBuf[i] = (byte) msgBufChars[i];
    String messageRead = new String(msgBufChars);
    String messageToWrite = new String(msgBuf);
    System.out.println("message read : " + messageRead);
    System.out.println("message to write : " + messageToWrite);

    // immediately write data to the other party (write the amount of data we read)
    // there is no write method that takes a char[] as a parameter, so pass a byte[]
    toDataOutputStream.write(msgBuf, 0, read);
    toDataOutputStream.flush();
}
There are a couple of message exchanges in the beginning and then Postgres responds with an authentication failure message.
Thanks for your time!
What is your guess of what would be the problem?
It is nothing to do with C versus Java sockets. It is everything to do with bad Java code.
I can see some problems:
You are using a Reader in what should be a binary stream. This is going to result in the data being converted from bytes (from the JDBC client) to characters and then back to bytes. Depending on the character set used by the reader, this is likely to be destructive.
You should use plain, unadorned1 input streams for both reading and writing, and you should read / write to / from a preallocated byte[].
This is terrible:
for (int i = 0; i < 1024; i++)
    msgBuf[i] = (byte) msgBufChars[i];
If the characters you read are not in the range 0 ... 255 you are mangling them when you stuff them into msgBuf.
You are assuming that you actually got 1024 characters.
You are using the ready() method to decide when to stop reading stuff. This is almost certainly wrong. Read the javadoc for that method (and think about it) and you should understand why it is wrong. (Hint: what happens if the proxy can read faster than the client can deliver?)
You should use a while(true), and then break out of the loop if read tells you it has reached the end of stream; i.e. if it returns -1 ...
1 - Just use the stream objects that the Socket API provides. DataXxxStream is unnecessary because the read and write methods are simply call-throughs. I wouldn't even use BufferedXxxStream wrappers in this case, because you are already doing your own buffering using the byte array.
Here's how I'd write that code:
byte[] buffer = new byte[1024]; // or bigger
while (true) {
    int nosRead = inputStream.read(buffer);
    if (nosRead < 0) {
        break;
    }
    // Note that this is a bit dodgy, given that the data you are converting is
    // binary. However, if the purpose is to see what embedded character data
    // looks like, and if the proxy's charset matches the text charset used by
    // the client-side JDBC driver for encoding data, this should achieve that.
    System.out.println("Read returned : " + nosRead);
    System.out.println("message read : " + new String(buffer, 0, nosRead));
    outputStream.write(buffer, 0, nosRead);
    outputStream.flush();
}
C sockets look to act differently than Java sockets.
Impossible. Java sockets are just a very thin layer over C sockets. You're on the wrong track with this line of thinking.
byte [] msgBuf;
char [] msgBufChars;
Why are you reading chars when you want to write bytes? Don't use Readers unless you know that the input is text.
And don't call ready(). There are very few correct uses, and this isn't one of them. Just block.

How can you force a flush on an OutputStream object without closing it?

My question lies on the following assumptions which I hope are true, because I believe these as I read them while Googling my problems:
Closing a Socket's OutputStream closes the socket too
The flush() method of OutputStream does nothing
So I basically need to anyhow flush the data out of my OutputStream object for my app to work.
If you're interested in the details, please see the following two links:
- Weird behavior: sending image from Android phone to Java server (code working)
This issue was resolved by closing the OutputStream. Doing that flushed all the data to the other end of the socket and let my app proceed, but this fix soon gave rise to problem number 2 - the corresponding socket also gets closed:
- SocketException - 'Socket is closed' even when isConnected() returns true
You can call the flush() method of the OutputStream instead of close(). The concrete classes inheriting from OutputStream override flush() to do something other than nothing (e.g. writing buffered data to a file or sending it over the network).
The flush() method of OutputStream does nothing.
This is incorrect.
It is true that the base implementation of flush() provided by the OutputStream class does nothing. However, your app will be calling the version of that method provided by the actual stream class you are using. If the stream class doesn't have direct write semantics, it will override flush() to do what is required.
In short, if a flush is required (and it is required for a Socket output stream), then calling flush() will do the right thing. (If some internet source tells you otherwise it is either wrong or you are misinterpreting it.)
FYI, the reason that the base OutputStream implements flush() as a no-op is that:
some output stream classes don't need to do anything when flushed, e.g. ByteArrayOutputStream, and
for the stream classes where flush() is not a no-op, there is no way to implement the operation at the base class level.
They could (in theory) have designed the stream APIs so that OutputStream was an abstract class (with flush() an abstract method) or an interface. However, this API was effectively frozen prior to Java 1.0, and at that time there wasn't enough experience with practical Java programming to realize that the design was suboptimal.
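This is easy to observe with a buffering stream. In the small demo below, a ByteArrayOutputStream stands in for the socket's stream, and nothing reaches it until flush() is called:

```java
import java.io.BufferedOutputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;

public class FlushDemo {
    public static void main(String[] args) throws IOException {
        // Stand-in for the underlying socket stream.
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        BufferedOutputStream out = new BufferedOutputStream(sink, 8192);

        out.write("hello".getBytes());
        // The bytes are still sitting in the 8 KB buffer.
        System.out.println("bytes in sink before flush: " + sink.size());

        out.flush();
        // flush() pushes the buffered bytes through to the underlying stream.
        System.out.println("bytes in sink after flush: " + sink.size());
    }
}
```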
Closing a Socket's OutputStream closes the socket too
True.
The flush() method of OutputStream does nothing
False. There are overrides. See the Javadoc for FilterOutputStream.flush(), BufferedOutputStream.flush(), ObjectOutputStream.flush(), to name a few.
So your initial problem is non-existent, so you have no need for the 'solution' that causes problem #2.
I'll take a stab at it. I was having the same problem: closing the OutputStream was the only way I could "flush" the data, but since I still need the OutputStream, that's not an option. So first I send the byte array length with out.writeInt(), then the array itself. When all bytes have been read, i.e. the buffered size equals the value from in.readInt(), I break out of the loop:
ByteArrayOutputStream dataBuffer = new ByteArrayOutputStream();
byte[] buffer = new byte[1024];
byte[] fileBytes;
int n;
int length;
try {
    length = in.readInt(); // sender wrote the array length first
    while ((n = in.read(buffer)) != -1) {
        dataBuffer.write(buffer, 0, n);
        if (dataBuffer.size() == length) {
            fileBytes = dataBuffer.toByteArray();
            break;
        }
    }
} catch (IOException e) {
    e.printStackTrace();
}
I had the same problem.
Add "\n" at the end of the stream. flush() was working, but the destination did not know whether the message had ended.
YES, on Android flush() does nothing (example based on API 23):
public Socket() {
    this.impl = factory != null ? factory.createSocketImpl() : new PlainSocketImpl();
    this.proxy = null;
}

public class PlainSocketImpl extends SocketImpl {
    @Override protected synchronized OutputStream getOutputStream() throws IOException {
        checkNotClosed();
        return new PlainSocketOutputStream(this);
    }
}

private static class PlainSocketOutputStream extends OutputStream {
    // doesn't override the base class flush()
}
To flush the output stream without closing the socket you can shut down output:
protected void shutdownOutput() throws IOException
This closes the WRITE half of the connection (the write file descriptor).
Instead of using the output stream you can write directly to the file descriptor, or create your own Socket implementation whose OutputStream overrides the flush() method (for example using a Berkeley sockets implementation in C, via a native call).
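The shutdownOutput() behavior can be demonstrated on a loopback connection. Note this is a one-shot "flush": after shutdownOutput() no further writes are possible on that socket, though its read side stays open, so it is not a general replacement for flush():

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class ShutdownOutputDemo {
    public static void main(String[] args) throws IOException {
        try (ServerSocket listener = new ServerSocket(0); // ephemeral loopback port
             Socket client = new Socket("localhost", listener.getLocalPort());
             Socket server = listener.accept()) {

            client.getOutputStream().write("bye".getBytes());
            client.shutdownOutput(); // pushes the bytes out and signals EOF to the peer

            // The server sees the data followed by end-of-stream (-1) ...
            InputStream in = server.getInputStream();
            byte[] buf = new byte[16];
            int total = 0, n;
            while ((n = in.read(buf)) != -1) {
                total += n;
            }
            System.out.println("server received " + total + " bytes, then EOF");

            // ... but the client's read side is still open, so the server can reply.
            server.getOutputStream().write("ok".getBytes());
            System.out.println("client read reply byte: " + client.getInputStream().read());
        }
    }
}
```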
Do you really need to flush? I also had an issue where a listener on a C# server couldn't receive data sent from Android (I tried to get the data synchronously).
I was sure this was because, on the Android side, the following code didn't flush:
OutputStream str = btSocket.getOutputStream();
str.write(data_byte);
// "This implementation does nothing"
str.flush();
It turned out that if I use asynchronous data retrieval in the server's listener, it gets the data, and no flush() on the client's side is required!

Problems writing a protocol on top of sockets in Java

I'm writing a protocol on top of sockets, so I've decided to send headers followed by the information. There is one thread per connection on the server, which sits there reading headers and then delegates to methods that read the rest of the information when it arrives.
So essentially it looks like this:
while ((length = inStream.read(buffer)) != -1) {
    dispatch(buffer, length);
}
So the dispatch method then decrypts the header and delegates depending on what is found in it. It looks similar to:
byte[] clearText = decrypt(message, length);
if (clearText == foo) sendFooToSocket();
So sendFooToSocket() would then sit there and read from the in-stream or write to the out-stream.
This is where I run into problems. In the client I'm sending the header, then flushing, then sending the rest of the data, but it all arrives as one message rather than being split into header then data. Also, is there a good way to force a return out of the sendFooToSocket() method?
public void sendFooToSocket() {
    byte[] buffer = new byte[1024];
    int length = 0;
    while ((length = inStream.read(buffer)) > 0) {
        message = decrypt(buffer, length);
    }
}
I had assumed flush() would let me break out of this method by closing and reopening the stream.
So I have two problems: flush doesn't seem to be breaking up my messages, and flush doesn't seem to let me drop out of methods such as sendFooToSocket(). Any suggestions?
For clarity sake, the client just does this:
byte[] header = "MESG".getBytes();
cipher = encrypt(header);
outStream.write(cipher,0,cipher.length);
outStream.flush();
byte[] message = "Hi server".getBytes();
cipher = encrypt(message);
outStream.write(cipher,0,cipher.length);
outStream.flush();
But this is received by the server as one message, even though it was flushed after every write. Sending just the header works (and we get stuck in the sendFooToSocket() method), but if I send the data after the flush it all arrives at once.
Both the client and the server just use the plain OutputStream and InputStream obtained from the socket. Not sure if this matters?
What you seem to want is "record boundaries". With streams in general there are no implicit record boundaries. If you want that kind of functionality you will need to implement it yourself, by buffering the input and looking for, say, newlines to indicate the end of a record.
Look at BufferedInputStream.
inStream.read() may not return on a message boundary. You can't assume it will return at any particular boundary (such as a blank line separating headers and content, if that's how you're doing it). You'll have to parse the content manually, accepting that a message may arrive across multiple read()s, or that one read() may contain both the header and the content.
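One common way to add the record boundaries mentioned above is a length prefix rather than a delimiter. A minimal sketch using DataOutputStream/DataInputStream, with in-memory streams standing in for the socket's streams:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class Framing {

    /** Writes a 4-byte big-endian length prefix followed by the payload. */
    static void writeFrame(DataOutputStream out, byte[] payload) throws IOException {
        out.writeInt(payload.length);
        out.write(payload);
        out.flush();
    }

    /** Reads one length-prefixed frame, however TCP happened to split the bytes. */
    static byte[] readFrame(DataInputStream in) throws IOException {
        int len = in.readInt();
        byte[] payload = new byte[len];
        in.readFully(payload); // blocks until exactly len bytes arrive
        return payload;
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream wire = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(wire);
        writeFrame(out, "MESG".getBytes(StandardCharsets.US_ASCII));
        writeFrame(out, "Hi server".getBytes(StandardCharsets.US_ASCII));

        // Even though both frames arrive in one chunk, they are recovered separately.
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(wire.toByteArray()));
        System.out.println(new String(readFrame(in), StandardCharsets.US_ASCII));
        System.out.println(new String(readFrame(in), StandardCharsets.US_ASCII));
    }
}
```

This makes the "header then data" split explicit in the protocol itself, instead of relying on flush() to create packet boundaries (which TCP does not guarantee).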
Unless you actually need control at the level you have implemented, you could consider object streams (see ObjectInputStream and ObjectOutputStream). Such streams allow you to send Java objects over sockets and read them at the other end without having to deal with headers, boundaries, etc. See ObjectOutputStream for more details, but it's pretty much:
Sender:
writeObject(objectX)
Receiver:
myCopyOfObjectX = readObject()
and you can send any objects you like (as long as they are Serializable).
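A minimal sketch of that object-stream round trip; byte-array streams stand in for the socket's streams, and the Message class is just an illustrative Serializable type:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class ObjectStreamDemo {

    /** A message type of our own; anything Serializable can be sent. */
    static class Message implements Serializable {
        final String command;
        final String body;
        Message(String command, String body) { this.command = command; this.body = body; }
    }

    /** Serializes an object and reads it back, as a socket peer would. */
    static Object roundTrip(Object obj) throws IOException, ClassNotFoundException {
        // The byte-array streams stand in for socket.getOutputStream()/getInputStream().
        ByteArrayOutputStream wire = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(wire)) {
            out.writeObject(obj); // sender side: writeObject(objectX)
        }
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(wire.toByteArray()))) {
            return in.readObject(); // receiver side: myCopyOfObjectX = readObject()
        }
    }

    public static void main(String[] args) throws Exception {
        Message copy = (Message) roundTrip(new Message("MESG", "Hi server"));
        System.out.println(copy.command + ": " + copy.body);
    }
}
```

Object streams frame each object for you, so the receiver never has to guess where one message ends and the next begins.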
