Apache Mina still puts data on heap with Direct Buffers

I am using the latest version of Apache MINA (2.2.1) to do low-level communication. A server port is opened which accepts messages from clients; these messages may contain sensitive data.
With the new PCI requirements, what I am trying to achieve is that the data is never written to the heap. To that end I have added IoBuffer.setUseDirectBuffer(true); in the Java main during program startup, so all of MINA's internal buffers are direct-allocated rather than heap-allocated. I am using the following filters:
- SSLFilter
- a PrefixedString decoder filter that extends CumulativeProtocolDecoder:
protected boolean doDecode(IoSession session, IoBuffer in, ProtocolDecoderOutput out) throws Exception {
    if (in.prefixedDataAvailable(4)) {
        int length = in.getInt();
        byte[] bytes = new byte[length];
        in.get(bytes);
        String str = new String(bytes, "UTF-8");
        out.write(str);
        Arrays.fill(bytes, (byte) 42); // overwrite the temporary copy
        //in.sweep();
        return true;
    } else {
        return false;
    }
}
Just after the request/response operation is completed, I still see the message with the sensitive data in the heap; it is in a byte array with no reference attached to it.
I tried the following, but no luck:
- Sending the data without the SSLFilter in the chain, but the data is still visible in the heap.
- Using an older version of MINA (2.0.10); still the same issue with data on the heap.
- In doDecode (above), I also tried the following, but MINA does not seem to like the IoBuffer being modified, and subsequent messages halt:
  - the sweep() operation on the IoBuffer,
  - overwriting the sensitive data in the IoBuffer byte by byte.
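One thing worth noting: new String(bytes, "UTF-8") itself copies the sensitive data into an immutable heap String, and a String's internal array is exactly the kind of unreferenced byte array a heap dump shows once the String becomes garbage; it can never be wiped on demand. Below is a hedged sketch of an alternative direction (not a verified fix; it assumes the downstream handler accepts a char[] and wipes it after use):
protected boolean doDecode(IoSession session, IoBuffer in, ProtocolDecoderOutput out) throws Exception {
    if (!in.prefixedDataAvailable(4)) {
        return false;
    }
    int length = in.getInt();
    int oldLimit = in.limit();
    in.limit(in.position() + length);               // restrict the view to this message
    CharBuffer chars = CharBuffer.allocate(length); // UTF-8 yields at most one char per byte
    StandardCharsets.UTF_8.newDecoder().decode(in.buf(), chars, true);
    in.limit(oldLimit);
    chars.flip();
    char[] sensitive = new char[chars.remaining()];
    chars.get(sensitive);
    Arrays.fill(chars.array(), '\0');               // wipe the decoder's scratch buffer
    out.write(sensitive);                           // the handler wipes 'sensitive' when done
    return true;
}
The point of the sketch is not to avoid the heap entirely (the char[] is still heap-allocated) but to keep the sensitive data only in arrays you control and can deterministically overwrite, instead of in immutable Strings you cannot erase.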

Related

Netty doesn't write

When trying to write with Netty, the written data never ends up at the remote side; this is confirmed with Wireshark.
I have tried:
// Directly using writeAndFlush
channel.writeAndFlush(new Packet());
// Manually flushing
channel.write(new Packet());
channel.flush();
// Even sending bytes won't work:
channel.writeAndFlush(new byte[]{1, 2, 3});
No exception is caught when I wrap it in try { ... } catch (Throwable e) { e.printStackTrace(); }.
What can I do to debug this problem?
Netty is asynchronous, meaning that it won't throw exceptions when a write fails. Instead of throwing exceptions, it returns a Future<?> that will be updated when the request is done. Make sure to log any exceptions coming from this as your first debugging step:
channel.writeAndFlush(...).addListener(new GenericFutureListener<Future<Object>>() {
    @Override
    public void operationComplete(Future<Object> future) {
        // TODO: Use a proper logger in production here
        if (future.isSuccess()) {
            System.out.println("Data written successfully");
        } else {
            System.out.println("Data failed to write:");
            future.cause().printStackTrace();
        }
    }
});
Or more simply:
channel.writeAndFlush(...).addListener(ChannelFutureListener.FIRE_EXCEPTION_ON_FAILURE);
After you get the root cause of the exception, there could be multiple problems:
java.lang.UnsupportedOperationException: unsupported message type: <type> (expected: ...)
Notice: this is also thrown when using an ObjectEncoder but your object does not implement Serializable.
A default Netty channel can only send ByteBufs and FileRegions. You need to convert your objects to these types, either by adding more handlers to the pipeline or by converting them manually to ByteBufs.
A ByteBuf is the Netty variant of a byte array, but with the potential for better performance because it can be stored in direct memory.
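For example, a manual conversion might look like this (a minimal sketch; Unpooled is Netty's standard buffer factory):
// Wrap the raw bytes in a ByteBuf before writing; the default channel can send this.
channel.writeAndFlush(Unpooled.wrappedBuffer(new byte[]{1, 2, 3}));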
The following handlers are commonly used (a pipeline sketch follows the framing notice below):
- To convert a String, use a StringEncoder
- To convert a Serializable, use an ObjectEncoder (warning: not compatible with normal Java object streams)
- To convert a byte[], use a ByteArrayEncoder
Notice: since TCP is a stream-based protocol, you usually want some form of packet size attached, since you may not receive the exact packets that you write. See "Dealing with a Stream-based Transport" in the Netty wiki for more information.
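Putting the handler list and the framing notice together, a pipeline that writes plain Strings as length-prefixed frames might look like this (a hedged sketch; the 4-byte prefix and UTF-8 charset are illustrative choices):
ChannelPipeline p = channel.pipeline();
p.addLast(new LengthFieldPrepender(4));           // prepends each frame's length (outbound runs tail-to-head, so this runs second)
p.addLast(new StringEncoder(CharsetUtil.UTF_8));  // String -> UTF-8 ByteBuf (runs first on write)
channel.writeAndFlush("hello");                   // now produces a well-framed message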

Java heap space error when uploading files through http basic authentication (JAVA) [duplicate]

I am trying to publish a large video/image file from the local file system to an HTTP path, but I run into an out-of-memory error after some time...
Here is the code:
public boolean publishFile(URI publishTo, String localPath) throws Exception {
    InputStream istream = null;
    OutputStream ostream = null;
    boolean isPublishSuccess = false;
    URL url = makeURL(publishTo.getHost(), this.port, publishTo.getPath());
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    if (conn != null) {
        try {
            conn.setDoOutput(true);
            conn.setDoInput(true);
            conn.setRequestMethod("PUT");
            istream = new FileInputStream(localPath);
            ostream = conn.getOutputStream();
            int n;
            byte[] buf = new byte[4096];
            while ((n = istream.read(buf, 0, buf.length)) > 0) {
                ostream.write(buf, 0, n); // <--- ERROR happens on this line...???
            }
            int rc = conn.getResponseCode();
            if (rc == 201) {
                isPublishSuccess = true;
            }
        } catch (Exception ex) {
            log.error(ex);
        } finally {
            if (ostream != null) {
                ostream.close();
            }
            if (istream != null) {
                istream.close();
            }
        }
    }
    return isPublishSuccess;
}
Here is the error I am getting:
Exception in thread "Thread-8773" java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOf(Arrays.java:2786)
at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:94)
at sun.net.www.http.PosterOutputStream.write(PosterOutputStream.java:61)
at com.test.HTTPClient.publishFile(HTTPClient.java:110)
at com.test.HttpFileTransport.put(HttpFileTransport.java:97)
HttpURLConnection is buffering the data so that it can set the Content-Length header (per the HTTP spec).
One alternative, if your destination server supports it, is to use "chunked" transfers. This will buffer only a small portion of data at a time. However, not all services support it (Amazon S3, for example, doesn't).
Another alternative (and imo a better one) is to use Jakarta HttpClient. You can set the "entity" in a request from a file, and the connection code will set request headers appropriately.
Edit: nos commented that the OP could call HttpURLConnection.setFixedLengthStreamingMode(int length). I was unaware of this method; it was added in 1.5, and I haven't used this class since then.
However, I still suggest using Jakarta HttpClient, for the simple reason that it reduces the amount of code that the OP has to maintain. Code that is boilerplate, yet still has the potential for errors:
The OP correctly handles the loop to copy between input and output. Usually when I see an example of this, the poster either doesn't properly check the returned buffer size, or keeps re-allocating the buffers. Congratulations, but you now have to ensure that your successors take as much care.
The exception handling isn't quite so good. Yes, the OP remembers to close the connections in a finally block, and again, congratulations on that. Except that either of the close() calls could throw IOException, keeping the other from executing. And the method as a whole throws Exception, so the compiler isn't going to help catch similar errors.
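As an illustration of that point (a sketch, not the OP's code): try-with-resources, available since Java 7, closes both streams even when one close() throws:
try (InputStream istream = new FileInputStream(localPath);
     OutputStream ostream = conn.getOutputStream()) {
    // ... copy loop as in the question ...
} // both close() calls happen; a failure in one does not skip the other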
I count 31 lines of code to set up and execute the request (excluding the response code check and the URL computation, but including the try/catch/finally). With HttpClient, this would be somewhere in the range of a half dozen LOC.
Even if the OP had written this code perfectly, and refactored it into methods similar to those in Jakarta Commons IO, s/he shouldn't do that. This code has been written and tested by others. I know that it's a waste of my time to rewrite it, and suspect that it's a waste of the OP's time as well.
conn.setFixedLengthStreamingMode((int) new File(localPath).length());
And for buffering you could wrap your streams in a BufferedOutputStream and a BufferedInputStream.
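Combined in the OP's method, that might look like this (a sketch; note the long overload of setFixedLengthStreamingMode only arrived in Java 7, which is why the line above casts to int):
File f = new File(localPath);
conn.setFixedLengthStreamingMode(f.length());               // stream the body instead of buffering it
istream = new BufferedInputStream(new FileInputStream(f));
ostream = new BufferedOutputStream(conn.getOutputStream());
// ... same 4096-byte copy loop as in the question ...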
A good example of chunked uploading can be found here: gdata-java-client
The problem is that the HttpURLConnection class is using a byte array to store your data. Presumably the video you are pushing takes more memory than is available. You have a few options here:
- Increase the memory available to your application. You can use the -Xmx1024m option to give 1 GB of memory to your application. This will increase the amount of data you can store in memory.
- If you still run out of memory, you might want to consider trying another library to push the video up that does not store the data in memory all at once. The Apache Commons HttpClient has such a feature. See this site for more information: http://hc.apache.org/httpclient-3.x/features.html. See this section for multi-part form upload of large files: http://hc.apache.org/httpclient-3.x/methods/multipartpost.html
For anything other than basic GET operations, the built-in java.net HTTP stuff isn't very good. Using Apache Commons HttpClient is recommended for this. It lets you do much more intuitive stuff like this:
PutMethod put = new PutMethod(url);
put.setRequestEntity(new FileRequestEntity(localFile, contentType));
int responseCode = put.executeMethod();
which replaces a lot of your boilerplate code.
HttpsURLConnection#setChunkedStreamingMode(1024 * 1024 * 10); //10MB chunk
This ensures that any file (of any size) is streamed over an HTTPS connection without internal buffering. It should be used when the file size or content length is unknown.
Your problem is that you're trying to fit X bytes of video into X/N bytes of RAM, where N > 1.
You either need to read the video into a smaller buffer and write it out as you go, make the file smaller, or increase the memory available to your process.
Check your heap size. You can use -Xmx to increase it if you've taken the default.

difference between Java TCP Sockets and C TCP Sockets while trying to connect to JDBC

My problem is that C sockets seem to act differently from Java sockets. I have a C proxy, and I tested it between a workload generator (an OLTP benchmark client written in Java) and the JDBC connector of a Postgres DB.
This works great and forwards data from one side to the other, as it should. We need to make this proxy work in Java, so I used the plain ServerSocket and Socket classes from java.net, and I cannot make it work. Postgres returns an authentication error message, assuming that the client did not send the correct password.
Here is how authentication in the JDBC protocol works:
- the client sends a request to connect to a database, specifying the database name and the username
- the server responds with a one-time challenge message (a 13-byte message with random content)
- the client concatenates this message with the user password and performs an MD5 hash
- the server compares the hash received from the client with the hash it computes
[This procedure is performed in order to avoid replay attacks: if the client sent only the MD5 hash of its password, an attacker could replay this message, pretending to be the client.]
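As an aside, the hashing step described above looks roughly like this in Java (a sketch of the idea only; password and challenge are assumed variables, and the real PostgreSQL md5 scheme is actually md5(md5(password + username) + salt), though the replay-protection principle is the same):
MessageDigest md5 = MessageDigest.getInstance("MD5");
md5.update(password.getBytes(StandardCharsets.UTF_8)); // the shared secret
md5.update(challenge);                                 // the server's one-time random bytes
byte[] digest = md5.digest();                          // sent back to the server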
So I inspected the packets with tcpdump and they look correct! The size is exactly as it should be, so maybe the content is corrupted(?).
Sometimes, though, the DB server responds OK to the authentication (depending on the value of the challenge message)! And then the OLTP client sends a couple of queries, but it crashes after a while...
I guess that maybe it has to do with the encoding, so I tried the encoding that C uses (US-ASCII), but still the same.
I send the data using fixed-size character or byte arrays, both in C and in Java!
I really don't have any more ideas, as I have tried so many cases...
What is your guess of what would be the problem?
Here is some representative code that may help give you a clearer view:
byte[] msgBuf;
char[] msgBufChars;
while (fromInputReader.ready()) {
    msgBuf = new byte[1024];
    msgBufChars = new char[1024];
    // read data from one party
    int read = fromInputReader.read(msgBufChars, 0, 1024);
    System.out.println("Read returned : " + read);
    for (int i = 0; i < 1024; i++)
        msgBuf[i] = (byte) msgBufChars[i];
    String messageRead = new String(msgBufChars);
    String messageToWrite = new String(msgBuf);
    System.out.println("message read : " + messageRead);
    System.out.println("message to write : " + new String(messageToWrite));
    // immediately write data to the other party (write the amount of data we read)
    // there is no write method that takes a char[] as a parameter, so pass a byte[]
    toDataOutputStream.write(msgBuf, 0, read);
    toDataOutputStream.flush();
}
There are a couple of message exchanges in the beginning and then Postgres responds with an authentication failure message.
Thanks for your time!
What is your guess of what would be the problem?
It is nothing to do with C versus Java sockets. It is everything to do with bad Java code.
I can see some problems:
You are using a Reader on what should be a binary stream. This is going to result in the data being converted from bytes (from the JDBC client) to characters and then back to bytes. Depending on the character set used by the Reader, this is likely to be destructive.
You should use plain, unadorned1 input streams for both reading and writing, and you should read / write to / from a preallocated byte[].
This is terrible:
for (int i = 0; i < 1024; i++)
    msgBuf[i] = (byte) msgBufChars[i];
If the characters you read are not in the range 0 ... 255, you are mangling them when you stuff them into msgBuf.
You are also assuming that you actually got 1024 characters.
You are using the ready() method to decide when to stop reading stuff. This is almost certainly wrong. Read the javadoc for that method (and think about it) and you should understand why it is wrong. (Hint: what happens if the proxy can read faster than the client can deliver?)
You should use while (true), and then break out of the loop when read tells you it has reached the end of the stream, i.e. when it returns -1.
1 - Just use the stream objects that the Socket API provides. DataXxxStream is unnecessary because the read and write methods are simply call-throughs. I wouldn't even use BufferedXxxStream wrappers in this case, because you are already doing your own buffering using the byte array.
Here's how I'd write that code:
byte[] buffer = new byte[1024]; // or bigger
while (true) {
    int nosRead = inputStream.read(buffer);
    if (nosRead < 0) {
        break;
    }
    // Note that this is a bit dodgy, given that the data you are converting is
    // binary. However, if the purpose is to see what embedded character data
    // looks like, and if the proxy's charset matches the text charset used by
    // the client-side JDBC driver for encoding data, this should achieve that.
    System.out.println("Read returned : " + nosRead);
    System.out.println("message read : " + new String(buffer, 0, nosRead));
    outputStream.write(buffer, 0, nosRead);
    outputStream.flush();
}
C sockets look to act differently than Java sockets.
Impossible. Java sockets are just a very thin layer over C sockets. You're on the wrong track with this line of thinking.
byte[] msgBuf;
char[] msgBufChars;
Why are you reading chars when you want to write bytes? Don't use Readers unless you know that the input is text.
And don't call ready(). There are very few correct uses, and this isn't one of them. Just block.

How to read binary data from socket with Apache MINA?

I know that a server sends an MP3 stream after I connect to it and send a few bytes. How do I read its transmission with Apache MINA? Can you provide any examples, please?
You need a client to read data from the server. If it is possible to make a TCP connection with the server, you can get help from this tutorial on the Apache MINA TCP client.
[UPDATE]
Data will be received in the ClientSessionHandler's messageReceived. You can override this function according to your needs. You may go through the SumUp example to understand it fully.
[UPDATE 2]
To receive bytes in your case, you will have to update the messageReceived of your session handler a bit. You can use IoBuffer to read the bytes. Something like this:
public void messageReceived(IoSession session, Object message) {
    if (message instanceof IoBuffer) {
        IoBuffer buffer = (IoBuffer) message;
        byte[] b = new byte[buffer.remaining()];
        buffer.get(b);
    }
}
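To actually capture the MP3 stream, one might append those bytes to an output stream. A hedged continuation of the handler above (the mp3Out field and where it is opened/closed are assumptions for illustration, not part of MINA's API):
private OutputStream mp3Out; // e.g. opened in sessionOpened(), closed in sessionClosed()

@Override
public void messageReceived(IoSession session, Object message) throws Exception {
    if (message instanceof IoBuffer) {
        IoBuffer buffer = (IoBuffer) message;
        byte[] b = new byte[buffer.remaining()];
        buffer.get(b);
        mp3Out.write(b); // raw MP3 bytes arrive in order over TCP
    }
}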
