Serving a file with Netty - response is truncated by one byte - java

I'm serving files from Android assets via a Netty server (images, HTML).
Text files such as HTML are saved with an .mp3 extension to disable compression (I need an InputStream!).
My pipeline looks like this:
pipeline.addLast("decoder", new HttpRequestDecoder());
pipeline.addLast("aggregator", new HttpChunkAggregator(65536));
pipeline.addLast("encoder", new HttpResponseEncoder());
pipeline.addLast("chunkedWriter", new ChunkedWriteHandler());
pipeline.addLast("handler", new AssetsServerHandler(context));
My handler is:
public class AssetsServerHandler extends SimpleChannelUpstreamHandler {
    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
        final HttpRequest request = (HttpRequest) e.getMessage();
        // some checks
        final FileInputStream is;
        final AssetFileDescriptor afd;
        try {
            afd = assetManager.openFd(path);
            is = afd.createInputStream();
        } catch (IOException exc) {
            sendError(ctx, NOT_FOUND);
            return;
        }
        final long fileLength = afd.getLength();
        HttpResponse response = new DefaultHttpResponse(HTTP_1_1, OK);
        setContentLength(response, fileLength);
        final Channel ch = e.getChannel();
        ch.write(response);
        final ChannelFuture future = ch.write(new ChunkedStream(is));
        future.addListener(new ChannelFutureListener() {
            @Override
            public void operationComplete(ChannelFuture future) throws Exception {
                future.getChannel().close();
            }
        });
        if (!isKeepAlive(request)) {
            future.addListener(ChannelFutureListener.CLOSE);
        }
    }
    // other stuff
}
With that handler I get my responses truncated by at least one byte. If I change ChunkedStream to ChunkedNioFile (and so pass is.getChannel() instead of is to its constructor), everything works perfectly.
Please help me understand what is wrong with ChunkedStream.

Your code looks right to me. Does the FileInputStream returned by the AssetFileDescriptor contain all the bytes? You could check this with a unit test. If there is no bug in it, then it's a bug in Netty. I make heavy use of ChunkedStream and have never had such a problem yet, but maybe it really depends on the nature of the InputStream.
It would be nice if you could write a test case and open an issue on Netty's GitHub.
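For reference, a minimal check along those lines could look like this (a sketch, assuming it runs on the device inside a test method that declares throws IOException; assetManager and path are the names from the question):
// Verify that the AssetFileDescriptor's stream really yields afd.getLength() bytes.
// If it does, the truncation happens on Netty's side.
AssetFileDescriptor afd = assetManager.openFd(path);
long expected = afd.getLength();
InputStream is = afd.createInputStream();
long actual = 0;
byte[] buf = new byte[8192];
int n;
while ((n = is.read(buf)) != -1) {
    actual += n;
}
is.close();
assertEquals(expected, actual);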

Related

Okio/Okhttp download file using BufferedSink and decode Base64 without having whole file in memory multiple times

I've got a bit of a problem at the moment. For my "in-app" update I'm downloading the new Base64-encoded .apk from my webspace. I have the functionality pretty much down; this is the code without decoding.
public void onResponse(Call call, Response response) throws IOException {
    if (response.isSuccessful()) {
        ResponseBody body = response.body();
        BufferedSource source = body.source();
        source.request(Long.MAX_VALUE);
        Buffer buffer = source.buffer();
        String rString = buffer.clone().readString(Charset.forName("UTF-8"));
        Log.i("Test: ", AppUtils.decodeBase64(rString));
        if (rString.equals("xxx")) {
            EventBus.getDefault().post(new KeyNotValid());
            dispatcher.cancelAll();
        } else {
            EventBus.getDefault().post(new SaveKey(apikey));
            BufferedSink sink = Okio.buffer(Okio.sink(myFile));
            sink.writeAll(source);
            sink.flush();
            sink.close();
        }
    }
}
The Buffer/Log is not really necessary; I'm just using it to check the response during testing.
How would I go about decoding the bytes before I write them to the sink?
I tried doing it via ByteString, but I couldn't find a way to write the decoded String back into a BufferedSource.
Most alternatives are pretty slow, like reopening the file afterwards, reading the bytes into memory, decoding and writing them back.
Would really appreciate any help on this.
Cheers
You can already consume the response as an InputStream via ResponseBody.byteStream. You can decorate this stream with https://commons.apache.org/proper/commons-codec/apidocs/org/apache/commons/codec/binary/Base64InputStream.html and use it to read a stream of bytes and write it to the Sink for the file in chunks.
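For illustration, a minimal sketch of that idea, assuming commons-codec is on the classpath (myFile and response are the names from the question):
// Decorate the raw response stream with a decoding Base64InputStream
// (org.apache.commons.codec.binary), then stream it into the file sink.
Source decoded = Okio.source(new Base64InputStream(response.body().byteStream()));
BufferedSink sink = Okio.buffer(Okio.sink(myFile));
sink.writeAll(decoded);
sink.close();
decoded.close();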
I know this answer arrives quite late and that Yuri's answer is technically correct, but I think the most idiomatic way to do that is to take advantage of the composition pattern promoted by Okio to create a Source that decodes from Base64 (or a Sink that encodes to Base64, if you need that).
Here's a little proof of concept (I'm sure it can be improved):
public class Base64Source implements Source {
    private Source delegate;
    private Base64.Decoder decoder; // Using the Java 8 API, but it can be any library

    public Base64Source(Source delegate) {
        this(delegate, Base64.getDecoder());
    }

    public Base64Source(Source delegate, Base64.Decoder decoder) {
        this.delegate = delegate;
        this.decoder = decoder;
    }

    @Override
    public long read(Buffer sink, long byteCount) throws IOException {
        Buffer buffer = new Buffer();
        long actualRead = this.delegate.read(buffer, byteCount);
        if (actualRead == -1) {
            return -1;
        }
        // Note: this assumes each read returns whole Base64 quanta (a multiple
        // of 4 bytes); a production version would carry leftovers to the next read.
        byte[] encoded = buffer.readByteArray(actualRead);
        byte[] decoded = decoder.decode(encoded);
        sink.write(decoded);
        return decoded.length;
    }

    @Override
    public Timeout timeout() {
        return this.delegate.timeout();
    }

    @Override
    public void close() throws IOException {
        this.delegate.close();
    }
}
And here's how it can be used:
BufferedSource source = Okio.buffer(new Base64Source(originalSource));
BufferedSink sink = ... // create sink
sink.writeAll(source);
// Don't forget to close the source/sink to flush and free resources
sink.close();
source.close();

How to abort large file upload with OkHttp?

I am using OkHttp 3.1.2.
I've created a file upload similar to the original recipe, which is found here: https://github.com/square/okhttp/blob/master/samples/guide/src/main/java/okhttp3/recipes/PostMultipart.java
I can't find an example of how to abort the upload of a large file upon user request. I mean not how to get the user's request, but how to tell OkHttp to stop sending data.
So far the only solution I can imagine is to use a custom RequestBody, add an abort() method, and override the writeTo() method like this:
private volatile boolean aborted; // flipped from another thread by abort()

public void abort() {
    aborted = true;
}

@Override
public void writeTo(BufferedSink sink) throws IOException {
    Source source = null;
    try {
        source = Okio.source(mFile);
        long transferred = 0;
        long read;
        while (!aborted && (read = source.read(sink.buffer(), SEGMENT_SIZE)) != -1) {
            transferred += read;
            sink.flush();
            mListener.transferredSoFar(transferred);
        }
    } finally {
        Util.closeQuietly(source);
    }
}
Is there any other way?
It turns out it is quite easy:
Just hold a reference to the Call object and cancel it when needed, like this:
private Call mCall;

private void executeRequest(Request request) {
    mCall = mOkHttpClient.newCall(request);
    try {
        Response response = mCall.execute();
        ...
    } catch (IOException e) {
        if (!mCall.isCanceled()) {
            mLogger.error("Error uploading file: {}", e);
            uploadFailed(); // notify whoever is needed
        }
    }
}

public void abortUpload() {
    if (mCall != null) {
        mCall.cancel();
    }
}
Please note that when you cancel the Call while uploading, an IOException will be thrown, so you have to check in the catch block whether the call was cancelled (as shown above); otherwise you will get a false positive for an error.
I think the same approach can be used for aborting the download of large files.
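For completeness, a sketch of the same pattern with an asynchronous call (assuming OkHttp 3.x, as in the question); cancel() works identically:
Call call = mOkHttpClient.newCall(request);
call.enqueue(new Callback() {
    @Override
    public void onFailure(Call call, IOException e) {
        if (!call.isCanceled()) {
            // a genuine failure, not a user-initiated abort
        }
    }

    @Override
    public void onResponse(Call call, Response response) throws IOException {
        response.close();
    }
});
// later, e.g. from a cancel button handler:
call.cancel();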

Java Webdav File Synchronization

I have cloud storage at Strato, namely HiDrive. It uses the WebDAV protocol. Note that it's based on HTTP. The client application they provide is poor and buggy, so I tried various other tools for synchronization, but none worked the way I need.
I'm therefore trying to implement it in Java using the Sardine project. Is there any code for hard-copying a local source folder to an external cloud folder? I haven't found anything in that direction.
The following code is supposed to upload the file...
Sardine sardine = SardineFactory.begin("username", "password");
InputStream fis = new FileInputStream(new File("some/file/test.txt"));
sardine.put("https://webdav.hidrive.strato.com/users/username/Backup", fis);
... but throws an exception instead:
Exception in thread "main" com.github.sardine.impl.SardineException: Unexpected response (301 Moved Permanently)
at com.github.sardine.impl.handler.ValidatingResponseHandler.validateResponse(ValidatingResponseHandler.java:48)
at com.github.sardine.impl.handler.VoidResponseHandler.handleResponse(VoidResponseHandler.java:34)
at com.github.sardine.impl.handler.VoidResponseHandler.handleResponse(VoidResponseHandler.java:1)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:218)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:160)
at com.github.sardine.impl.SardineImpl.execute(SardineImpl.java:828)
at com.github.sardine.impl.SardineImpl.put(SardineImpl.java:755)
at com.github.sardine.impl.SardineImpl.put(SardineImpl.java:738)
at com.github.sardine.impl.SardineImpl.put(SardineImpl.java:726)
at com.github.sardine.impl.SardineImpl.put(SardineImpl.java:696)
at com.github.sardine.impl.SardineImpl.put(SardineImpl.java:689)
at com.github.sardine.impl.SardineImpl.put(SardineImpl.java:682)
at com.github.sardine.impl.SardineImpl.put(SardineImpl.java:676)
Printing out the folders in that directory works, so the connection/authentication did succeed:
List<DavResource> resources = sardine.list("https://webdav.hidrive.strato.com/users/username/Backup");
for (DavResource res : resources)
{
System.out.println(res);
}
Please either help me fix my code or link me to some file synchronization library that works for my purpose.
Sardine uses HttpClient internally. There is a similar question here where you can find an answer: Httpclient 4, error 302. How to redirect?
Try converting the InputStream into a byte array before you call put(). Something like this:
byte[] fisByte = IOUtils.toByteArray(fis);
sardine.put("https://webdav.hidrive.strato.com/users/username/Backup", fisByte);
It worked for me. Let me know.
I had to extend org.apache.http.impl.client.LaxRedirectStrategy and also the getRedirect() method of org.apache.http.impl.client.DefaultRedirectStrategy with handling for the needed methods: PUT, MKCOL, etc. By default only GET is redirected.
It looks like this:
private static final String[] REDIRECT_METHODS = new String[] { HttpGet.METHOD_NAME, HttpPost.METHOD_NAME, HttpHead.METHOD_NAME, HttpPut.METHOD_NAME, HttpDelete.METHOD_NAME, HttpMkCol.METHOD_NAME };
The isRedirectable method:
for (final String m : REDIRECT_METHODS) {
    if (m.equalsIgnoreCase(method)) {
        System.out.println("isRedirectable true");
        return true;
    }
}
return method.equalsIgnoreCase(HttpPropFind.METHOD_NAME);
The getRedirect method:
final URI uri = getLocationURI(request, response, context);
final String method = request.getRequestLine().getMethod();
if (method.equalsIgnoreCase(HttpHead.METHOD_NAME)) {
    return new HttpHead(uri);
} else if (method.equalsIgnoreCase(HttpGet.METHOD_NAME)) {
    return new HttpGet(uri);
} else if (method.equalsIgnoreCase(HttpPut.METHOD_NAME)) {
    HttpPut httpPut = new HttpPut(uri);
    httpPut.setEntity(((HttpEntityEnclosingRequest) request).getEntity());
    return httpPut;
} else if (method.equalsIgnoreCase("MKCOL")) {
    return new HttpMkCol(uri);
} else if (method.equalsIgnoreCase("DELETE")) {
    return new HttpDelete(uri);
} else {
    final int status = response.getStatusLine().getStatusCode();
    if (status == HttpStatus.SC_TEMPORARY_REDIRECT) {
        return RequestBuilder.copy(request).setUri(uri).build();
    } else {
        return new HttpGet(uri);
    }
}
That worked for me.
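If you go this route, the strategy still has to be plugged into the HttpClient that Sardine uses. A sketch of the wiring, assuming a Sardine version whose SardineImpl accepts an HttpClientBuilder (WebDavRedirectStrategy is a placeholder name for the subclass described above):
// Plug the custom redirect strategy into the client Sardine builds on.
HttpClientBuilder builder = HttpClientBuilder.create()
        .setRedirectStrategy(new WebDavRedirectStrategy());
Sardine sardine = new SardineImpl(builder);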

upload progress in FTPClient

I'm using commons-net FTPClient to upload some files.
How can I get the progress of the upload (number of bytes uploaded so far)?
Thanks
Sure, just use a CopyStreamListener. Below you will find an example (copied from the commons-net wiki) of file retrieval, so you can easily change it the other way round.
try {
    InputStream stO =
        new BufferedInputStream(
            ftp.retrieveFileStream("foo.bar"),
            ftp.getBufferSize());
    OutputStream stD = new FileOutputStream("bar.foo");
    org.apache.commons.net.io.Util.copyStream(
        stO,
        stD,
        ftp.getBufferSize(),
        /* I'm using the UNKNOWN_STREAM_SIZE constant here, but you can use the size of the file too */
        org.apache.commons.net.io.CopyStreamEvent.UNKNOWN_STREAM_SIZE,
        new org.apache.commons.net.io.CopyStreamAdapter() {
            public void bytesTransferred(long totalBytesTransferred,
                                         int bytesTransferred,
                                         long streamSize) {
                // Your progress control code here
            }
        });
    ftp.completePendingCommand();
} catch (Exception e) { ... }
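Changed to the upload direction the question asks about, it might look roughly like this (a sketch; localFile and remoteName are placeholder names):
// Upload with progress reporting: storeFileStream() gives the raw
// data connection, and Util.copyStream() drives the listener.
InputStream local = new BufferedInputStream(new FileInputStream(localFile));
OutputStream remote = ftp.storeFileStream(remoteName);
org.apache.commons.net.io.Util.copyStream(
    local,
    remote,
    ftp.getBufferSize(),
    localFile.length(), // size is known, so streamSize gets reported
    new org.apache.commons.net.io.CopyStreamAdapter() {
        public void bytesTransferred(long totalBytesTransferred,
                                     int bytesTransferred,
                                     long streamSize) {
            // progress: totalBytesTransferred of streamSize bytes
        }
    });
remote.close();
local.close();
ftp.completePendingCommand();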
I think perhaps it is better to use a CountingOutputStream, since it seems intended for this very purpose?
This is answered by someone here: Monitoring progress using Apache Commons FTPClient

How can you pipe an OutputStream to a StreamingDataHandler?

I've got a Java web service in JAX-WS that returns an OutputStream from another method. I can't seem to figure out how to stream the OutputStream into the returned DataHandler any other way than to create a temporary file, write to it, then open it back up again as an InputStream. Here's an example:
@MTOM
@WebService
class Example {
    @WebMethod
    public @XmlMimeType("application/octet-stream") DataHandler service() throws IOException {
        // Create a temporary file to write to
        File fTemp = File.createTempFile("my", "tmp");
        OutputStream out = new FileOutputStream(fTemp);
        // Method takes an output stream and writes to it
        writeToOut(out);
        out.close();
        // Create a data source and data handler based on that temporary file
        DataSource ds = new FileDataSource(fTemp);
        DataHandler dh = new DataHandler(ds);
        return dh;
    }
}
The main issue is that the writeToOut() method can return data that are far larger than the computer's memory. That's why the method is using MTOM in the first place - to stream the data. I can't seem to wrap my head around how to stream the data directly from the OutputStream that I need to provide to the returned DataHandler (and ultimately the client, who receives the StreamingDataHandler).
I've tried playing around with PipedInputStream and PipedOutputStream, but those don't seem to be quite what I need, because the DataHandler would need to be returned after the PipedOutputStream is written to.
Any ideas?
I figured out the answer, along the lines that Christian was talking about (creating a new thread to execute writeToOut()):
@MTOM
@WebService
class Example {
    @WebMethod
    public @XmlMimeType("application/octet-stream") DataHandler service() throws IOException {
        // Create a piped output stream, wrapped in a final array so that the
        // OutputStream doesn't need to be final before handing it to the new Thread.
        PipedOutputStream out = new PipedOutputStream();
        InputStream in = new PipedInputStream(out);
        final Object[] args = { out };
        // Create a new thread which writes to out.
        new Thread(
            new Runnable() {
                public void run() {
                    try {
                        writeToOut(args);
                        ((OutputStream) args[0]).close();
                    } catch (IOException e) {
                        // run() cannot throw checked exceptions; log instead.
                        e.printStackTrace();
                    }
                }
            }
        ).start();
        // Return the InputStream to the client.
        DataSource ds = new ByteArrayDataSource(in, "application/octet-stream");
        DataHandler dh = new DataHandler(ds);
        return dh;
    }
}
It is a tad more complex due to final variables, but as far as I can tell this is correct. When the thread is started, it blocks when it first tries to call out.write(); at the same time, the input stream is returned to the client, who unblocks the write by reading the data. (The problem with my previous implementations of this solution was that I wasn't properly closing the stream, and thus running into errors.)
Sorry, I only did this for C# and not Java, but I think your method should launch a thread to run writeToOut(out); in parallel. You need to create a special stream and pass it to the new thread, which gives that stream to writeToOut. After starting the thread you return that stream object to your caller.
If you only have a method that writes to a stream and returns afterwards, and another method that consumes a stream and returns afterwards, there is no other way.
Of course the tricky part is to get hold of such a (multithreading-safe) stream: it should block each side when an internal buffer gets too full.
I don't know if a Java pipe stream works for that.
Wrapper pattern? :-)
How about a custom javax.activation.DataSource implementation (only 4 methods) to be able to do this?
return new DataHandler(new DataSource() {
// implement getOutputStream to return the stream used inside writeToOut()
...
});
I don't have the IDE available to test this, so I'm only making a suggestion. I would also need the general layout of writeToOut :-).
In my application I use an InputStreamDataSource implementation that takes an InputStream as a constructor argument instead of the File in FileDataSource. It has worked so far.
public class InputStreamDataSource implements DataSource {
    ByteArrayOutputStream buffer = new ByteArrayOutputStream();
    private final String name;

    public InputStreamDataSource(InputStream inputStream, String name) {
        this.name = name;
        try {
            int nRead;
            byte[] data = new byte[16384];
            while ((nRead = inputStream.read(data, 0, data.length)) != -1) {
                buffer.write(data, 0, nRead);
            }
            buffer.flush();
            inputStream.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    @Override
    public String getContentType() {
        return new MimetypesFileTypeMap().getContentType(name);
    }

    @Override
    public InputStream getInputStream() throws IOException {
        return new ByteArrayInputStream(buffer.toByteArray());
    }

    @Override
    public String getName() {
        return name;
    }

    @Override
    public OutputStream getOutputStream() throws IOException {
        throw new IOException("Read-only data");
    }
}
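A usage sketch (note that this implementation copies the whole stream into an in-memory buffer up front, so it is convenient for modest payloads but does not help with the larger-than-memory case from the question):
// Wrap any InputStream; the name is only used to guess the MIME type.
DataHandler dh = new DataHandler(new InputStreamDataSource(inputStream, "data.bin"));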