I am trying to read, in an async way, a file that I receive from the client. The idea is to receive the file, validate it, and if the validations pass, send a response to the client saying that everything is OK, then process the file in the background, so the user doesn't need to wait until the file is processed.
For that I receive the file in my resource as an InputStream:
@Override
@POST
@Path("/bulk")
@Consumes("text/csv")
@Produces(MediaType.APPLICATION_JSON)
public Response importEmployees(InputStream inputStream) {
    if (fileIsNotValid(inputStream)) {
        throw exceptionFactoryBean.createBadRequestException("there was an error with the file");
    }
    try {
        CompletableFuture.runAsync(() -> {
            employeeService.importEmployees(inputStream);
        }).exceptionally(e -> {
            LOG.error(format(ERROR_IMPORTING_FILE, e.getMessage()));
            return null;
        });
    } catch (RuntimeException e) {
        LOG.error(format(ERROR_SENDING_EMAIL, e.getMessage()));
        throw exceptionFactoryBean.createServiceException("payment-method.export.installment-schema.error");
    }
    return Response.ok().build();
}
For the async part I used the runAsync() method of CompletableFuture.
However, inside my employeeService.importEmployees() method I try to read the InputStream and I get a java.util.concurrent.CompletionException: java.lang.NullPointerException:
public List<ImportResult> importEmployees(final InputStream inputStream) {
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    byte[] buffer = new byte[1024];
    int len;
    try {
        while ((len = inputStream.read(buffer)) > -1) {
            baos.write(buffer, NumberUtils.INTEGER_ZERO, len);
        }
The InputStream is not null. Debugging at a lower level, I can see that the wrapper field of the Http11InputBuffer class is null when I try to read the InputStream.
Can you see what errors I have, or how I can set the wrapper attribute of the Http11InputBuffer before reading the InputStream?
You are not waiting for the result, so the method will return the Response before importEmployees is executed. You need to wait with a join()/get() before returning the response:
public Response importEmployees(InputStream inputStream) {
    ...
    CompletableFuture.runAsync(() -> { ... }).join();
    ...
    return Response.ok().build();
}
There is probably no point in making this code reactive, though.
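If you still want the background processing the question asks for, a common workaround for the NullPointerException is to drain the request's InputStream synchronously (while the container still keeps it open) and hand the async task its own in-memory copy. Below is a minimal stdlib sketch of that pattern; the class name and CSV content are made up for illustration, and a real resource would return immediately instead of joining:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;
import java.util.concurrent.CompletableFuture;

public class BufferThenAsync {

    // Copy the request body while the request is still open, so the
    // background task no longer depends on the container-managed stream.
    static byte[] readAllBytes(InputStream in) throws IOException {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        byte[] buffer = new byte[1024];
        int len;
        while ((len = in.read(buffer)) != -1) {
            baos.write(buffer, 0, len);
        }
        return baos.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        // Stand-in for the request body received by the resource.
        InputStream request = new ByteArrayInputStream("id,name\n1,Ada\n".getBytes());

        byte[] body = readAllBytes(request);          // synchronous copy
        CompletableFuture<Void> task = CompletableFuture.runAsync(() -> {
            // The async task works on its own private copy of the data.
            InputStream copy = new ByteArrayInputStream(body);
            try {
                System.out.println("processing " + copy.available() + " bytes in background");
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        });
        task.join();                                  // demo only; the resource would just return 200
    }
}
```

The trade-off is that the whole upload is held in memory, so this only suits files of bounded size; for very large uploads you would spool to a temporary file instead.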
I noticed that the following code will use the same OutputStream object (same hash code in the debugger) if a curl request is interrupted with Ctrl+C and then re-run. Even with different parameters, it uses the same OutputStream. This obviously causes very odd output.
The original output stream starts throwing a NullPointerException when written to, because the underlying HttpOutputStream no longer exists (broken pipe). This is why I think it is odd that a subsequent request would reuse the same OutputStream object.
Closing the output stream in a finally block fixes the issue. Some examples I've seen around the web don't explicitly close the output stream. Is the reuse of the OutputStream expected? Does anyone have any ideas on why I'd be seeing this behavior?
@RequestMapping(value = URI_ROOT, method = RequestMethod.POST, produces = {"text/event-stream"})
public StreamingResponseBody methodName(
        ... params
) {
    return new StreamingResponseBody() {
        @Override
        public void writeTo(OutputStream outputStream) {
            try {
                ... code ...
            } catch (Exception e) {
                LOGGER.info("Migration thread interrupted.", e);
            } finally {
                IoUtil.closeSilently(outputStream); // This fixes it.
            }
        }
    };
}
I am using OkHttp 3.1.2.
I've created a file upload similar to the original recipe, which can be found here: https://github.com/square/okhttp/blob/master/samples/guide/src/main/java/okhttp3/recipes/PostMultipart.java
I can't find an example of how to abort the upload of a large file upon user request. I mean not how to get the user request, but how to tell OkHttp to stop sending data.
So far the only solution I can imagine is to use a custom RequestBody, add an abort() method, and override the writeTo() method like this:
public void abort() {
    aborted = true;
}

@Override
public void writeTo(BufferedSink sink) throws IOException {
    Source source = null;
    try {
        source = Okio.source(mFile);
        long transferred = 0;
        long read;
        while (!aborted && (read = source.read(sink.buffer(), SEGMENT_SIZE)) != -1) {
            transferred += read;
            sink.flush();
            mListener.transferredSoFar(transferred);
        }
    } finally {
        Util.closeQuietly(source);
    }
}
Is there any other way?
It turns out it is quite easy: just hold a reference to the Call object and cancel it when needed, like this:
private Call mCall;

private void executeRequest(Request request) {
    mCall = mOkHttpClient.newCall(request);
    try {
        Response response = mCall.execute();
        ...
    } catch (IOException e) {
        if (!mCall.isCanceled()) {
            mLogger.error("Error uploading file: {}", e);
            uploadFailed(); // notify whoever is needed
        }
    }
}

public void abortUpload() {
    if (mCall != null) {
        mCall.cancel();
    }
}
Please note that when you cancel the Call while uploading, an IOException will be thrown, so you have to check in the catch block whether the call was cancelled (as shown above); otherwise you will get a false positive for an error.
I think the same approach can be used for aborting the download of large files.
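The custom-RequestBody idea from the question boils down to a cooperative, flag-checked stream copy, which is worth seeing in isolation. Here is a stdlib-only sketch of that pattern (class and method names are made up; OkHttp is not involved):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.concurrent.atomic.AtomicBoolean;

public class AbortableCopy {

    private final AtomicBoolean aborted = new AtomicBoolean(false);

    /** May be called from another thread; checked between buffer writes. */
    public void abort() {
        aborted.set(true);
    }

    /** Copies until EOF or until abort() is called; returns bytes transferred. */
    public long copy(InputStream in, OutputStream out) throws IOException {
        byte[] buffer = new byte[8192];
        long transferred = 0;
        int read;
        while (!aborted.get() && (read = in.read(buffer)) != -1) {
            out.write(buffer, 0, read);
            transferred += read;
        }
        return transferred;
    }

    public static void main(String[] args) throws IOException {
        AbortableCopy copier = new AbortableCopy();
        byte[] data = new byte[100_000];
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        long n = copier.copy(new ByteArrayInputStream(data), sink);
        System.out.println("transferred " + n + " bytes");
    }
}
```

With OkHttp itself, Call.cancel() as shown in the answer is still the better choice, since it also tears down the underlying connection; the flag approach can only stop between buffer writes.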
I need to build a web service with Jersey that downloads a big file from another service and returns it to the client.
I would like Jersey to read some bytes into a buffer and write those bytes to the client socket.
I would like it to use non-blocking I/O so I don't keep a thread busy (this could not be achieved).
@GET
@Path("mypath")
public void getFile(@Suspended final AsyncResponse res) {
    Client client = ClientBuilder.newClient();
    WebTarget t = client.target("http://webserviceURL");
    t.request()
        .header("some header", "value for header")
        .async().get(new InvocationCallback<byte[]>() {
            public void completed(byte[] response) {
                res.resume(response);
            }
            public void failed(Throwable throwable) {
                // reply with error
                res.resume(throwable.getMessage());
                throwable.printStackTrace();
            }
        });
}
So far I have this code, and I believe Jersey will download the complete file and then write it to the client, which is not what I want to do.
Any thoughts?
The client-side async request isn't going to do much for your use case; it's more meant for "fire and forget" scenarios. What you can do, though, is get the InputStream from the client Response and combine it with a server-side StreamingOutput to stream the results. The server will start sending the data as it comes in from the other remote resource.
Below is an example. The "/file" endpoint is the dummy remote resource that serves up the file; the "/client" endpoint consumes it.
@Path("stream")
@Produces(MediaType.APPLICATION_OCTET_STREAM)
public class ClientStreamingResource {

    private static final String INFILE = "Some File";

    @GET
    @Path("file")
    public Response fileEndpoint() {
        final File file = new File(INFILE);
        final StreamingOutput output = new StreamingOutput() {
            @Override
            public void write(OutputStream out) {
                try (FileInputStream in = new FileInputStream(file)) {
                    byte[] buf = new byte[512];
                    int len;
                    while ((len = in.read(buf)) != -1) {
                        out.write(buf, 0, len);
                        out.flush();
                        System.out.println("---- wrote 512 bytes file ----");
                    }
                } catch (IOException ex) {
                    throw new InternalServerErrorException(ex);
                }
            }
        };
        return Response.ok(output)
                .header(HttpHeaders.CONTENT_LENGTH, file.length())
                .build();
    }

    @GET
    @Path("client")
    public void clientEndpoint(@Suspended final AsyncResponse asyncResponse) {
        final Client client = ClientBuilder.newClient();
        final WebTarget target = client.target("http://localhost:8080/stream/file");
        final Response clientResponse = target.request().get();
        final StreamingOutput output = new StreamingOutput() {
            @Override
            public void write(OutputStream out) {
                try (final InputStream entityStream = clientResponse.readEntity(InputStream.class)) {
                    byte[] buf = new byte[512];
                    int len;
                    while ((len = entityStream.read(buf)) != -1) {
                        out.write(buf, 0, len);
                        out.flush();
                        System.out.println("---- wrote 512 bytes client ----");
                    }
                } catch (IOException ex) {
                    throw new InternalServerErrorException(ex);
                }
            }
        };
        ResponseBuilder responseBuilder = Response.ok(output);
        if (clientResponse.getHeaderString("Content-Length") != null) {
            responseBuilder.header("Content-Length", clientResponse.getHeaderString("Content-Length"));
        }
        new Thread(() -> {
            asyncResponse.resume(responseBuilder.build());
        }).start();
    }
}
I used cURL to make the request, and the jetty-maven-plugin to be able to run the example from the command line. When you run it and make the request, you should see the server logging
---- wrote 512 bytes file ----
---- wrote 512 bytes file ----
---- wrote 512 bytes client ----
---- wrote 512 bytes file ----
---- wrote 512 bytes client ----
---- wrote 512 bytes file ----
---- wrote 512 bytes client ----
---- wrote 512 bytes file ----
---- wrote 512 bytes client ----
...
while the cURL client keeps track of the results.
The point to take away from this is that the "remote server" logging happens at the same time as the client resource's logging. This shows that the client doesn't wait to receive the entire file; it starts sending out bytes as soon as it starts receiving them.
Some things to note about the example:
I used a very small buffer size (512) because I was testing with a small (1 MB) file; I really didn't want to wait on a large file for testing. But I would imagine large files should work just the same. Of course you will want to increase the buffer size to something larger.
In order to use the smaller buffer size, you need to set the Jersey property ServerProperties.OUTBOUND_CONTENT_LENGTH_BUFFER to 0. The reason is that Jersey keeps an internal buffer of size 8192, which would cause my 512-byte chunks of data not to flush until 8192 bytes were buffered. So I just disabled it.
When using AsyncResponse, you should use another thread, as I did. You may want to use an executor instead of explicitly creating threads, though. If you don't use another thread, then you are still holding up a thread from the container's thread pool.
UPDATE
Instead of managing your own threads/executor, you can annotate the client resource with @ManagedAsync and let Jersey manage the threads:
@ManagedAsync
@GET
@Path("client")
public void clientEndpoint(@Suspended final AsyncResponse asyncResponse) {
    ...
    asyncResponse.resume(responseBuilder.build());
}
I am playing around with Jersey and would like to know how one should implement a "download" feature. For example, let's say I have some resources under /files/ that I would like to be downloadable via a GET; how should I do this? I already know the proper annotations and implementations for GET, PUT, POST, and DELETE, but I'm not quite sure how one should treat binary data in this case. Could somebody please point me in the right direction, or show me a simple implementation? I've had a look at the jersey-samples-1.4, but I can't seem to find what I am looking for.
Many thanks!
You should use the @Produces annotation to specify which media type the file is (PDF, ZIP, etc.). The Java specification for this annotation can be found here.
Your server should return the created file. For example, in plain Java you can do something like this:
@GET
@Produces(MediaType.APPLICATION_OCTET_STREAM)
@Path("path")
public StreamingOutput getFile() {
    return new StreamingOutput() {
        public void write(OutputStream out) throws IOException, WebApplicationException {
            // try-with-resources closes the stream even if a read/write fails
            try (FileInputStream in = new FileInputStream(my_file)) {
                byte[] buffer = new byte[4096];
                int length;
                while ((length = in.read(buffer)) > 0) {
                    out.write(buffer, 0, length);
                }
            } catch (Exception e) {
                throw new WebApplicationException(e);
            }
        }
    };
}
I've got a Java web service in JAX-WS that returns an OutputStream from another method. I can't figure out how to stream the OutputStream into the returned DataHandler any way other than creating a temporary file, writing to it, then opening it back up again as an InputStream. Here's an example:
@MTOM
@WebService
class Example {
    @WebMethod
    public @XmlMimeType("application/octet-stream") DataHandler service() throws IOException {
        // Create a temporary file to write to
        File fTemp = File.createTempFile("my", "tmp");
        OutputStream out = new FileOutputStream(fTemp);
        // Method takes an output stream and writes to it
        writeToOut(out);
        out.close();
        // Create a data source and data handler based on that temporary file
        DataSource ds = new FileDataSource(fTemp);
        DataHandler dh = new DataHandler(ds);
        return dh;
    }
}
The main issue is that the writeToOut() method can produce data far larger than the computer's memory. That's why the method is using MTOM in the first place: to stream the data. I can't seem to wrap my head around how to stream the data directly from the OutputStream that I need to provide to the returned DataHandler (and ultimately to the client, who receives the StreamingDataHandler).
I've tried playing around with PipedInputStream and PipedOutputStream, but those don't seem to be quite what I need, because the DataHandler would need to be returned after the PipedOutputStream is written to.
Any ideas?
I figured out the answer, along the lines of what Christian was suggesting (creating a new thread to execute writeToOut()):
@MTOM
@WebService
class Example {
    @WebMethod
    public @XmlMimeType("application/octet-stream") DataHandler service() throws IOException {
        // Create a piped output stream, wrapped in a final array so that the
        // OutputStream doesn't need to be final before handing it to the new Thread.
        PipedOutputStream out = new PipedOutputStream();
        InputStream in = new PipedInputStream(out);
        final Object[] args = { out };
        // Create a new thread which writes to out.
        new Thread(
            new Runnable() {
                public void run() {
                    try {
                        writeToOut(args);
                        ((OutputStream) args[0]).close();
                    } catch (IOException e) {
                        throw new UncheckedIOException(e);
                    }
                }
            }
        ).start();
        // Return the InputStream to the client.
        DataSource ds = new ByteArrayDataSource(in, "application/octet-stream");
        DataHandler dh = new DataHandler(ds);
        return dh;
    }
}
It is a tad more complex due to the final-variable requirement, but as far as I can tell this is correct. When the thread is started, it blocks as soon as it first tries to call out.write(); at the same time, the input stream is returned to the client, who unblocks the write by reading the data. (The problem with my previous implementations of this solution was that I wasn't properly closing the stream, and thus was running into errors.)
Sorry, I only did this for C# and not Java, but I think your method should launch a thread to run writeToOut(out) in parallel. You need to create a special stream and pass it to the new thread, which hands that stream to writeToOut. After starting the thread, you return that stream object to your caller.
If you only have a method that writes to a stream and returns afterwards, and another method that consumes a stream and returns afterwards, there is no other way.
Of course the tricky part is getting hold of such a multithreading-safe stream: it must block each side whenever its internal buffer is too full.
I don't know if a Java pipe stream works for that.
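Java's piped streams do behave exactly that way: PipedOutputStream blocks the writing thread once the pipe's internal buffer (1024 bytes by default) is full, until the reader drains it. A minimal stdlib sketch, with made-up sizes, that demonstrates the producer/consumer handoff:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;
import java.io.UncheckedIOException;

public class PipeDemo {
    public static void main(String[] args) throws Exception {
        PipedOutputStream out = new PipedOutputStream();
        PipedInputStream in = new PipedInputStream(out);  // connected pair

        // Producer thread: writes far more than the pipe's internal buffer,
        // so it repeatedly blocks until the consumer catches up.
        Thread producer = new Thread(() -> {
            try {
                byte[] chunk = new byte[512];
                for (int i = 0; i < 20; i++) {   // 10 KB total
                    out.write(chunk);
                }
                out.close();                     // signals EOF to the reader
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        });
        producer.start();

        // Consumer: drain the pipe on the current thread.
        ByteArrayOutputStream received = new ByteArrayOutputStream();
        byte[] buf = new byte[256];
        int len;
        while ((len = in.read(buf)) != -1) {
            received.write(buf, 0, len);
        }
        producer.join();
        System.out.println("received " + received.size() + " bytes");
    }
}
```

This is the same structure the accepted answer uses: the writer side lives on its own thread, and closing the PipedOutputStream is what lets the reader see end-of-stream.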
Wrapper pattern? :-)
How about a custom javax.activation.DataSource implementation (only 4 methods) to be able to do this?
return new DataHandler(new DataSource() {
    // implement getOutputStream to return the stream used inside writeToOut()
    ...
});
I don't have an IDE available to test this, so I'm only making a suggestion. I would also need the general layout of writeToOut :-).
In my application I use an InputStreamDataSource implementation that takes an InputStream as a constructor argument, instead of the File in FileDataSource. It works so far. Note that it copies the whole stream into an in-memory buffer up front, so it is only suitable when the payload fits in memory.
public class InputStreamDataSource implements DataSource {

    private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();
    private final String name;

    public InputStreamDataSource(InputStream inputStream, String name) {
        this.name = name;
        try {
            int nRead;
            byte[] data = new byte[16384];
            while ((nRead = inputStream.read(data, 0, data.length)) != -1) {
                buffer.write(data, 0, nRead);
            }
            buffer.flush();
            inputStream.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    @Override
    public String getContentType() {
        return new MimetypesFileTypeMap().getContentType(name);
    }

    @Override
    public InputStream getInputStream() throws IOException {
        return new ByteArrayInputStream(buffer.toByteArray());
    }

    @Override
    public String getName() {
        return name;
    }

    @Override
    public OutputStream getOutputStream() throws IOException {
        throw new IOException("Read-only data");
    }
}