How to close a com.google.api.client.http.HttpResponse object properly - java

I am trying to close a com.google.api.client.http.HttpResponse object, but I get the Eclipse error
Unhandled exception type IOException
on the line response.disconnect();
Here's a code example:
HttpRequest request = null;
HttpResponse response = null;
try {
    request = this.buildJsonApiRequest(apiUrl);
    response = this.execute(request);
    return response.parseAs(MyClass.class);
} catch (final IOException e) {
    throw new DaoException(e);
} finally {
    if (response != null) {
        response.disconnect();
    }
}
The code works without the finally block, but I am concerned about many response objects being opened and not closed. What is the proper way to do this?

You need to put the disconnect call within a try-catch block because, according to the Google API documentation, that method can throw an IOException:
public void disconnect() throws IOException
Follow this link to learn more about it:
https://developers.google.com/api-client-library/java/google-http-java-client/reference/1.20.0/com/google/api/client/http/HttpResponse#disconnect()
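For example, a minimal sketch of the finally block from the question with disconnect() wrapped in its own try-catch (what you do with the secondary exception is up to you; here it is simply swallowed):

} finally {
    if (response != null) {
        try {
            response.disconnect();
        } catch (final IOException e) {
            // ignore (or log); throwing here would mask an exception from the try block
        }
    }
}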

This is in response to Eleazar Enrique's answer that the disconnect needs to be within a try block. This is an example of how to write it more elegantly and make it reusable.
You could create a handler class that implements AutoCloseable, then use try-with-resources:
public class HttpResponseHandler implements AutoCloseable {

    private HttpResponse response;

    public HttpResponseHandler(HttpResponse response) {
        this.response = response;
    }

    public <T> T parseAs(Class<T> clazz) throws IOException {
        return response.parseAs(clazz);
    }

    @Override
    public void close() {
        if (response != null) {
            try {
                response.disconnect();
            } catch (IOException ex) {
                // nothing useful can be done while closing
            }
        }
    }
}
Then in your code it would be something like this:
HttpRequest request = this.buildJsonApiRequest(apiUrl);
try (HttpResponseHandler handler = new HttpResponseHandler(this.execute(request))) {
    return handler.parseAs(MyClass.class);
} catch (final IOException e) {
    throw new DaoException(e);
}
The AutoCloseable will close the connection for you, so you wouldn't have to handle it in the finally block.

Related

How to correctly read Flux<DataBuffer> and convert it to a single inputStream

I'm using WebClient and a custom BodyExtractor class for my Spring Boot application:
WebClient webClient = WebClient.create();
webClient.get()
    .uri(url, params)
    .accept(MediaType.APPLICATION_XML)
    .exchange()
    .flatMap(response -> {
        return response.body(new BodyExtractor());
    });
BodyExtractor.java
@Override
public Mono<T> extract(ClientHttpResponse response, BodyExtractor.Context context) {
    Flux<DataBuffer> body = response.getBody();
    return body.map(dataBuffer -> {
        try {
            JAXBContext jc = JAXBContext.newInstance(SomeClass.class);
            Unmarshaller unmarshaller = jc.createUnmarshaller();
            return (T) unmarshaller.unmarshal(dataBuffer.asInputStream());
        } catch (Exception e) {
            return null;
        }
    }).next();
}
The above code works with a small payload but not with a large one. I think it's because I'm only reading a single flux value with next(), and I'm not sure how to combine and read all the DataBuffers.
I'm new to reactor, so I don't know a lot of tricks with flux/mono.
This is really not as complicated as other answers imply.
The only way to stream the data without buffering it all in memory is to use a pipe, as @jin-kwon suggested. However, it can be done very simply by using Spring's BodyExtractors and DataBufferUtils utility classes.
Example:
private InputStream readAsInputStream(String url) throws IOException {
    PipedOutputStream osPipe = new PipedOutputStream();
    PipedInputStream isPipe = new PipedInputStream(osPipe);

    ClientResponse response = webClient.get().uri(url)
        .accept(MediaType.APPLICATION_XML)
        .exchange()
        .block();
    final int statusCode = response.rawStatusCode();
    // check HTTP status code, can throw exception if needed
    // ....

    Flux<DataBuffer> body = response.body(BodyExtractors.toDataBuffers())
        .doOnError(t -> {
            log.error("Error reading body.", t);
            // close pipe to force InputStream to error,
            // otherwise the returned InputStream will hang forever if an error occurs
            try (isPipe) {
                // no-op
            } catch (IOException ioe) {
                log.error("Error closing streams", ioe);
            }
        })
        .doFinally(s -> {
            try (osPipe) {
                // no-op
            } catch (IOException ioe) {
                log.error("Error closing streams", ioe);
            }
        });

    DataBufferUtils.write(body, osPipe)
        .subscribe(DataBufferUtils.releaseConsumer());

    return isPipe;
}
If you don't care about checking the response code or throwing an exception for a failure status code, you can skip the block() call and intermediate ClientResponse variable by using
flatMap(r -> r.body(BodyExtractors.toDataBuffers()))
instead.
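For example, a sketch of that non-blocking variant, reusing webClient, url and osPipe from the example above (note that exchange() yields a Mono<ClientResponse>, so flatMapMany is the operator that actually produces a Flux<DataBuffer> from it):

Flux<DataBuffer> body = webClient.get().uri(url)
    .accept(MediaType.APPLICATION_XML)
    .exchange()
    .flatMapMany(r -> r.body(BodyExtractors.toDataBuffers()));

DataBufferUtils.write(body, osPipe)
    .subscribe(DataBufferUtils.releaseConsumer());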
A slightly modified version of Bk Santiago's answer makes use of reduce() instead of collect(). Very similar, but doesn't require an extra class:
Java:
body.reduce(new InputStream() {
    public int read() {
        return -1;
    }
}, (InputStream s, DataBuffer d) -> new SequenceInputStream(s, d.asInputStream())
).flatMap(inputStream -> /* do something with single InputStream */);
Or Kotlin:
body.reduce(object : InputStream() {
override fun read() = -1
}) { s: InputStream, d -> SequenceInputStream(s, d.asInputStream()) }
.flatMap { inputStream -> /* do something with single InputStream */ }
The benefit of this approach over using collect() is simply that you don't need a different class to gather things up.
I created a new empty InputStream(), but if that syntax is confusing, you can also replace it with ByteArrayInputStream("".toByteArray()) to create an empty ByteArrayInputStream as your initial value.
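In the Java version, a sketch of the same idea with an empty ByteArrayInputStream as the seed (the cast just pins the accumulator type to InputStream):

Mono<InputStream> single = body.reduce(
    (InputStream) new ByteArrayInputStream(new byte[0]),
    (s, d) -> new SequenceInputStream(s, d.asInputStream()));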
Here comes another variant based on the other answers. It's still not memory-friendly, though.
static Mono<InputStream> asStream(WebClient.ResponseSpec response) {
    return response.bodyToFlux(DataBuffer.class)
        .map(b -> b.asInputStream(true))
        .reduce(SequenceInputStream::new);
}

static void doSome(WebClient.ResponseSpec response) {
    asStream(response)
        .doOnNext(stream -> {
            // do some with stream
            // close the stream!!!
        })
        .block();
}
I was able to make it work by using Flux#collect and SequenceInputStream
@Override
public Mono<T> extract(ClientHttpResponse response, BodyExtractor.Context context) {
    Flux<DataBuffer> body = response.getBody();
    return body.collect(InputStreamCollector::new,
            (t, dataBuffer) -> t.collectInputStream(dataBuffer.asInputStream()))
        .map(collector -> {
            try {
                JAXBContext jc = JAXBContext.newInstance(SomeClass.class);
                Unmarshaller unmarshaller = jc.createUnmarshaller();
                return (T) unmarshaller.unmarshal(collector.getInputStream());
            } catch (Exception e) {
                return null;
            }
        });
}
InputStreamCollector.java
public class InputStreamCollector {

    private InputStream is;

    public void collectInputStream(InputStream is) {
        if (this.is == null) {
            this.is = is;
        } else {
            this.is = new SequenceInputStream(this.is, is);
        }
    }

    public InputStream getInputStream() {
        return this.is;
    }
}
There's a much cleaner way to do this using the underlying reactor-netty HttpClient directly, instead of using WebClient. The composition hierarchy is like this:
WebClient -uses-> HttpClient -uses-> TcpClient
Easier to show code than explain:
HttpClient.create()
.get()
.responseContent() // ByteBufFlux
.aggregate() // ByteBufMono
.asInputStream() // Mono<InputStream>
.block() // We got an InputStream, yay!
However, as I've pointed out already, using InputStream is a blocking operation that defeats the purpose of using a non-blocking HTTP client, not to mention aggregating the whole response. See this for a Java NIO vs. IO comparison.
You can use pipes.
static <R> Mono<R> pipeAndApply(
        final Publisher<DataBuffer> source, final Executor executor,
        final Function<? super ReadableByteChannel, ? extends R> function) {
    return using(Pipe::open,
            p -> {
                executor.execute(() -> write(source, p.sink())
                        .doFinally(s -> {
                            try {
                                p.sink().close();
                            } catch (final IOException ioe) {
                                log.error("failed to close pipe.sink", ioe);
                                throw new RuntimeException(ioe);
                            }
                        })
                        .subscribe(releaseConsumer()));
                return just(function.apply(p.source()));
            },
            p -> {
                try {
                    p.source().close();
                } catch (final IOException ioe) {
                    log.error("failed to close pipe.source", ioe);
                    throw new RuntimeException(ioe);
                }
            });
}
Or using CompletableFuture,
static <R> Mono<R> pipeAndApply(
        final Publisher<DataBuffer> source,
        final Function<? super ReadableByteChannel, ? extends R> function) {
    return using(Pipe::open,
            p -> fromFuture(supplyAsync(() -> function.apply(p.source())))
                    .doFirst(() -> write(source, p.sink())
                            .doFinally(s -> {
                                try {
                                    p.sink().close();
                                } catch (final IOException ioe) {
                                    log.error("failed to close pipe.sink", ioe);
                                    throw new RuntimeException(ioe);
                                }
                            })
                            .subscribe(releaseConsumer())),
            p -> {
                try {
                    p.source().close();
                } catch (final IOException ioe) {
                    log.error("failed to close pipe.source", ioe);
                    throw new RuntimeException(ioe);
                }
            });
}

Reading binary data from HttpServletRequest

Using Jetty, I'm sending bytes to URL http://localhost:8080/input/ like so -
public static void sampleBytesRequest(String url)
{
    try
    {
        HttpClient client = new HttpClient();
        client.start();
        client.newRequest(url)
              .content(new InputStreamContentProvider(new ByteArrayInputStream("batman".getBytes())))
              .send();
    }
    catch (Exception e) { e.printStackTrace(); }
}
My server (also Jetty) has a handler like so -
public final class JettyHandler extends AbstractHandler implements JettyConstants, LqsConstants
{
    @Override
    public void handle (String target,
                        Request baseRequest,
                        HttpServletRequest request,
                        HttpServletResponse response)
        throws IOException, ServletException
    {
        response.setContentType(UTF_ENCODING);
        String requestBody = null;
        try { requestBody = baseRequest.getReader().readLine(); }
        catch (IOException e) { e.printStackTrace(); }

        System.out.println(new String(IOUtils.toByteArray(request.getInputStream())));
    }
}
As you can see, I'm trying to recreate the original string from the binary data and print it to stdout.
However, if I set a break point at the print statement in the handler, when the request reaches that line, the server abruptly seems to skip over it.
What am I doing wrong? How can I get the binary data I'm sending over and recreate the string?
Thank you!
Turns out the issue was with my client.
Instead of
client.newRequest(url)
.content(new InputStreamContentProvider(new ByteArrayInputStream("batman".getBytes())))
.send();
The proper way to do this is -
client.newRequest(url)
.content(new BytesContentProvider("batman".getBytes()), "text/plain")
.send();

Code Coverage for Catch Blocks using EclEMMA

I have a catch block and I want to execute it. My class file is:
public class TranscoderBean implements TranscoderLocal {

    public byte[] encode(final Collection<?> entitySet) throws TranscoderException {
        Validate.notNull(entitySet, "The entitySet can not be null.");
        LOGGER.info("Encoding entities.");
        LOGGER.debug("entities '{}'.", entitySet);

        // Encode the Collection
        MappedEncoderStream encoderStream = null;
        ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
        try {
            // Create the encoder and write the DSE Logbook message
            encoderStream = new MappedEncoderStream(outputStream, this.encoderVersion);
            encoderStream.writeObjects(new ArrayList<Object>(entitySet), false);
            encoderStream.flush();
        }
        catch (Exception e) {
            LOGGER.error("Exception while encoding entities", e);
            throw new TranscoderException("Failed to encode entities", e);
        }
        finally {
            if (encoderStream != null) {
                try {
                    encoderStream.close();
                }
                catch (IOException e) {
                    LOGGER.error("Exception while closing the encoder stream.", e);
                    throw new TranscoderException("Failed to close encoder stream", e);
                }
            }
        }
        return outputStream.toByteArray();
    }
}
My test class file is:
public class TranscoderBeanTest {

    private TranscoderBean fixture;

    @Mock
    MappedEncoderStream mappedEncoderStream;

    @Test
    public void encodeTest() throws TranscoderException {
        List<Object> entitySet = new ArrayList<Object>();
        FlightLog log1 = new FlightLog();
        log1.setId("F5678");
        log1.setAssetId("22");
        FlightLog log2 = new FlightLog();
        log2.setId("F5679");
        log2.setAssetId("23");
        entitySet.add(log1);
        entitySet.add(log2);

        MockitoAnnotations.initMocks(this);
        try {
            Mockito.doThrow(new IOException()).when(this.mappedEncoderStream).close();
            Mockito.doReturn(new IOException()).when(this.mappedEncoderStream).close();
        }
        catch (IOException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }

        byte[] encode = this.fixture.encode(entitySet);
        Assert.assertNotNull(encode);
    }
}
I have tried the Mockito.doThrow and Mockito.doReturn methods, but the catch block is still not executed. What am I doing wrong?
To test a try-catch block, you can use the TestNG approach of annotating a test method with expectedExceptions.
You then implement the body of that method so that it provokes the exception, so the catch block will be executed.
You can have a look at http://testng.org/doc/documentation-main.html#annotations
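For example, a sketch of such a test (fixture and entitySet are the ones from the question; the fixture would still have to be wired so that the underlying stream actually fails on close):

@Test(expectedExceptions = TranscoderException.class)
public void encodeShouldWrapStreamFailure() throws TranscoderException {
    // provoke the failure inside encode(); the catch block then wraps it
    // in a TranscoderException, which TestNG expects here
    this.fixture.encode(entitySet);
}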
Are you sure you have the right test class? I do not see any reference to TranscoderBean in your test class.
You expect Mockito to do things that it does not claim to do:
Mockito.doThrow(new IOException()).when(this.mappedEncoderStream).close();
This statement says that whoever calls close() on the mappedEncoderStream object will receive an IOException. You never call close().
Try adding mappedEncoderStream.close(); after your Mockito actions and the catch block will be entered - but note: this won't help you with your problem, since Mockito alone cannot help here.
For your problem you can consider the following alternative:
rewrite
encoderStream = new MappedEncoderStream(outputStream, this.encoderVersion);
to
encoderStream = createMappedEncoderStream(outputStream);
MappedEncoderStream createMappedEncoderStream(ByteArrayOutputStream outputStream) {
return new MappedEncoderStream(outputStream, this.encoderVersion);
}
This lets you inject the mock as a dependency.
Then init your fixture like this:
fixture = new TranscoderBean() {
    @Override
    MappedEncoderStream createMappedEncoderStream(ByteArrayOutputStream outputStream) {
        return mappedEncoderStream; // this is your mock
    }
};
This injects the mock into your TranscoderBean.encode method.
Then change your mock annotation:
@Mock(answer = CALLS_REAL_METHODS)
MappedEncoderStream mappedEncoderStream;
This is needed because your encode method not only calls close on mappedEncoderStream, but also writeObjects and flush. These calls may throw exceptions, so they have to be mocked or replaced by calls to the real object.
Prune your test like this:
@Test(expected = TranscoderException.class)
public void encodeTest() throws Exception {
    // ... same as above
    MockitoAnnotations.initMocks(this);
    Mockito.doThrow(new IOException()).when(this.mappedEncoderStream).close();
    this.fixture.encode(entitySet); // this will throw an exception
}
This does the following:
- the encode method does not return null; it throws a TranscoderException, which is declared as the expected exception
- the close method is overridden to throw an exception
- encode is called

How to deal with multiple exceptions thrown in .close()?

This is Java 7. I have a class, implementing Closeable, which maintains a list of FTP clients (I call them agents).
On .close() I disconnect them, but of course each of them can throw an exception. Right now the code is as follows:
@Override
public void close()
    throws IOException
{
    IOException toThrow = null;
    final List<FtpAgent> list = new ArrayList<>();
    agents.drainTo(list); // <-- agents is a BlockingQueue
    for (final FtpAgent agent: list)
        try {
            agent.disconnect();
        } catch (IOException e) {
            if (toThrow == null)
                toThrow = e;
        }
    if (toThrow != null)
        throw toThrow;
}
Apart from the lack of logging of each individual exception, is this the correct way to deal with this?

How do I refactor closing a stream in Java?

Due to my company's policy of using Eclipse and using Eclipse's code-autofix, the following code pattern appears excessively in the codebase:
InputStream is = null;
try {
    is = url.openConnection().getInputStream();
    // .....
} catch (IOException e) {
    // handle error
} finally {
    if (is != null) {
        try {
            is.close();
        } catch (IOException e) {
            // handle error
        }
    }
}
IMO it's extremely fugly and hard to read, especially the portion within the finally block (is there really a need to catch 2 instances of IOException?). Is there any way to streamline the code so that it looks cleaner?
Why do anything? It's working code. It's correct.
Leave it be.
First, about using IOUtils: it may be worth telling your supervisors that the very application server / Java runtime environment they use relies on IOUtils and similar libraries itself, so in essence you're not introducing new components into your architecture.
Second, no, not really. There isn't any way around it other than writing your own utility that imitates IOUtils' closeQuietly method.
public class Util {
    public static void closeStream(InputStream is) {
        if (is != null) {
            try {
                is.close();
            } catch (IOException e) {
                // log something
            }
        }
    }
}
Now your code is
InputStream is = null;
try {
    is = url.openConnection().getInputStream();
    // .....
} catch (IOException e) {
    // handle error
} finally {
    Util.closeStream(is);
}
There's not a lot else to do, as the IOException in the catch might need some specific processing.
See this question, use the closeQuietly() solution.
InputStream is = null;
try {
    is = url.openConnection().getInputStream();
    // .....
} catch (IOException e) {
    // handle error
} finally {
    IoUtils.closeQuietly(is);
}

// stolen from the cited question above
public class IoUtils {
    public static void closeQuietly(Closeable closeable) {
        try {
            closeable.close();
        } catch (IOException logAndContinue) {
            ...
        }
    }
}
Or wait for JDK7's ARM blocks.
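For reference, a sketch of the same pattern with a JDK 7 try-with-resources (ARM) block, which closes the stream automatically:

try (InputStream is = url.openConnection().getInputStream()) {
    // .....
} catch (IOException e) {
    // handle error; is.close() happens automatically
}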
You could define something like this somewhere:
private static interface InputStreamCallback {
    public void doIt(InputStream is) throws IOException;
}

private void with(InputStreamCallback cb) {
    InputStream is = null;
    // Creational code. Possibly adding an argument
    try {
        cb.doIt(is);
    } catch (IOException e) {
        // handle error or rethrow.
        // If rethrow add throws to method spec.
    } finally {
        if (is != null) {
            try {
                is.close();
            } catch (IOException e) {
                // handle error or rethrow.
            }
        }
    }
}
And invoke your code like this:
with(new InputStreamCallback() {
    @Override
    public void doIt(InputStream is) throws IOException {
        is = url.openConnection().getInputStream();
        // .....
    }
});
If you declare the with method static in a helper class, you could even do a static import of it.
There's a drawback: you need to declare url final.
EDIT: creational code is not the point. You can arrange it in several ways. The callback is the point. You could isolate what you need to do there.
