How to download and process large data reactively? - java

I need to initiate download of some content over HTTP and then read the data as a reactive stream.
So, even though the downloaded data is big, I can read the first few bytes of the response body almost immediately (no need to wait for the whole response body), do some computations, and a few seconds later read another portion of the data. There has to be some limit on how much data is buffered, because main memory can't hold the whole content (it's tens of GB).
I've been trying to use HttpClient's sendAsync method with BodyHandlers.ofInputStream(), but it always blocks and waits for all the data to arrive.
HttpClient client = HttpClient.newHttpClient();
HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create("https://..."))
        .build();
HttpResponse<InputStream> response = client
        .sendAsync(request, HttpResponse.BodyHandlers.ofInputStream())
        .get(); // this finishes as soon as the header is received
try {
    InputStream stream = response.body();
    byte[] test = stream.readNBytes(20); // trying to read just a few bytes
                                         // but it waits for the whole body
} catch (IOException ex) {}
What do I need to change so the response body is downloaded gradually?

This is a bug. It has been fixed in Java 11.0.2:
https://bugs.openjdk.java.net/browse/JDK-8212926
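On a JDK that contains the fix (11.0.2 or later), the pattern from the question behaves as intended: .get() completes once the status line and headers arrive, and the InputStream yields bytes as they are downloaded. A minimal sketch, assuming a placeholder URL and a hypothetical process() method for the per-chunk work (same imports as the snippet above):

HttpClient client = HttpClient.newHttpClient();
HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create("https://example.com/huge-download")) // placeholder URL
        .build();

HttpResponse<InputStream> response = client
        .sendAsync(request, HttpResponse.BodyHandlers.ofInputStream())
        .get(); // completes once the headers are in

try (InputStream stream = response.body()) {
    byte[] buffer = new byte[8192];
    int read;
    while ((read = stream.read(buffer)) != -1) {
        process(buffer, read); // hypothetical: work on this chunk before the rest of the body arrives
    }
} catch (IOException ex) {
    ex.printStackTrace();
}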

Related

How to notify exception to client after HTTP code is returned

I use Java and the Spring Framework to create, inside a REST controller class, a method bound to GET requests.
However, the result returned by this method is sent as a stream which is fed asynchronously by another service (using InfluxDB).
Therefore, it immediately returns code 200 to the client, even though a timeout or any exception can occur afterwards.
I would like to notify the client about this.
/**
 * InfluxDB service
 */
@Inject
InfluxDBService influxDBService;

/**
 * @return CSV file containing the data
 */
@RequestMapping(value = "/dump", method = RequestMethod.GET, produces = "application/csv")
public @ResponseBody void getDump(
        HttpServletResponse response,
        @RequestParam(value = "app", required = false) String appFilter,
        @RequestParam(value = "context", required = false) String contextFilter,
        @RequestParam(value = "path", required = false) String pathRegex
) throws DataAnalysisException {
    [...]
    InputStream dump = influxDBService.dump( ... filters after treatment ...);
    response.setContentType("application/csv");
    long currentTime = System.currentTimeMillis() / 1000;
    String fileName = "influxdb-dump_" + currentTime + ".csv";
    response.setHeader("Content-Disposition", "attachment; filename=\"" + fileName + "\"");
    try {
        FileCopyUtils.copy(dump, response.getOutputStream());
    } catch (IOException e) {
        throw new DataAnalysisException("Could not get output from request results", e);
    }
}
In the dump() method, an OkHttpClient creates a remote connection to an InfluxDB server and returns an InputStream of data. This client has a default timeout of 10 seconds.
If there is not too much data, everything works fine and the client downloads a CSV with correct data.
But if the InfluxDB server doesn't answer in time (too much data), then an empty CSV file is downloaded, even though HTTP code 200 is returned.
Thing is, when I debug, it goes through the FileCopyUtils.copy line, which returns 200, but after 10 seconds it goes through the "throw new DataAnalysisException" catch block. By that time, the client has already downloaded an empty CSV and got code 200.
DataAnalysisException is a custom exception returning HTTP code 500.
My question is: after the timeout, is there a way to notify the client that we actually had an issue even though they got 200? That could help me build an error page to notify them.
Thanks to you all.
I solved it.
Instead of FileCopyUtils.copy, I used StreamUtils.copy, which is basically the same thing, except it doesn't automatically close the input and output streams.
Then, in a catch clause, I do response.reset() then response.sendError(code, "msg"), and throw an exception.
And in a finally clause, I manually close both input and output streams.
Therefore, the CSV headers and any remaining data are cleared, and closing the streams no longer tells the browser to download a CSV file.
Don't hesitate to contact me if you need more info or precise code.
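A rough sketch of that structure (the status code and message here are illustrative, and response.reset() only succeeds while the response is still buffered and uncommitted):

InputStream dump = influxDBService.dump(/* filters */);
response.setContentType("application/csv");
try {
    // unlike FileCopyUtils.copy, StreamUtils.copy leaves both streams open
    StreamUtils.copy(dump, response.getOutputStream());
} catch (IOException e) {
    try {
        response.reset();   // drop the CSV headers and anything still buffered
        response.sendError(HttpServletResponse.SC_INTERNAL_SERVER_ERROR, "InfluxDB dump failed");
    } catch (IOException | IllegalStateException ignored) {
        // response already committed; nothing more we can do for this client
    }
    throw new DataAnalysisException("Could not get output from request results", e);
} finally {
    // close both streams manually, since StreamUtils.copy did not
    try { dump.close(); } catch (IOException ignored) {}
    try { response.getOutputStream().close(); } catch (IOException ignored) {}
}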

HTTPClient never leaves socketRead() when executing GET on stream - workaround?

I am using Apache HttpClient (from Apache HTTP Components 4.3) in order to execute a GET against a ShoutCast stream:
CloseableHttpClient client = HttpClients.createDefault();
HttpGet request = new HttpGet("http://relay3.181.fm:8062/");
CloseableHttpResponse response = client.execute(request);
The call to client.execute() never returns, and according to the debugger it is a nested invocation to java.net.SocketInputStream#socketRead0() which is the last node in the call stack. From profiling the code, my only conclusion (based on a steadily rising number of char[] allocations) is that it simply "latches on" to the stream and keeps pulling bytes from the socket indefinitely.
What I would like is for the client to simply work normally and give me an HttpResponse which I can use to pull what I want from the stream. As a matter of fact, I have been able to do so with other ShoutCast streams, but not this one.
Is there any way to work around this? Could I for example tell the client to break off after a certain number of bytes?
That site is very particular. If you don't specify a supported User-Agent (like Mozilla), the server keeps streaming bytes. I don't know what these bytes are meant to represent; audio, perhaps.
If you print out the bytes that you receive, you will see
ICY 200 OK
icy-notice1:<BR>This stream requires Winamp<BR>
icy-notice2:SHOUTcast Distributed Network Audio Server/Linux v1.9.8<BR>
icy-name:181.FM - The Beatles Channel
icy-genre:Oldies
icy-url:http://www.181.fm
content-type:audio/mpeg
icy-pub:1
icy-br:128
which indicates that the response is not a valid HTTP response. It is an ICY response from the ICY protocol.
Now the default HttpClient you are using uses a DefaultHttpResponseParser which is a
Lenient HTTP response parser implementation that can skip malformed
data until a valid HTTP response message head is encountered.
In other words, it keeps reading the bytes the server is sending until it finds a valid HTTP response header, which will never happen, thus the infinite read.
I don't think you will be able to accomplish what you want with the HttpComponents library. Either look for an ICY client implementation in Java or spin your own.
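The first paragraph above suggests that the missing User-Agent is what triggers the ICY framing on this server. If you want to test that theory, setting a browser-like User-Agent is a one-line change (whether this particular relay then answers with plain HTTP is something you would have to verify):

CloseableHttpClient client = HttpClients.createDefault();
HttpGet request = new HttpGet("http://relay3.181.fm:8062/");
// browser-like agent; without it this server replies with an ICY response
request.setHeader("User-Agent", "Mozilla/5.0");
CloseableHttpResponse response = client.execute(request);
System.out.println(response.getStatusLine());
response.close();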

Apache HTTPClient Streaming HTTP POST Request?

I'm trying to build a "full-duplex" HTTP streaming request using Apache HTTPClient.
In my first attempt, I tried using the following request code:
URL url = new URL(/* code goes here */);
HttpPost request = new HttpPost(url.toString());
request.addHeader("Connection", "close");

PipedOutputStream requestOutput = new PipedOutputStream();
PipedInputStream requestInput = new PipedInputStream(requestOutput, DEFAULT_PIPE_SIZE);
ContentType requestContentType = getContentType();
InputStreamEntity requestEntity = new InputStreamEntity(requestInput, -1, requestContentType);
request.setEntity(requestEntity);

HttpEntity responseEntity = null;
HttpResponse response = getHttpClient().execute(request); // <-- Hanging here
try {
    if (response.getStatusLine().getStatusCode() != 200)
        throw new IOException("Unexpected status code: " + response.getStatusLine().getStatusCode());
    responseEntity = response.getEntity();
}
finally {
    if (responseEntity == null)
        request.abort();
}

InputStream responseInput = responseEntity.getContent();
ContentType responseContentType;
if (responseEntity.getContentType() != null)
    responseContentType = ContentType.parse(responseEntity.getContentType().getValue());
else
    responseContentType = DEFAULT_CONTENT_TYPE;

Reader responseStream = decode(responseInput, responseContentType);
Writer requestStream = encode(requestOutput, getContentType());
The request hangs at the line indicated above. It seems that the code is trying to send the entire request before it gets the response. In retrospect, this makes sense. However, it's not what I was hoping for. :)
Instead, I was hoping to send the request headers with Transfer-Encoding: chunked, receive a response header of HTTP/1.1 200 OK with a Transfer-Encoding: chunked header of its own, and then I'd have a full-duplex streaming HTTP connection to work with.
Happily, HttpClient also has an NIO-based asynchronous client with good usage examples (like this one). My questions are:
Is my interpretation of the synchronous HTTPClient behavior correct? Or is there something I can do to continue using the (simpler) synchronous HTTPClient in the manner I described?
Does the NIO-based client wait to send the whole request before seeking a response? Or will I be able to send the request incrementally and receive the response incrementally at the same time?
If HTTPClient will not support this modality, is there another HTTP client library that will? Or should I be planning to write a (minimal) HTTP client to support this modality?
Here is my view from skim-reading the code:
I cannot completely agree that a non-200 response means failure; all 2xx responses are generally valid. Check the wiki for more details.
For any TCP request, I would recommend receiving the entire response to confirm that it is valid. I say this because a partial response usually has to be treated as a bad response, since most client implementations cannot make use of it. (Imagine a case where the server is responding with 2 MB of data and goes down partway through.)
A separate thread must be writing to the OutputStream for your code to work.
The code above provides the HttpClient with a PipedInputStream.
PipedInputStream makes bytes available as they are written to the corresponding OutputStream.
The code above does not write to the OutputStream (which must be done by a separate thread).
Therefore the code is hanging exactly where your comment is.
Under the hood, the Apache client calls inputStream.read(), which in the case of piped streams requires that outputStream.write(bytes) was called previously (by a separate thread).
Since you aren't pumping bytes into the associated OutputStream from a separate thread, the InputStream just sits and waits for the OutputStream to be written to by "some other thread."
From the JavaDocs:
A piped input stream should be connected to a piped output stream;
the piped input stream then provides whatever data bytes are written
to the piped output stream.
Typically, data is read from a PipedInputStream object by one thread
and data is written to the corresponding PipedOutputStream by some
other thread.
Attempting to use both objects from a single thread is not
recommended, as it may deadlock the thread.
The piped input stream contains a buffer, decoupling read operations
from write operations, within limits. A pipe is said to be "broken"
if a thread that was providing data bytes to the connected piped
output stream is no longer alive.
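To make the two-thread requirement concrete, here is a minimal sketch (placeholder URL, arbitrary pipe size) in which one thread feeds the PipedOutputStream while the calling thread runs the request:

PipedOutputStream requestOutput = new PipedOutputStream();
PipedInputStream requestInput = new PipedInputStream(requestOutput, 4096);

HttpPost request = new HttpPost("http://localhost:3000/upload"); // placeholder URL
request.setEntity(new InputStreamEntity(requestInput, -1, ContentType.APPLICATION_OCTET_STREAM));

// Without this writer thread, execute() below would block forever waiting on the pipe
new Thread(() -> {
    try (OutputStream out = requestOutput) {
        out.write("request body produced incrementally".getBytes(StandardCharsets.UTF_8));
    } catch (IOException e) {
        e.printStackTrace();
    }
}).start();

try (CloseableHttpResponse response = HttpClients.createDefault().execute(request)) {
    System.out.println(response.getStatusLine());
}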
Note: Since piped streams and concurrency were not mentioned in your problem statement, it seems to me they may not be necessary. As a sanity check, try backing the entity with a ByteArrayInputStream first... that should help you narrow down the issue.
Update
Incidentally, I wrote an inversion of Apache's HTTP Client API [PipedApacheClientOutputStream] which provides an OutputStream interface for HTTP POST using Apache Commons HTTP Client 4.3.4. This may be close to what you are looking for...
Calling-code looks like this:
// Calling-code manages thread-pool
ExecutorService es = Executors.newCachedThreadPool(
new ThreadFactoryBuilder()
.setNameFormat("apache-client-executor-thread-%d")
.build());
// Build configuration
PipedApacheClientOutputStreamConfig config = new
PipedApacheClientOutputStreamConfig();
config.setUrl("http://localhost:3000");
config.setPipeBufferSizeBytes(1024);
config.setThreadPool(es);
config.setHttpClient(HttpClientBuilder.create().build());
// Instantiate OutputStream
PipedApacheClientOutputStream os = new
PipedApacheClientOutputStream(config);
// Write to OutputStream
os.write(...);
try {
os.close();
} catch (IOException e) {
logger.error(e.getLocalizedMessage(), e);
}
// Do stuff with HTTP response
...
// Close the HTTP response
os.getResponse().close();
// Finally, shut down thread pool
// This must occur after retrieving response (after is) if interested
// in POST result
es.shutdown();
Note - In practice the same client, executor service, and config will likely be reused throughout the life of the application, so the outer prep and close code in the above example will likely live in bootstrap/init and finalization code rather than directly inline with the OutputStream instantiation.

Problem reading request body in servlet

I'm writing an HTTP proxy that is part of a test/verification system. The proxy filters all requests coming from the client device and directs them towards various systems under test.
The proxy is implemented as a servlet where each request is forwarded to the target system; it handles both GET and POST. Sometimes the response from the target system is altered to fit various test conditions, but that is not part of the problem.
When forwarding a request, all headers are copied except those that are part of the actual HTTP transfer, such as the Content-Length and Connection headers.
If the request is an HTTP POST, then the entity body of the request is forwarded as well, and this is where it sometimes doesn't work.
The code reading the entity body from the servlet request is the following:
URL url = new URL(targetURL);
HttpURLConnection conn = (HttpURLConnection) url.openConnection();
String method = request.getMethod();

java.util.Enumeration headers = request.getHeaderNames();
while (headers.hasMoreElements()) {
    String headerName = (String) headers.nextElement();
    String headerValue = request.getHeader(headerName);
    if (...) { // do various adaptive stuff based on header
    }
    conn.setRequestProperty(headerName, headerValue);
}

// here is the part that fails
char postBody[] = new char[1024];
int len;
if (method.equals("POST")) {
    logger.debug("guiProxy, handle post, read request body");
    conn.setDoOutput(true);
    BufferedReader br = request.getReader();
    BufferedWriter bw = new BufferedWriter(new OutputStreamWriter(conn.getOutputStream()));
    do {
        logger.debug("Read request into buffer of size: " + postBody.length);
        len = br.read(postBody, 0, postBody.length);
        logger.debug("guiProxy, send request body, got " + len + " bytes from request");
        if (len != -1) {
            bw.write(postBody, 0, len);
        }
    } while (len != -1);
    bw.close();
}
So what happens is that the first time a POST is received, -1 characters are read from the request reader. A Wireshark trace shows that the entity body containing the URL-encoded POST parameters is there, and it arrives in one TCP segment, so there are no network-related differences.
The second time, br.read successfully returns the 232 bytes in the POST request entity body, and every subsequent request works as well.
The only difference between the first and subsequent POST requests is that in the first one no cookies are present, but in the second one a cookie is present that maps to the JSESSION.
Can it be a side effect of the entity body not being available because the request processing in the servlet container has already read the POST parameters? But then why does it work on subsequent requests?
I believe the solution is of course to ignore the entity body on POST requests containing URL-encoded data, fetch all parameters from the servlet request using getParameter instead, and reinsert them into the outgoing request.
Although that is tricky, since the POST request could also contain GET parameters; not in our application right now, but implementing it correctly is some work.
So my question is basically: why does the reader from request.getReader() return -1 when an entity body is present in the request? If the entity body is not available for reading, then getReader should throw an IllegalStateException. I have also tried with an InputStream using getInputStream(), with the same results.
All of this is tested on apache-tomcat-6.0.18.
So my question is basically: why does the reader from request.getReader() return -1 when reading?
It will return -1 when there is no body or when it has already been read. You cannot read it twice. Make sure that nothing before in the request/response chain has read it.
and an entity body is present in the request; if the entity body is not available for reading, then getReader should throw an IllegalStateException.
It will only throw that when you have already called getInputStream() on the request before, not when it is not available.
I have also tried with InputStream using getInputStream() with the same results.
After all, I'd prefer streaming bytes rather than characters, because then you don't need to take character encoding into account (which you aren't doing so far; this may lead to problems later, once you get this all working).
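For the byte-oriented variant in the proxy, the forwarding loop could look something like this (same structure as the reader version in the question, just using the raw streams so no charset handling is involved):

if ("POST".equals(method)) {
    conn.setDoOutput(true);
    InputStream in = request.getInputStream();   // servlet request body as raw bytes
    OutputStream out = conn.getOutputStream();
    byte[] buffer = new byte[1024];
    int len;
    while ((len = in.read(buffer)) != -1) {
        out.write(buffer, 0, len);
    }
    out.flush();
    out.close();
}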
It seems that moving
BufferedReader br = request.getReader()
before all operations that read the request (like request.getHeader()) works well for me.

HTTP 1.1 Persistent Connections using Sockets in Java

Let's say I have a Java program that makes an HTTP request to a server using HTTP 1.1 and doesn't close the connection. I make one request and read all the data returned from the input stream I have bound to the socket. However, upon making a second request, I get no response from the server (or there's a problem with the stream - it doesn't provide any more input). If I make the requests in order (request, request, read) it works fine, but (request, read, request, read) doesn't.
Could someone shed some light on why this might be happening? (Code snippets follow.) No matter what I do, the second read loop's isr_reader.read() only ever returns -1.
try {
    connection = new Socket("SomeServer", port);
    con_out = connection.getOutputStream();
    con_in = connection.getInputStream();

    PrintWriter out_writer = new PrintWriter(con_out, false);
    out_writer.print("GET http://somesite HTTP/1.1\r\n");
    out_writer.print("Host: thehost\r\n");
    //out_writer.print("Content-Length: 0\r\n");
    out_writer.print("\r\n");
    out_writer.flush();

    // If we were not interpreting this data as a character stream, we might need to adjust byte ordering here.
    InputStreamReader isr_reader = new InputStreamReader(con_in);
    char[] streamBuf = new char[8192];
    int amountRead;
    StringBuilder receivedData = new StringBuilder();
    while ((amountRead = isr_reader.read(streamBuf)) > 0) {
        receivedData.append(streamBuf, 0, amountRead);
    }
    // Response is processed here.

    if (connection != null && !connection.isClosed()) {
        //System.out.println("Connection Still Open...");
        out_writer.print("GET http://someSite2\r\n");
        out_writer.print("Host: somehost\r\n");
        out_writer.print("Connection: close\r\n");
        out_writer.print("\r\n");
        out_writer.flush();

        streamBuf = new char[8192];
        amountRead = 0;
        receivedData.setLength(0);
        while ((amountRead = isr_reader.read(streamBuf)) > 0 || amountRead < 1) {
            if (amountRead > 0)
                receivedData.append(streamBuf, 0, amountRead);
        }
    }
    // Process response here
}
Responses to questions:
Yes, I'm receiving chunked responses from the server.
I'm using raw sockets because of an outside restriction.
Apologies for the mess of code - I was rewriting it from memory and seem to have introduced a few bugs.
So the consensus is that I have to either do (request, request, read) and let the server close the stream once I hit the end, or, if I do (request, read, request, read), stop reading before I hit the end of the stream so that the stream isn't closed.
According to your code, the only time you'll even reach the statements dealing with sending the second request is when the server closes the output stream (your input stream) after receiving/responding to the first request.
The reason for that is that your code that is supposed to read only the first response
while((amountRead = isr_reader.read(streamBuf)) > 0) {
receivedData.append(streamBuf, 0, amountRead);
}
will block until the server closes the output stream (i.e., when read returns -1) or until the read timeout on the socket elapses. In the case of the read timeout, an exception will be thrown and you won't even get to sending the second request.
The problem with HTTP responses is that they don't tell you how many bytes to read from the stream until the end of the response. This is not a big deal for HTTP 1.0 responses, because the server simply closes the connection after the response thus enabling you to obtain the response (status line + headers + body) by simply reading everything until the end of the stream.
With HTTP 1.1 persistent connections you can no longer simply read everything until the end of the stream. You first need to read the status line and the headers, line by line, and then, based on the status code and the headers (such as Content-Length) decide how many bytes to read to obtain the response body (if it's present at all). If you do the above properly, your read operations will complete before the connection is closed or a timeout happens, and you will have read exactly the response the server sent. This will enable you to send the next request and then read the second response in exactly the same manner as the first one.
P.S. Request, request, read might be "working" in the sense that your server supports request pipelining and thus receives and processes both requests, and you, as a result, read both responses into one buffer as your "first" response.
P.P.S. Make sure your PrintWriter is using the US-ASCII encoding. Otherwise, depending on your system encoding, the request line and headers of your HTTP requests might be malformed (wrong encoding).
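A sketch of the "read the head first, then exactly Content-Length bytes of body" approach described above, ignoring chunked encoding and other cases for brevity (readLine here is a hypothetical helper that reads one header line a byte at a time, so none of the body is consumed ahead of time):

// Hypothetical helper: read a single CRLF-terminated line, byte by byte
static String readLine(InputStream in) throws IOException {
    StringBuilder sb = new StringBuilder();
    int b;
    while ((b = in.read()) != -1 && b != '\n') {
        if (b != '\r') sb.append((char) b);
    }
    return sb.toString();
}

// After sending the request on con_out:
String statusLine = readLine(con_in);
int contentLength = -1;
String header;
while (!(header = readLine(con_in)).isEmpty()) {
    if (header.toLowerCase().startsWith("content-length:")) {
        contentLength = Integer.parseInt(header.substring("content-length:".length()).trim());
    }
}

// Read exactly contentLength bytes of body; the connection stays usable for the next request
byte[] body = new byte[contentLength];
int off = 0;
while (off < contentLength) {
    int n = con_in.read(body, off, contentLength - off);
    if (n == -1) throw new EOFException("connection closed mid-body");
    off += n;
}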
Writing a simple HTTP/1.1 client that respects the RFC is not such a difficult task.
To solve the problem of blocking I/O when reading a socket in Java, you can use the java.nio classes.
SocketChannels make non-blocking I/O possible.
This is necessary to send HTTP requests on a persistent connection.
Furthermore, the NIO classes will give better performance.
My stress test gives the following results:
HTTP/1.0 (java.io) -> HTTP/1.0 (java.nio) = +20% faster
HTTP/1.0 (java.io) -> HTTP/1.1 (java.nio with persistent connection) = +110% faster
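For reference, the non-blocking socket access mentioned above boils down to something like this minimal SocketChannel sketch (example.com and port 80 are placeholders):

SocketChannel channel = SocketChannel.open();
channel.configureBlocking(false);
channel.connect(new InetSocketAddress("example.com", 80));
while (!channel.finishConnect()) {
    // do other useful work while the connection is being established
}
ByteBuffer buf = ByteBuffer.allocate(8192);
int n = channel.read(buf); // returns 0 immediately when no bytes are available yet, -1 at end of stream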
Make sure you have a Connection: keep-alive in your request. This may be a moot point though.
What kind of response is the server returning? Are you using chunked transfer? If the server doesn't know the size of the response body, it can't provide a Content-Length header and has to close the connection at the end of the response body to indicate to the client that the content has ended. In this case, the keep-alive won't work. If you're generating content on-the-fly with PHP, JSP etc., you can enable output buffering, check the size of the accumulated body, push the Content-Length header and flush the output buffer.
Is there a particular reason you're using raw sockets and not Java's URL Connection or Commons HTTPClient?
HTTP isn't easy to get right. I know Commons HTTP Client can re-use connections like you're trying to do.
If there isn't a specific reason for you using Sockets this is what I would recommend :)
Writing your own correct HTTP/1.1 client implementation is nontrivial; historically, most people I've seen attempt it have got it wrong. Their implementations usually ignore the spec and just do what appears to work with one particular test server - in particular, they usually ignore the requirement to be able to handle chunked responses.
Writing your own HTTP client is probably a bad idea, unless you have some VERY strange requirements.
