I instantiated a Netty 4 service (using netty-all-4.0.9.jar) and initialized the channel by adding 3 ChannelHandler objects:
pipeline.addLast("decoder", new HttpRequestDecoder());
pipeline.addLast("encoder", new HttpResponseEncoder());
pipeline.addLast("handler", new MyHandler());
When testing with curl by HTTP PUTing a file to my server, I found that MyHandler.channelRead is not called immediately for requests sent with an Expect: 100-continue header (curl waits for the server to reply with 100 Continue). This means my handler cannot reply with an HTTP/1.1 100 Continue response to tell the client (curl) to start the actual upload of the file immediately.
Interestingly, further debugging with telnet shows that channelRead is called as soon as the actual body starts being uploaded (right after the first byte is received).
Any hints on how to handle PUT requests with an 'Expect: 100-continue' header properly, so that the 100 Continue response is sent immediately?
The examples shipped with Netty (e.g. HttpHelloWorldServerHandler.java) have the following code in the channelRead() method:
if (is100ContinueExpected(req)) {
ctx.write(new DefaultFullHttpResponse(HTTP_1_1, CONTINUE));
}
Related
I am experimenting with the RFC Expect header in Java's HttpURLConnection, and it works perfectly except for one detail.
There is a 5 second wait before sending the body in fixed-length mode, and between each chunk in chunked streaming mode.
Here is the client class
public static void main(String[] args)throws Exception
{
HttpURLConnection con=(HttpURLConnection)new URL("http://192.168.1.2:2000/ActionG").openConnection();
//for 100-Continue logic
con.setRequestMethod("POST");
con.setRequestProperty("Expect", "100-Continue");
//responds to 100-continue logic
con.setDoOutput(true);
con.setChunkedStreamingMode(5);
con.getOutputStream().write("Hello".getBytes());
con.getOutputStream().flush();
con.getOutputStream().write("World".getBytes());
con.getOutputStream().flush();
con.getOutputStream().write("123".getBytes());
con.getOutputStream().flush();
//decode response and response body/error if any
System.out.println(con.getResponseCode()+"/"+con.getResponseMessage());
con.getHeaderFields().forEach((key,values)->
{
System.out.println(key+"="+values);
System.out.println("====================");
});
try(InputStream is=con.getInputStream()){System.out.println(new String(is.readAllBytes()));}
catch(Exception ex)
{
ex.printStackTrace(System.err);
InputStream err=con.getErrorStream();
if(err!=null)
{
try(err){System.err.println(new String(err.readAllBytes()));}
catch(Exception ex2){throw ex2;}
}
}
con.disconnect();
}
I am uploading 3 chunks. On the server side, 5 packets of data are received:
All the headers. Respond with 100 Continue.
3 chunks. For each chunk, respond with 100 Continue.
The last chunk (length 0). Respond with 200 OK.
Here is the Test Server
final class TestServer
{
public static void main(String[] args)throws Exception
{
try(ServerSocket socket=new ServerSocket(2000,0,InetAddress.getLocalHost()))
{
int count=0;
try(Socket client=socket.accept())
{
int length;
byte[] buffer=new byte[5000];
InputStream is=client.getInputStream();
OutputStream os=client.getOutputStream();
while((length=is.read(buffer))!=-1)
{
System.out.println(++count);
System.out.println(new String(buffer,0,length));
System.out.println("==========");
if(count<5)
{
os.write("HTTP/1.1 100 Continue\r\n\r\n".getBytes());
os.flush();
}
else
{
os.write("HTTP/1.1 200 Done\r\nContent-Length:0\r\n\r\n".getBytes());
os.flush();
break;
}
}
}
}
}
}
Output:
1
POST /ActionG HTTP/1.1
Expect: 100-Continue
User-Agent: Java/17.0.2
Host: 192.168.1.2:2000
Accept: text/html, image/gif, image/jpeg, *; q=.2, */*; q=.2
Connection: keep-alive
Content-type: application/x-www-form-urlencoded
Transfer-Encoding: chunked
========== //5 seconds later
2
5
Hello
========== //5 seconds later
3
5
World
========== //5 seconds later
//this and the last chunk arrive separately, but with no delay
4
3
123
==========
5
0
==========
I have checked every timeout method on my connection object:
System.out.println(con.getConnectTimeout());
System.out.println(con.getReadTimeout());
Both return 0
So where is this 5 second delay coming from?
I am using JDK 17.0.2 on Windows 10.
3 chunks. For each chunk, respond with 100 Continue.
That is not how Expect: 100-Continue works. Your server code is completely wrong for what you are attempting to do. In fact, your server code is completely wrong for an HTTP server in general. It is not even attempting to parse the HTTP protocol at all. Not the HTTP headers, not the HTTP chunks, nothing. Is there a reason why you are not using an actual HTTP server implementation, such as Java's own HttpServer?
When using Expect: 100-Continue, the client is required to send ONLY the request headers, and then STOP AND WAIT a few seconds to see if the server sends a 100 response or not:
If the server responds with 100, the client can then finish the request by sending the request body, and then receive a final response.
If the server responds with anything other than 100, the client can fail its operation immediately without sending the request body at all.
If no response is received, the client can finish the request by sending the request body and receive the final response.
The whole point of Expect: 100-Continue is for a client to ask for permission before sending a large request body. If the server doesn't want the body (ie, the headers describe unsatisfactory conditions, etc), the client doesn't have to waste effort and bandwidth to send a request body that will just be rejected.
HttpURLConnection has built-in support for handling 100 responses, but see How to wait for Expect 100-continue response in Java using HttpURLConnection for caveats. Also see JDK-8012625: Incorrect handling of HTTP/1.1 " Expect: 100-continue " in HttpURLConnection.
But, your server code as shown needs a major rewrite to handle HTTP properly, let alone handle chunked requests properly.
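As a rough, illustrative sketch of the server-side logic described above (class and method names are my own, not from the question's code): parse the request headers first, and only then decide whether to send a single 100 Continue before reading the body. A real server would of course need much more of the RFC.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;
import java.util.Map;
import java.util.TreeMap;

public final class ExpectParser {
    // Read header lines (terminated by an empty line) into a case-insensitive map.
    static Map<String, String> readHeaders(BufferedReader in) throws IOException {
        Map<String, String> headers = new TreeMap<>(String.CASE_INSENSITIVE_ORDER);
        String line;
        while ((line = in.readLine()) != null && !line.isEmpty()) {
            int colon = line.indexOf(':');
            if (colon > 0) {
                headers.put(line.substring(0, colon).trim(), line.substring(colon + 1).trim());
            }
        }
        return headers;
    }

    // The 100-continue token is compared case-insensitively (RFC 7231 section 5.1.1).
    static boolean expectsContinue(Map<String, String> headers) {
        return "100-continue".equalsIgnoreCase(headers.getOrDefault("Expect", ""));
    }

    public static void main(String[] args) throws IOException {
        BufferedReader in = new BufferedReader(new StringReader(
                "POST /ActionG HTTP/1.1\r\nHost: example\r\nExpect: 100-Continue\r\n\r\n"));
        in.readLine(); // skip the request line
        boolean wants100 = expectsContinue(readHeaders(in));
        // A real server would now send "HTTP/1.1 100 Continue\r\n\r\n" exactly once,
        // read the body, and then send the final response.
        System.out.println(wants100); // prints true
    }
}
```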
So, thank you @Remy Lebeau, for the insights on how to properly handle this special header. After building a basic parser and responding properly to both chunked (Transfer-Encoding: chunked) and fixed-length (Content-Length) streaming, I noticed that my client would still occasionally get stuck and wait for 5 seconds, while at other times it would work with no problem.
After hours of debugging the sun.net.www.protocol.http.HttpURLConnection class, I realized the remaining flaw was no longer on the server side but actually on the client side of the code, specifically this bit:
con.getOutputStream().write("Hello".getBytes());
con.getOutputStream().flush();
con.getOutputStream().write("World".getBytes());
con.getOutputStream().flush();
con.getOutputStream().write("123".getBytes());
con.getOutputStream().flush();
I had mistakenly assumed that getOutputStream() on this class would cache the OutputStream it returns, but in fact it returns a new output stream every single time.
So I had to change the code to this:
OutputStream os=con.getOutputStream();
os.write("Hello".getBytes());
os.flush();
os.write("World".getBytes());
os.flush();
os.write("123".getBytes());
os.flush();
This finally solved all my problems. It works for both chunked and fixed-length streaming.
Maybe I am not understanding the async part of this.
I have a RESTEasy client (only one client):
client = new ResteasyClientBuilder().connectionPoolSize(maxConnections).connectTimeout(timeout, TimeUnit.SECONDS).readTimeout(timeout, TimeUnit.SECONDS).httpEngine(engine).build();
where maxConnections is 10; the client is created on startup and makes async calls for each request like this, in a for loop:
ResteasyWebTarget resteasyWebTarget = client.target(restRequestFromGateway.getUrl());
Future<Response> response = resteasyWebTarget.
request().
headers(headers).
async().
get(restWSResponseCallback);
if (response!= null){
response.isDone();
}
I created ten requests and sent them through in a for loop. On the web service end I put a breakpoint and just waited, so I held the first request and didn't let it go through.
I was expecting that, since the call is async, the client would continue and send multiple requests, waiting for each response in the InvocationCallback, but it did not.
The next request didn't come through to the web service until the response from the web service was received.
Why didn't the requests go through one after another from the client?
Some third party sends an HTTP POST request whenever something changes in their DB (e.g. when a contact has been updated, they send the contact ID and 'contact_updated'). I have built a socket listener that catches those requests and can parse the information. However, I just can't get it to send back a response with the status '200 OK'. As a result, the server on the client's side keeps trying (four times or so) to re-send the request.
Is there any, simple way to just send the response status without the need of adding external libs etc.?
It should be enough to send the status line HTTP/1.1 200 OK, followed by an empty line (\r\n\r\n), back in your socket listener.
If you have troubles, you can check out this answer, it shows how to use a HttpServer in Java just via plain JavaSE features.
Use
response.setStatus(HttpServletResponse.SC_OK);
to set the status code in your response header.
You may also set the content type.
response.setContentType("text/html;charset=UTF-8");
I am writing a simple HTTP client using Netty 4.0.x.
I build the pipeline as below:
pipeline.addLast("codec", new HttpClientCodec());
pipeline.addLast("inflater", new HttpContentDecompressor());
pipeline.addLast("handler", new HttpResponseHandler());
where HttpResponseHandler provides the implementation of messageReceived().
Now there is a thread pool which calls the client and keeps sending HTTP messages.
I understand that ChannelFuture future = channel.write(request); is an async call and returns without blocking.
The question I have is: is there a way to link a request to its response without calling future.sync()?
Thanks for all the help in advance!
If you code the server too, you could have the client add a unique id to the HTTP header and have the server echo it back in the response.
If you are following strict HTTP pipelining rules then, on any given channel, the responses will be returned in the order the requests were sent in. It should be enough to maintain a request queue, removing the front of the queue for each response received.
If you are creating a new channel, and a new pipeline for that channel, for each request, it's even easier. Either way you can add a handler to your pipeline which remembers the request (or queue of requests), and returns the request and response to your application when the response is received.
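The queue idea above can be sketched roughly like this; Request and Response are placeholder types of my own (not Netty classes), and a production version would also need to handle channel closure and errors:

```java
import java.util.ArrayDeque;
import java.util.Queue;

public final class PipelineCorrelator {
    // Placeholder types standing in for whatever the application actually uses.
    record Request(String uri) {}
    record Response(int status) {}

    private final Queue<Request> inFlight = new ArrayDeque<>();

    // Call when a request has been written to the channel.
    synchronized void requestSent(Request req) {
        inFlight.add(req);
    }

    // Call when a response arrives: under HTTP pipelining rules it answers the
    // oldest outstanding request, so the front of the queue is the match.
    synchronized Request responseReceived(Response resp) {
        return inFlight.remove();
    }

    public static void main(String[] args) {
        PipelineCorrelator c = new PipelineCorrelator();
        c.requestSent(new Request("/a"));
        c.requestSent(new Request("/b"));
        System.out.println(c.responseReceived(new Response(200)).uri()); // prints /a
        System.out.println(c.responseReceived(new Response(404)).uri()); // prints /b
    }
}
```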
Let's say I have a java program that makes an HTTP request on a server using HTTP 1.1 and doesn't close the connection. I make one request, and read all data returned from the input stream I have bound to the socket. However, upon making a second request, I get no response from the server (or there's a problem with the stream - it doesn't provide any more input). If I make the requests in order (Request, request, read) it works fine, but (request, read, request, read) doesn't.
Could someone shed some insight onto why this might be happening? (Code snippets follow). No matter what I do, the second read loop's isr_reader.read() only ever returns -1.
try{
connection = new Socket("SomeServer", port);
con_out = connection.getOutputStream();
con_in = connection.getInputStream();
PrintWriter out_writer = new PrintWriter(con_out, false);
out_writer.print("GET http://somesite HTTP/1.1\r\n");
out_writer.print("Host: thehost\r\n");
//out_writer.print("Content-Length: 0\r\n");
out_writer.print("\r\n");
out_writer.flush();
// If we were not interpreting this data as a character stream, we might need to adjust byte ordering here.
InputStreamReader isr_reader = new InputStreamReader(con_in);
char[] streamBuf = new char[8192];
int amountRead;
StringBuilder receivedData = new StringBuilder();
while((amountRead = isr_reader.read(streamBuf)) > 0){
receivedData.append(streamBuf, 0, amountRead);
}
// Response is processed here.
if(connection != null && !connection.isClosed()){
//System.out.println("Connection Still Open...");
out_writer.print("GET http://someSite2 HTTP/1.1\r\n");
out_writer.print("Host: somehost\r\n");
out_writer.print("Connection: close\r\n");
out_writer.print("\r\n");
out_writer.flush();
streamBuf = new char[8192];
amountRead = 0;
receivedData.setLength(0);
while((amountRead = isr_reader.read(streamBuf)) > 0){
receivedData.append(streamBuf, 0, amountRead);
}
}
// Process response here
}
Responses to questions:
Yes, I'm receiving chunked responses from the server.
I'm using raw sockets because of an outside restriction.
Apologies for the mess of code - I was rewriting it from memory and seem to have introduced a few bugs.
So the consensus is that I have to either do (request, request, read) and let the server close the stream once I hit the end, or, if I do (request, read, request, read), stop reading before the end of the stream so that the connection isn't closed.
According to your code, the only time you'll even reach the statements dealing with sending the second request is when the server closes the output stream (your input stream) after receiving/responding to the first request.
The reason for that is that your code that is supposed to read only the first response
while((amountRead = isr_reader.read(streamBuf)) > 0) {
receivedData.append(streamBuf, 0, amountRead);
}
will block until the server closes the output stream (i.e., when read returns -1) or until the read timeout on the socket elapses. In the case of the read timeout, an exception will be thrown and you won't even get to sending the second request.
The problem with HTTP responses is that they don't tell you how many bytes to read from the stream until the end of the response. This is not a big deal for HTTP 1.0 responses, because the server simply closes the connection after the response thus enabling you to obtain the response (status line + headers + body) by simply reading everything until the end of the stream.
With HTTP 1.1 persistent connections you can no longer simply read everything until the end of the stream. You first need to read the status line and the headers, line by line, and then, based on the status code and the headers (such as Content-Length) decide how many bytes to read to obtain the response body (if it's present at all). If you do the above properly, your read operations will complete before the connection is closed or a timeout happens, and you will have read exactly the response the server sent. This will enable you to send the next request and then read the second response in exactly the same manner as the first one.
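As a rough illustration of that reading discipline, here is a simplified sketch (class and method names are mine, not from the question) that parses the status line and headers, then reads exactly Content-Length body bytes, leaving the stream positioned at the start of the next response. It ignores chunked bodies and many other RFC details:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public final class ResponseReader {
    // Read one CRLF-terminated line; headers are ASCII, so byte-per-char is safe here.
    static String readLine(InputStream in) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        int b;
        while ((b = in.read()) != -1 && b != '\n') {
            if (b != '\r') buf.write(b);
        }
        return buf.toString(StandardCharsets.US_ASCII);
    }

    // Read exactly one response: status line, headers, then Content-Length body bytes.
    static String readResponse(InputStream in) throws IOException {
        String statusLine = readLine(in);
        int contentLength = 0;
        String line;
        while (!(line = readLine(in)).isEmpty()) {
            int colon = line.indexOf(':');
            if (colon > 0 && line.substring(0, colon).trim().equalsIgnoreCase("Content-Length")) {
                contentLength = Integer.parseInt(line.substring(colon + 1).trim());
            }
        }
        byte[] body = in.readNBytes(contentLength); // stop at the body boundary, not at EOF
        return statusLine + "\n" + new String(body, StandardCharsets.US_ASCII);
    }

    public static void main(String[] args) throws IOException {
        byte[] wire = ("HTTP/1.1 200 OK\r\nContent-Length: 5\r\n\r\nhello"
                + "HTTP/1.1 204 No Content\r\n\r\n").getBytes(StandardCharsets.US_ASCII);
        InputStream in = new ByteArrayInputStream(wire);
        System.out.println(readResponse(in)); // first response, body "hello"
        System.out.println(readResponse(in)); // second response from the same stream
    }
}
```

Because each read stops at the body boundary instead of end-of-stream, the same InputStream can be handed straight back for the next response on the persistent connection.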
P.S. Request, request, read might be "working" in the sense that your server supports request pipelining and thus, receives and processes both request, and you, as a result, read both responses into one buffer as your "first" response.
P.P.S Make sure your PrintWriter is using the US-ASCII encoding. Otherwise, depending on your system encoding, the request line and headers of your HTTP requests might be malformed (wrong encoding).
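For example, the writer can be given an explicit charset instead of the platform default; encodeHeaders below is an illustrative helper of my own, not code from the question:

```java
import java.io.ByteArrayOutputStream;
import java.io.OutputStreamWriter;
import java.io.PrintWriter;
import java.nio.charset.StandardCharsets;

public final class AsciiWriterDemo {
    // Encode request headers with an explicit US-ASCII charset; the one-arg
    // PrintWriter(OutputStream) constructor would use the platform default instead.
    static byte[] encodeHeaders(String headers) {
        ByteArrayOutputStream sink = new ByteArrayOutputStream(); // stands in for the socket stream
        PrintWriter out = new PrintWriter(new OutputStreamWriter(sink, StandardCharsets.US_ASCII), false);
        out.print(headers);
        out.flush();
        return sink.toByteArray();
    }

    public static void main(String[] args) {
        byte[] wire = encodeHeaders("GET / HTTP/1.1\r\nHost: thehost\r\n\r\n");
        System.out.println(wire.length); // one byte per ASCII character
    }
}
```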
Writing a simple HTTP/1.1 client that respects the RFC is not such a difficult task.
To avoid blocking I/O when reading a socket in Java, you can use the java.nio classes.
SocketChannels give you the ability to perform non-blocking I/O, which is necessary for sending HTTP requests over a persistent connection.
Furthermore, the nio classes give better performance.
My stress test gives the following results:
HTTP/1.0 (java.io) -> HTTP/1.0 (java.nio) = +20% faster
HTTP/1.0 (java.io) -> HTTP/1.1 (java.nio with persistent connection) = +110% faster
Make sure you have Connection: keep-alive in your request. This may be a moot point, though.
What kind of response is the server returning? Are you using chunked transfer? If the server doesn't know the size of the response body, it can't provide a Content-Length header and has to close the connection at the end of the response body to indicate to the client that the content has ended. In this case, the keep-alive won't work. If you're generating content on-the-fly with PHP, JSP etc., you can enable output buffering, check the size of the accumulated body, push the Content-Length header and flush the output buffer.
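For reference, chunked decoding itself is mechanical; this simplified sketch (my own names, ignoring chunk extensions and trailer headers) shows how a client finds the end of a chunked body without waiting for the connection to close:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public final class ChunkedDecoder {
    // Read one CRLF-terminated line as ASCII.
    static String readLine(InputStream in) throws IOException {
        StringBuilder sb = new StringBuilder();
        int b;
        while ((b = in.read()) != -1 && b != '\n') {
            if (b != '\r') sb.append((char) b);
        }
        return sb.toString();
    }

    // Each chunk is "<hex size>\r\n<data>\r\n"; a zero-size chunk marks the end.
    static byte[] decode(InputStream in) throws IOException {
        ByteArrayOutputStream body = new ByteArrayOutputStream();
        while (true) {
            int size = Integer.parseInt(readLine(in).trim(), 16); // chunk size is hex
            if (size == 0) break;                                  // last chunk
            body.write(in.readNBytes(size));
            readLine(in);                                          // consume trailing CRLF
        }
        return body.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        String wire = "5\r\nHello\r\n5\r\nWorld\r\n3\r\n123\r\n0\r\n\r\n";
        byte[] body = decode(new ByteArrayInputStream(wire.getBytes(StandardCharsets.US_ASCII)));
        System.out.println(new String(body, StandardCharsets.US_ASCII)); // prints HelloWorld123
    }
}
```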
Is there a particular reason you're using raw sockets and not Java's URLConnection or Commons HttpClient?
HTTP isn't easy to get right. I know Commons HttpClient can re-use connections the way you're trying to.
If there isn't a specific reason for you to use sockets, that is what I would recommend :)
Writing your own correct HTTP/1.1 client implementation is nontrivial; historically, most people I've seen attempt it have gotten it wrong. Their implementations usually ignore the spec and just do what appears to work with one particular test server - in particular, they usually ignore the requirement to be able to handle chunked responses.
Writing your own HTTP client is probably a bad idea, unless you have some VERY strange requirements.