I was experimenting a little with some HttpServlet code to understand it better. I wanted to build a scenario where a request comes in, I send the response back as fast as possible, and then do some more work within the servlet. My current understanding was that the response is only sent to the client once doGet or doPost returns. But in my example, the response is already sent back to the client while the servlet is still processing, so it arrives earlier than I expected.
protected void processRequest(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    try {
        Thread.sleep(500);
    } catch (InterruptedException ex) {
        Logger.getLogger(DisplayHeader.class.getName()).log(Level.SEVERE, null, ex);
    }
    response.setContentType("text/plain; charset=ISO-8859-1");
    response.setStatus(HttpServletResponse.SC_FORBIDDEN);
    final StringWriter sw = new StringWriter();
    PrintWriter out = new PrintWriter(sw);
    //TODO must be implemented SynchronizedStatusCodeDimo
    out.println("StatusCode=0");
    out.println("StatusText=Accepted");
    out.println("paymentType=PaymentXY");
    out = response.getWriter();
    out.print(sw.toString());
    out.flush();
    out.close();
    try {
        Thread.sleep(1000);
    } catch (InterruptedException ex) {
        Logger.getLogger(DisplayHeader.class.getName()).log(Level.SEVERE, null, ex);
    }
}
What is happening here? Via Firebug I can see that I already receive the generated response after about 510 ms. I expected it to take more than 1500 ms because of the sleeps. My understanding was based on this post: Link
The HttpServletResponse is controlled by your servlet container (Tomcat, Jetty, etc.).
When you write into the response, the servlet container automatically flushes it once a defined buffer size is exceeded (e.g. Tomcat after about 9000 bytes). Usually you can configure this (in Tomcat with the socketBuffer parameter).
That is how it works if you do not control it yourself.
In your case you do control the response yourself: after you call flush() on the writer, the response is sent to the client.
If you had written more than the buffer size (about 9000 bytes in Tomcat), the response would have been sent automatically, in the middle of your processing.
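If the goal is to keep the response from being sent until processRequest returns, one option (a minimal sketch, assuming the whole body fits in the buffer) is to enlarge the buffer and not flush or close the writer yourself; the container then commits the buffered body only when the method returns:
protected void processRequest(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    // Must be called before anything is written to the response.
    response.setBufferSize(16 * 1024); // assumed size, larger than the payload
    response.setContentType("text/plain; charset=ISO-8859-1");
    PrintWriter out = response.getWriter();
    out.println("StatusCode=0");
    out.println("StatusText=Accepted");
    out.println("paymentType=PaymentXY");
    // No flush() or close() here: as long as the buffer is not full,
    // response.isCommitted() stays false and nothing reaches the client yet.
    // ... do the extra work here; the body is sent when this method returns.
}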
I have an Ajax POST call that uploads one or more files to a servlet.
In my servlet, I use the Commons FileUpload library to handle the upload process:
private RequestInfo getRequestInfoMultipart(HttpServletRequest request, HttpSession session) throws SupportException {
    RequestInfo multipartReqestInfo = new RequestInfo();
    try {
        DiskFileItemFactory factory = new DiskFileItemFactory();
        factory.setRepository(new File(TMP_DIR));
        ServletFileUpload upload = new ServletFileUpload(factory);
        upload.setFileSizeMax(MAX_SIZE_UPLOADED_FILE);
        List<FileItem> items = upload.parseRequest(request); // <-- Throws exception on max file size reached
        ...
    } catch (FileUploadException e) {
        throw new SupportException("SOP_EX00009");
    } catch (Exception e) {
        throw new SupportException("SOP_EX00001", e);
    }
}
When I catch the exception outside the getRequestInfoMultipart method, I write a JSON object with two parameters (result and message) into the HTTP response:
private RequestInfo getRequesInfo(HttpServletRequest request, HttpServletResponse response, HttpSession session, boolean isMultipart) {
    try {
        if (isMultipart) {
            return getRequestInfoMultipart(request, session);
        }
        return getRequestInfo(request, session);
    } catch (SupportException e) {
        response.setContentType("application/json");
        response.setCharacterEncoding("UTF-8");
        try (PrintWriter writer = response.getWriter()) {
            JSonResult jsonResult = new JSonResult();
            jsonResult.setResult(KO);
            jsonResult.setMessage(e.getMessage());
            writer.print(new Gson().toJson(jsonResult));
            writer.flush();
        } catch (IOException ex) {
            log.error("Error getting PrintWriter", ex);
        }
        return null;
    }
}
After that, the Ajax call should get the response in its success block, but instead the HTTP request is repeated and the Ajax call ends up in the error block, so I can't show the result to the user, only a generic error message.
Does anybody know why the request is being repeated and why the Ajax call ends with an error?
Thank you very much.
Quique.
Finally I found the error, and it had nothing to do with the Ajax or servlet code, but with the default maxSwallowSize property of the Tomcat connector:
When somebody uploads a file that Tomcat knows is larger than the maximum size allowed, Tomcat aborts the upload. Tomcat then refuses to swallow the remaining request body, and the client is unlikely to see the response. So it looks as if the request is repeated, eventually a connection reset occurs, and the Ajax call receives a 0 error code with no information from the servlet.
In development you can make the property unlimited by setting it to -1 in the server.xml file:
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443"
           maxSwallowSize="-1" />
In production, you must set an appropriate value for your needs.
With that in place, the server will respond after processing the whole body from the client. But keep in mind that if a client uploads a file larger than maxSwallowSize, you will get the same behavior again.
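If changing the connector is not an option, another approach sometimes used (not part of the original answer, so treat it as an untested sketch) is to drain whatever is left of the request body inside the servlet once the upload has been rejected, so nothing is left for Tomcat to swallow and the JSON error response can reach the client:
} catch (FileUploadException e) {
    // Read and discard the rest of the multipart body so the connection
    // stays usable and the error response gets through to the browser.
    try (InputStream in = request.getInputStream()) {
        byte[] buf = new byte[8192];
        while (in.read(buf) != -1) {
            // discard
        }
    } catch (IOException ignored) {
        // nothing more we can do here
    }
    throw new SupportException("SOP_EX00009");
}
The obvious trade-off is that the server still has to read the entire oversized body, so the connector-level limit is usually the better production setting.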
I am trying my hand at using HttpCore and HttpClient 4.3. In general it works well and is quite pleasant to deal with. However, I am getting a ConnectionClosedException on one of my transfers and I can't see why; others work just fine as far as I can tell.
Everything follows the examples in a pretty straightforward way; where it didn't, it was rewritten to match them as much as possible in an effort to get rid of this.
There are 2 servers, both running the same code [A & B]:
A's HttpClient sends a request "AX" (POST) to B.
B's HttpService receives the "AX" POST and processes it.
B's HttpClient sends a reply "BR" (POST) to A on a different port. Ideally this happens later, after the connection to A is closed, or as close to that as possible; right now the code doesn't actually care.
A receives the reply from B (on a different thread) and does things.
In the problem scenario, A is running as the server and B is sending a POST. Sorry if it isn't always clear; in one transaction both sides end up running server and client code.
A sends a POST to B:8080, gets back a proper response inline, everything OK.
The POST connection to B:8080 gets closed properly.
B sends a new POST (like an ACK) to A (e.g. B:53991 => A:9000).
A processes everything, no issues.
A raises ConnectionClosedException.
Since I don't know for sure why it's happening, I have tried to include everything I think is relevant. My only thought right now is that it has something to do with making sure I add/change connection control headers, but I can't see how that would affect anything.
Stack trace from machine "A", when the reply from B comes in:
org.apache.http.ConnectionClosedException: Client closed connection
at org.apache.http.impl.io.DefaultHttpRequestParser.parseHead(DefaultHttpRequestParser.java:133)
at org.apache.http.impl.io.DefaultHttpRequestParser.parseHead(DefaultHttpRequestParser.java:54)
at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:260)
at org.apache.http.impl.DefaultBHttpServerConnection.receiveRequestHeader(DefaultBHttpServerConnection.java:131)
at org.apache.http.protocol.HttpService.handleRequest(HttpService.java:307)
at com.me.HttpRequestHandlerThread.processConnection(HttpRequestHandlerThread.java:45)
at com.me.net.http.HttpRequestHandlerThread.run(HttpRequestHandlerThread.java:70)
com.me.ExceptionHolder: Client closed connection
at com.me.log.Log.logIdiocy(Log.java:77)
at com.me.log.Log.error(Log.java:54)
at com.me.net.http.HttpRequestHandlerThread.run(HttpRequestHandlerThread.java:72)
Caused by: org.apache.http.ConnectionClosedException: Client closed connection
at org.apache.http.impl.io.DefaultHttpRequestParser.parseHead(DefaultHttpRequestParser.java:133)
at org.apache.http.impl.io.DefaultHttpRequestParser.parseHead(DefaultHttpRequestParser.java:54)
at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:260)
at org.apache.http.impl.DefaultBHttpServerConnection.receiveRequestHeader(DefaultBHttpServerConnection.java:131)
at org.apache.http.protocol.HttpService.handleRequest(HttpService.java:307)
at com.me.net.http.HttpRequestHandlerThread.processConnection(HttpRequestHandlerThread.java:45)
at com.me.net.http.HttpRequestHandlerThread.run(HttpRequestHandlerThread.java:70)
This is the code running on B, the "client" in this scenario. It is trying to POST the reply acknowledging that the first POST from A was received properly. There really isn't much to transmit, and the response should only be an HTTP 200:
try (CloseableHttpClient client = HttpClients.createDefault()) {
    final HttpPost post = new HttpPost(url);
    post.setHeaders(/* create application specific headers */);
    ByteArrayEntity entity = new ByteArrayEntity(IOUtils.toByteArray(myStream));
    post.setEntity(entity);
    ResponseHandler<Void> responseHandler = new ResponseHandler<Void>() {
        @Override
        public Void handleResponse(HttpResponse response) throws ClientProtocolException, IOException {
            StatusLine status = response.getStatusLine();
            if (!NetUtil.isValidResponseCode(response)) {
                throw new ClientProtocolException("Unexpected Error! Oops");
            }
            // consume the response, if there is one, so the connection will close properly
            EntityUtils.consumeQuietly(response.getEntity());
            return null;
        }
    };
    try {
        client.execute(post, responseHandler);
    } catch (ClientProtocolException ex) {
        // logic to queue a resend for 10 minutes later. not triggered
        throw ex;
    }
}
On A: this is called async because the response doesn't come in over the same HTTP connection.
The main request handler does a lot more work, but it is amazing how little code actually controls the HTTP on the handler/server side. Great library... that I am misusing somehow. This is the actual handler, simplified a bit, with validation removed, etc.:
public class AsyncReceiverHandler implements HttpRequestHandler {
    @Override
    public void handle(HttpRequest request, HttpResponse response, HttpContext context) throws HttpException, IOException {
        // error if not post, other logic. not touching http. no errors
        DefaultBHttpServerConnection connection = (DefaultBHttpServerConnection) context.getAttribute("connection");
        Package pkg = NetUtil.createPackageFrom(connection); // just reads sender ip/port
        NetUtil.copyHttpHeaders(request, pkg);
        try {
            switch (receiveMdn(request, pkg)) {
                case EH_OK:
                    response.setStatusCode(HttpStatus.SC_OK);
                    break;
                case OHNOES_BAD_INPUT:
                    response.setStatusCode(HttpStatus.SC_BAD_REQUEST);
                    response.setEntity(new StringEntity("No MDN entity found in request body"));
                    // bunch of other cases, but are not triggered. xfer was a-ok
            }
        } catch (Exception ex) {
            //log
        }
    }

    private MyStatus receiveMdn(HttpRequest request, Package pkg) throws Exceptions..., IOException {
        // validate request, get entity, make package, no issues
        HttpEntity resEntity = ((HttpEntityEnclosingRequest) request).getEntity();
        try {
            byte[] data = EntityUtils.toByteArray(resEntity);
            // package processing logic, validation, fairly quick, no errors thrown
        } catch (Exceptions... ex) {
            throw new ExceptionHolder(ex);
        }
    }
}
This is the request handler thread. This and the server are taken pretty much verbatim from the samples. The service handler just starts the service and accept()s the socket; when it gets a connection, it creates a new copy of this thread and calls start() (a sketch of that accept loop follows the code below):
public HttpRequestHandlerThread(final HttpService httpService, final HttpServerConnection conn, HttpReceiverModule ownerModule) {
    super();
    this.httpService = httpService;
    this.conn = (DefaultBHttpServerConnection) conn;
}

private void processConnection() throws IOException, HttpException {
    while (!Thread.interrupted() && this.conn.isOpen()) {
        /* have the service create a handler and pass it the processed request/response/context */
        HttpContext context = new BasicHttpContext(null);
        this.httpService.handleRequest(this.conn, context);
    }
}

@Override
public void run() {
    // just runs the main logic and reports exceptions.
    try {
        processConnection();
    } catch (ConnectionClosedException ignored) {
        // logs error here (and others).
    } finally {
        try { this.conn.shutdown(); } catch (IOException ignored) {}
    }
}
}
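For reference, the accept loop described above would look roughly like this. This is only a sketch modeled on the stock HttpCore server examples, not the actual code; the port, the buffer size, and the null ownerModule are placeholder assumptions, and httpService is assumed to be the already-configured HttpService:
// Listener sketch: accept a socket, bind an HttpCore server connection
// to it, and hand it to a new HttpRequestHandlerThread.
ServerSocket serverSocket = new ServerSocket(9000);
while (!Thread.interrupted()) {
    Socket socket = serverSocket.accept();
    DefaultBHttpServerConnection conn = new DefaultBHttpServerConnection(8 * 1024);
    conn.bind(socket);
    new HttpRequestHandlerThread(httpService, conn, null).start();
}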
Well, this seems stupid now, and really obvious. I ignored the issue for a while and moved on to other things, and the answer bubbled up from the subconscious, as these things do.
I added this header back and it all cleared up:
post.setHeader("Connection", "close, TE")
Somehow the line that sets the Connection header got removed, probably accidentally by me. A lot of headers get set, and the line was still there in the other code path, just wrong in this one. Basically, the server expects this connection to close immediately, but the header was reverting to the default keep-alive. Since the client closes the connection as soon as it is done with it, this surprised the server, which had been told otherwise and rightly complained :D In the reverse path everything was OK.
Since I had just switched the old stack over to HttpComponents, I didn't look at headers and such; I just assumed I was using the library wrong. The old stack didn't mind the mismatch.
Assume this scenario: I send a request to my server (Apache Tomcat) from a browser (Firefox). The response comes back after about 2 minutes. How can I cancel this request in a way that my server does not keep working on building the response?
When a user has sent a request to the server, it is common for them to move on to another link (or menu item) of the application, or to cancel the request, and it is important for developers to detect such cases and so reduce the server load.
So, I want a technique to detect incoming requests whose client is no longer waiting for the response, and to cancel all the work being done to build that response.
I couldn't find useful information on Google or Stack Overflow.
I also set keep-alive-timeout in my server.xml, but nothing happened!
I also read this topic: cancel request, but it didn't help me.
So please help me to solve this issue ...
You can detect the cancellation by sending content to the response's output stream (and flushing it).
If the user cancels the request, the servlet loses its connection to the browser, and the next write to the output stream will throw a ClientAbortException, which makes the processing stop.
Try this:
protected void processRequest(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    ServletOutputStream out = response.getOutputStream();
    try {
        out.println("<!DOCTYPE html>");
        out.println("<html><body>");
        out.println("<h1 id=\"count\"></h1>");
        for (int i = 0; i < 60; i++) {
            Thread.sleep(1000);
            out.print("<script>document.getElementById(\"count\").innerHTML=\"" + i + "\";</script>");
            out.flush();
            System.out.println(i);
        }
        out.println("</body></html>");
    } catch (Exception ex) {
        System.out.println("Stopped due to " + ex.toString());
    } finally {
        out.close();
    }
}
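The same idea can be applied to the original two-minute job: write a harmless heartbeat byte and flush it between slices of work, and the ClientAbortException tells you the browser has gone away. This is only a sketch; isDone(), doChunkOfWork() and buildResult() are made-up placeholders for the real computation:
protected void processRequest(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    ServletOutputStream out = response.getOutputStream();
    try {
        while (!isDone()) {           // placeholder: is the long computation finished?
            doChunkOfWork();          // placeholder: one slice of the ~2 minute job
            out.print(' ');           // harmless heartbeat byte
            out.flush();              // throws ClientAbortException if the browser is gone
        }
        out.print(buildResult());     // placeholder: the real response body
    } catch (IOException ex) {
        // ClientAbortException extends IOException: the client left, so stop working
        System.out.println("Stopped due to " + ex.toString());
    } finally {
        out.close();
    }
}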
Led by several examples and questions answered here (mainly http://www.javaworld.com/javaworld/jw-02-2009/jw-02-servlet3.html?page=3), I want the server to send the response to a client multiple times without completing the request. When the request times out, I create another one, and so on.
I want to avoid long polling, since I would have to recreate the request every time I get a response (and that isn't quite what the async capabilities of Servlet 3.0 are aiming at).
I have this on the server side:
@WebServlet(urlPatterns = {"/home"}, name = "async", asyncSupported = true)
public class CometServlet extends HttpServlet {

    public void doGet(final HttpServletRequest request, final HttpServletResponse response) throws IOException, ServletException {
        AsyncContext ac = request.startAsync(request, response);
        HashMap<String, AsyncContext> store = AppContext.getInstance().getStore();
        store.put(request.getParameter("id"), ac);
    }
}
And a thread that writes to the async context:
class MyThread extends Thread {

    String id, message;

    public MyThread(String id, String message) {
        this.id = id;
        this.message = message;
    }

    public void run() {
        HashMap<String, AsyncContext> store = AppContext.getInstance().getStore();
        AsyncContext ac = store.get(id);
        try {
            ac.getResponse().getWriter().print(message);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
But when I make the request, data is sent only if I call ac.complete(); without it the request always times out. So basically I want the data to be "streamed" before the request is completed.
Just to make a note, I have tried this with the Jetty 8 Continuation API, I have tried printing to an OutputStream instead of the PrintWriter, and I have also tried flushBuffer() on the response. Same thing.
What am I doing wrong?
The client side is done like this:
var xhr = new XMLHttpRequest();
xhr.open('GET', 'http://localhost:8080/home', true);
xhr.onreadystatechange = function () {
    if (xhr.readyState == 3 || xhr.readyState == 4) {
        document.getElementById("dynamicContent").innerHTML = xhr.responseText;
    }
};
xhr.send(null);
Can someone at least confirm that server side is okay? :)
Your server-side and client-side code is indeed OK.
The problem is actually your browser buffering text/plain responses from the web server.
That is also the reason you don't see this issue when you use curl.
I took your client-side code and was able to see incremental responses with just one little change:
response.setContentType("text/html");
The incremental responses then showed up immediately, regardless of their size.
Without that setting, a small message was treated as text/plain and wasn't shown by the client immediately. As I kept adding more to the response, it accumulated until the buffer reached about 1024 bytes, and then the whole thing showed up on the client side. After that point, however, the small increments showed up immediately (no more accumulation).
I know this is a bit old, but you can also just call flushBuffer() on the response.
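Putting the two suggestions together, the writer thread might look roughly like this (only a sketch: the relevant parts are setting text/html before the first write and flushing after each message instead of calling ac.complete()):
class MyThread extends Thread {

    private final String id, message;

    public MyThread(String id, String message) {
        this.id = id;
        this.message = message;
    }

    public void run() {
        AsyncContext ac = AppContext.getInstance().getStore().get(id);
        try {
            ServletResponse res = ac.getResponse();
            res.setContentType("text/html"); // only takes effect before the response is committed
            res.getWriter().print(message);
            res.flushBuffer();               // push this chunk without completing the request
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}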
I need to write a servlet that basically just proxies each incoming request to the same URL path on a different host. Here's what I came up with using Apache HttpClient 4.1.3:
@WebServlet("/data/*")
public class ProxyServlet extends HttpServlet {

    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        HttpClient client = new DefaultHttpClient();
        try {
            String url = getMappedServiceUrlFromRequest(request);
            HttpGet get = new HttpGet(url);
            copyRequestHeaders(request, get);
            HttpResponse getResp = client.execute(get);
            response.setStatus(getResp.getStatusLine().getStatusCode());
            copyResponseHeaders(getResp, response);
            HttpEntity entity = getResp.getEntity();
            if (entity != null) {
                OutputStream os = response.getOutputStream();
                try {
                    entity.writeTo(os);
                } finally {
                    try { os.close(); } catch (Exception ignored) { }
                }
            }
        } catch (Exception e) {
            throw new ServletException(e);
        } finally {
            client.getConnectionManager().shutdown();
        }
    }

    private String getMappedServiceUrlFromRequest (...)
    private void copyResponseHeaders (...)
    private void copyRequestHeaders (...)
}
This works just fine the first time the servlet is called. After the first time, however, the servlet hangs on the line client.execute(get).
There are plenty of Google hits for "HttpClient execute hangs", most of which suggest using an instance of ThreadSafeClientConnManager. I tried that; sadly it didn't help.
I've spent several hours googling the problem but haven't found anything that fixes it yet. I'd seriously appreciate any pointers as to what I am doing wrong here.
I suggest you are doing this the hard way. Just write a Filter that does the redirect.
Or even just a TCP server that listens on the port and copies bytes back and forth. You don't really need to speak the HTTP protocol at all in a proxy, unless you are implementing the CONNECT command, in which case that is the only piece of HTTP you need to understand, and its reply is the only HTTP response you need to know about. Everything else is just bytes.
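A minimal sketch of the Filter idea, assuming the target host is fixed (the BACKEND constant and the /data/* mapping are illustrative, not from the original code):
@WebFilter("/data/*")
public class RedirectToBackendFilter implements Filter {

    // Hypothetical backend base URL; adjust to the real target host.
    private static final String BACKEND = "http://backend.example.com";

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;
        String query = request.getQueryString();
        // Send the browser to the same path on the other host instead of proxying it ourselves.
        response.sendRedirect(BACKEND + request.getRequestURI() + (query != null ? "?" + query : ""));
    }

    @Override public void init(FilterConfig cfg) {}
    @Override public void destroy() {}
}
Note that this makes the browser talk to the other host directly, so it only works if that host is reachable from the client; otherwise the byte-copying TCP proxy is the way to go.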