I have an application using Apache HttpClient 3.1.
I want to know how much data the application sends and receives and I want to measure this "per thread" (i.e. I want to know how much data a specific thread sent and received).
I currently count only the bytes received in the response body, but that's not enough. Traffic sent should include the request headers, and traffic received should include the response headers... (many requests just return "OK" or an empty body, so I suspect the headers make up more than 50% of total traffic).
I would very much like a solution for HttpClient 3.1 since the upgrade to 4.x does not seem trivial and I'm otherwise happy with 3.1.
I've done some searching and found connection.getMetrics(), which seems to exist only in 4.x :(
If it really has to be 4.x, how would I do it? I never explicitly handle connections; I currently just call client.executeMethod(method); and then method.getResponseBodyAsStream()...
Thanks in advance for any help :)
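One approach that should work with 3.1 is to register a custom ProtocolSocketFactory whose sockets wrap their streams in counting streams; the per-thread bookkeeping itself needs only the JDK. Below is a minimal, untested sketch of the counting half (class and method names are my own, not HttpClient API); a mirrored counting FilterInputStream would track bytes received:

```java
import java.io.ByteArrayOutputStream;
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.concurrent.atomic.AtomicLong;

// Per-thread traffic counter for bytes sent, headers included,
// because it sits below HttpClient at the stream level.
public class TrafficCounter {
    // One counter per thread.
    private static final ThreadLocal<AtomicLong> SENT =
            ThreadLocal.withInitial(AtomicLong::new);

    public static long sentBytes() {
        return SENT.get().get();
    }

    // Wrap any output stream (e.g. the socket's) so every byte written
    // is charged to the calling thread.
    public static OutputStream counting(OutputStream sink) {
        return new FilterOutputStream(sink) {
            @Override
            public void write(int b) throws IOException {
                out.write(b);
                SENT.get().incrementAndGet();
            }

            @Override
            public void write(byte[] b, int off, int len) throws IOException {
                out.write(b, off, len);
                SENT.get().addAndGet(len);
            }
        };
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        OutputStream counted = counting(sink);
        counted.write("GET / HTTP/1.1\r\nHost: example.com\r\n\r\n".getBytes());
        System.out.println(sentBytes()); // bytes this thread wrote, request line and headers included
    }
}
```

To hook this into 3.1, one way (untested) is to subclass Socket so that getInputStream()/getOutputStream() return wrapped streams, return those sockets from your own ProtocolSocketFactory, and register it via Protocol.registerProtocol("http", ...). Since HttpClient writes headers and body through the same socket stream, the headers are counted for free.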
I have the following use case: a user uploads files to a Java 6 servlet (Apache 6). To be allowed to upload, he must have a security token assigned. Is there a way to check this token before accepting the whole request with the multipart data, and possibly reject it? I don't want to waste bandwidth, and I want to defend the server against unauthorized access. Of course I have front-end validations, but you could still get the upload URL from the web page and use it for a DoS attack, or fill the server's memory until it crashes.
Every solution I googled stated that you cannot process the request before the server downloads it. Is there any way to bypass this? Possibly check against the session in some filter? Or maybe I am missing some easier solution and overthinking it.
Your problem is essentially "How will I handle a request before the request has arrived". Unfortunately that's not possible in our limited universe.
But just because the request has arrived doesn't mean the request is complete. Checking the headers before starting to stream the complete data should be quite enough to prevent any excess bandwidth being used.
So in reality you don't even have a problem.
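To illustrate the idea (in the Servlet API, a filter would make the same check before anything calls getInputStream()), here is a self-contained sketch using the JDK's built-in com.sun.net.httpserver; the X-Auth-Token header name and the hard-coded token are my inventions:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;

public class RejectEarly {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/upload", exchange -> {
            // Headers are available immediately; the multipart body has not been read yet.
            String token = exchange.getRequestHeaders().getFirst("X-Auth-Token");
            if (!"secret".equals(token)) {
                // Reject without ever consuming the request body (-1 = no response body).
                exchange.sendResponseHeaders(403, -1);
                exchange.close();
                return;
            }
            // Authorized: only now stream the (potentially large) body.
            InputStream body = exchange.getRequestBody();
            long total = 0;
            byte[] buf = new byte[8192];
            for (int n; (n = body.read(buf)) != -1; ) total += n;
            byte[] reply = ("received " + total + " bytes").getBytes();
            exchange.sendResponseHeaders(200, reply.length);
            try (OutputStream out = exchange.getResponseBody()) { out.write(reply); }
        });
        server.start();
        int port = server.getAddress().getPort();

        // Demo client: posts a body without the token and gets rejected.
        HttpURLConnection c = (HttpURLConnection)
                new URL("http://localhost:" + port + "/upload").openConnection();
        c.setDoOutput(true);
        c.getOutputStream().write(new byte[1024]);
        System.out.println(c.getResponseCode());
        server.stop(0);
    }
}
```

Note the caveat: rejecting early stops the server from buffering or processing the body, but a hostile client can still push bytes down the wire until the connection is closed.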
I have a web application based on a Jetty server. My problem is that I have some long-running requests which can take up to a few minutes. Some of my clients have a connection timeout of a few seconds, though. So I thought I could serve them 102 Processing responses to prevent the connection timeout.
I haven't found any sources or examples on the internet, though, which makes me wonder if this is the right approach. I am surely not the first person trying to solve this problem :)
So does anybody have a suggestion for making this work in Jetty? Maybe using Continuations? Or is there a hidden configuration option?
cheers
Philipp
I've been thinking to use code 102 for the same purposes as you.
But it appears that HTTP response code 102 was never part of the HTTP spec itself; it was defined for the WebDAV extensions (RFC 2518). See the answers to this question. So I doubt that there is any reasonable support for this response code in HTTP clients and servers.
I am not looking for a specific answer, just an idea or a tip.
I have the following problem:
The Android application is a client for a web service. It has a thread which sends events (XML with a request ID) over HTTP, and for every request the server sends a confirmation with a granted event ID, acknowledging that it understood the message correctly; the server acts as a synchronizer for a few clients. I want to use the websocket protocol to send events too, but it is a little bit tricky: unlike HTTP, I can't expect a response for every request. Moreover, incoming websocket messages are parsed in another thread. The existing mechanism is a little overgrown and I don't want to rewrite everything from scratch.
I want to make this asynchronous websocket mechanism pretend to be synchronous.
Here is my idea so far: after sending an event through the websocket, I will wait up to e.g. 5 seconds for the response, which is processed in another thread (it comes in as XML), and based on the request ID it will notify the proper paused thread. I worry that Condition.await() and Condition.signal() aren't the best idea; what do you think?
While working through this problem, I've realized that I have trouble designing this kind of mechanism. Do you have an idea where I can find information about good patterns, and tips that are good to know to avoid a bad approach? Thanks in advance!
The main difference between websocket messages and HTTP requests is the lack of HTTP headers once the connection is established. In websocket you have a heartbeat that keeps the connection alive and allows full-duplex communication, and then you have pure payloads. It's your job to decide which message fields you will use to route the requests properly in your server/client.
So nothing stops you from communicating in a request/response manner by simply writing to the output stream right after receiving. I suggest you take a look at the RFC:
https://www.rfc-editor.org/rfc/rfc6455
If you're a little more visual, this slideshow can help:
http://www.slideshare.net/sergialmar/websockets-with-spring-4
Or if you want some more serious implementations as an example, take a look at spring's docs:
http://docs.spring.io/spring/docs/current/spring-framework-reference/html/websocket.html
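The request/response correlation the question describes can be sketched independently of any websocket library (the sendFrame hook below is a stand-in for your real socket write): keep a map of pending requests keyed by request ID, let the reader thread complete the matching future, and let the sender block with a timeout. java.util.concurrent does the await/signal bookkeeping that raw Condition objects would otherwise need:

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import java.util.function.Consumer;

// Makes an asynchronous message channel look synchronous:
// send() blocks until the reader thread delivers the reply with the same request ID.
public class SyncOverAsync {
    private final Map<String, CompletableFuture<String>> pending = new ConcurrentHashMap<>();
    private final Consumer<String> sendFrame; // stand-in for the real websocket write

    public SyncOverAsync(Consumer<String> sendFrame) {
        this.sendFrame = sendFrame;
    }

    // Called by the sending thread; blocks up to timeoutMillis for the matching reply.
    public String send(String requestId, String payload, long timeoutMillis) throws Exception {
        CompletableFuture<String> reply = new CompletableFuture<>();
        pending.put(requestId, reply);
        try {
            sendFrame.accept(payload);
            return reply.get(timeoutMillis, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            throw new Exception("no reply for request " + requestId + " in time", e);
        } finally {
            pending.remove(requestId);
        }
    }

    // Called by the reader thread once it has parsed the incoming XML
    // and extracted the request ID it refers to.
    public void onReply(String requestId, String body) {
        CompletableFuture<String> reply = pending.remove(requestId);
        if (reply != null) {
            reply.complete(body);
        } // else: late or unsolicited message; ignore or log it
    }

    public static void main(String[] args) throws Exception {
        SyncOverAsync channel = new SyncOverAsync(frame -> { /* would write to the socket */ });

        // Simulate the reader thread delivering the reply a moment later.
        new Thread(() -> {
            try { Thread.sleep(100); } catch (InterruptedException ignored) {}
            channel.onReply("42", "<ack id=\"42\"/>");
        }).start();

        System.out.println(channel.send("42", "<event id=\"42\"/>", 5000));
    }
}
```

This keeps the paused-thread logic out of your parsing thread entirely: the reader only ever calls onReply, and timeouts, late replies, and interrupted senders are handled by the future rather than hand-rolled await/signal pairs.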
I have a third-party server that periodically sends HTTP POST messages to a URL (which can be configured). In my application I read the data by starting a Jetty server and listening for data on the configured URL.
I'm wondering if it is possible to listen for the data sent by the server without starting a server like Jetty?
You can always create a socket yourself and listen on port 80 (or something similar) for HTTP requests. See http://download.oracle.com/javase/6/docs/api/java/net/ServerSocket.html
But there are several problems: there's a lot of overhead that you need to handle yourself. Parse the HTTP request, extract the headers and the body, and depending on the headers do certain things like caching, authentication, etc. That's a lot of stuff to implement. Using an existing web server is usually a better idea, since the people who wrote it (usually) know exactly what they are doing.
Another option is the Apache HttpCore library (http://hc.apache.org/httpcomponents-core-ga/index.html). You can use it to write your own Http Server... But again, there's still a lot of stuff you need to take care of ...
If you want to do it for learning purposes, go ahead and implement it yourself. When it is for production, stick with the commonly used web servers.
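For illustration, here is a bare-bones listener along those lines. It parses only the request line and the headers, and deliberately ignores everything a real server must handle (keep-alive, chunked transfer, Content-Length bodies, concurrent clients, and so on):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class TinyHttpListener {
    public static void main(String[] args) throws Exception {
        ServerSocket server = new ServerSocket(0); // ephemeral port for the demo
        int port = server.getLocalPort();

        Thread listener = new Thread(() -> {
            try (Socket client = server.accept()) {
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(client.getInputStream(), StandardCharsets.US_ASCII));
                String requestLine = in.readLine(); // e.g. "POST /data HTTP/1.1"
                for (String h = in.readLine(); h != null && !h.isEmpty(); h = in.readLine()) {
                    // Headers end at the first empty line; a real server would parse
                    // Content-Length here and then read exactly that many body bytes.
                }
                String body = "got: " + requestLine;
                OutputStream out = client.getOutputStream();
                out.write(("HTTP/1.1 200 OK\r\nContent-Length: " + body.length()
                        + "\r\nConnection: close\r\n\r\n" + body).getBytes(StandardCharsets.US_ASCII));
                out.flush();
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
        listener.start();

        // Demo client posting to the listener.
        try (Socket s = new Socket("localhost", port)) {
            s.getOutputStream().write(("POST /data HTTP/1.1\r\nHost: localhost\r\n"
                    + "Content-Length: 0\r\n\r\n").getBytes(StandardCharsets.US_ASCII));
            BufferedReader r = new BufferedReader(
                    new InputStreamReader(s.getInputStream(), StandardCharsets.US_ASCII));
            System.out.println(r.readLine()); // status line from our listener
        }
        listener.join();
        server.close();
    }
}
```

Even this toy version shows where the effort goes: everything after accept() is protocol plumbing that Jetty or HttpCore would otherwise do for you.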
Is it possible to send "100 Continue" HTTP status code, and then later some other status code after processing entire request using Java Servlet API (HttpServletResponse)?
I can't find any definitive "No" answer, although the API doesn't seem to support it.
I assume you mean "100 Continue".
The answer is: no, you can't (at least not the way it's intended, as a provisional response). In general, the servlet engine will do it automatically when the request requires it. Of course this makes it impossible for the servlet to prevent sending the 100 status; this issue is a known problem in the Servlet API, and has been known for what feels like eons now.
I know that Jetty will wait until getReader() or getInputStream() is called before it sends a 100. I think this is the behavior you are looking for. I don't know what Tomcat does.
Did you mean to ask How do I send a status code before the complete request is received, to interrupt an in-progress request due to a missing header field? It seems that's not possible with standard servlets.
What server are you using?
Some servers' servlet extensions may allow this; e.g. Tomcat's Comet servlet might send EventType.BEGIN once the headers are available to process, which may allow you to interrupt a PUT that doesn't have the correct authentication.
Alternatively your server might have a plugin to reject requests based on headers.
Do you mean status code 100 ?
The API does support sending SC_CONTINUE.