It's very annoying when you are writing a simple socket server and you receive an HTTP request, and even more annoying when your server doesn't support HTTP requests. Is there a way to detect and reject HTTP requests (from web browsers) and only accept plain TCP/socket connections?
No, because you don't know what payload a client intends to send over a socket until you've accepted the connection and read enough of it to recognize that it's speaking HTTP.
In my opinion you should think of the frustration those clients are experiencing and do two things:
1. Work out why browsers are being directed to your (obviously non-HTTP) service. Can you stop it?
2. Assuming the answer to #1 above is no, implement some simple detection for HTTP requests and respond with a hardcoded HTTP response that renders as a readable error in the browser that sent it (yes, even though your service isn't HTTP). A simple detection would be: take your input buffer, decode it as a US-ASCII string, and if it starts with one of the standard HTTP request methods, send out some hardcoded HTTP error response, as sketched below. I suggest error code 402, Payment Required :-)
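A minimal sketch of that idea, assuming a blocking java.net server; the method list and the hardcoded response body are only illustrative:

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class HttpRejector {
    // Standard HTTP request methods we want to detect and reject.
    private static final String[] HTTP_METHODS = {
        "GET", "HEAD", "POST", "PUT", "DELETE", "CONNECT", "OPTIONS", "TRACE", "PATCH"
    };

    /**
     * Reads the first few bytes of the connection; if they look like an HTTP
     * request line, sends a hardcoded HTTP error and closes the socket.
     * Returns true if the connection was rejected as HTTP.
     */
    static boolean rejectIfHttp(Socket socket) throws Exception {
        InputStream in = socket.getInputStream();
        byte[] buf = new byte[8];
        int n = in.read(buf);
        if (n <= 0) {
            return false;
        }
        String start = new String(buf, 0, n, StandardCharsets.US_ASCII);
        for (String method : HTTP_METHODS) {
            if (start.startsWith(method + " ")) {
                OutputStream out = socket.getOutputStream();
                out.write(("HTTP/1.1 402 Payment Required\r\n"
                        + "Content-Type: text/plain\r\n"
                        + "Connection: close\r\n\r\n"
                        + "This is not an HTTP service.\r\n")
                        .getBytes(StandardCharsets.US_ASCII));
                out.flush();
                socket.close();
                return true;
            }
        }
        return false;
    }
}
```

Note that the bytes read for the check are consumed; in your real protocol handler you would want to wrap the stream in a PushbackInputStream (or buffer the bytes yourself) so legitimate clients don't lose them.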
Good afternoon everyone,
I'm having some issues with my HTTP server. I've made my own HTTP server (a lightweight HTTP server, due to some circumstances and needs) that I want to embed in a piece of software I have. This HTTP API can also serve HTTPS, but my main issue is actually with HTTP.
One issue I'm facing is receiving HTTPS connections on the HTTP server. Using HTTPS on the server with a plain HTTP connection from the client gets denied, as the handshake fails and raises an exception on the server. The problem with the HTTP server and an HTTPS client is that the connection keeps running, but the message is encrypted. As it's encrypted, I can't read the information and get details like the Content-Length, so the server waits for an end that will never come, because it can't read the data correctly.
I was wondering if there's a way in Java to detect whether the client is sending encrypted data, so I can deny such connections instead of trying to read them. The main issue with these sockets is that they aren't detected as SSLSockets; they are normal sockets that can't decrypt the information in the InputStream.
Thank you in advance.
Are you aware that HTTP and HTTPS are usually served on different port numbers? 80 is for HTTP and 443 for HTTPS. For non-privileged ports, 8000 and 8443 are often used. A client that connects using TLS on an HTTP-only port is faulty, and your HTTP server can easily detect non-HTTP traffic:
If the first word received isn't one of the HTTP verbs supported by your server, such as GET, HEAD, POST, PUT, OPTIONS, etc., your server should send a 400 or 408 response (408 is Request Timeout; your server should only wait a reasonable amount of time for the request header) and then close the connection. A sketch of such a check follows.
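A minimal sketch of that check, assuming your server reads raw sockets; the accepted verb list and the 0x16 TLS-handshake heuristic are illustrative, not tied to any particular framework:

```java
import java.io.BufferedInputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.List;

public class RequestLineCheck {
    private static final List<String> VERBS =
            Arrays.asList("GET", "HEAD", "POST", "PUT", "DELETE", "OPTIONS");

    /** Returns true if the connection looks like plain HTTP; otherwise sends 400 and closes. */
    static boolean looksLikeHttp(Socket socket) throws Exception {
        BufferedInputStream in = new BufferedInputStream(socket.getInputStream());
        in.mark(16);
        byte[] head = new byte[8];         // a real server should loop until enough bytes arrive
        int n = in.read(head);
        if (n <= 0) {
            socket.close();
            return false;
        }
        in.reset();  // hand this same stream to the HTTP parser so the peeked bytes are not lost

        // A TLS ClientHello starts with the handshake record type 0x16,
        // which can never be the first byte of a valid HTTP request line.
        if ((head[0] & 0xFF) == 0x16) {
            reject(socket);
            return false;
        }
        String start = new String(head, 0, n, StandardCharsets.US_ASCII);
        String firstWord = start.split(" ", 2)[0];
        if (!VERBS.contains(firstWord)) {
            reject(socket);
            return false;
        }
        return true;
    }

    private static void reject(Socket socket) throws Exception {
        OutputStream out = socket.getOutputStream();
        out.write("HTTP/1.1 400 Bad Request\r\nConnection: close\r\n\r\n"
                .getBytes(StandardCharsets.US_ASCII));
        out.flush();
        socket.close();
    }
}
```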
I need some assistance on a project I am working on. It is a library that uses Jersey 1.x (1.19.1), aiming at POSTing a JSON document over HTTP and reading the corresponding JSON response from a server.
I am facing a problem when the response from the server is "big". The JSON document that is posted by my client application contains several jobs that must be executed by the server, and
the JSON document sent back by the server is made of the outputs of these jobs. The jobs can be considered independent from each other. The server works in streaming mode, which means it
starts to process the jobs before it receives the entire JSON document posted by the client. And it starts to send the outputs of the jobs as soon as they are finished. So the server
starts to reply to my client application while it is still posting the request. Here is my problem: when the request gets big, so does the response (more jobs to do), and my application freezes
and at some point terminates.
I spent a lot of time trying to figure out what's happening, and here is what I found and what I inferred.
Jersey, for handling HTTP communication, uses a class from the JDK (in rt.jar); I forget the exact name and don't have access to my work right now, but let's call it HttpConnection.
In this class there is a method checkError() that is invoked and throws an IOException with only a message saying it was impossible to write to the server.
By debugging I was able to understand that an attribute of this class named trouble was set to true because a write() method had caught an IOException earlier. checkError() throws an
IOException based on that trouble boolean flag. It's not possible to easily see the causing IOException because the classes of the JRE are compiled without debugging symbols, but
I managed to see that this IOException was a "connection reset by peer" problem.
Then I tried to understand why the server resets the connection. I used an HTTP proxy that captures the HTTP traffic between my client application and the server, but this gave me no more clues;
it even seems that the proxy is unable to handle the connection with the server properly either!
So I tried to use Wireshark to capture the traffic and see what's wrong. Here is what I found.
On the client side, packets corresponding to the post of the request JSON document are sent, and the server starts to reply shortly after, as explained above. The server side sends
more and more packets, and I noticed that the buffer of the TCP layer (called the TCP window in Wireshark) on the client side shrinks more and more as the server sends packets,
until it becomes full (size: 0 bytes). So the TCP layer on the server side cannot send data to the TCP layer on the client side anymore, and thus becomes full too. The conversation, in the end, is
only about retrying to send data, on both sides, failing again and again. Ultimately the server decides to send a reset packet. I believe this corresponds to the causing IOException I mentioned
above.
My understanding is: as long as the server does not start to stream the response, everything is fine. When the server starts to send the response, the TCP buffer on the client side starts to
fill up. But as the client application is not reading the response yet, the content of this buffer is not consumed. When the server has sent enough data to fill this buffer, it cannot
send any more data, and the buffer of its own TCP layer fills up too because the server keeps pushing data. As a result, the client application cannot finish sending the request JSON
document. The communication is blocked on both sides, and the server decides to reset the connection.
My conclusion is: the code, as currently written, does not support such full-duplex communication, because the response from the server is not consumed as it is received. Indeed, walking
through the Jersey code that is executed by my library while debugging, it is clear that the pattern is:
first: connection.getOutputStream().write()
and then: response.getInputStream().read()
In my opinion, the root cause of the problem is that the library I am working on uses Jersey in this synchronous manner, which does not fit well with the way the server works (streaming the
response while the request is still being sent to it).
I searched the Internet a lot for a solution that keeps Jersey 1.19.1, so that I can improve the library with as little impact as possible, but I failed. This is the reason why I am asking for help
here now ;)
So basically my question is: is it possible to do what I need to do while keeping the Jersey client library 1.19.1, and if yes, how? If not, what HTTP client library should I use for my library (to
write a POST request and read the corresponding response at the same time)? And if you could give me a basic example so I can be on track quickly, it would be much appreciated.
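To make the idea concrete, here is a rough sketch of the full-duplex pattern I am after, written against a raw Socket purely for illustration (the host, path and payload are placeholders); what I am looking for is the equivalent with a proper HTTP client library:

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class FullDuplexPostSketch {
    public static void main(String[] args) throws Exception {
        // Host, path and payload are placeholders for the real server.
        try (Socket socket = new Socket("example.com", 80)) {
            OutputStream out = socket.getOutputStream();
            InputStream in = socket.getInputStream();

            // Consume the response on a separate thread so the client-side TCP
            // buffer is drained while the request body is still being written.
            Thread reader = new Thread(() -> {
                try {
                    byte[] buf = new byte[8192];
                    int n;
                    while ((n = in.read(buf)) != -1) {
                        System.out.write(buf, 0, n);  // process the streamed job outputs here
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
            reader.start();

            byte[] body = "{\"jobs\": []}".getBytes(StandardCharsets.UTF_8);
            out.write(("POST /jobs HTTP/1.1\r\n"
                    + "Host: example.com\r\n"
                    + "Content-Type: application/json\r\n"
                    + "Content-Length: " + body.length + "\r\n"
                    + "Connection: close\r\n\r\n")
                    .getBytes(StandardCharsets.US_ASCII));
            out.write(body);  // in the real case the body is written incrementally
            out.flush();
            reader.join();
        }
    }
}
```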
One last thing: curl works just fine. I can fully post the exact same JSON document and get the response with it, so there is no problem on the server side, as I suspected at the very
beginning of my investigation. And it scales fine (I tried sending huge JSON documents). Of course, I made sure the HTTP headers of the POST are the same in my library's case and in the
curl case.
Thanks a lot for reading all this, and thanks for your answers.
Best regards,
Loïc
I'm using HttpURLConnection to send an XML file in Java, and I know we can get the response from the server and print it out. I'm wondering: is there a way to actively fetch the server's update log/response, say every two seconds?
Thanks!
The HTTP protocol is all about a simple request and a response; nothing else. So, for your current task, the best way is to implement some sort of server polling, as sketched below.
A similar question is here: Polling an Http server (sending http get requests repeatedly) in java
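A minimal polling sketch using HttpURLConnection and a ScheduledExecutorService (the URL and the two-second interval are only illustrative):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ServerPoller {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // Ask the server for its current status every two seconds.
        scheduler.scheduleAtFixedRate(() -> {
            try {
                HttpURLConnection conn = (HttpURLConnection)
                        new URL("http://example.com/status").openConnection();
                conn.setRequestMethod("GET");
                try (BufferedReader reader = new BufferedReader(
                        new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
                    String line;
                    while ((line = reader.readLine()) != null) {
                        System.out.println(line);  // process the server's update log here
                    }
                }
            } catch (Exception e) {
                e.printStackTrace();  // keep the task alive for the next poll
            }
        }, 0, 2, TimeUnit.SECONDS);
    }
}
```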
I have an assignment where I need to create a Proxy server, that will manipulate some of the requests/responses it gets, implement caching, etc.
For starters, I want to create the simplest proxy, one that simply passes on all requests and responses. I've done some searching online, and I am a bit confused about how to listen for requests on a certain port and read the HTTP requests. I've stumbled on the classes Socket, ServerSocket, and HttpURLConnection, but I'm not sure how all of these interact. I tried to read the docs, but they are all intertwined and a bit hard to understand.
Can you point me in the right direction regarding which classes I should probably use for this assignment, and maybe share a snippet for listening on a port, getting HTTP request headers, etc.?
Well, I can only assume that your proxy will be a ServerSocket listening for requests on the HTTP port. You read the request through the accepted socket's input stream. After checking that the request complies with the proxy's rules, you open an HttpURLConnection to the real HTTP server and, using the output stream of that connection, forward the client's request; then, using the connection's input stream, you read the real HTTP server's response, which you ultimately forward back to the client using the socket's output stream.
In the proxy, since you intercept requests and responses you can manipulate them before forwarding.
Sounds right?
Here's some introductory Java socket material: http://www.oracle.com/technetwork/java/socket-140484.html
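To make the flow above concrete, here is a minimal sketch of such a pass-through proxy for simple GET requests (the port number and the header handling are deliberately simplified and only illustrative):

```java
import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class TinyProxy {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(8080)) {   // proxy listening port
            while (true) {
                try (Socket client = server.accept()) {
                    handle(client);
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }
    }

    private static void handle(Socket client) throws Exception {
        BufferedReader in = new BufferedReader(
                new InputStreamReader(client.getInputStream(), StandardCharsets.US_ASCII));
        // A proxy-style request line looks like: GET http://example.com/path HTTP/1.1
        String requestLine = in.readLine();
        if (requestLine == null || !requestLine.startsWith("GET ")) {
            return;  // only plain GETs in this sketch
        }
        String targetUrl = requestLine.split(" ")[1];

        // Skip the remaining request headers (a real proxy would forward most of them).
        String header;
        while ((header = in.readLine()) != null && !header.isEmpty()) {
            // inspect or manipulate headers here
        }

        // Forward the request to the real server.
        HttpURLConnection conn = (HttpURLConnection) new URL(targetUrl).openConnection();
        conn.setRequestMethod("GET");

        // Relay the server's response back to the client.
        OutputStream out = client.getOutputStream();
        out.write(("HTTP/1.1 " + conn.getResponseCode() + " " + conn.getResponseMessage()
                + "\r\nConnection: close\r\n\r\n").getBytes(StandardCharsets.US_ASCII));
        try (InputStream body = conn.getInputStream()) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = body.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
        }
        out.flush();
    }
}
```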
I'm running a servlet in Tomcat 6.0.26. The servlet accepts file uploads from the client by HTTP POST. I'd like to stop the file upload from the HttpServlet side. I tried the following methods with no luck:
1. close the request InputStream
2. send error code HttpServletResponse.SC_REQUEST_ENTITY_TOO_LARGE and flush the response
3. do 1 and 2 in a Filter
I googled but found no direct answers. Please advise on possible solutions.
Thanks.
This is not possible using the standard Servlet or Commons FileUpload APIs. Basically, to be able to abort the connection immediately, you would have to grab the underlying socket physically and close it. However, this socket is controlled by the webserver. See also this related question: How to explicitly terminate http connection from server with no response header.
Little tests have however confirmed that Commons FileUpload doesn't buffer the entire file in memory when its size exceeds the limit. It reads the input stream, but just ignores and throws away the bytes it reads (including the ones that have already been read). So memory efficiency isn't necessarily the problem here; a sketch of configuring that limit is below.
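For reference, a minimal sketch of enforcing such a size limit with Commons FileUpload (the 10 MB limit and the error handling are illustrative; note the upload is still read and discarded, not aborted at the socket level):

```java
import java.util.List;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.apache.commons.fileupload.FileItem;
import org.apache.commons.fileupload.FileUploadBase;
import org.apache.commons.fileupload.disk.DiskFileItemFactory;
import org.apache.commons.fileupload.servlet.ServletFileUpload;

public class UploadHelper {
    static void handleUpload(HttpServletRequest request, HttpServletResponse response)
            throws Exception {
        ServletFileUpload upload = new ServletFileUpload(new DiskFileItemFactory());
        upload.setSizeMax(10 * 1024 * 1024);  // reject requests larger than 10 MB
        try {
            List<FileItem> items = upload.parseRequest(request);
            // process the uploaded items here
        } catch (FileUploadBase.SizeLimitExceededException e) {
            // The excess bytes have already been read and discarded by FileUpload.
            response.sendError(HttpServletResponse.SC_REQUEST_ENTITY_TOO_LARGE);
        }
    }
}
```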
To fix the real problem, you'd basically want to validate the file size on the client side rather than the server side. This is possible with a Java applet or a Flash application, for example JumpLoader or SWFUpload respectively.
This is not possible using standard APIs, and you'd be violating some protocol standards/RFCs if you did. So why would you want to do that?
Instead, send a "Connection: close" response header with no HTTP body, for example as sketched below.
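A minimal sketch of that idea in a servlet (the Content-Length check and the 10 MB limit are illustrative):

```java
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class UploadServlet extends HttpServlet {
    private static final long MAX_SIZE = 10 * 1024 * 1024;  // 10 MB, illustrative

    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        if (request.getContentLength() > MAX_SIZE) {
            // Refuse the upload: status code, no body, and ask the client to close the connection.
            response.setStatus(HttpServletResponse.SC_REQUEST_ENTITY_TOO_LARGE);
            response.setHeader("Connection", "close");
            response.setContentLength(0);
            response.flushBuffer();
            return;
        }
        // ... normal upload handling here ...
    }
}
```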
Here is a somewhat crazy workaround: you could write (or find somewhere) a standalone firewall application based on sockets that handles the HTTP request, parses the headers, and, if the request matches your custom conditions, forwards it to Tomcat; otherwise it returns an HTTP error response. Or you can try to tune an Apache <-> Tomcat setup with some Apache rules.