I have a web service provided by Jetty.
How can I filter URLs with illegal characters?
I cannot control the response when the request URL contains illegal characters.
Actually, I want to return specific info when the URL is invalid.
For example, I added a filter in my application to validate the URL; if it is illegal, I return predefined info.
But I cannot filter some URLs like "%adsasd"; they seem to be handled by Jetty itself.
curl -v -X PUT -u user:password 'http://myip.com:8080/%adsasd'
*   Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 8080 (#0)
* Server auth using Basic with user 'user'
> PUT /%adsasd HTTP/1.1
> Authorization: Basic YWRtaW46MTIzNDU2
> User-Agent: curl/7.35.0
> Accept: */*
> Host: 127.0.0.1:8080
>
< HTTP/1.1 400 Bad Request
< Content-Length: 0
* Server Jetty(9.0.6.v20130930) is not blacklisted
< Server: Jetty(9.0.6.v20130930)
<
* Connection #0 to host 127.0.0.1 left intact
The error response from Jetty
HTTP/1.1 400 Bad Request
indicates that Jetty did detect that as a bad URL and rejected it.
As for how to customize this, that is really tricky, mainly because of how early in the processing of the request this error occurs.
This kind of error (400 Bad Request) occurs during the parsing of the raw incoming HTTP request, well before the server has even attempted to figure out which context the request belongs to.
There is no way to have a custom error handler in a specific webapp context handle this sort of fundamental HTTP error, because the server has not yet determined which context to dispatch to.
There is also no way at the server side (even at a global level) to customize this error message.
If you want such a feature, please file a feature request.
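For reference, a webapp-level validation filter of the kind described in the question might look like the sketch below (the class name and the "legal character" rule are purely illustrative). Note that it never gets a chance to run for a request like PUT /%adsasd, because Jetty's HTTP parser rejects the malformed percent-encoding and sends the 400 before any context, servlet, or filter is selected.

import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Sketch only: reject request URIs containing characters we consider illegal
// and return our own message instead of the container default.
public class UrlValidationFilter implements Filter {
    @Override
    public void init(FilterConfig config) { }

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;
        String uri = request.getRequestURI();
        // Hypothetical rule: allow only unreserved URI characters and '/'
        if (uri == null || !uri.matches("[A-Za-z0-9._~/-]*")) {
            response.setStatus(HttpServletResponse.SC_BAD_REQUEST);
            response.setContentType("application/json");
            response.getWriter().write("{\"error\":\"invalid URL\"}");
            return;
        }
        chain.doFilter(req, res);
    }

    @Override
    public void destroy() { }
}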
My code is running on localhost and I am hitting one of my URLs as below:
curl -k -vv --http1.1 "https://localhost:8443/versa/login" -H 'Host: google.com'
Now I am trying to read the URL in my code using the following:
StringBuffer url = httpServletRequest.getRequestURL();
The value is always as follows, irrespective of whether the protocol used is HTTP/1.1 or HTTP/2:
https://google.com/versa/login
How can I read the original URL here?
You can't. Have a look at the output from curl; it should give the clue. [I dropped HTTPS down to HTTP to simplify, but it's the same for HTTPS, just with more debug output from curl.]
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 8443 (#0)
> GET /versa/login HTTP/1.1
> Host: google.com
> User-Agent: curl/7.64.1
> Accept: */*
The first two lines are just debug output. The next four are what is actually sent to the server. Notice there is nothing about the port or localhost: the only host information the server ever receives is the Host header, which is what getRequestURL() is rebuilt from.
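For what it's worth, here is a small sketch (standard Servlet API only; the class name is illustrative) of what the container can and cannot tell you about the request target:

import javax.servlet.http.HttpServletRequest;

// Sketch only: the URL is rebuilt from the scheme plus the Host header,
// so the hostname the client originally typed is never available.
public class RequestUrlDebug {
    public static String describe(HttpServletRequest request) {
        StringBuilder sb = new StringBuilder();
        // Rebuilt from scheme + Host header + request URI -> https://google.com/versa/login
        sb.append("getRequestURL(): ").append(request.getRequestURL()).append('\n');
        // Exactly what the client put in the Host header -> google.com
        sb.append("Host header:     ").append(request.getHeader("Host")).append('\n');
        // The local side of the TCP connection, i.e. the listener curl actually hit
        sb.append("Local address:   ").append(request.getLocalName())
          .append(':').append(request.getLocalPort());
        return sb.toString();
    }
}

If you need the address the client actually connected to, getLocalName()/getLocalPort() describe the listener, but the original hostname typed by the client is simply never transmitted.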
I am supporting another vendor's legacy application.
This is a J2EE application that runs on GlassFish v3.1.2.2. It has a REST API implemented using JAX-RS. I have limited visibility into the application and source.
The symptoms are:
- make an HTTP request to a REST API
- the application has its own auditing system, and this shows a successful request
- no errors in the GF logs
- the GF access log notes the request
- 0 bytes are returned from the request to the caller
This happens for both remote calls and calls made using curl on localhost.
If we make the same requests to a different port over HTTPS, they succeed. We are reluctant to move the calls to that other port without knowing the root cause. These calls failed intermittently last night and now fail constantly today.
A packet capture of the request shows:
- TCP overhead/handshake
- A GET request
- A single ACK from the application back to the caller
- then nothing after that
What would cause GlassFish v3 to successfully handle and process an HTTP request but return no data?
Is there a mechanism in GlassFish v3 to flush or reset an HTTP listener and its associated thread pool?
Since this happens with a curl request to localhost on the same server, I think I can rule out the network being the issue.
The ports being used communicate directly with GlassFish. There is no proxy (like Apache or Nginx) between the caller and the app server.
Are there logging or monitoring settings I should enable in GlassFish to observe what the HTTP listener is doing relative to the application and the network stack?
I have obfuscated some examples that show the symptoms:
GlassFish access log:
"0:0:0:0:0:0:0:1" "NULL-AUTH-USER" "25/Oct/2018:11:21:02 -0500" "GET /api/obfuscated/by/me HTTP/1.1" 200 9002
Curl response for that same call:
* Trying OBFUSCATED
* Connected to hostname.local (OBFUSCATED) port 11080 (#0)
> GET /api/obfuscated/by/me HTTP/1.1
> Host: hostname.local:11080
> User-Agent: curl/7.43.0
> Accept: */*
> Authorization: Basic asdfdsfsdfdsfsdafsdafsdafw==
>
* Empty reply from server
* Connection #0 to host hostname.local left intact
UPDATE: I changed a timeout setting for the HTTP network listener, bumping it from 30 to 35 seconds because a packet capture showed the app sending a FIN after 30 seconds. After making this change it started to work again.
It is not clear if this somehow flushed or reset something or if I had some kind of race condition.
The apparent root cause was high I/O on the system running these services. The applications normally used 50MB/sec, a new process drove that usage to 250MB/sec. Once the I/O problem was resolved all of the HTTP errors went away and haven't come back.
I am making a proxy application for a browser. It has to use only the standard libraries. So far, I've managed to create the server. When trying to access a web page from a client, I get the following information:
CONNECT gmail.com:443 HTTP/1.1
User-Agent: Mozilla/5.0 Firefox/49.0
Proxy-Connection: keep-alive
Connection: keep-alive
Host: gmail.com:443
My question is: what should I use to handle the requests? And how do I handle a file download?
Once you get that CONNECT command, do what is asked: create the upstream connection, and return the appropriate success/failure response. If the upstream connection was successful, all you have to do now is copy bytes in both directions, simultaneously. The endpoints will take care of all SSL issues, uploads, downloads, etc. You have no further role to play.
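A minimal sketch of that flow, using only java.net and java.io (the class and method names are illustrative, and real code would need timeouts, error handling, and a thread pool):

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;

// Sketch only: after parsing "CONNECT host:port HTTP/1.1" from the browser,
// open the upstream socket, reply 200, then blindly relay bytes both ways.
public class ConnectTunnel {
    public static void tunnel(Socket browser, String host, int port) throws IOException {
        Socket upstream = new Socket(host, port);
        OutputStream toBrowser = browser.getOutputStream();
        toBrowser.write("HTTP/1.1 200 Connection established\r\n\r\n".getBytes("US-ASCII"));
        toBrowser.flush();

        Thread up = new Thread(() -> copy(browser, upstream));   // browser -> server
        Thread down = new Thread(() -> copy(upstream, browser)); // server -> browser
        up.start();
        down.start();
    }

    private static void copy(Socket from, Socket to) {
        byte[] buf = new byte[8192];
        try (InputStream in = from.getInputStream(); OutputStream out = to.getOutputStream()) {
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
                out.flush();
            }
        } catch (IOException ignored) {
            // One side closed; the other relay thread will terminate on its own read error.
        }
    }
}

The TLS handshake, the upload, and the file download all look like opaque bytes to the proxy, which is exactly why no further handling is needed.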
The general behaviour of a proxy is as follows:
- Receive the request from the browser
- Make a request to the actual server, resolving any redirects if necessary
- Get the response from the server and pass it on to the client
I am not getting into the complications of changing request/response headers, caching, etc.
Now, from the above, you are being asked to make an SSL connection to gmail.com.
The browser is actually sending a correct request. In this case you need to implement the handshake, connect to gmail.com over HTTPS (offloading the SSL on your side), and send the response you receive back to the browser through the SSL session negotiated with the browser.
My suggestion is to use HTTP instead of HTTPS, if this is not a production-grade system, and try out the concept first.
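For the plain-HTTP experiment suggested above, the relay is the same idea without the CONNECT handshake: work out the target host from the request line or Host header, send the request bytes on, and stream the response back. A rough sketch (names are illustrative), assuming one request per connection and the default port 80:

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;

// Sketch only: forward one already-read plain-HTTP request and stream the response back.
public class HttpForwarder {
    public static void forward(byte[] rawRequest, String host, Socket browser) throws IOException {
        try (Socket server = new Socket(host, 80)) {
            server.getOutputStream().write(rawRequest);
            server.getOutputStream().flush();
            InputStream fromServer = server.getInputStream();
            OutputStream toBrowser = browser.getOutputStream();
            byte[] buf = new byte[8192];
            int n;
            while ((n = fromServer.read(buf)) != -1) {
                toBrowser.write(buf, 0, n);
            }
            toBrowser.flush();
        }
    }
}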
I can see that the following curl command works remotely:
curl -X GET -d '{"begin":22, "end":33}' http://myRemoteApp.com:8080/publicApi/user/test/data
However, as per the docs at http://curl.haxx.se/docs/manpage.html:
-d, --data
(HTTP) Sends the specified data in a POST request to the HTTP server,
in the same way that a browser does when a user has filled in an HTML
form and presses the submit button. This will cause curl to pass the
data to the server using the content-type
application/x-www-form-urlencoded. Compare to -F, --form.
So how is the GET working with curl if we are using -d to post data?
Also, there is no HttpUrlConnection or Restlet method to send JSON in a GET call, is there?
According to the curl documentation, -X forces the method word to be a particular value, regardless of whether it results in a sensible request or not. We can run curl with tracing to see what it actually sends in this case:
$ curl -X GET -d '{"begin":22, "end":33}' --trace-ascii - http://localhost:8080/
== Info: About to connect() to localhost port 8080 (#0)
== Info: Trying ::1...
== Info: connected
== Info: Connected to localhost (::1) port 8080 (#0)
=> Send header, 238 bytes (0xee)
0000: GET / HTTP/1.1
0010: User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7
0050: NSS/3.14.3.0 zlib/1.2.3 libidn/1.18 libssh2/1.4.2
0084: Host: localhost:8080
009a: Accept: */*
00a7: Content-Length: 22
00bb: Content-Type: application/x-www-form-urlencoded
00ec:
=> Send data, 22 bytes (0x16)
0000: {"begin":22, "end":33}
So this command does in fact cause curl to send a GET request with a message body.
This question has some extensive discussion of using GET with a request body. The answers agree that it's not actually illegal to send a GET request with a message body, but you can't expect the server to pay attention to the body. It seems that the specific server which you're using does handle these requests, whether due to a bug, happy accident, or deliberate design decision.
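If you want to reproduce what curl is doing from Java without depending on any particular HTTP client's opinion of GET bodies, the request can simply be written over a plain socket. A sketch, using localhost:8080 as in the trace above and the JSON payload from the question:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

// Sketch only: send a GET request with a message body, much as curl -X GET -d does.
public class GetWithBody {
    public static void main(String[] args) throws IOException {
        byte[] payload = "{\"begin\":22, \"end\":33}".getBytes(StandardCharsets.UTF_8);
        String head =
                "GET / HTTP/1.1\r\n" +
                "Host: localhost:8080\r\n" +
                "Accept: */*\r\n" +
                "Content-Type: application/x-www-form-urlencoded\r\n" +
                "Content-Length: " + payload.length + "\r\n" +
                "Connection: close\r\n" +
                "\r\n";
        try (Socket socket = new Socket("localhost", 8080)) {
            OutputStream out = socket.getOutputStream();
            out.write(head.getBytes(StandardCharsets.US_ASCII));
            out.write(payload);
            out.flush();
            // Print whatever comes back; whether the server reads the body is up to the server.
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(socket.getInputStream(), StandardCharsets.UTF_8));
            for (String line; (line = in.readLine()) != null; ) {
                System.out.println(line);
            }
        }
    }
}

As for HttpUrlConnection: as far as I know, the stock JDK implementation switches the method to POST once you open the output stream, so a GET with a body is not really achievable that way; a raw socket or a lower-level HTTP client is needed.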
I am trying to establish a connection to a Red5 server through RTMPT. I am using the Red5 client jar red5-client-1.0.jar. While trying to connect, the following requests are successful:
POST /open/1 HTTP/1.1
POST /send/DMPDQNDRFPCCV/1 HTTP/1.1
POST /idle/DMPDQNDRFPCCV/2 HTTP/1.1
POST /idle/DMPDQNDRFPCCV/3 HTTP/1.1
After this, when the client sends
POST /idle/DMPDQNDRFPCCV/4 HTTP/1.1
I get the following error on the client side:
"Idle: unknown client session: DMPDQNDRFPCCV"
What is the cause of this error? Is there any configuration I need to do in Red5? I have done all the necessary configuration to enable RTMPT, as described at http://gregoire.org/2009/01/28/rtmpt-and-red5/
The unknown client message indicates that the session has been removed from the connection manager for some reason. I would suggest turning up the log levels and watching the console for more information.