I have an app that was working on Tomcat 8.5.38.
Now I have decided to upgrade to Tomcat 9.0.27, and a problem has appeared with a GET request and RFC 7230 (Hypertext Transfer Protocol (HTTP/1.1): Message Syntax and Routing).
The request:
/api/vehicle/power_off?vehicleId=1428714&dtStart=2019-10-21 08:00:00&dtEnd=2019-10-21 08:30:00
It was working perfectly from any browser (IE, Opera, Chrome, FF) and from another client (a 1C ERP system).
After the version upgrade it still works perfectly from browsers, but not from 1C. Tomcat shows this error:
28-Oct-2019 17:29:26.201 INFO [http-nio-8080-exec-3] org.apache.coyote.http11.Http11Processor.service Error parsing HTTP request header
Note: further occurrences of HTTP request parsing errors will be logged at DEBUG level.
java.lang.IllegalArgumentException: The HTTP header line [get /api/vehicle/power_off?deviceId=1428714&dtStart=2019-10-21%2008:00:00&dtEnd=2019-10-21%2008:30:00 HTTP/1.1: ] does not conform to RFC 7230 and has been ignored.
at org.apache.coyote.http11.Http11InputBuffer.skipLine(Http11InputBuffer.java:962)
at org.apache.coyote.http11.Http11InputBuffer.parseHeader(Http11InputBuffer.java:825)
at org.apache.coyote.http11.Http11InputBuffer.parseHeaders(Http11InputBuffer.java:564)
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:309)
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66)
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:860)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1587)
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:748)
The same error occurs on my dev machine (macOS + Tomcat 9.0.24) and on the production server (Ubuntu 16.04 + Tomcat 9.0.27).
The cause is the colons in the datetime parameters. When I remove the colons from the query string (leaving just "2019-10-21 080000"), the request works as expected (reaching the application, which then fails with "datetime cannot be parsed..."). Likewise, when I manually change the colons to "%3A", the request works and returns a normal result.
Then I added the relaxedQueryChars parameter to the Tomcat Connector, including the colon (even though the colon is an allowed symbol):
relaxedQueryChars=':[]|{}^\`"<>'
and it still fails.
What's the difference between Tomcat versions 8 and 9 that makes my request work in 8 but not in 9?
Is there anything I can do in Tomcat to make this request work? Changing the requests on the client side would be a very hard task...
What's the difference between Tomcat versions 8 and 9 that makes my request work in 8 but not in 9?
I think the difference is that Tomcat 9.x has tightened up on what should be permitted to be unencoded within a URL, so from a technical perspective there is no problem with Tomcat 9.x; the issue lies with earlier Tomcat releases, and browsers not strictly following specifications.
That said, I couldn't identify any specific fix that has triggered this issue for you, nor could I see anything in the Release Notes.
I added the relaxedQueryChars parameter to the Tomcat Connector, including the colon (even though the colon is an allowed symbol)... and it still fails.
From the Tomcat 9.0 documentation for relaxedQueryChars:
The HTTP/1.1 specification requires that certain characters are %nn encoded when used in URI query strings. Unfortunately, many user agents including all the major browsers are not compliant with this specification and use these characters in unencoded form. To prevent Tomcat rejecting such requests, this attribute may be used to specify the additional characters to allow. If not specified, no additional characters will be allowed. The value may be any combination of the following characters: " < > [ \ ] ^ ` { | } . Any other characters present in the value will be ignored.
Note the last two sentences. The colon character is not mentioned, so it "will be ignored".
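For reference, the same attribute can also be set programmatically when running Tomcat embedded. A minimal sketch (assuming the default connector; note that a ':' in this value would be silently ignored, exactly as in server.xml):

import org.apache.catalina.connector.Connector;
import org.apache.catalina.startup.Tomcat;

public class RelaxedCharsDemo {
    public static void main(String[] args) throws Exception {
        Tomcat tomcat = new Tomcat();
        tomcat.setPort(8080);
        Connector connector = tomcat.getConnector();
        // Only " < > [ \ ] ^ ` { | } are honoured; anything else is dropped
        connector.setProperty("relaxedQueryChars", "[]|{}^\\`\"<>");
        tomcat.start();
        tomcat.getServer().await();
    }
}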
Is there anything I can do in Tomcat to make this request work?
I don't think so, but the real problem is that you are not encoding colons within your parameters, and you have already mentioned that this resolves the issue. See this SO answer, and in particular the final sentence:
There are reserved characters that have reserved meanings: the general delimiters :/?#[]@ and the sub-delimiters !$&'()*+,;=
There is also a set of characters called unreserved characters (alphanumerics and -._~), which do not have to be encoded.
That means anything outside the unreserved set is supposed to be %-encoded when it does not carry its special meaning (e.g. when passed as part of a GET parameter).
The colon is a reserved character with a special meaning, and therefore it must be encoded within your parameters.
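If you do at some point get control over a client, the fix is just to %-encode the parameter values before sending. A minimal Java sketch (parameter names taken from the request above; URLEncoder produces application/x-www-form-urlencoded form, encoding ':' as %3A and ' ' as '+', both acceptable in a query string):

import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class EncodeQueryDemo {
    public static void main(String[] args) throws Exception {
        String dtStart = "2019-10-21 08:00:00";
        String dtEnd = "2019-10-21 08:30:00";
        // Encode only the values; = and & keep their delimiter role
        String query = "vehicleId=1428714"
                + "&dtStart=" + URLEncoder.encode(dtStart, StandardCharsets.UTF_8.name())
                + "&dtEnd=" + URLEncoder.encode(dtEnd, StandardCharsets.UTF_8.name());
        // /api/vehicle/power_off?vehicleId=1428714&dtStart=2019-10-21+08%3A00%3A00&dtEnd=2019-10-21+08%3A30%3A00
        System.out.println("/api/vehicle/power_off?" + query);
    }
}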
Notes:
Also see Bug 62273 - Add support for alternate URL specification. Although it doesn't specifically address your issue with the colon character, there is an interesting discussion on how browsers have not been adhering to RFC 3986 (Uniform Resource Identifier (URI): Generic Syntax).
The error message you are getting from Tomcat is vague, and could probably be improved. Perhaps raise a bug report with them?
In my case, I had increased the request header size by manually adding new cookies to the curl request.
In doing so, the ENTER key was used instead of a space.
Backspacing and using a space instead resolved the issue for me.
Related
Some requests are rejected by Tomcat with an empty HTTP 400 response.
A couple of examples:
A request URL containing unencoded characters (e.g. '[' or ']', rejected since Tomcat 8.5.x) triggers:
INFO o.a.c.h.Http11Processor Error parsing HTTP request header
Note: further occurrences of HTTP header parsing errors will be logged at DEBUG level.
java.lang.IllegalArgumentException: Invalid character found in the request target. The valid characters are defined in RFC 7230 and RFC 3986
A 400 error page is also returned when, for example, the header size is too large:
INFO: Error parsing HTTP request header
Note: further occurrences of HTTP header parsing errors will be logged at DEBUG level.
java.lang.IllegalArgumentException: Request header is too large
Is it possible to have a custom error page for those errors? More generally, for whenever Tomcat itself triggers this HTTP 400 response. Delivering an empty response is the worst UX. I am aware that the creation of such requests should be avoided, but I am nonetheless looking for a fallback.
I have set up a custom error page in my (embedded) Tomcat context with ctx.addErrorPage(...) for the error code 400.
It works properly when triggered from my webapp.
E.g. when delegating the error handling to the servlet error-handling mechanism with res.sendError(SC_BAD_REQUEST) - res being an HttpServletResponse.
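For reference, the wiring looks roughly like this (the error page location is a hypothetical path inside the context):

import org.apache.catalina.Context;
import org.apache.catalina.startup.Tomcat;
import org.apache.tomcat.util.descriptor.web.ErrorPage;

public class EmbeddedWithErrorPage {
    public static void main(String[] args) throws Exception {
        Tomcat tomcat = new Tomcat();
        tomcat.setPort(8080);
        Context ctx = tomcat.addContext("", null);

        // Register a custom page for HTTP 400
        ErrorPage badRequest = new ErrorPage();
        badRequest.setErrorCode(400);
        badRequest.setLocation("/errors/400.html"); // hypothetical location
        ctx.addErrorPage(badRequest);

        tomcat.start();
        tomcat.getServer().await();
    }
}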
Unfortunately, for the kind of Tomcat errors described at the top, the custom error page is not used.
Thanks!
This is a nuisance to me as well. Unfortunately, from having a look at the sources, it seems to be wired deep in Tomcat's internals, and can't be changed easily.
In particular, the exceptions you mention are thrown in org.apache.coyote.http11.Http11InputBuffer, which is part of a Tomcat component called the Coyote HTTP/1.1 Connector (old docs; the newer docs no longer contain this description):
The Coyote HTTP/1.1 Connector element represents a Connector component that supports the HTTP/1.1 protocol. It enables Catalina to function as a stand-alone web server, in addition to its ability to execute servlets and JSP pages.
Also, the exceptions end up in catalina.log and are very short - compare this to an exception from the JSP processor, which is several times that size.
So I think it isn't trivial to patch this - at least not without knowledge about Tomcat internals, which I don't have :(
I am using Tomcat 8.0.43 as my server.
When reviewing my logs, occasionally I see:
[...]INFO[...] org.apache.coyote.http11.AbstractHttp11Processor.process Error parsing HTTP request header
Note: further occurrences of HTTP header parsing errors will be logged at DEBUG level.
java.lang.IllegalArgumentException: Invalid character found in the HTTP protocol
Or:
java.lang.IllegalArgumentException: Invalid character found in the request target. The valid characters are defined in RFC 7230 and RFC 3986
If I look at my access logs, I see that the URLs that triggered these exceptions were things like:
"GET /scripts/index.php?OPT_Session= null" 400
or:
"GET null null" 400
Was I correct in identifying the requests that caused the exceptions to be thrown?
Is there anything that I can do to stop these exceptions from being thrown or restrict these requests from being made?
A normal browser doesn't even allow a client to enter a URL with a space in it. It appears these requests do have spaces in them, though.
Thanks.
The requests are most probably attacks. If you are running an Internet-facing web server you have to live with them. It is fairly common to put a web server such as Apache in front of Tomcat, possibly configured with mod_security (https://modsecurity.org). In addition you could use fail2ban or a similar solution in order to ban IPs based on errors in the log. However, in my recent experience attackers tend to use a wide range of IP addresses, so fail2ban may not be very effective.
Unable to access an SFTP location using Apache Camel with a private key.
The SFTP URI: sftp://user@host:22/usr/users/me/inbox/myfolder/?privateKeyFile=ssk-key.pem
The key file is confirmed to be correct.
The error:
org.apache.camel.component.file.GenericFileOperationFailedException: Cannot connect to sftp://user@host:22
at org.apache.camel.component.file.remote.SftpOperations.connect(SftpOperations.java:146)
at org.apache.camel.component.file.remote.RemoteFileConsumer.connectIfNecessary(RemoteFileConsumer.java:203)
at org.apache.camel.component.file.remote.SftpConsumer.doStart(SftpConsumer.java:52)
at org.apache.camel.support.ServiceSupport.start(ServiceSupport.java:61)
at org.apache.camel.impl.DefaultCamelContext.startService(DefaultCamelContext.java:3269)
at org.apache.camel.impl.DefaultCamelContext.doStartOrResumeRouteConsumers(DefaultCamelContext.java:3563)
at org.apache.camel.impl.DefaultCamelContext.doStartRouteConsumers(DefaultCamelContext.java:3499)
at org.apache.camel.impl.DefaultCamelContext.safelyStartRouteServices(DefaultCamelContext.java:3429)
at org.apache.camel.impl.DefaultCamelContext.doStartOrResumeRoutes(DefaultCamelContext.java:3197)
at org.apache.camel.impl.DefaultCamelContext.doStartCamel(DefaultCamelContext.java:3053)
at org.apache.camel.impl.DefaultCamelContext.access$000(DefaultCamelContext.java:175)
at org.apache.camel.impl.DefaultCamelContext$2.call(DefaultCamelContext.java:2848)
at org.apache.camel.impl.DefaultCamelContext$2.call(DefaultCamelContext.java:2844)
at org.apache.camel.impl.DefaultCamelContext.doWithDefinedClassLoader(DefaultCamelContext.java:2867)
at org.apache.camel.impl.DefaultCamelContext.doStart(DefaultCamelContext.java:2844)
at org.apache.camel.support.ServiceSupport.start(ServiceSupport.java:61)
at org.apache.camel.impl.DefaultCamelContext.start(DefaultCamelContext.java:2813)
at org.apache.camel.main.Main.doStart(Main.java:127)
at org.apache.camel.support.ServiceSupport.start(ServiceSupport.java:61)
at org.apache.camel.main.MainSupport.run(MainSupport.java:138)
at org.apache.camel.main.MainSupport.run(MainSupport.java:390)
at com.me.mypackage.MainApp.main(MainApp.java:27)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)
Caused by: com.jcraft.jsch.JSchException: SSH_MSG_DISCONNECT: 2 Protocol error: no matching DH grp found
at com.jcraft.jsch.Session.read(Session.java:996)
at com.jcraft.jsch.Session.connect(Session.java:323)
at org.apache.camel.component.file.remote.SftpOperations.connect(SftpOperations.java:118)
... 26 more
EDIT: before trying the suggestions below, first check which Java version you're running on. If it's version 7 or earlier, try upgrading to JRE 8 and see if the problem persists. Since answering this I've encountered a situation where things refused to work with Java 7 but worked fine with Java 8. It might have something to do with some default security provider settings.
Looking at the end of the stack trace, Camel is using the JSch library for FTP over SSH support. Knowing that can be useful in further troubleshooting, because you can look up which key exchange algorithms are supported by JSch.
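For instance, JSch's default algorithm table can be queried programmatically; a tiny sketch (the output varies with the JSch version bundled by your Camel release):

import com.jcraft.jsch.JSch;

public class KexDefaults {
    public static void main(String[] args) {
        // JSch keeps its algorithm defaults in a static config table;
        // "kex" holds the key exchange proposals sent to the server
        System.out.println(JSch.getConfig("kex"));
    }
}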
When the client tries to establish a secure connection with the server, a list of supported algorithms is exchanged to figure out which algorithms both the client and server support. An algorithm is then chosen for the key exchange.
Judging by the error message returned from the server, the SFTP server is most likely using OpenSSH. The part where the error message is returned and the server disconnects is here in the OpenSSH source:
kex->dh = PRIVSEP(choose_dh(min, nbits, max));
if (kex->dh == NULL) {
    sshpkt_disconnect(ssh, "no matching DH grp found");
    r = SSH_ERR_ALLOC_FAIL;
    goto out;
}
"DH grp" means Diffie-Hellman group. Diffie-Hellman is a method of public key exchange. The groups determine which key-lengths are supported. Some examples:
Group 1: 768-bit
Group 2: 1024-bit
Group 5: 1536-bit
Group 14: 2048-bit
In the above bit of C code you can see that a DH group is searched for using a minimum number of bits, a preferred number of bits (nbits) and a maximum number of bits. These numbers are provided by the client (JSch, in Camel's case) to indicate what it supports. The server then seeks the best matching group. If it can't find one for these criteria, it disconnects with the message no matching DH grp found.
You can find some info in this IETF memo: https://www.rfc-editor.org/rfc/rfc4419. A relevant bit:
C sends "min || n || max" to S, indicating the minimal acceptable group size, the preferred size of the group, and the maximal group size in bits the client will accept.
S finds a group that best matches the client's request, and sends "p || g" to C.
C being the client and S the server.
So, what to do? First, check the length of the public key corresponding to your private key. Then request information regarding the supported cipher algorithms, key exchange algorithms and DH groups from whoever manages the SFTP server. It is possible that the server only supports groups with a minimum key length higher than that of the key you're using, or the other way around: the client's public key is longer than the maximum the server supports.
If the people on the server side are the type that install some package without really understanding what they're doing or configuring, you might have a hard time getting the info. In that case you might have some luck finding out about supported cipher and key exchange algorithms from both server and client by doing network packet capture (using a tool such as Wireshark), but be very careful about this. You'll want to get your superior's permission for this so it's not misconstrued as trying to defeat security measures or eavesdropping. The laws and their interpretation regarding this are slightly dumb in some countries, to put it mildly.
Depending on the outcome, the server might need to update their OpenSSH version, or configure it for additional DH groups; or perhaps you need to choose a key of a different length. Since that might affect the level of security you'd have to seek permission of the operators of the SFTP server and whoever you're doing a project for.
It looks like Camel allows you to specify which cryptographic ciphers to allow, via the ciphers option in the URI; if you don't specify it, the default list from JSch is used. Unfortunately I don't see an option to specify which key exchange algorithm to use. JSch itself does support many exchange algorithms (listed under "key exchange" here: http://www.jcraft.com/jsch/README).
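A sketch of what the ciphers option could look like in a route (the cipher names and the target endpoint are illustrative; check what your JSch version actually supports):

import org.apache.camel.builder.RouteBuilder;

public class SftpRoute extends RouteBuilder {
    @Override
    public void configure() {
        // 'ciphers' restricts the cipher list JSch offers during negotiation
        from("sftp://user@host:22/usr/users/me/inbox/myfolder/"
                + "?privateKeyFile=ssk-key.pem"
                + "&ciphers=aes128-ctr,aes192-ctr,aes256-ctr")
            .to("file:target/inbox"); // illustrative local target
    }
}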
Try to find out which version of JSch your version of Camel uses. If you can update Camel and the newer version includes a newer JSch version, try that first. If you can't update or you're already on the latest version of Camel, see which version of JSch is included and if you can replace it with a newer release without breaking things. It's possible that the latest JSch release supports something that an older one didn't, and with the updates and deprecation of certain algorithms and key lengths due to security vulnerabilities, sometimes older versions of clients refuse to work with up-to-date servers (or the other way around).
Also look up how to enable logging in JSch (it doesn't seem to use a standard framework like Log4j or java.util.logging), and try setting the system property javax.net.debug to the value all (for example, via the command-line parameter -Djavax.net.debug=all). It might supply extra info.
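JSch logs through its own Logger interface; a minimal sketch of installing one (register it before any session is opened, so the kex negotiation is captured):

import com.jcraft.jsch.JSch;
import com.jcraft.jsch.Logger;

public class JschLogging {
    public static void main(String[] args) {
        JSch.setLogger(new Logger() {
            public boolean isEnabled(int level) {
                return true; // log everything
            }
            public void log(int level, String message) {
                System.err.println("JSch[" + level + "] " + message);
            }
        });
        // ... then start the Camel route / open the session as usual
    }
}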
Good luck. I wish I could provide a specific solution, but issues like these often require communication between the SFTP server admin and the user to fix, since it involves knowing the configuration at both sides.
Attempting to authenticate a Windows client (IE/Firefox) via SPNEGO and Kerberos. The server side is Java/Tomcat with JCIFS for SPNEGO authentication. The SSO (Kerberos) auth works fine when hosting the server side on a Win 2008 R2 server. However, on a 2012 server it fails with a GSSException: Defective token detected.
Digging a bit deeper with network tracing, I found that in the working case the IE client sends the negotiate token with 4 mechTypes:
1.2.840.48018.1.2.2 - MS KRB5,
1.2.840.113554.1.2.2 - KRB5,
1.3.6.1.4.1.311.2.2.30 - NEGOEX, and
1.3.6.1.4.1.311.2.2.10 - NTLMSSP
In this case my server side completes the SPNEGO negotiation, selecting MS KRB5. However, in the problem case the IE client only sends a token with 2 mechTypes, NEGOEX and NTLMSSP, with NEGOEX as the initiator-preferred one. Java doesn't support NEGOEX, and hence it fails.
Some searching revealed that this problem is associated with bugs in the JDK*, or otherwise with DNS issues. However, I'm on the latest JDK and DNS seems to be okay. So my question is: when does a browser on Windows switch to NEGOEX in SPNEGO, and why? The closest answer I found was in an MSDN blog, which says Kerberos is not available when the machine is not in a domain environment. However, the client is indeed in a domain environment, and klist shows a valid Kerberos ticket. If it indeed is a domain problem, what exactly could be the root cause, and how can I avoid it?
*Footnote, some background research information: JDK 8 has seen many fixes in the GSS mechanism. There were things broken in jdk8u40 and jdk8u45; further fixes are present in jdk8u65. A bug report which was supposed to implement NEGOEX was closed with
"a fix to SPNEGO that allows NEGOEX be presented and bypassed"
However, I'm not sure NEGOEX actually works. The NEGOEX IETF standard also looks abandoned, with the draft RFC in the expired state, so I doubt it will really be supported by Java libraries.
The description of the URLConnection caching API states as the last sentence:
There is no default implementation of URLConnection caching in the Java 2 Standard Edition. However, Java Plugin and Java WebStart do provide one out of the box.
Where can I find more information about the Web Start ResponseCache?
Which versions of Web Start on which platforms activate caching?
In which cases is it active? Only HTTP GET?
Can it be configured?
Is the source code available?
Background:
Case 1
With the following (Groovy) code
def url = new URL('http://repo1.maven.org/maven2/')
def connection = url.openConnection()
def result = connection.inputStream.text
I would expect the server to be contacted every time the code is executed. But when executed in
Java Web Start 10.9.2.05
JRE version 1.7.0_09-b05, Java HotSpot(TM) Client VM
the behavior is different. The first time the code is executed, the server is contacted. All subsequent executions of the code don't involve any communication with the server (traced using Wireshark).
But it gets even stranger. After a restart of the Web Start app, the first time the code is executed the URL http://repo1.maven.org/maven2/.pack.gz is requested, resulting in a 404. Only then is the original URL requested, resulting in a 304 NOT MODIFIED. All subsequent executions don't involve any communication with the server.
I think the approach of transparently enhancing URLConnection with caching capabilities is nice and helps improve the performance of client applications. But since the server in this case defined neither an Expires header nor a Cache-Control header, I think the code above should always ask the server and not silently ignore my request.
Case 2
The following code does not work when executed with Web Start 10.1.1.255 (installed by some early beta version of Java 7, though I don't know which one):
import java.io.InputStream;
import java.net.URL;
import java.net.URLConnection;
import java.util.zip.GZIPInputStream;

URL url = new URL("http://repo1.maven.org/maven2/");
URLConnection connection = url.openConnection();
connection.setRequestProperty("Accept-Encoding", "gzip");
connection.connect();
InputStream is = connection.getInputStream();
if ("gzip".equalsIgnoreCase(connection.getContentEncoding()))
{
    is = new GZIPInputStream(is);
}
is.close();
With Java Web Start 10.1.1.255, starting with the second execution, I got a
java.io.IOException: Not in GZIP format
at java.util.zip.GZIPInputStream.readHeader(Unknown Source)
at java.util.zip.GZIPInputStream.<init>(Unknown Source)
at java.util.zip.GZIPInputStream.<init>(Unknown Source)
With both Java Web Start 1.6.0_24 and now Java Web Start 10.2.1.255 I am not able to reproduce the problem.
With Wireshark I saw that in the case where I got the error, the HTTP header contained an If-Modified-Since entry, and the return code therefore was 304. In the other cases there was no If-Modified-Since. Therefore I think that caching is not active in the stable versions of Web Start, despite the last sentence quoted from the link above.
It seems that the cache in the beta version does aggressive tuning of HTTP GET requests: it uses If-Modified-Since and automatically tries to use gzip encoding, even if the client code does not set this header. But when the cache is hit, the returned stream is not gzipped, although getContentEncoding returns "gzip".
Since the caching seems not to be active in the stable version of Web Start on my machine, I can no longer verify whether the bug is still in the code, and therefore hesitate to file a bug report.
The only information I have found so far is at Java Rich Internet Applications Enhancements in JDK 7
Caching enabled by default: Caching of network content for application code running in Web Start mode is now enabled by default. This allows application improved performance and consistency with applet execution mode. To ensure the latest copy of content is used, the application can use URLConnection.setUseCaches(false) or request header Cache-Control values no-cache/no-store.
[...]
Improvements for handling content with gzip encoding: The deployment cache will keep application content in compressed form and return it to the application as-is with gzip content-encoding in the HTTP header. This makes behavior more consistent across different execution modes (first launch versus subsequent launch, cache enabled versus cache disabled). See 6575586 for more details.
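Following that advice, here is a minimal sketch of opting out of the cache for a single request; this is the plain java.net API, nothing Web-Start-specific is assumed:

import java.io.InputStream;
import java.net.URL;
import java.net.URLConnection;

public class NoCacheFetch {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://repo1.maven.org/maven2/");
        URLConnection connection = url.openConnection();
        connection.setUseCaches(false);                             // bypass any installed ResponseCache
        connection.setRequestProperty("Cache-Control", "no-cache"); // and ask intermediaries to revalidate
        try (InputStream is = connection.getInputStream()) {
            System.out.println("first byte: " + is.read());         // response comes from the network
        }
    }
}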
I modified your code. Hope it works for you.
import java.io.InputStream;
import java.net.URL;
import java.net.URLConnection;
import java.util.zip.GZIPInputStream;

URL url = new URL("http://repo1.maven.org/maven2/");
URLConnection connection = url.openConnection();
// "ISO-8859-1" is a charset, not a real content-coding, so the server
// will not gzip the response and the GZIPInputStream branch is never taken
connection.setRequestProperty("Accept-Encoding", "ISO-8859-1");
connection.connect();
InputStream is = connection.getInputStream();
if ("gzip".equalsIgnoreCase(connection.getContentEncoding()))
{
    is = new GZIPInputStream(is);
}
is.close();
The cache appears to be implemented by com.sun.deploy.cache.DeployCacheHandler, which lives in deploy.jar. I can't find the source in any official repositories; that link is to some sort of grey-market copy.
I can't, at a glance, find any indications that it is disabled (or enabled!) on any particular platforms. This cache handler has been present since at least Java 6.
It only caches GET requests. A comment in the isResourceCacheable method explains:
// do not cache resource if:
// 1. cache disabled
// 2. useCaches is set to false and resource is non jar/zip file
// 3. connection is not a GET request
// 4. cache-control header is set to no-store
// 5. lastModified and expiration not set
// 6. resource is a partial body resource
I don't see any way to directly configure the cache.
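The only generic knob I can see is the standard java.net API itself: the URLConnection caching mechanism works through a JVM-wide java.net.ResponseCache, which Web Start presumably registers at startup. A sketch of inspecting and uninstalling it (untested against Web Start, and it affects the whole JVM):

import java.net.ResponseCache;

public class DisableDeployCache {
    public static void main(String[] args) {
        // Under Web Start this should report something like com.sun.deploy.cache.DeployCacheHandler
        System.out.println("installed cache: " + ResponseCache.getDefault());
        // With no default cache, URLConnection always goes to the network
        ResponseCache.setDefault(null);
    }
}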