The third-party tool we use for security testing is reporting a Slow HTTP POST vulnerability on Tomcat 8. We have a simple Spring controller and JSP in the application.
The existing Tomcat connector config is below:
<Connector port="8643" protocol="HTTP/1.1" SSLEnabled="true"
maxThreads="150" scheme="https" secure="true" compression="on"
clientAuth="false" sslProtocol="TLS" maxPostSize="20480"
maxSwallowSize="20480" maxHeaderCount="25" maxParameterCount="100"/>
Note that we don't have Apache or nginx in front of Tomcat. Please suggest configs that we can apply directly on Tomcat.
An example of a slow HTTP attack is Slowloris.
To mitigate it with Tomcat, the solution is to use the NIO Connector, as explained in this tutorial.
What is unclear about your problem is that Tomcat 8 already uses the NIO connector by default, which matches your configuration:
The default value is HTTP/1.1 which uses an auto-switching mechanism
to select either a non blocking Java NIO based connector or an
APR/native based connector.
Maybe you should set some other Connector parameters to specifically limit POST abuse. I suggest (a combined example follows this list):
maxPostSize="1048576" (1 MByte)
connectionTimeout="10000" (10 seconds between the connection and the URI request)
disableUploadTimeout="false" (activate the POST maximum time allowed)
connectionUploadTimeout="20000" (maximum POST of 20 seconds)
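To illustrate, here is a sketch of your connector with these suggestions applied (your other attributes are kept as-is; the values are the ones suggested above and should be adjusted to your traffic):
<Connector port="8643" protocol="HTTP/1.1" SSLEnabled="true"
           maxThreads="150" scheme="https" secure="true" compression="on"
           clientAuth="false" sslProtocol="TLS"
           maxPostSize="1048576"
           connectionTimeout="10000"
           disableUploadTimeout="false"
           connectionUploadTimeout="20000"
           maxSwallowSize="20480" maxHeaderCount="25" maxParameterCount="100"/>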
Another option is to limit the number of headers (the default being 100), but this can have side effects for people using smartphones (which are known to send many headers):
maxHeaderCount="25"
But it depends on whether your traffic comes from the Internet or from a corporate intranet with known users. In the latter case you could adjust the settings to be more permissive.
Edit 1: hardening with MultipartConfig
As stated in some other posts, maxPostSize might not work for limiting uploads. When using the Java 7 built-in upload support (Servlet 3.0 multipart), it is possible to configure limits with an annotation on the servlet, or by configuration (see the sketch below). It's not a pure Tomcat configuration as you asked, but it is necessary to know about it and to talk with the DEV team, since security must be taken into account from the early stages of development.
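A minimal sketch of the declarative form, assuming a hypothetical upload servlet (the names and the 1 MB limits are made up for illustration); the same limits can also be set with the @MultipartConfig annotation on the servlet class:
<servlet>
    <servlet-name>uploadServlet</servlet-name>
    <servlet-class>com.example.UploadServlet</servlet-class>
    <multipart-config>
        <!-- reject any single file or whole request larger than 1 MB -->
        <max-file-size>1048576</max-file-size>
        <max-request-size>1048576</max-request-size>
        <file-size-threshold>0</file-size-threshold>
    </multipart-config>
</servlet>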
Edit 2: disabling chunked Transfer-Encoding
Some slow HTTP POST attacks are based on requests sent with a Transfer-Encoding: chunked header, followed by many (or an endless stream of) chunks. To counter this attack, I suggest configuring a Rewrite Valve.
To achieve this, add the valve to your Host definition in server.xml:
<Valve className="org.apache.catalina.valves.rewrite.RewriteValve" />
Supposing your host name is the default one (localhost), you need to create the file $CATALINA_BASE/conf/Catalina/localhost/rewrite.config with this content:
RewriteCond %{HTTP:Transfer-Encoding} chunked
RewriteRule ^(.*)$ / [F]
If necessary, you can adapt the RewriteRule to reply with something other than the 403 Forbidden produced by the F flag (see the variant below). This is pure Tomcat config and stays flexible.
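For instance, assuming you would rather answer with 410 Gone, here is a sketch using the G flag (also supported by the Tomcat Rewrite Valve) instead of F:
RewriteCond %{HTTP:Transfer-Encoding} chunked
RewriteRule ^(.*)$ / [G]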
Related
I have a Java app deployed in Tomcat 6. The app sends messages to another service via a socket, and it needs to use ONLY the TLSv1.2 protocol.
In my tomcat6.conf file I put this configuration:
JAVA_HOME=/usr/lib/jvm/jre1.7.0_75
JAVA_OPTS="${JAVA_OPTS} -Djavax.sql.DataSource.Factory=org.apache.commons.dbcp.BasicDataSourceFactory -Dhttps.protocols=TLSv1.2"
But it still uses the older TLS version.
Is there any configuration to apply in Java or Tomcat to force the use of TLSv1.2?
Edit 1:
The answer provided by @Peter Walser is good and could work. The problem is that I can't modify the code because it is a JAR provided by a third party; I can only configure the environment, not the code.
The https.protocols system property is only considered for HttpsURLConnection and URL.openStream(), as stated in Diagnosing TLS, SSL, and HTTPS
Controls the protocol version used by Java clients which obtain https connections through use of the HttpsURLConnection class or via URL.openStream() operations. ...
For non-HTTP protocols, this can be controlled through the SocketFactory's SSLContext.
You can configure the SSLSocket as follows:
SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
SSLSocket socket = (SSLSocket) factory.createSocket(host, port);
socket.setEnabledProtocols(new String[] {"TLSv1.2"});
When working with REST clients, most of them support configuring the protocols via the SSLContext. Example (JAX-RS client):
SSLContext sslContext = SSLContext.getInstance("TLSv1.2");
sslContext.init(null, null, null); // must be initialized; null managers fall back to the JVM defaults

Client client = ClientBuilder.newBuilder()
    .sslContext(sslContext)
    // more settings, such as key/truststore, timeouts, logging
    .build();
If you are trying to force the server to use TLSv1.2, the following link may provide what you need.
The Apache Tomcat 5.5 Servlet/JSP Container - SSL Configuration HOW-TO
As the doc specifies, edit the Tomcat configuration file as below.
The implementation of SSL used by Tomcat is chosen automatically unless it is overridden as described below. If the installation uses APR - i.e. you have installed the Tomcat native library - then it will use the APR SSL implementation, otherwise it will use the Java JSSE implementation.
To avoid auto configuration you can define which implementation to use by specifying a classname in the protocol attribute of the Connector.
To define a Java (JSSE) connector, regardless of whether the APR library is loaded or not, do:
<Connector protocol="org.apache.coyote.http11.Http11Protocol" port="8443" .../>
Configure the Connector in the $CATALINA_BASE/conf/server.xml file, where $CATALINA_BASE represents the base directory for the Tomcat 6 instance. An example <Connector> element for an SSL connector is included in the default server.xml file installed with Tomcat. For JSSE, it should look something like this:
<!--
<Connector
port="8443" maxThreads="200"
scheme="https" secure="true" SSLEnabled="true"
SSLCertificateFile="/usr/local/ssl/server.crt"
SSLCertificateKeyFile="/usr/local/ssl/server.pem"
clientAuth="optional" SSLProtocol="TLSv1"/>
-->
You will note that the example SSL connector elements are commented out by default. You can either remove the comment tags from around the example SSL connector you wish to use or add a new Connector element of your own. In either case, you will need to configure the SSL Connector for your requirements and environment.
The port attribute (default value is 8443) is the TCP/IP port number on which Tomcat will listen for secure connections. You can change this to any port number you wish (such as to the default port for https communications, which is 443). However, special setup (outside the scope of this document) is necessary to run Tomcat on port numbers lower than 1024 on many operating systems.
After completing these configuration changes, you must restart Tomcat as you normally do, and you should be in business. You should be able to access any web application supported by Tomcat via SSL.
Try changing the SSLProtocol attribute in the <Connector> element to SSLProtocol="TLSv1.2":
<Connector
port="8443" maxThreads="200"
scheme="https" secure="true" SSLEnabled="true"
SSLCertificateFile="/usr/local/ssl/server.crt"
SSLCertificateKeyFile="/usr/local/ssl/server.pem"
clientAuth="optional" SSLProtocol="TLSv1.2"/>
I want to apply compression to my responses at the Tomcat level; however, it does not work. It seems like an easy setup, but somehow I am unable to get it working. Here is my connector config in server.xml:
<Connector port="80" protocol="org.apache.coyote.http11.Http11Nio2Protocol"
maxThreads="500"
processorCache="500"
maxConnections="10000"
acceptCount="5000"
URIEncoding="UTF-8"
useSendfile="false"
compression="force"
compressionMinSize="4"
noCompressionUserAgents="gozilla, traviata"
compressableMimeType="text/html,text/xml,text/plain,text/css,text/javascript"/>
I disabled the antivirus on my local machine (client side), and the requests have the Accept-Encoding: gzip header. Thank you in advance.
Too late but:
Note: There is a tradeoff between using compression (saving your bandwidth) and using the sendfile feature (saving your CPU cycles). If the connector supports the sendfile feature, e.g. the NIO2 connector, using sendfile will take precedence over compression. The symptoms will be that static files greater than 48 KB will be sent uncompressed. You can turn off sendfile by setting the useSendfile attribute of the protocol, as documented below, or change the sendfile usage threshold in the configuration of the DefaultServlet in the default conf/web.xml or in the web.xml of your web application.
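If you prefer to keep sendfile enabled on the connector, the other option mentioned in that note is to change the sendfile threshold of the DefaultServlet. A sketch of that override (the init-param is added to the default servlet declaration in conf/web.xml, or to an overriding declaration in your application's web.xml; sendfileSize is in KB, and a negative value disables sendfile entirely):
<servlet>
    <servlet-name>default</servlet-name>
    <servlet-class>org.apache.catalina.servlets.DefaultServlet</servlet-class>
    <init-param>
        <!-- negative value: never use sendfile, so compression can apply to static files too -->
        <param-name>sendfileSize</param-name>
        <param-value>-1</param-value>
    </init-param>
    <load-on-startup>1</load-on-startup>
</servlet>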
I have a secured Apache reverse proxy configured in front of my WebSphere 8 application server. I have set the generic JVM arguments -Dhttps.proxyHost and -Dhttps.proxyPort, but requests issued by response.sendRedirect are not directed to the proxy server; they are directed to the default port 9080.
How do I solve this issue?
I have solved this issue on Tomcat & JBoss by modifying my connector configuration as follows:
<connector name="http" protocol="HTTP/1.1" socket-binding="http" scheme="https" proxy-name="192.168.1.1" proxy-port="443" secure="true"/>
How do I solve this for WebSphere?
I assume that you are using something like the below:
response.sendRedirect(request.getContextPath() +
"/my/main.jsp");
Here, request.getContextPath() gives the proxied server info.
As a quick fix, I resolved it by using the proxy server values from a properties file:
response.sendRedirect("get proxy server name from prop file" +
"/my/main.jsp");
Solved this problem by following the steps below.
Add the following in the Apache web server's virtual host block. What you actually need is to forward along the protocol that was used to access the server:
<VirtualHost *:443>
    RequestHeader set X-Forwarded-Proto "https"
    ….
</VirtualHost>
For more explanation, refer to this site:
https://www.nczonline.net/blog/2012/08/08/setting-up-apache-as-a-ssl-front-end-for-play/
The following properties need to be added to the WebSphere web container properties through the admin console.
Go to Application servers > server1 > Web container > Custom properties.
Add the following properties:
httpsIndicatorHeader = X-Forwarded-Proto (the request header value set in the web server; in our case it is https)
com.ibm.ws.webcontainer.extractHostHeaderPort = true (to honor the request port number)
trusthostheaderport = true (to honor the request port number)
Referred to the sites below for these settings:
http://www-01.ibm.com/support/docview.wss?uid=swg21569667
http://129.33.205.81/support/knowledgecenter/SSEQTP_8.5.5/com.ibm.websphere.base.iseries.doc/ae/rweb_custom_props.html
In our case (Websphere Liberty 21.0.0.9), we simply added a couple of directives in the corresponding Apache virtual host configuration:
RequestHeader set X-Forwarded-Proto "https"
ProxyPreserveHost On
These directives are only valid from Apache 2.3.3 onwards.
Both are mentioned in the article linked by @Darshan Shah.
I have a web app hosted on two Tomcat servers, identical WARs and server.xml.
I don't have access to change any of the Apache settings. I know that load balancing works, as we have tested by shutting down one server and not the other, etc. Based on the Tomcat logs I can see both servers being used.
When using the default Tomcat HTTP port 8080 (in my case changed to 8083, a non-issue), sessions are retained fine.
However, I made a change to use SSL on port 443, and now sessions are invalidated anywhere from 30 seconds to 5 minutes after creation. I assume this has something to do with session replication, as I have not made any changes to server.xml or web.xml besides pointing to the keystore:
<Connector port="443" protocol="HTTP/1.1" SSLEnabled="true" secure="true" scheme="https" sslProtocol="TLS"
maxThreads="150"
clientAuth="false" keystoreFile="ssl/keystore.jks" keystorePass="123" />
I'm not exactly sure what else to copy here, as the only change to server.xml has literally been these few lines. I assume the added security is for some reason invalidating the session.
I currently have a test server serving the application over both HTTP and HTTPS. Even when accessing the app at the same time, HTTP goes through fine and retains the session, while HTTPS refuses to hold on to the session for longer than 5 minutes.
For what it's worth, the SSL cert is installed on both Tomcat servers rather than at the load balancer; according to https://serverfault.com/questions/248139/apache-ssl-losing-session-over-load-balancer that may be the issue. I've only coded the application and changed some of the XML configuration, so I'm not sure this is necessarily an issue I can fix.
I need a step-by-step overview for compression on Tomcat 7 ... I've been at this for days. I'm particularly interested in compressing text/xml in the response from a servlet, but would also like to test compressing other content types.
From my googling and reading, it seems like I only need to add a few lines to configure the HTTP connector in server.xml (see below). But I'm checking on sites like webpagetest.org and not seeing any results (not even gzip in the response header). What more do I need? Filters? Use of gzip methods within my app? Specifying the servlet(s) for output compression in web.xml? I'll be more than happy to continue getting the details right; for now I'd just like to be sure I know what all the necessary parts are.
<Connector port="80" protocol="HTTP/1.1"
connectionTimeout="20000"
redirectPort="8443"
compression="on"
compressionMinSize="2048"
noCompressionUserAgents="gozilla, traviata"
compressableMimeType="text/html,text/xml,application/xml,text/javascript,text/css" />
UPDATE. SOLVED ... see comments under accepted answer below.
Did you restart Tomcat after editing the server.xml file?
Did you check the logs (logs/catalina.out) to see if there is any error on server startup (e.g. a typo in the config files)?
compression="on"
should work.
Maybe webpagetest.org doesn't support gzip compression. Why don't you use the Chrome Developer Tools (F12; you can see headers in the Network tab) or the Firefox Web Console (Ctrl+Shift+K)?