I have a Java webapp running on Tomcat behind an Apache proxy layer. I'm looking to make all cookies set from the app carry the HttpOnly flag. The problem is that Tomcat is responsible for setting the flag on the application side, and its default (in Servlet API 2.5) is false. I was hoping I could set this flag for all cookies on the fly using Apache.
I've been trying different combinations, and the closest I have gotten is appending HttpOnly to only the last cookie, which is of course wrong:
Header append Set-Cookie "; HttpOnly"
I have no way of knowing what cookies/values are going to be passed from the app. Is this even possible?
The following mod_headers rewrite has the benefit that it won't duplicate HttpOnly if it's already there, if that sort of thing matters to you:
Header edit Set-Cookie "(?i)^((?:(?!;\s?HttpOnly).)+)$" "$1; HttpOnly"
See:
Where I originally found the above regex
An explanation of why all those parentheses and the negative lookahead assertion are needed; search for "Finding Lines Containing or Not Containing Certain Words"
A post where I found a small improvement to the regex (search for "Header edit Set-Cookie")
Try the following mod_headers directive.
Header edit Set-Cookie ^(.*)$ $1;HttpOnly
I have an Apache Tomcat server that reads requests from my webapp.
In my webapp I have a form that, when submitted, posts a large number of POST parameters, around 8,000.
However, when I debug the entry point where the HttpServletRequest arrives, I always receive exactly 6841 parameters. The form inputs are generated by iterating over a collection of elements, so the dropped ones have exactly the same shape as the ones that come through.
I can't show code for NDA reasons.
I ruled out the frontend as the issue because with a sniffer I was able to see that the complete POST parameter list is sent.
I believe I'm on the right track: I think Tomcat is dropping the extra POST parameters. The POST size limit is well beyond the size of the request, and we don't have a parameter count limit configured in server.xml (it defaults to 10,000, and I don't hit that amount).
All the answers I have found are about parameters not being sent at all or errors being thrown; in this case they are simply ignored by Tomcat.
Increasing the allowed number of POST parameters (not the POST size) to 20,000 fixed the issue in my case. This was done in the Tomcat server.xml configuration using the maxParameterCount attribute on the Connector:
The maxParameterCount attribute controls the maximum number of parameter and value pairs (GET plus POST) that can be parsed and stored in the request. Excessive parameters are ignored. If you want to reject such requests, configure a FailedRequestFilter.
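For reference, a minimal sketch of where the attribute goes; the other Connector attributes here are illustrative defaults, not part of the fix:
<!-- server.xml: allow up to 20,000 parameter/value pairs per request -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443"
           maxParameterCount="20000" />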
I am trying to create a servlet request filter which filters any incoming request based on a whitelist of characters.
I want to accept only those characters which match the whitelist pattern, so as to keep an attacker from slipping in malicious code in the form of a script or a modified URL.
Does anyone know which whitelist characters should be used for filtering an HTTP request string?
Any help would be appreciated
Thanks in Advance
Implement a pattern-matching mechanism that checks your URL against the whitelist characters using a regex.
Follow this link.
Or you can try:
if (inputUrl.contains(whiteList)) {
    // your code goes here
}
Or, if you need to know where it occurs, you can use indexOf:
int index = inputUrl.indexOf(whiteList);
if (index != -1) { // -1 means "not found"
    ...
}
Thanks,
~Chandan
The problem is that "malicious" is a very broad term. You should have a clear idea of which types of attacks you are trying to protect against, and then take measures to prevent each of them.
You cannot specify a general set of characters that need to be filtered out; you need to know the domain in which your URL input will be used. Generally, the dangerous part is not the URL itself but the URL parameters, which are provided by your users and then interpreted by your application. Depending on how your application uses this input, you need to take specific precautions. For example:
A URL param is used to determine the target of a redirect. An attacker can use this to send a victim to a malicious site, e.g. one that masquerades as your site and steals users' credentials. In that case you should construct a whitelist of the destinations your application expects and forbid all others. See OWASP Top Ten: Unvalidated Redirects and Forwards.
You save data from a URL param to the DB. You should prevent SQL injection by using parameterized queries, as sketched after this list. See the OWASP SQL Injection Prevention Cheat Sheet.
URL param data will be displayed as HTML. You should sanitize the HTML with a proven sanitizer such as the OWASP Java HTML Sanitizer or AntiSamy to prevent cross-site scripting (XSS).
And so on...
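To make the parameterized-query point concrete, here is a minimal sketch; the table, column, and parameter names are invented for illustration, request is the incoming HttpServletRequest, and connection is assumed to be an existing java.sql.Connection:
// "id" arrives from the URL; binding it as a parameter means it is
// treated strictly as data and never parsed as part of the SQL text.
String id = request.getParameter("id");
String sql = "SELECT name, email FROM users WHERE id = ?";
try (PreparedStatement ps = connection.prepareStatement(sql)) {
    ps.setString(1, id);
    try (ResultSet rs = ps.executeQuery()) {
        while (rs.next()) {
            // use rs.getString("name"), rs.getString("email"), ...
        }
    }
}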
The point is, there is no silver bullet to protect you from every malicious attack vector, and certainly not whitelisting certain characters in a servlet filter. You should know where the potentially malicious data is used and handle it with that specific usage in mind, because different targets have different vulnerabilities and require different protective measures.
A good starting point for a high-level overview of security issues and the measures that protect against them is the OWASP Top Ten. Beyond that, I recommend the more detailed guides and resources provided by OWASP.
I'm using Tomcat 6.0.20 with HttpServlet.
My servlet code is as follows:
response.setContentType("application/xml; charset=utf-8");
but each time I get the content type back as:
application/xml;charset=utf-8
which is missing the space after the semicolon.
How can I stop the space from being trimmed?
Is there any way to do so (e.g., by modifying servlet-api.jar)?
It could be happening in Tomcat, in a reverse proxy in front of Tomcat, in a proxy, a firewall or somewhere in the client-side stack. It is probably impossible to stop whatever is doing this.
But it should not matter. The HTTP standard says that there is optional whitespace after the semicolon. Your client-side code should work whether the space is present or not.
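For example, a parse along these lines (a rough sketch, assuming the header is read from a URLConnection on the client) works whether or not the space is present:
String contentType = connection.getHeaderField("Content-Type");
// e.g. "application/xml;charset=utf-8" or "application/xml; charset=utf-8"
String[] parts = contentType.split(";");
String mediaType = parts[0].trim(); // "application/xml" either way
String charset = null;
for (int i = 1; i < parts.length; i++) {
    String param = parts[i].trim(); // trim() absorbs the optional space
    if (param.toLowerCase().startsWith("charset=")) {
        charset = param.substring("charset=".length());
    }
}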
First off, my Java is beyond rusty and I've never done JSPs or servlets, but I'm trying to help someone else solve a problem.
A form rendered by JavaScript is posting back to a JSP.
Some of the fields in this form are over 100KB in size.
However, when the form field is retrieved on the JSP side, its value is truncated to 100KB.
Now, I know there is a similar problem with ASP's Request.Form, which can be worked around by using Request.BinaryRead.
Is there an equivalent in Java?
Or alternatively is there a setting in Websphere/Apache/IBM HTTP Server that gets around the same problem?
Since the posted request must be kept in memory by the servlet container to provide the functionality required by the ServletRequest API, most servlet containers enforce a configurable size limit to prevent DoS attacks; otherwise a small number of bogus clients could make the server run out of memory.
It's a little strange that WebSphere silently truncates the request instead of failing properly, but if this is the cause of your problem, you may find the relevant configuration options in the WebSphere documentation.
We have resolved the issue.
Nothing to do with web server settings, as it turned out, and nothing was being truncated in the POST.
Prior to posting, the form field was being split by JavaScript into chunks of 102,399 bytes, and each chunk was added to the form field as a separate value, so the field ended up with an array of values.
ASP's Request.Form() appears to automatically concatenate these values to reproduce the single giant string, but Java's getParameter() does not.
Using getParameterValues() and rebuilding the string from the returned values, however, did the trick.
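In code, the fix looked roughly like this (the field name is made up):
// Each 102,399-byte chunk arrives as a separate value of the same parameter.
String[] chunks = request.getParameterValues("bigField"); // hypothetical name
StringBuilder sb = new StringBuilder();
if (chunks != null) {
    for (String chunk : chunks) {
        sb.append(chunk);
    }
}
String fullValue = sb.toString(); // the original >100KB string, reassembled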
You can use getInputStream() (raw bytes) or getReader() (decoded character data) to read the data from the request. Note how this interacts with reading the parameters: once you consume the body yourself, the container can no longer parse it for getParameter(). If you don't want to do this in a servlet, have a look at using a Filter to wrap the request.
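A minimal sketch of the raw-bytes route, inside a servlet's doPost (again, consuming the stream yourself means the container will not populate the form parameters):
// Read the entire raw POST body before anything calls getParameter().
InputStream in = request.getInputStream();
ByteArrayOutputStream buf = new ByteArrayOutputStream();
byte[] block = new byte[8192];
int n;
while ((n = in.read(block)) != -1) {
    buf.write(block, 0, n);
}
byte[] rawBody = buf.toByteArray(); // decode according to the form's encoding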
I would expect WebSphere to reject the request rather than arbitrarily truncate data. I suspect a bug elsewhere.
I am debugging some code in the Selenium RC proxy server. It seems the culprit is the HttpURLConnection object, whose interface for getting at the HTTP headers does not cope with duplicate header names, such as:
Set-Cookie: foo=foo; Path=/
Set-Cookie: bar=bar; Path=/
Getting at the headers through HttpURLConnection (using getHeaderField(int n) and getHeaderFieldKey(int n)) seems to cause my second cookie to be lost. My questions are:
Is it true that HttpURLConnection itself can't cope with this, and
if so, is there a workaround?
My recommended workaround is not to use HttpURLConnection at all, which is crude and unintuitive, but to use commons-httpclient instead.
http://hc.apache.org/httpclient-3.x/
Without actually having tried it (I can't remember having dealt with this topic myself), there's also getHeaderFields(), inherited from URLConnection. Does this do what you need?
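For what it's worth, a sketch of what that would look like; getHeaderFields() returns a map from each header name to the list of all its values, so duplicate Set-Cookie headers should survive:
URLConnection conn = new URL("http://example.com/").openConnection();
Map<String, List<String>> headers = conn.getHeaderFields();
List<String> setCookies = headers.get("Set-Cookie");
if (setCookies != null) {
    for (String cookie : setCookies) {
        System.out.println(cookie); // both foo=foo and bar=bar show up
    }
}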
OK, I found the problem, and the answer to the original question. Basically, the cookie implementation I used (Python's default cookie library) used \r\n to delimit the different Set-Cookie headers (as opposed to \n). This confused HttpURLConnection and caused it to stop at the first occurrence of that delimiter (my guess is that it stops at what it takes to be the first empty line). So the answer to the first question is: yes, it can cope with duplicate header names, but it is buggy in another way. Fixing the Python library is a workable workaround for now, but it won't work long term because we don't own that library. I'm sure using the httpclient library is a sensible way to go, but I'm hoping for a solution that requires less work, so I don't know exactly what to do there yet.