I have a website (Liferay Portal 6.1 on Tomcat 7.0) that serves URLs over both HTTP and HTTPS, like the ones below.
1. https://stackoverflow.com/questions/ask
2. https://stackoverflow.com/profile
I follow the steps below and get a Forbidden error:
I fill in some form details on the 2nd URL.
Before submitting that form, I open the 1st URL in a new tab.
Then, when I come back to the 1st URL and submit, I get a Forbidden error.
I checked the JSESSIONID in both tabs and the IDs are the same. What could the issue be? Any ideas?
It's not worth investing time in making http/https mixed mode work (in my opinion). Bite the bullet and just go https always. Even if you'd fix this issue now, chances are that you'll run into more issues later, eating up more of your time. And when you run into other issues, they're highly likely to be security sensitive.
Do yourself a favor - unconditionally redirect ALL http traffic to https. It's 2016; there's nothing unusual about this any more.
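If there's an Apache httpd in front (an assumption; any front end can do this), a minimal sketch of such a blanket redirect, with www.example.com as a placeholder:
<VirtualHost *:80>
    ServerName www.example.com
    # Send every plain-http request to its https equivalent
    Redirect permanent / https://www.example.com/
</VirtualHost>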
Edit after your comment: Do this especially if it's an old system. (By the way, this was obvious when you mentioned Liferay 6.1; assuming you're using CE, it has long been out of updates.) Configure the use of https anywhere you can easily get your hands on, and unconditionally add the HSTS header to take care of the rest. No need to touch any ancient logic. E.g. set
web.server.protocol=https
in your portal-ext.properties. Add the HSTS header unconditionally in your Apache httpd (assuming you have Apache httpd in front; otherwise use this Liferay app from yours truly).
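A minimal sketch of that header in Apache httpd, assuming mod_headers is enabled (the max-age value is just a common one-year choice):
<IfModule mod_headers.c>
    # HSTS: tell browsers to use https for this host from now on
    Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains"
</IfModule>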
We are trying to use prerender.io with our application konnectnow.com, which is developed in AngularJS, Spring and Hibernate and hosted on an Amazon server.
Here are the steps I followed:
Signed up at prerender.io and got a token: cFeRZcsv3JnAftreuhMO
Checked the documentation, understood that I need to install middleware, and decided to use the Java one.
Added the dependency to pom.xml and the filter to web.xml as described at https://github.com/greengerong/prerender-java (roughly as sketched at the end of this question).
Added #! to the URLs on all the pages.
Restarted the Tomcat server.
Logged into prerender.io and found that nothing is getting crawled.
For testing purposes I changed the URL konnectnow.com/#!/planpage to konnectnow.com/?_escaped_fragment_=/planpage
Nothing comes up; I got a "page isn't working" error.
Checked the Crawl Stats at prerender.io and found:
Status Code: 505, Cache Hit: Miss, Response Time(sec): 1.51sec, URL:
http://localhost:8080/#!/planpage
Not sure why it uses localhost.
Can someone help me make this work?
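For reference, the filter registration in my web.xml follows the prerender-java README and looks roughly like this (the token value here is a placeholder):
<filter>
    <filter-name>prerender</filter-name>
    <filter-class>com.github.greengerong.PreRenderSEOFilter</filter-class>
    <init-param>
        <param-name>prerenderToken</param-name>
        <param-value>YOUR_PRERENDER_TOKEN</param-value>
    </init-param>
</filter>
<filter-mapping>
    <filter-name>prerender</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>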
We recommend using HTML5 push state instead of the #! in your URLs if possible. HTML5 push state is better since nothing after a # is sent to the server, which can lead to issues with the crawlers that are detected by their user agent (Facebook, Twitter, etc.).
You should set the forwardedURLHeader in order to have the Prerender Java middleware use a different host for your website instead of your proxy URL.
https://github.com/greengerong/prerender-java#forwardedurlheader
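As a sketch, assuming the web.xml filter registration shown in the question, that would be one more init-param; X-Forwarded-URL is an example header name that your proxy would have to be configured to send:
<init-param>
    <param-name>forwardedURLHeader</param-name>
    <param-value>X-Forwarded-URL</param-value>
</init-param>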
I also see that you posted your prerender token publicly so we regenerated your token to prevent someone else from using it. Please find your new token when you log into your Prerender.io account. I've also emailed you there.
This is somewhat of a speculative question in that the answer may not be apparent in the info I have available, but I am hoping that someone with sufficient experience will recognize a likely answer based on common practices for corporate proxies.
I work (not as a software developer) behind a corporate proxy. In my spare time I was messing around with a Java program I'm developing. This program needs to make a few very simple HTTP GET requests, and I'm using Apache HttpClient for that. I was concerned at first about whether or not I'd make it through the proxy server. In our web browsers, the proxy server is simply entered into the network settings... no authentication needed. So, I added the following to my Java program:
myClient.getParams().setParameter(ConnRoutePNames.DEFAULT_PROXY, MY_PROXY);
Sure enough, it worked! However, I had another concern. The HTTP requests coming from my program probably had some strange User-Agent specified (I've since confirmed this is the case), and I did not want them to ever trigger any sort of suspicion in automated or manual packet inspections. So I said to myself, "why not just set the User-Agent header to be the same as the browser on this machine?"
myClient.getParams().setParameter(CoreProtocolPNames.USER_AGENT, BROWSER_AGENT);
Here is where it gets weird. If the BROWSER_AGENT string above is set to exactly the same value as the corporate-supplied browser on my machine (either IE or FF), I get an "authentication failed, missing credentials" type error message back from the corporate proxy server. But if I set the User-Agent header to something generic, like, say, Mozilla 5.0, or even a totally bogus string, or even an empty string, it all works fine! The parts that confuse me are:
When User-Agent is set to the same as my browser (a long complex string), I "fail authentication" somehow, which makes no sense since in the real browser I provide no authentication information (unless it comes from some pre-installed certificate maybe?)
If the corporation requires authentication for any requests sent to the proxy server on port 80, then how come they let random User-Agent strings get through? Oversight? Some other reason I can't comprehend?
Hopefully this question is not too speculative to be deemed constructive. I'd love to hear from people with experience in this area. Thanks.
By default, HttpClient identifies itself in the User-Agent header. As you have seen, you can override this with any string you want.
It looks like your proxy server is configured to automatically add user credentials based on the browser type; however, due to some exception that was found, your admin added an exception rule, i.e. when the user agent is not known, just let it through. Personally, I think this is a very bad security policy, since, as you found out, any program can go through your proxy without authentication just by using a bogus user agent.
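For completeness, a minimal sketch of the setup from the question, using the same legacy HttpClient 4.x params API; the proxy host, port, agent string and target URL are placeholders:
import org.apache.http.HttpHost;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.conn.params.ConnRoutePNames;
import org.apache.http.impl.client.DefaultHttpClient;
import org.apache.http.params.CoreProtocolPNames;

public class ProxyGetSketch {
    public static void main(String[] args) throws Exception {
        DefaultHttpClient myClient = new DefaultHttpClient();
        // Route all requests through the corporate proxy (placeholder host/port)
        myClient.getParams().setParameter(ConnRoutePNames.DEFAULT_PROXY,
                new HttpHost("proxy.example.com", 8080));
        // Override HttpClient's default "Apache-HttpClient/4.x" identification
        myClient.getParams().setParameter(CoreProtocolPNames.USER_AGENT,
                "MyApp/1.0");
        myClient.execute(new HttpGet("http://example.com/"))
                .getEntity().writeTo(System.out);
    }
}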
We are facing a peculiar issue at the moment and we have no clue what is causing this.
We have a web-service hosted on serverA.
When this web-service is invoked from serverB (using the command, curl http://serverA:8008/service/getId), we get the required response. (the web service returns an Id which is an integer).
When the same web-service is invoked from serverC, we get the required response but the digit 2 in the response is getting replaced by _ .
For example, we get 5002 when the web-service is invoked from serverB.
When the same web service is invoked from serverC, we get 500_
We checked with Wireshark on serverA, and the data going out from serverA is the same for both servers.
We have no clue at the moment why this is happening. I would like to add that serverC is in the DMZ while serverB is not.
Any input/help in this regard is highly appreciated.
Gathering the facts:
1. The server doesn't change the response on its own.
2. The web service is giving the same response for the same input.
The only remaining culprit is your firewall. Can you stop it for testing purposes and see if the response comes through as expected? Or:
Try to check the firewall settings and create a hole/exception for the web service.
Thanks everyone for your efforts; the issue is now resolved. It was an incorrect firewall rule that was causing this. I asked our network engineer how a firewall setting can alter an HTTP response body, and the following is the reply I got:
For certain protocols the firewall does deep-level packet inspection, so rather than just checking the port number it actually looks into the payload. This allows it to block malware, malformed packets that might be exploiting a vulnerability, and the like. So that it knows what to inspect, you have to specify in the rule what the traffic is, so you say it's on port 8008 and it's HTTP. The problem was that for some reason this rule had been set to use port 8008, but the traffic type was set to passive-mode FTP rather than HTTP. Once I corrected it to HTTP, it started working.
Try putting serverB in the DMZ too and see what happens.
If it acts the same, it's a network issue.
If not, you might have two different versions of the code on the servers.
This sounds to me like you have special characters in your URL and they cause the overwriting of the port number, but only if the characters are recognized in the character set. Can you use a hex editor to check the URL for special characters (backspace, specifically)?
I can't solve your problem, but look for any transcoders on the path.
Send request from server C to server A.
1) Wireshark at A, to see if it receives the request correctly. A possible transcoder may convert host-less URLs to host-full ones (GET /service/getId to GET http://serverA:8008/service/getId), or may drop the Host header, etc. If you see nothing wrong here, proceed to step 2.
2) Wireshark at C, to see if the response arrives intact. Check whether Content-Type is set correctly. If it is set correctly and the response is still getting manipulated, try adding the header Cache-Control: no-transform; many transcoders respect that (see the sketch after these steps). If this also fails and you can't remove whatever transcoders or viruses may be on the path, go to step 3.
3) Just go https; it is immune to such things.
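A minimal servlet sketch of the no-transform suggestion from step 2 (the class name is a placeholder, not from the question):
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class GetIdServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // State the payload type explicitly...
        resp.setContentType("text/plain;charset=UTF-8");
        // ...and ask intermediaries not to transform the body
        resp.setHeader("Cache-Control", "no-transform");
        resp.getWriter().print(5002); // the example id from the question
    }
}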
This is a feature of Apache, designed to hide parts of the HTTP response.
I did not see a fix immediately, and do not have the time to look right now. I'll try to edit one in later.
If you want to try to find it, here is the link to the documentation: http://xianshield.org/guides/apache2.0guide.html
Use [Ctrl] + [F] to find this statement (without quotes): "Configure and build the Apache Server"
Over the last few months I have been teaching myself web development in Java (servlets and JSP). I am developing a web server, mainly to serve an application; it currently runs on Google App Engine. My concern is that, although I am using SSL connections, sending parameters in the URL (e.g. https://www.xyz.com/server?password=1234&username=uname) may not be secure. Should I use another way, or is it really secure? I don't know whether this URL is delivered as plain text as a whole (with the parameters).
Any help would be appreciated!
Everything is encrypted, including the URL and its parameters. You might still want to avoid putting credentials there, though, because they might be stored in server-side logs and in the browser history.
Your problem goes further than the web server and Google App Engine.
Sending a password through a web form to your server is a very common security issue. See these SO threads:
Is either GET or POST more secure than the other? (in short, POST merely keeps the parameters out of the URL, so this alone is not enough)
Are https URLs encrypted? (describes something similar to what you intend to do)
The complete HTTP request including the request line is encrypted inside SSL.
Example HTTP request for the above URL, which is contained entirely within the SSL tunnel:
GET /server?password=1234&username=uname HTTP/1.1
Host: www.xyz.com
...
It is possible, though, that your application will log the requested URL; as this contains the user's password, that may not be OK.
Well, apart from the issues to do with logging and visibility of URLs (i.e., what happens before and after the secure communication), both GET and POST are equally secure; very little information is exchanged before the encrypted channel is established, not even the first line of the HTTP protocol. But that doesn't mean you should use GET for this.
The issue is that logging in changes the state of the server and should not be repeated without the user being properly notified that this is happening (to prevent surprises with Javascript). The state being changed is the user's session information on the server, because what logging in does is associate a verified identity with that session. Because it is a (significant) change of state, the operation should not be done via GET; while you could technically do it via PUT, POST is better because of the non-idempotency assumptions associated with it (which in turn encourage browsers to pop up a warning dialog).
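A sketch of that advice as a servlet; the parameter names, redirect target and credential check are placeholders:
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class LoginServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // The credentials travel in the POST body (still inside SSL),
        // so they stay out of the URL, browser history and access logs.
        String username = req.getParameter("username");
        String password = req.getParameter("password");
        if (authenticate(username, password)) {
            // Associate the verified identity with the session
            req.getSession().setAttribute("user", username);
            resp.sendRedirect("/home");
        } else {
            resp.sendError(HttpServletResponse.SC_UNAUTHORIZED);
        }
    }

    // Stand-in for a real credential check against your user store
    private boolean authenticate(String username, String password) {
        return false;
    }
}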
Is there anything in the Servlet spec, Tomcat, or Wicket that will allow a webapp running behind mod_proxy to determine the non-proxied URL of the request?
We need to send out emails with links in them. I had been using the following bit of Wicket to construct URLs to specific pages in the app:
String relURL = RequestCycle.get().getRequest().getRelativePathPrefixToWicketHandler();
String absURL = RequestUtils.toAbsolutePath(relURL);
Since the emails don't go back out through the proxy, of course the URLs don't get re-written, and end up looking like http://localhost/....
Right now the best I can do is to hard-code the URLs to our production server, but that's setting us up for some debugging headaches when running on dev/test machines.
Using InetAddress.getLocalHost().getHostName() isn't really a solution, since that's likely to return prod1.mydomain.com or some such, rather than mydomain.com, from which the request likely originated.
As answered for the question Retain original request URL on mod_proxy redirect:
If you're running Apache >= 2.0.31 then you might try to set the ProxyPreserveHost directive as described here. This should pass the original Host header through mod_proxy into your application, and normally the request URL will be rebuilt there (in your servlet container) using the Host header, so the schema location should be built using the host and path info from "before" the proxy.
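A minimal sketch of that front end, assuming Apache httpd proxying to a local Tomcat (the backend address is a placeholder):
ProxyPreserveHost On
# Keep the client's Host header intact while forwarding to Tomcat
ProxyPass        / http://localhost:8080/
ProxyPassReverse / http://localhost:8080/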
Is there anything in the Servlet spec, Tomcat, or Wicket that will allow a webapp running behind mod_proxy to determine the non-proxied URL of the request?
No. If the reverse proxy doesn't put the information that you require into the message headers before passing them on, there's no way to recover it.
You need to look at the Apache Httpd documentation to figure out how to get the front-end to put the information that you need into the HTTP request headers on the way through. (It can be done. I just can't recall the details.)
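For instance, mod_proxy_http adds an X-Forwarded-Host header by default, and a header like X-Forwarded-Proto can be added with a RequestHeader directive; under that assumption, a sketch of rebuilding the external URL in the webapp:
import javax.servlet.http.HttpServletRequest;

public final class ExternalUrls {

    private ExternalUrls() {
    }

    // Rebuild the URL the client actually used, preferring proxy headers
    public static String externalUrl(HttpServletRequest req) {
        String host = req.getHeader("X-Forwarded-Host");
        String proto = req.getHeader("X-Forwarded-Proto");
        if (host == null) {
            host = req.getHeader("Host"); // not proxied: use the direct host
        }
        if (proto == null) {
            proto = req.getScheme();
        }
        return proto + "://" + host + req.getRequestURI();
    }
}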