This question might have been asked before as it is quite a common exercise, but I am not sure what to search with.
I have a REST client app hosted on one Tomcat instance and a REST server hosted on a different Tomcat instance under another hostname. This works well, but currently I have the hostname hardcoded in the Java classes. How can I parameterize the hostname inside the REST client so that it can be changed in the future without restarting the Tomcat instance on which the REST client is hosted?
There are a number of solutions you could employ including (but not limited to):
- look up the server hostname in a properties or XML file on the client server (each time you want to make a REST call)
- provide a mechanism in the client app to configure the server hostname and store it in memory
- look up the server hostname in a directory such as LDAP or AD, or in a database (each time you want to make a REST call)
You could employ a caching mechanism if you don't want to look up the value on every call - perhaps the value expires a number of seconds/minutes after it is read, or after a number of calls.
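As a rough sketch of the first option combined with that caching idea (the file path, property key, and expiry time below are just assumptions):

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.util.Properties;

    // Looks up the REST server host name in a properties file and caches it,
    // so a change to the file is picked up without restarting Tomcat.
    public class ServerHostProvider {

        private static final String CONFIG_FILE = "/opt/tomcat/conf/restclient.properties"; // assumed location
        private static final long CACHE_MILLIS = 60000; // re-read the file at most once a minute

        private static String cachedHost;
        private static long lastRead;

        public static synchronized String getServerHost() {
            long now = System.currentTimeMillis();
            if (cachedHost == null || now - lastRead > CACHE_MILLIS) {
                Properties props = new Properties();
                try (FileInputStream in = new FileInputStream(CONFIG_FILE)) {
                    props.load(in);
                    cachedHost = props.getProperty("rest.server.host");
                    lastRead = now;
                } catch (IOException e) {
                    throw new IllegalStateException("Could not read " + CONFIG_FILE, e);
                }
            }
            return cachedHost;
        }
    }

The REST calls in the client would then build their URLs from ServerHostProvider.getServerHost() instead of a hardcoded value.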
A quite common technique for passing environment-specific values such as a hostname is to define a system variable that you read using java.lang.System.getenv() (for environment variables) or java.lang.System.getProperty() (for JVM system properties, which you can also set via Maven). When your application starts up, read this value into a static class of constants so that it is set once at the beginning and remains the same until the end of the application. There are other techniques that involve web.xml, but to keep it simple this will work. Just add the variable to the environment and read it.
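A minimal sketch of that pattern (the property and variable names below are just placeholders):

    // Reads the value once when the class is loaded and keeps it for the
    // lifetime of the application, as described above.
    public final class AppConstants {

        public static final String REST_SERVER_HOST = resolveHost();

        private static String resolveHost() {
            // Prefer a JVM system property (-Drest.server.host=...), fall back to an environment variable.
            String host = System.getProperty("rest.server.host");
            if (host == null) {
                host = System.getenv("REST_SERVER_HOST");
            }
            return host;
        }

        private AppConstants() {
        }
    }

Note that because the value is captured when the class is loaded, changing it later still requires restarting the JVM - that is the trade-off compared with re-reading a file on each call.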
You can always pass the host name as a system property (e.g. -DhostName=...) when starting the client process and then read it using System.getProperty("hostName").
Related
I'm building a client-server app in Java and want to try implementing OAuth 2.0 authentication. But there is a problem - I don't have a static IP address. Can I implement it with services like Google or Facebook when my app is on localhost?
First off, some of the OAuth providers won't even accept IP addresses, so even if you have a static IP it won't work.
You can try to use localhost, but that is not always possible or desirable, for instance when you want to test over a local network.
There is another way to get around this. What you can do is:
Pick a domain name which will never exist. For example: random.rubbish
Set up your OAuth apps with this domain name, i.e. register with Facebook and Google using http://random.rubbish/ as your domain; you can add a path if you want. This is only an example - you can change http and random.rubbish to whatever you need.
Now on your local system, edit the HOSTS file and add an entry for random.rubbish as follows (the IP comes first, then the name): 127.0.0.1 random.rubbish
Now when you go to http://random.rubbish in your browser, it will take you to localhost (port 80, because of http). This is because the HOSTS file is the first place the system checks when resolving a domain name.
If you want to test over a local network, you can add this entry to your local DNS server (on many home routers this is the same box that does DHCP), or you can edit the HOSTS file on every machine from which you will access the server.
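For reference, the entry is just one line in the system hosts file (the paths below are the standard locations):

    # Windows: C:\Windows\System32\drivers\etc\hosts
    # Linux/macOS: /etc/hosts
    127.0.0.1    random.rubbish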
I have some Apache CXF Web services published to the Internet, but I want one of them to be only visible to a specific IP through a VPN.
I modified the CXF XML file so that my Web service should only be visible when accessed through that IP, but it is still accessible from the Internet.
How can I publish my Web service so that it is visible only to the IP coming through the VPN?
Thanks in advance.
IP filtering should ideally not be done in your application layer. Think about it - you need to process the request just to find out whether your business code should run at all. You are using application resources for a request that should never have reached the application.
Use a firewall rule to filter the requests instead (assuming, of course, that your firewall resides elsewhere). This will reduce the load on your server and centralize the IP filtering rules for a particular group of servers (application / DB / file etc.).
If your service is available on the Internet, a rule built around a single specific IP does not make sense on its own. You will need a list of IPs to white-list if you intend to restrict access by IP for everyone.
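If the firewall happens to be a plain Linux box running iptables, the rule pair could look something like this (the VPN client address and service port are made up for illustration):

    # allow only the single VPN client address to reach the service port
    iptables -A INPUT -p tcp -s 10.8.0.2 --dport 8443 -j ACCEPT
    # drop everything else that hits that port
    iptables -A INPUT -p tcp --dport 8443 -j DROP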
Say I have a Java web app inside a war file that is hosted on Cloud Foundry at the URL mycoolapp.cfapps.io, which works perfectly. I now need to host it on a custom domain, mycoolapp.com, which I have purchased.
What is the process to host it on my own domain? Can I do it via Cloud Foundry?
My app needs SSL. Currently https://mycoolapp.cfapps.io works, but I need it to work on my custom domain. What will be involved in this? (I think I need to get a certificate for my domain, but what next?)
In the app some confidential information is embedded in URLs (this cannot be changed), so I'd also need to ensure that the provider cannot know the URLs accessed (apart from the base URL). Can this be done? If not, what are the alternatives?
It could be done by creating a CNAME record for your app (see the Azure example here). Unfortunately, it seems that Cloud Foundry (CF) does not support this yet. As I understand it, this is because the CF router determines the exact virtual machine (and hence IP) by parsing the URL and working out the route from the host name (mycoolapp in your case). Ideally there would be an interface in CF where you could register all CNAME aliases for your app (as implemented for Azure websites).
If CNAME records were supported, this would also work for HTTPS, since a CNAME basically resolves to an IP address, and there would certainly have to be an interface for you to upload a certificate for your domain. That leads to the problems mentioned below about SSL termination. But, again, as far as I know, this is not supported by CF yet.
That is a question about the internal structure of the run.pivotal.io deployment of CF. Conceptually HTTPS will do the trick, as it encrypts URL parameters. However, I suspect that SSL terminates on the router (the certificate is issued for *.cfapps.io - a single cert for all apps - you can check this in your browser when connecting to your app over HTTPS). That likely means that internally CF has access to ALL the data of your request, which leads to my question about SSL termination in CF, which currently has no answer. Hopefully CF will provide a way to terminate SSL on the final server processing the request.
UPDATE:
Cloud Foundry has proposed its own way to support custom domains - by using a CloudFlare proxy. If you are OK with a proxy that decrypts your data, it could be used.
I have a java app on my server and I can access it with my browser by going to server.com:8080/app.
I've been trying to get my application to access this weblet, but because of XSS, jQuery.post() gives me errors. Both the app and the weblet are on the same server, but since I have to access the weblet through port 8080, JavaScript thinks it's another server.
My question: Is there a way to avoid this XSS issue?
I don't want to use a PHP proxy or .htaccess. I also don't want to use the $.getJSON(url + '&callback?') method.
I'm looking for any other solutions.
Thanks in advance.
It's SOP (Same Origin Policy) that's stopping you here, not XSS. XSS is a security vulnerability that breaks SOP. And yes, it limits access so that both pages have to be served from the same protocol, port, and domain.
Can you use a reverse proxy from the web server on port 80 to 8080? If not, you could take a look at easyXDM. Another alternative is to have the 8080 service return the access control header mentioned in one of your comments, but this is not supported in older browsers.
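If you go the access-control-header route, a sketch of a servlet filter on the 8080 app might look like this (the filter name and allowed origin are assumptions, and you would still need to map the filter in web.xml):

    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletResponse;

    // Adds the CORS header so pages served from port 80 may call the 8080 service.
    public class CorsFilter implements Filter {

        public void init(FilterConfig config) {
        }

        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            HttpServletResponse response = (HttpServletResponse) res;
            // Allow only the page's own origin on port 80 (assumed host name).
            response.setHeader("Access-Control-Allow-Origin", "http://server.com");
            chain.doFilter(req, res);
        }

        public void destroy() {
        }
    }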
We have a REST API that requires client certificate authentication. The API is used by a collection of Python scripts that a user can run. So that the user doesn't have to enter the password for their client certificate every time they run one of the scripts, we've created a broker process in Java that the user can start up and run in the background, which holds the user's certificate password in memory (we just have the javax.net.ssl.keyStorePassword property set in the JVM). The scripts communicate with this process and the process forwards the REST API calls to the server (adding the certificate credentials).
To do the IPC between the scripts and the broker process we're just using a socket. The problem is that the socket opens up a security risk: someone could use the REST API with another person's certificate by communicating through the broker process port on that person's machine. We've mitigated the risk somewhat by using Java security to allow connections to the port only from localhost, but I think someone could in theory still do it by remotely connecting to the machine and then using the port. Is there a way to further limit the use of the port to the current Windows user? Or is there another form of IPC I could use that can do authorization based on the current Windows user?
We're using Java for the broker process just because everyone on our team is much more familiar with Java than Python, but it could be rewritten in Python if that would help.
Edit: I just remembered the other reason for using Java for the broker process: we are stuck with Python 2.6, and at that version HTTPS with client certificates doesn't appear to be supported (at least not without a 3rd-party library).
The simplest approach is to use cookie-based access control. Have a file in the user's profile/home directory which contains the cookie. Have the Java server generate and save the cookie, and have the Python client scripts send the cookie as the first piece of data on any TCP connection.
This is secure as long as an adversary cannot get the cookie, which then should be protected by file system ACLs.
I think I've come up with a solution inspired by Martin's post above. When the broker process starts up I'll create a mini HTTP server listening on the IPC port. Also during startup I'll write a file containing a randomly generated password (different on every startup) to the user's home directory, readable only by that user (or an administrator, but I don't think I need to worry about that). Then I'll lock down the IPC port by requiring all HTTP requests sent to it to include the password. It's a bit Rube Goldberg-esque, but I think it will work.
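A rough sketch of that plan, using the JDK's built-in HTTP server (the port, file name, and header name below are arbitrary choices; note that on Windows the java.io.File permission flags are fairly coarse, and java.nio.file's AclFileAttributeView gives finer control if needed):

    import com.sun.net.httpserver.HttpServer;
    import java.io.File;
    import java.io.FileWriter;
    import java.io.IOException;
    import java.io.OutputStream;
    import java.math.BigInteger;
    import java.net.InetSocketAddress;
    import java.security.SecureRandom;

    // Broker-side sketch: generate a fresh secret on every startup, write it to a
    // file only the current user can read, and reject IPC requests without it.
    public class BrokerServer {

        private static final int IPC_PORT = 9090;                 // arbitrary local port
        private static final String TOKEN_FILE = ".broker-token"; // arbitrary file name

        public static void main(String[] args) throws IOException {
            // 1. Random password that changes every time the broker starts.
            String token = new BigInteger(256, new SecureRandom()).toString(32);

            // 2. Store it in the user's home directory, readable by the owner only.
            File tokenFile = new File(System.getProperty("user.home"), TOKEN_FILE);
            tokenFile.createNewFile();
            tokenFile.setReadable(false, false);  // drop read access for everyone...
            tokenFile.setReadable(true, true);    // ...then grant it back to the owner
            try (FileWriter out = new FileWriter(tokenFile)) {
                out.write(token);
            }

            // 3. Listen on localhost only and require the token on every request.
            HttpServer server = HttpServer.create(new InetSocketAddress("127.0.0.1", IPC_PORT), 0);
            server.createContext("/", exchange -> {
                String supplied = exchange.getRequestHeaders().getFirst("X-Broker-Token");
                if (!token.equals(supplied)) {
                    exchange.sendResponseHeaders(403, -1); // forbidden, no body
                    exchange.close();
                    return;
                }
                // ... forward the REST call to the real server here, adding the certificate credentials ...
                byte[] ok = "ok".getBytes("UTF-8");
                exchange.sendResponseHeaders(200, ok.length);
                try (OutputStream os = exchange.getResponseBody()) {
                    os.write(ok);
                }
            });
            server.start();
        }
    }

The Python scripts would read the token from the same file and send it in the X-Broker-Token header with each request.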