This may seem like a novice, perhaps even 'stupid', question, but please bear with me...
I'm still struggling to find a way to get my Java application to use a keystore located inside the JAR file, and I'm very tempted to just disable certificate validation altogether using the method here. However, before I do so, I just wanted to confirm why you should not do this and whether those reasons actually apply to me.
I've heard that skipping certificate validation can leave your application vulnerable to "man-in-the-middle" attacks (I think), but even if I'm right about that, I'm unsure what these attacks actually are, so could somebody please explain? Though, if they are what I think they could be, I'm not sure whether my application would ever be subject to them, because my application only uses an SSL connection to obtain data from my website, so users do not tell the application which URLs to visit - if that makes sense...
Here's an attack scenario. Others might want to contribute more.
Your application accesses a URL. At some point along the way (any intermediate network hop), an attacker could position himself as a "man-in-the-middle": he would pretend to be a "proxy" for your communication, able to read everything that goes through and even modify it on the way. The attacker could act on behalf of the user, mislead him about the information he receives, and basically access all data being transferred.
Enter SSL: your client receives a certificate from the server, with a valid key (signed by a known certification authority, or present in your keystore). The server will then sign and encrypt everything it sends using that key. If an attacker were to place himself in the middle, he would not be able to read the data (it's encrypted) or modify it (it's signed, and modification would break the signature). He could still block communications altogether, but that's another story.
So that's that... if you ignore your keystore, you can't verify any server-side certificate, and you open the door to man-in-the-middle attacks.
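Since the original problem was getting the application to use a keystore located inside the JAR, here is a minimal sketch of how that is typically done by loading it as a classpath resource and wiring it into an SSLContext; the resource path /client-truststore.jks and the password are placeholders, not values from the asker's project:

```java
import java.io.InputStream;
import java.security.KeyStore;
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

public class JarTrustStore {
    public static SSLContext fromClasspath() throws Exception {
        // Load the keystore bundled inside the JAR as a classpath resource.
        try (InputStream in = JarTrustStore.class.getResourceAsStream("/client-truststore.jks")) {
            KeyStore ks = KeyStore.getInstance("JKS");
            ks.load(in, "changeit".toCharArray());   // placeholder password

            // Build trust managers backed by that keystore.
            TrustManagerFactory tmf =
                TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
            tmf.init(ks);

            // Use it for HTTPS connections instead of disabling validation.
            SSLContext ctx = SSLContext.getInstance("TLS");
            ctx.init(null, tmf.getTrustManagers(), null);
            return ctx;
        }
    }

    public static void main(String[] args) throws Exception {
        HttpsURLConnection.setDefaultSSLSocketFactory(fromClasspath().getSocketFactory());
    }
}
```

With the custom SSLContext installed as the default socket factory, certificate validation stays enabled and is performed against the bundled keystore rather than being switched off.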
Though, if they are what I think they could be, I'm not sure whether my application would ever be subject to them, because my application only uses an SSL connection to obtain data from my website, so users do not tell the application which URLs to visit - if that makes sense...
If you connect to a server via SSL and you don't do any authentication, you effectively have no security.
You have no idea which endpoint you are actually talking to.
The fact that the user does not type in a URL, but rather the URL is hardcoded to point at your site, is irrelevant. A simple proxy that forwards the data from your client to the server can steal all of your client's data, since there is no authentication of any kind (this is the man-in-the-middle attack).
I would suggest you post the code you are using to load the keystore so that you can get help with that.
Otherwise, if you don't have any security requirements and you don't handle any sensitive data, you should go for a plain connection (i.e. non-SSL) so that your performance does not deteriorate due to the (in your case) unnecessary SSL overhead.
Related
I've implemented an HTTPS connection to a servlet running a REST API.
The device is able to connect to the server using HTTPS; it accepts the server's certificate and establishes the HTTPS connection.
How do I make sure that the device accepts only one particular certificate? The intention is that someone should not be able to set up a fake server that identifies itself as the right server using a self-signed certificate.
In a browser environment, the user would see Chrome's crossed-out https in the URL and know that the certificate is not verified. How do I ensure the same for an app?
The procedure is called certificate validation and is pretty standard. Some classes and components perform validation for you; others leave it to you to implement and control manually.
Validation ensures (in an ideal world) that you are connecting to the legitimate server, i.e. the server whose host name matches the name in the presented certificate. This requires that the server has acquired a valid CA-signed certificate for the needed host name (we omit self-signed variants for lack of security and flexibility). So far so good.
Now you can either rely on pre-implemented certificate validation, implement your own, or add your own checks to the pre-implemented validation procedure. Implementing your own validation is too cumbersome for your task, so let's assume that the client code you use already performs some validation (you have not specified exactly what code you use for the connection, so I can't comment on it). You can rely on it; however, in some countries state agencies inspect traffic, and to do this they acquire (or in some cases generate on the fly) certificates which are fake by nature but valid if the validation procedure is followed blindly.
So if you control both the server and the client, and your client component or class lets you implement additional validation, then your additional check can be to compare the issuer of the certificate (or the whole certificate chain) to the issuer you know to be valid. This is less flexible and to some extent against the PKI rules, but it significantly reduces the chance of a fake certificate being generated and accepted as valid. The idea is that you know what certificate you use and what CA you used (and may use in the future), so you can store this information in the client and compare it during validation.
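To make that idea concrete, here is a minimal sketch of certificate pinning via a custom X509TrustManager: it runs the platform's default validation first and then compares the server certificate's SHA-256 fingerprint against a value baked into the client. The fingerprint constant is a placeholder, and the exact wiring into your connection code depends on which client classes you use:

```java
import java.security.KeyStore;
import java.security.MessageDigest;
import java.security.cert.CertificateException;
import java.security.cert.X509Certificate;
import javax.net.ssl.TrustManagerFactory;
import javax.net.ssl.X509TrustManager;

public class PinningTrustManager implements X509TrustManager {
    // Placeholder: hex-encoded SHA-256 fingerprint of the certificate you expect.
    private static final String PINNED_SHA256 = "3a7bd3...";

    private final X509TrustManager defaultTm;

    public PinningTrustManager() throws Exception {
        // Reuse the platform's default validation (chain building, expiry checks, etc.).
        TrustManagerFactory tmf =
            TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init((KeyStore) null);
        this.defaultTm = (X509TrustManager) tmf.getTrustManagers()[0];
    }

    @Override
    public void checkServerTrusted(X509Certificate[] chain, String authType) throws CertificateException {
        defaultTm.checkServerTrusted(chain, authType);   // standard validation first
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            byte[] digest = md.digest(chain[0].getEncoded());
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) hex.append(String.format("%02x", b));
            if (!PINNED_SHA256.equalsIgnoreCase(hex.toString())) {
                throw new CertificateException("Server certificate does not match the pinned fingerprint");
            }
        } catch (java.security.NoSuchAlgorithmException e) {
            throw new CertificateException(e);
        }
    }

    @Override
    public void checkClientTrusted(X509Certificate[] chain, String authType) throws CertificateException {
        defaultTm.checkClientTrusted(chain, authType);
    }

    @Override
    public X509Certificate[] getAcceptedIssuers() {
        return defaultTm.getAcceptedIssuers();
    }
}
```

The trade-off mentioned above applies: the pinned fingerprint (or pinned issuer) has to be updated in the client whenever the server certificate is rotated.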
You can read more about certificate validation by simply searching here on SO for "certificate validation" - it is quite a popular topic.
With Adobe Reader you can sign a document locally. Is it theoretically more secure than if the document were transported to the server and signed on the server (not necessarily using Adobe technology)? Could the user contest that the document might have been tampered with later if it was signed on the server? How can I prove to him technically that this is impossible even when it is signed on the server - so that the legal issue is taken into account?
Are you living in the EU? I can describe the situation here. The legal aspects of signatures are sort of regulated by Directive 1999/93/EC. There will be an updated version of this, so there will be some changes in the details, but generally the Directive does distinguish between server-based signatures and signatures made by an individual locally, having sole control over the process.
Being in sole control of a local process has a lot of security advantages, among them:
Using what the Directive calls a Secure Signature Creation Device (SSCD), such as a smart card. Using a tamper-proof device is definitely considered an advantage, although it can still be exploited when the attacker is e.g. in control of the computer/OS the SSCD is attached to.
The "What you see is what you sign" principle that was vaguely described in the Directive. Ideally, you should be able to view the data you are about to sign on a trustworthy device. This is impossible to guarantee with server-side signatures.
Key escrow. If the server signs, the key is most likely also stored on the server. It's very, very hard to implement a solution where the key is on the server but only clients can access it; much more often you simply have to trust the party operating the server.
That said, it is possible to secure the transport from client to server using e.g. TLS and still have a reasonably secure service. But pertaining to the law (at least in the EU), the notion of a "non-repudiation" signature - a signature which is meant to be issued by an individual person - is only possible in the context of local signatures. Accredited CAs here won't issue non-repudiation certificates to legal entities, for example; such a certificate will only be issued to a real person, typically on an SSCD.
The downside of SSCDs has been that it is very hard to roll out large-scale deployments of software that make use of them, especially across company/state boundaries, because there are still a lot of interoperability issues with the myriad of hardware, plus the cost and the plain and simple fact that it's just less convenient.
Anything could happen to the document on its way to the server. The connection could be MitM-attacked, the server could have been tampered with, etc.
The #1 rule for cryptographic signatures is that signing must happen on a trusted machine in a trusted environment, preferably before the document even reaches a connected machine (i.e. offline, then transferred on an offline medium).
So in short: It should be signed on the client and nowhere else.
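As a rough illustration of what client-side signing looks like with the standard java.security API (the key pair is generated on the fly purely for the example; in a real deployment the private key would live in the user's smart card or local keystore and never leave the client):

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class LocalSigning {
    public static void main(String[] args) throws Exception {
        // Example key pair; a real deployment would use the user's own private key.
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        KeyPair keyPair = kpg.generateKeyPair();

        byte[] document = "contract contents".getBytes(StandardCharsets.UTF_8);

        // Sign on the client: the private key never leaves this machine.
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(keyPair.getPrivate());
        signer.update(document);
        byte[] signature = signer.sign();

        System.out.println("Signature length: " + signature.length + " bytes");
    }
}
```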
If the signature is made by the user (a human operator), then it's a question of how much you trust the server where the key resides.
Normally "the signature is made by the user on behalf of the user" means that the user owns the key which is used for signing. In that case it makes little sense to put the private key on the server. And if you need this scheme, then either the signature is not made by the owner, or the signature is not made on behalf of the individual making it.
But technically, signing the data on the server (as you describe) is possible, and in a properly implemented architecture the user should be able to get the signed copy back and manually validate the signed document to ensure that this is what he (or the server on his behalf) intended to sign.
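That manual check could look roughly like this with the same standard API, assuming the user has the document bytes, the signature bytes and the signer's public key available:

```java
import java.security.PublicKey;
import java.security.Signature;

public class SignatureCheck {
    // Returns true only if the signature was produced over exactly these document bytes
    // with the private key matching the given public key.
    public static boolean verify(byte[] document, byte[] signatureBytes, PublicKey publicKey) throws Exception {
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(publicKey);
        verifier.update(document);
        return verifier.verify(signatureBytes);
    }
}
```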
Another question is whether the server might (intentionally or due to a security breach) use the private key of the user to sign something other than the requested documents. This is extremely hard to rule out unless you have a server crafted for exactly one operation (signing of something), and in that case you would probably be dealing with a specialized hardware device (such as one offered by SafeNet), not with a generic Windows/Linux/... server operating system.
We have a distributed cryptography module in our SecureBlackbox product, which implements a scheme similar to what you describe, but the roles are usually reversed: the user possesses the key and uses it to locally sign a document which resides on the server and is not transferred to the client. In that module we use TLS to secure the channel, and signing is performed on the user's computer, so the private key remains strictly secret. However, the scheme you describe can also be implemented with that module.
I have made a web application using Java EE 6 (using reference implementations) and I want to expose it as a REST web service.
The background is that I want to be able to retrieve data from the web application into an iOS app I made. The question is: how would I secure the application? I only want my own application to use the web service. Is that possible, and how would I do it? I only need to know what I should search for and read, not the actual code.
Unfortunately, your webservice will never be completely secure, but here are a few basic things you can do:
Use SSL
Wrap all your app's outbound payloads in POST requests. This will prevent casual snooping aimed at figuring out how your webservice works (in order to reverse engineer the protocol).
Somehow validate your app's users. Ideally this will involve OAuth, for example using Google credentials, but you get the idea.
Now I'm going to point out why this won't be completely secure:
If someone gets a hold of your app and reverse engineers it, everything you just did is out the window. The only thing that will hold is your user validation.
Embedding a client certificate (as other people have pointed out) does nothing to help you in this scenario. If I just reverse engineered your app, I also have your client certificate.
What can you do?
Validate the accounts on your backend and monitor them for anomalous usage.
Of course this all goes out the window when someone comes along, reverse engineers your app, builds another one to mimic it, and you wouldn't (generally) know any better. These are all just points to keep in mind.
Edit: Also, if it wasn't already obvious, use POST (or GET) requests for all app queries (to your server). This, combined with SSL, should thwart your casual snoopers.
Edit 2: It seems I'm wrong about POST being more secure than GET. This answer was quite useful in pointing that out. So I suppose you can use GET or POST interchangeably here.
Depends on how secure you want to make it.
If you don't really care, just embed a secret word in your application and include it in all requests (see the sketch after this list).
If you care a little more, do the above and only expose the service via https.
If you want it to be secure, issue a client certificate to your app and require a valid client certificate to be present when the service is accessed.
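A minimal sketch of the first, weakest option - a shared secret sent as a custom header over an HttpsURLConnection; the header name, secret and URL are made-up placeholders:

```java
import java.io.InputStream;
import java.net.URL;
import javax.net.ssl.HttpsURLConnection;

public class SecretHeaderClient {
    // Placeholder shared secret baked into the app; anyone who decompiles the app can read it.
    private static final String APP_SECRET = "changeme-not-really-secret";

    public static void main(String[] args) throws Exception {
        URL url = new URL("https://example.com/api/data");   // placeholder endpoint
        HttpsURLConnection conn = (HttpsURLConnection) url.openConnection();
        conn.setRequestProperty("X-App-Secret", APP_SECRET);  // server rejects requests without it

        try (InputStream in = conn.getInputStream()) {
            System.out.println("HTTP status: " + conn.getResponseCode());
        }
    }
}
```

Anyone who decompiles the app can extract the secret, which is why the https and client-certificate tiers above add meaningfully more protection.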
My suggestions are:
Use https instead of http. There are free SSL certificates available; get one and install it.
Use a complex path such as 4324234AA_fdfsaf/ as the root endpoint.
Due to the nature of the protocol, the path part of an https request is encrypted, so it is not visible to anyone watching the network. There are ways to decrypt the traffic through a man-in-the-middle attack, but that requires full control over the client device, including installing a rogue SSL certificate. Personally, I'd rather spend that time making the app itself successful.
Create a rule on the machine which hosts your web service to only allow your application to access it through some port. In Amazon EC2, this is done by creating a rule in the instance's Security Group.
We have used RestEasy as part of securing our exposed RESTful webservices. There are lots of examples out there, but here is one which might get you started:
http://howtodoinjava.com/2013/06/26/jax-rs-resteasy-basic-authentication-and-authorization-tutorial/
You can also use OAuth:
http://oltu.apache.org/index.html
I'm developing a server component that will serve requests for an embedded client, which is also under my control.
Right now everything is beta and the security works like this:
client sends username / password over https.
server returns access token.
client makes further requests over http with the access token in a custom header.
This is fine for a demo, but it has some problems that need to be fixed before releasing it:
Anyone can copy a login request, re-send it and get an access token back. As some users replied, this is not an issue since it goes over https. My mistake.
Anyone can listen in and obtain an access token just by inspecting the request headers.
I can think of symmetric-key encryption with a timestamp so I can reject duplicate requests, but I was wondering if there are well-known good practices for this scenario (which seems pretty common).
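For concreteness, what I have in mind looks roughly like the sketch below, using an HMAC over the payload plus a timestamp rather than encryption proper; the key and field layout are made up:

```java
import java.nio.charset.StandardCharsets;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class RequestSigner {
    // Placeholder shared secret; would have to be provisioned to the embedded client somehow.
    private static final byte[] SHARED_KEY = "demo-shared-key".getBytes(StandardCharsets.UTF_8);

    // Produce an HMAC over the request body plus a timestamp, sent alongside the request.
    public static String sign(String body, long timestampMillis) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(SHARED_KEY, "HmacSHA256"));
        byte[] tag = mac.doFinal((timestampMillis + "\n" + body).getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : tag) hex.append(String.format("%02x", b));
        return hex.toString();
    }
}
```

The server would recompute the HMAC, reject mismatches, and reject timestamps that are too old or already seen.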
Thanks a lot for the insight.
PS: I'm using Java for the server and the client is coded in C++, just in case.
I don't get the first part: if the login request goes over https, how can anyone just copy it?
Regarding the second part, this is a pretty standard session hijacking scenario. See this question. Of course you don't have the built-in browser options here, but the basic idea is the same - either send the token only over a secure connection when it matters, or in some way associate the token with the sending device.
In a browser, basically all you have is the IP address (which isn't very good), but in your case you may be able to capture something specific about your device and validate it against each request to ensure the same token isn't being used from somewhere else.
Edit: You could just be lucky here and be able to rule out the IP address changing behind proxies, and actually use it for this purpose.
But at the end of the day, it is much more secure to use https from a well-known and reviewed library than to try to roll your own here. I realize that https adds overhead, but rolling your own carries a big risk of missing obvious things that an attacker can exploit.
First question, just to get it out there: if you're concerned enough about nefarious client-impersonator accesses, why not carry out the entire conversation over HTTPS? Is the minimal performance hit significant enough for this application that it's not worth the added layer of security?
Second, how can someone replay the login request? If I'm not mistaken, that's taking place over HTTPS; if the connection is set up correctly, HTTPS prevents replay attacks using one-time nonces (see here).
One of the common recommendations is: use https.
https man-in-the-middle attacks aside, using https for the entire session should be reliable enough. You do not even need to worry about access tokens - https takes care of this for you.
Using http for the further requests introduces vulnerabilities: anybody with a network sniffer can intercept your traffic, steal the token and spoof your requests. You can build protection to prevent this - token encryption, use-once tokens, etc. - but in doing so you will be re-creating https.
Going back to the https man-in-the-middle attack: it is based on somebody's ability to insert himself between your server and your client and funnel your requests through their code. It is doable, e.g. if the attacker has access to the physical network. The problem such an attacker will face is that he will not be able to present a proper digital certificate - he does not have the private key used to sign yours. When https is accessed through a browser, the browser gives you a warning but can still let you through to the page.
In your case it is your client that will communicate with the server, and you can make sure that all the proper certificate validations are in place. If you do that, you should be fine.
Edit
Seconding Yishai - yes, some overhead is involved, primarily CPU, but if this additional overhead pushes your server overboard, you have bigger problems with your app.
I'm working on a Java EE project which will have anywhere from 3 to 6 different clients. The project is open source, and I wonder what security mechanisms one could/should use. The problem is: because it is open source, I believe it would be possible for anyone with a user account to write their own client (maybe not realistic, but entirely possible) and make contact with the server/database. I've tried to go through all the scenarios of reading/writing different data to the database as different roles, and I conclude that I need some security mechanism at a higher level than that (it is not enough to check whether a given account type is allowed to persist a given entity with a given ID, and so on...). In some way I have to know that the client making contact is the actual client I wrote. Could signing the JAR files solve this entire problem, or are there other ways to do it?
-Yngve
I really think that if restricting the available activities on the server side (based on role) is not sufficient, then you've got a bigger problem. Even if a user doesn't write their own client, whatever mechanism you are using for your remote calls is likely to be vulnerable to being intercepted and manipulated. The bottom line is that you should limit the possible calls that can be made against the server, and treat each call to the server as potentially malicious.
Can you think of an example scenario in which there's a server action that a particular authenticated user would be allowed to take that would be fine if they're using your client but dangerous if they're not using your client? If so I'd argue that you're relying too strongly on your client.
However, rather than just criticize, I'd like to offer some actual answers to your question as well. I don't think signing your JAR file will be sufficient if you're imagining a malicious user; in general, public-key cryptography may not help you much, since the hypothetical malicious user who is reverse-engineering your source will have access to your public key and can spoof whatever authentication you build in.
Ultimately there has to be someone in the system you trust, so you have to figure out who that is and base your security around them. For example, imagine there are many users at a particular company whom you don't necessarily trust, and one admin who oversees them, whom you do trust. In that scenario you could set up your client so that the admin has to enter a special code at startup, keep that code in memory, and pass it along with every request. This way, even if a user reverse-engineers your code, they won't have the admin code. Of course, the calls from your client to your server will still be vulnerable to being intercepted and manipulated (not to mention that this requirement would be a royal pain in the neck for your users).
Bottom line: if your user's machine is calling your server, then your user is calling your server. Don't trust your user. Limit what they can do, no matter what client they're using.
Well, the source may be available to anyone, but the configuration of the deployment and the database certainly isn't. When you deploy the application you can add users with roles. The easiest thing to do is to persist them in a database. Of course, the contents of that table will only be accessible to the database administrator. The database administrator will configure the application so that it can access the required tables. When a user tries to log in, he/she must supply a username and password. The application will read the table to authenticate/authorize the user.
This type of security is the most common one. To be really secure you must pass the credentials over a secure channel (HTTPS). For a greater degree of security you can use HTTPS client authentication: you generate a key pair and certificate for every client, signed by a CA the server trusts (possibly your own), and the client then presents that certificate with every request.
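As a rough sketch of enforcing those roles on the server side, here is a servlet that relies on container-managed authentication and checks the caller's role on every request; the URL pattern and role name are placeholders:

```java
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Placeholder servlet: the container has already authenticated the caller and established roles.
@WebServlet("/reports")
public class ReportServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // Enforce the role on the server, regardless of which client sent the request.
        if (!req.isUserInRole("admin")) {
            resp.sendError(HttpServletResponse.SC_FORBIDDEN);
            return;
        }
        resp.setContentType("application/json");
        resp.getWriter().write("[]");
    }
}
```

Because the check runs on the server, it holds no matter which client was used to send the request.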
EDIT: A user being able to write his/her own client doesn't make the application less secure. He/she will still not be able to access the application if logging in is required first. If the login is successful, a session (cookie) will be created and passed with every request. Have a look at Spring Security. It has a rather steep learning curve, but once you've done it, you can add security to any application in a matter of minutes.