Secure connection between two mobile devices: SSL key not 'private' - java

I have SSL working between two Android devices running the same app using a self-signed cert and key generated using openssl and stored in keystores.
The problem is that the private keystore must be embedded in the app package somehow, and therefore becomes available to any attacker. I believe this would allow an attacker to snoop on the session and decrypt the data between the two phones.
I'm not using or requiring any of the other features of PKI; I'm just providing the two keystores because the SSL connection setup requires them.
Is there a secure SSL cipher that does not need predefined PKI and generates its own keys on the fly at runtime?
I have investigated generating my own keys at runtime - creating the keys is easily done in Java, but KeyStore.setEntry() requires an X509 certificate chain, not just the public key, and Android does not contain the JCE code to generate the X509 certificate. I can do that by including the BouncyCastle library (the Android-compatible version is called SpongyCastle), but that adds quite an overhead to my app package size.
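For reference, this is roughly the SpongyCastle approach I've been looking at (an untested sketch; the DN, alias, password and validity period are just placeholders, and it would have to live in a method declared to throw the relevant checked exceptions):

    import java.math.BigInteger;
    import java.security.KeyPair;
    import java.security.KeyPairGenerator;
    import java.security.KeyStore;
    import java.security.cert.X509Certificate;
    import java.util.Date;

    import org.spongycastle.asn1.x500.X500Name;
    import org.spongycastle.cert.X509v3CertificateBuilder;
    import org.spongycastle.cert.jcajce.JcaX509CertificateConverter;
    import org.spongycastle.cert.jcajce.JcaX509v3CertificateBuilder;
    import org.spongycastle.operator.ContentSigner;
    import org.spongycastle.operator.jcajce.JcaContentSignerBuilder;

    // Generate a fresh key pair at runtime.
    KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
    kpg.initialize(2048);
    KeyPair pair = kpg.generateKeyPair();

    // Wrap the public key in a throwaway self-signed X.509 certificate,
    // because the KeyStore API insists on a certificate chain.
    X500Name subject = new X500Name("CN=PeerDevice");  // placeholder DN
    Date notBefore = new Date();
    Date notAfter = new Date(notBefore.getTime() + 365L * 24 * 60 * 60 * 1000);
    X509v3CertificateBuilder builder = new JcaX509v3CertificateBuilder(
            subject, BigInteger.valueOf(System.currentTimeMillis()),
            notBefore, notAfter, subject, pair.getPublic());
    ContentSigner signer = new JcaContentSignerBuilder("SHA256withRSA").build(pair.getPrivate());
    X509Certificate cert = new JcaX509CertificateConverter().getCertificate(builder.build(signer));

    // Store the key with its single-element chain in an in-memory BKS keystore.
    KeyStore ks = KeyStore.getInstance("BKS");
    ks.load(null, null);
    ks.setKeyEntry("peer", pair.getPrivate(), "changeit".toCharArray(),
            new java.security.cert.Certificate[] { cert });

The certificate-building step in the middle is exactly the part that pulls in the extra library and bloats the package.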
There is no access to a third-party trust server on the internet, the two phones could be on a private WLAN with no internet access.
As a nice-to-have bonus I'd like to be able to trust that the app is communicating with itself, not someone sniffing the protocol from a PC, but I don't think that's going to be possible as the app package contents will always be available.

To ensure you are talking to something/someone you trust, you need a mechanism for authenticating the other party. I'm not aware of a way to achieve this without a piece of data remaining secret:
Asymmetric authentication (i.e. your current implementation) requires the private key data to remain private.
Symmetric authentication requires that the shared secret remains private.
In the future, TrustZone will allow applications to store secret data in the secure element of the handset. Until that time, however, you are always at risk of malware on your devices. Adding a password to your keystore (one the user knows, not the app) might add an additional hurdle for an attacker; however, once the phone is infected, the password can be snooped.
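For example, a minimal sketch of opening such a password-protected keystore on Android (the asset name and the prompt method are placeholders, not real APIs):

    // Needs java.io.InputStream and java.security.KeyStore; run it off the UI thread.
    char[] userPassword = promptUserForPassword();                  // hypothetical UI call -- never hard-code it
    KeyStore ks = KeyStore.getInstance("BKS");                      // Bouncy Castle keystore type shipped with Android
    try (InputStream in = context.getAssets().open("peer.bks")) {   // "peer.bks" is a placeholder asset name
        ks.load(in, userPassword);                                  // a wrong password fails the keystore integrity check
    }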
To minimise your risk profile you should produce per-device keys, rather than a single cert/key-pair combo that you incorporate into your application. This will, of course, increase the effort required to add new users as some form of registration will be required (e.g. certifying their key). Alternatively you can push the problem out to your users and have them decide who to trust, PGP-style.

Related

SSL: Client Authentication, multiple certificate version in same store

Here is the situation:
Our application talks to multiple third-party applications, and a few of them need client authentication.
One particular third-party app needs client auth and has provided the appropriate certificates, which we imported into our keystore (JKS). This was during integration testing, and things are working fine in the test environment.
Now, before going live, they want to upgrade the certificate issued by a CA.
For the new certificate we can always create a new store, but for the sake of convenience we wanted to know whether the two certificates (old and new) can reside in the same store? (So that if they roll back, there is no change on our side.)
The URL being the same, how does the application (http-client library) know which client certificate version to present when making calls to the server?
You can have both certificates in the keystore. JSSE will select whichever one matches the trusted CAs the server advertises when it requests the client certificate.
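For reference, loading a JKS keystore holding both entries and wiring it into the SSL context looks roughly like this (path and password are placeholders):

    import java.io.FileInputStream;
    import java.io.InputStream;
    import java.security.KeyStore;
    import javax.net.ssl.KeyManagerFactory;
    import javax.net.ssl.SSLContext;

    KeyStore ks = KeyStore.getInstance("JKS");
    try (InputStream in = new FileInputStream("client-keystore.jks")) {  // placeholder path
        ks.load(in, "changeit".toCharArray());                           // placeholder password
    }
    KeyManagerFactory kmf = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
    kmf.init(ks, "changeit".toCharArray());
    SSLContext ctx = SSLContext.getInstance("TLS");
    ctx.init(kmf.getKeyManagers(), null, null);  // default trust managers; the key manager picks the matching client cert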
However, the scenario you describe is radically insecure. If you are the client, you should be providing your own client certificate, not one supplied to you by someone else. Otherwise the private key has necessarily been shared, i.e. compromised, which means the certificate can't be used for its intended purpose, and you could legally repudiate all transactions supposedly authenticated by it. Client authentication under such a scheme serves no useful purpose whatsoever, and you should not waste further time on it.

Handling security when deploying Java application

I have a Java app (deployed as a JAR file) that allows file sharing through SSLSockets. If all users use the same certificate, file transfers are not secure, since that violates the core concept of asymmetric encrypted communication. Therefore, I understand that each user needs to have their own certificate. This brings up my first question:
How can you generate a certificate programmatically, and where would you store it? I don't want users to have to generate their own certificate with keytool and then have to tell the app where it is located.
Now, let's say my first question is answered and each user has their own certificate. Prior to opening the SSL connection between two hosts, you need to add each other's certificate to the trustStore. The only way I know to achieve this is by exchanging them through Sockets (note that I am using JGroups to exchange Socket connection info). This brings up my next two questions:
How do you guarantee authentication and integrity when exchanging the certificates?
How do you programmatically add the received certificate to the trustStore?
Finally, this whole post brings up my fourth question:
Are the steps described above the correct way to send data securely between two hosts, using SSLSocket asymmetric encrypted communication?
You don't necessarily need client certificates.
Could you not use username/password authentication?
You can still secure the transfer just by using a server certificate.
Client certs are also kind of a pain, and not entirely secure. They tie you to a machine, and evil processes can read them. Smart cards mitigate this, but aren't free.
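If you do go the server-certificate-only route, the client side can be as simple as pinning the server's certificate in an in-memory truststore; a sketch, where the certificate file name is a placeholder:

    import java.io.FileInputStream;
    import java.io.InputStream;
    import java.security.KeyStore;
    import java.security.cert.CertificateFactory;
    import javax.net.ssl.SSLContext;
    import javax.net.ssl.SSLSocketFactory;
    import javax.net.ssl.TrustManagerFactory;

    KeyStore trust = KeyStore.getInstance(KeyStore.getDefaultType());
    trust.load(null, null);                                          // start from an empty truststore
    try (InputStream in = new FileInputStream("server-cert.pem")) {  // placeholder path to the server certificate
        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        trust.setCertificateEntry("server", cf.generateCertificate(in));
    }
    TrustManagerFactory tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
    tmf.init(trust);
    SSLContext ctx = SSLContext.getInstance("TLS");
    ctx.init(null, tmf.getTrustManagers(), null);                    // no client certificates at all
    SSLSocketFactory factory = ctx.getSocketFactory();

The same setCertificateEntry() call is also how you would programmatically add a received certificate to a truststore.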

Any way of creating a Client Certificate programmatically for SSL Client-Auth without BouncyCastle?

I'm writing a server application that identifies its clients by SSL client auth. Every client should create its own KeyPair and certificate to connect to this server. Any unknown client (identified by a new public key) will be treated as a new client and registered with this public key. I don't want to sign client certs server-side (since this would not increase security).
But it seems (after studying hundreds of pages) that I can't create the necessary files (or objects) programmatically in plain Java, since the KeyStore would need at least a self-signed client certificate (which I can't create without the BouncyCastle lib).
Am I wrong, or is there really no way to do this?
And please provide me with a link or code. Java SSL seems to be VERY overcomplicated and has bad documentation.
As far as I know, there is nothing in the standard Java API (including JSSE and JCE) to issue an X.509 certificate. Dealing with the X.509 structures manually is actually quite complex, and you'd certainly need to read a lot more if you want to replicate what BouncyCastle does without using BouncyCastle.
BouncyCastle is by far the most convenient way to achieve what you want. You could also use the sun.* packages (since keytool uses them to produce self-signed certificates), but using these packages is usually bad practice, since they're not part of the public Java API. They are not documented, rely on a specific JRE implementation, and are subject to change. In contrast, BouncyCastle is meant to be used as a library.
I don't want to sign client certs server-side (since this would not increase security).
Since your server will only use the public key (and not the certificate) to perform the authentication (based on whatever mapping between public key and user you choose to implement), issuing the certificate on the server side or the client side doesn't matter in terms of security. The client could self-sign anything it wants, so the server can only rely on the public key anyway, not on the rest of the certificate. The reason you'd need to "bundle" the public key into an X.509 certificate is that it's the only kind of client certificate the JSSE supports (it doesn't support OpenPGP certificates, for example).
Having a service on your server that receives a public key and sends back an X.509 certificate (with any dummy attributes, signed by any private key) might be the easiest option. In addition, if you use a mini CA internal to that server, this would simplify the way you'd need to tweak the trust managers to get the self-signed certificates through. (You'd still need to check the actual public key against your internal mapping, if you want to design such a security scheme.)
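As an illustration of authenticating on the public key rather than on the certificate chain, a server-side trust manager could look roughly like this (the registeredKeys map and its persistence are hypothetical and entirely up to your own design):

    import java.security.cert.CertificateException;
    import java.security.cert.X509Certificate;
    import java.util.Base64;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import javax.net.ssl.X509TrustManager;

    final Map<String, String> registeredKeys = new ConcurrentHashMap<>();  // encoded public key -> client id

    X509TrustManager keyBasedTrust = new X509TrustManager() {
        @Override
        public void checkClientTrusted(X509Certificate[] chain, String authType) throws CertificateException {
            // Ignore the (self-signed) chain and authenticate on the public key alone.
            String key = Base64.getEncoder().encodeToString(chain[0].getPublicKey().getEncoded());
            registeredKeys.putIfAbsent(key, "new-client");  // unknown key => register as a new client
        }
        @Override
        public void checkServerTrusted(X509Certificate[] chain, String authType) {
            // Not used on the server side.
        }
        @Override
        public X509Certificate[] getAcceptedIssuers() {
            return new X509Certificate[0];  // don't restrict which issuers clients may present
        }
    };

Pass this trust manager to SSLContext.init() on the server; an empty accepted-issuers list is roughly what you want when clients self-sign their certificates.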

Use case of increased https security in JAVA

I am writing an application that should ensure a secured connection between two parties (call them Client and Server).
The Server should restrict which clients can connect using https. For this purpose, the server will issue a certain number of certificates that will be checked when a client tries to connect. If the certificate that the client is using is not in the trusted list, the connection will not be established.
This certificate should be distributed using some kind of USB device. So when a Client using my application tries to get something from the server over https, the application should read that certificate from the USB device and USE IT to establish the https connection. The private key should be kept secret on the device at all times.
So far I have managed to create local keystores on the client and server (JKS), add them to each other's trusted lists and use them to achieve a proper connection.
My question is: can client certificates be issued by a server and transported to clients, together with the private key required for the https connection? I don't want any data or keystore to be created on the client machine; everything required for establishing the https connection should be on that device. The device could have some procedures and an API to help this process and ensure the secrecy of the private key.
Any help will be greatly appreciated :)
can client certificates be issued by a server and transported to clients, together with the private key required for the https connection?
Technically, they can, but you're going to have to authenticate that connection by some other means if you want to make sure the private key only gets to its intended user. As far as your overall scheme is concerned, this doesn't really help. In addition, by sending the private key as data to the client, the client may be able to extract it one way or another.
If you can physically send a USB device, you can use a hardware cryptographic token that supports PKCS#11. Such tokens tend to have options to store a private key in a way that it can't be extracted (it can only be used with the device). They tend to come in two forms: as a smart card (in which case you need a reader) or as a USB token (it looks like a memory stick, but it's not). Depending on the model, the USB token can in fact be a smart card with an embedded reader.
Java supports PKCS#11 keystores, so if this token comes with a PKCS#11 driver/library, it could be used from Java.
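For what it's worth, a sketch of what that looks like on Java 9+ (the configuration path, library path and PIN are placeholders supplied by the token vendor):

    import java.security.KeyStore;
    import java.security.Provider;
    import java.security.Security;
    import javax.net.ssl.KeyManagerFactory;

    // pkcs11.cfg would contain something like:
    //   name = MyToken
    //   library = /usr/lib/libvendor-pkcs11.so
    Provider pkcs11 = Security.getProvider("SunPKCS11").configure("/path/to/pkcs11.cfg");
    Security.addProvider(pkcs11);

    KeyStore ks = KeyStore.getInstance("PKCS11", pkcs11);
    ks.load(null, "123456".toCharArray());  // token PIN; the private key itself never leaves the device

    KeyManagerFactory kmf = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
    kmf.init(ks, null);                     // PKCS#11 keystores don't use per-key passwords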
The normal client certificate approach to authentication doesn't work well if you don't trust the client to protect their credentials - which seems to be your scenario.
Putting the certificate on the USB device keeps it off the client machine's disk, but doesn't stop the client user from accessing it and distributing it to others. On the other hand, it reduces the risk of 3rd parties stealing the certificate from the client machine "at rest" - but only if the client protects the USB key properly. So you need to be clear about what threats you are trying to defend against, and who you trust.
The only way to make the certificate at all 'private' from the client user is to put it on some kind of tamper-resistant device, and use an approach that does not transmit the certificate to the client machine during authentication.
Compare your approach with those used for internet banking, where the customer is issued a device that can do challenge-response, using their bank card and PIN (two-factor authentication). The card details are protected from casual attack by the card's chip; but the system assumes that the client looks after their card and PIN, and reports thefts promptly (because it's their money at risk!). If the client is not motivated to look after the credentials, then this approach does not make sense.
If you just want to ensure that the client has an unsharable token, you could consider using SecurID devices, or similar, which are an off-the-shelf solution to your problem.

Digital Signature: security issue between signing on client or on server?

With Adobe Reader you can sign a document locally. Is it theoretically more secure than if the document were transported to the server and signed on the server (not necessarily using Adobe technology)? Could the user contest that the document could have been tampered with later on if it is signed on the server? How can we prove to him technically that this is impossible even when signed on the server, so that the legal issues are taken into account?
Are you living in the EU? I can describe the situation here. The legal aspects of signatures are sort of regulated by Directive 1999/93/EC. There will be an updated version of this, so there will be some changes in the details, but generally the Directive does distinguish between server-based signatures and signatures made by an individual locally, having sole control over the process.
Being in sole control of a local process has a lot of security advantages, among them:
Using what the Directive calls a Secure Signature Creation Device (SSCD), such as a smart card. Using a tamper-proof device is definitely considered an advantage, although it can still be exploited when the attacker is e.g. in control of the computer/OS the SSCD is attached to.
The "What you see is what you sign" principle that was vaguely described in the Directive. Ideally, you should be able to view the data you are about to sign on a trustworthy device. This is impossible to guarantee with server-side signatures.
Key escrow. If the server signs, the key is most likely also stored on the server. It's very, very hard to implement a solution where a key is on the server but only clients may access it; it's much more often the case that you need to trust the party operating the server.
That said, it is possible to secure the transport from client to server using e.g. TLS and still have a reasonably secure service. But as far as the law is concerned (at least in the EU), the notion of a "non-repudiation" signature, a signature which is meant to be issued by an individual person, is only possible in the context of "local signatures". Accredited CAs here won't issue non-repudiation certificates to legal entities, for example; such a certificate will only be issued to a real person, typically on an SSCD.
The downside of SSCDs has been that it is very hard to roll out large-scale deployments of software that makes use of them, especially across company/state boundaries, because there are still a lot of interoperability issues with the myriad of hardware devices, the cost, and the plain and simple fact that it's just less convenient.
Anything could happen to the document on its way to the server. The connection could be MitM-attacked, the server could have been tampered with, etc.
The #1 rule for cryptographic signatures is that signing must happen on a trusted machine in a trusted environment, preferably before the document even reaches a connected machine (i.e. signed offline, then transferred on an offline medium).
So in short: It should be signed on the client and nowhere else.
If the signature is made by the user (a human operator), then it's a question of how much the user trusts the server where the key resides.
Normally "the signature is made by the user on behalf of the user" means that the user owns the key used for signing. In that case it makes little sense to put the private key on the server. And if you do need this scheme, then either the signature is not made by the key owner, or it is not made on behalf of the individual making it.
But technically, signing the data on the server (as you describe) is possible, and in a properly implemented architecture the user should be able to get the signed copy back and manually validate the signed document to ensure that this is what he (or the server on his behalf) intended to sign.
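If the signature is a plain JCA signature (rather than a document format such as PAdES), that manual check is essentially the following sketch, assuming the user already has the document bytes, the signature bytes and the signer's certificate:

    import java.security.Signature;
    import java.security.cert.X509Certificate;

    // documentBytes, signatureBytes and signerCert are assumed to be available to the verifier.
    Signature verifier = Signature.getInstance("SHA256withRSA");
    verifier.initVerify(signerCert.getPublicKey());
    verifier.update(documentBytes);
    boolean genuine = verifier.verify(signatureBytes);  // true only if the document was not altered after signing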
Another question is whether the server doesn't (intentionally or due to security breach) use the private key of the user to sign anything besides the requested documents. This is extremely hard to ensure unless you have a server specifically crafted for exactly one operation (signing of something) and in this case you would probably deal with specialized hardware device (such as one offered by SafeNet), not with a generic Windows/Linux/... server operating system.
We have a distributed cryptography module in our SecureBlackbox product, which implements a scheme similar to what you describe, but the roles are usually reversed: the user possesses the key and uses it to locally sign a document which resides on the server and is not transferred to the client. In that module we use TLS to ensure the security of the channel, and signing is performed on the user's computer, so the private key remains strictly secret. However, the scheme you describe is also possible to implement using that module.
