Google Authentication errors when accessing token endpoint from JVM - java

We run a web application with a JVM backend (Java 7 update 75; code is in Scala, but I don't believe this is relevant). We use Google for authentication via Oauth.
There have been a handful of days over the last couple of months on which we have been intermittently unable to authenticate users.
The redirect to and from Google is successful, but when we then call the token_endpoint at https://www.googleapis.com/oauth2/v4/token to validate the authentication we sometimes get the following exception: javax.net.ssl.SSLHandshakeException: server certificate change is restricted during renegotiation.
This comment on another question led me to a JDK bug that can manifest as this exception (What means "javax.net.ssl.SSLHandshakeException: server certificate change is restricted during renegotiation" and how to prevent it?).
My working hypothesis is:
The bug (https://bugs.openjdk.java.net/browse/JDK-8072385) means that only the first entry in the Subject Alternative Name (SAN) list is checked. The exception above is thrown when the hostname being verified is in the SAN list, but not at the top of the list.
Yesterday (27th May 2015) we saw two different certificates being intermittently served from www.googleapis.com. The first (serial 67:1a:d6:10:cd:1a:06:cc) had a SAN list of DNS:*.googleapis.com, DNS:*.clients6.google.com, DNS:*.cloudendpointsapis.com, DNS:cloudendpointsapis.com, DNS:googleapis.com, whilst the second (serial 61:db:c8:52:b4:77:cf:78) had a SAN list of DNS:*.storage.googleapis.com, DNS:*.commondatastorage.googleapis.com, DNS:*.googleapis.com.
Due to the bug in the JVM, we can validate the first certificate, but the exception is thrown for the second (despite it being perfectly valid), as *.googleapis.com is not the first entry in its SAN list.
The fix is in the yet to be released 7u85 (no mention of when this will be available).
I've downgraded a single node of our cluster to 7u65, but the certificate seems to have been reverted at around the time we did this (the last error we saw was at 22:20 GMT), so it's hard to confirm whether the downgrade actually fixed it.
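For what it's worth, a minimal diagnostic sketch (hostname and port taken from the observation above; purely illustrative, not part of our application) that prints the serial number and SAN list of whichever certificate www.googleapis.com is serving at that moment:

import java.security.cert.X509Certificate;
import java.util.Collection;
import java.util.List;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class PrintServedCertificate {
    public static void main(String[] args) throws Exception {
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        try (SSLSocket socket = (SSLSocket) factory.createSocket("www.googleapis.com", 443)) {
            socket.startHandshake();
            X509Certificate cert = (X509Certificate) socket.getSession().getPeerCertificates()[0];
            System.out.println("Serial: " + cert.getSerialNumber().toString(16));
            Collection<List<?>> sans = cert.getSubjectAlternativeNames();
            if (sans != null) {
                for (List<?> san : sans) {
                    System.out.println("SAN: " + san.get(1)); // each entry is a [type, value] pair
                }
            }
        }
    }
}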
Has anyone else experienced this or something similar, and is there any workaround other than downgrading (which removes various other SSL/TLS checks)?

I am not really sure that your problem is related to a JVM bug.
There is a fix in Java 6 and above for CVE-2014-6457, "Triple Handshake attack against TLS/SSL connections (JSSE, 8037066)", which prevents peer certificates from changing during renegotiation.
Problem explanation:
A security vulnerability in all versions of the Transport Layer Security (TLS) protocol (including the older Secure Socket Layer (SSLv3)) can allow Man-In-The-Middle (MITM) type attacks where chosen plain text is injected as a prefix to a TLS connection. This vulnerability does not allow an attacker to decrypt or modify the intercepted network communication once the client and server have successfully negotiated a session between themselves.
However, if the potentially changed certificate is for the same identity as the last seen certificate then the connection is allowed.
Two identities are considered equal in this case:
There is a subject alternative name specified in both certificates which is an IP address and the IP address in both certificates is the same.
There is a subject alternative name specified in both certificates which is a DNS name and the DNS name in both certificates is the same.
The subject and issuer fields are present in both certificates and contain identical subject and issuer values.
Under any other conditions (i.e. the identity of the certificate has changed), a javax.net.ssl.SSLHandshakeException: server certificate change is restricted during renegotiation exception is raised.
Workaround:
Disable the protection (not recommended) by applying the following JVM argument: -Djdk.tls.allowUnsafeServerCertChange=true. This allows unsafe server certificate changes during renegotiation.
Disable SSLv3 in outgoing HTTPS connections. Java 7 supports TLSv1.1 and TLSv1.2 in client mode but defaults to TLSv1 in the TLS handshake, so you should enable TLSv1.1 and TLSv1.2 in client mode on Java 7 as well. Java 8 enables TLSv1.1 and TLSv1.2 in client mode (in addition to SSLv3 and TLSv1) and uses TLSv1.2 by default in the TLS handshake. If you are creating the connection programmatically and setting a socket factory, use TLS instead of SSL, as sketched below.
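A minimal sketch of that last point (the class name and the bare POST are illustrative only, not your actual OAuth client code): build a TLS-only SSLContext and use its socket factory for the token call, so SSLv3 is never offered and TLSv1.2 can be negotiated even on Java 7.

import java.net.URL;
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.SSLContext;

public class TokenEndpointTlsSketch {
    public static void main(String[] args) throws Exception {
        SSLContext ctx = SSLContext.getInstance("TLSv1.2"); // "TLS" also works; avoid "SSL"/"SSLv3"
        ctx.init(null, null, null); // default key managers and trust managers

        HttpsURLConnection conn = (HttpsURLConnection)
                new URL("https://www.googleapis.com/oauth2/v4/token").openConnection();
        conn.setSSLSocketFactory(ctx.getSocketFactory());
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        // ... write the token request body and read the response as usual ...
        System.out.println("HTTP status: " + conn.getResponseCode());
    }
}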
Anyway, update your post with the Google OAuth client code you run before calling the token_endpoint to validate the authentication, so we can see what might be happening.

Related

ActiveMQ SSLException with Java 1.8.0_271-b09 client

I'm running an ActiveMQ server with SSL authorization (via trust store).
The clients are written with Spring Boot and Camel. Each client has its own certificate.
When the client's Java version is updated to 1.8.0_271 the SSL connection suddenly fails. This can be found in the ActiveMQ logs:
javax.net.ssl.SSLException: Received fatal alert: unexpected_message
After downgrading to 1.8.0_261 everything is back to normal.
And here is where it gets really weird: my ActiveMQ truststore currently contains 232 certificates. When I delete 2 of them (it does not matter which ones) the connection with the 1.8.0_271 client works again.
This really does not make any sense to me. How can the number of items in the server's truststore have anything to do with the client's Java version?
Some updates:
I'm testing with the ActiveMQ Docker image
Changing the key store type from native JKS to PKCS #12 does not make a difference
Using Java 1.8.0_271 on the server side behaves the same but shows a different error message:
java.net.SocketException: Connection or outbound has closed
From the release notes
Improve Certificate Chain Handling
A new system property, jdk.tls.maxHandshakeMessageSize, has been added to set the maximum allowed size for the handshake message in TLS/DTLS handshaking. The default value of the system property is 32768 (32 kilobytes).
If your server requests client authentication, JSSE (for TLS versions below 1.3) sends a CertificateRequest message specifying acceptable CA names derived from the certificates in your truststore. The number of certificates in your truststore therefore affects the size of this message and may push it over the limit, in which case the client rejects it (although I'm not sure I like using unexpected_message for this case).
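If pruning the truststore is not an option, the property from the release note can be raised instead. A minimal sketch, assuming a 64 KB limit is acceptable in your environment (passing it as a -D argument on the JVM command line is the more reliable option):

public class RaiseHandshakeLimit {
    public static void main(String[] args) {
        // Must run before any JSSE/TLS classes are initialised;
        // equivalent to starting the JVM with -Djdk.tls.maxHandshakeMessageSize=65536
        System.setProperty("jdk.tls.maxHandshakeMessageSize", "65536");

        // ... create the ActiveMQ SSL connection as usual after this point ...
    }
}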

Enforce Two-Way SSL in Java CXF clients

Two-Way SSL - or mutual authentication - is typically dictated in HTTPS by the server. For example, this tutorial explains how to set up WildFly application server to require webservice clients to present a certificate during communication.
However, in our case we need to enforce Two-Way SSL on the client side. That means our client is configured with a client certificate so that it can supply the certificate during handshake. If a server we are connecting to does not ask for the certificate, we want to abort communication.
Descriptions of the SSL handshake, like the diagram in the section titled "The SSL Protocol" here (a bit further down), explain that the first thing to happen is the selection of a cipher suite:
"1. Client hello - The client sends the server information including the
highest version of SSL it supports and a list of the cipher suites it
supports. (TLS 1.0 is indicated as SSL 3.1.) The cipher suite
information includes cryptographic algorithms and key sizes."
On the Java side (more specifically CXF, in my case) it's possible to filter cipher suites ("cipherSuitesFilter"), so I thought it would be possible to limit the cipher suites to those requiring mutual authentication. But I can't find any link between cipher suites and two-way SSL. For example, this page notes:
authentication algorithm - dictates how server authentication and (if needed) client authentication will be carried out.
I'm starting to think that means the cipher suite only dictates how client authentication is done, not whether client authentication is required.
That leaves me at a dead end. Is there any other way to enforce client authentication on the client side?
Right now the only solution I can think of is finding the right hook method to run after the SSL handshake has completed, checking whether the connection uses client authentication and aborting if it does not. But I'd prefer a common approach for this, if such a thing exists.
We didn't find a better solution than the one I already mentioned in my question.
As a client, we can only check whether a connection was established using a client certificate. That does not guarantee that the server thoroughly verified the certificate, just that it requested a certificate.
Our implementation is a custom javax.net.ssl.SSLSocketFactory whose createSocket methods check whether javax.net.ssl.SSLSession.getLocalCertificates() returns anything. If not, a subclass of javax.net.ssl.SSLException is thrown to abort communication; a sketch of the idea is below.
The socket factory is set via org.apache.cxf.configuration.jsse.TLSClientParameters.setSSLSocketFactory(SSLSocketFactory).
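A minimal sketch of that approach, assuming the standard JSSE APIs (the class and method names here are illustrative, not our exact code):

import java.io.IOException;
import java.net.InetAddress;
import java.net.Socket;
import javax.net.ssl.SSLException;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class ClientAuthEnforcingSocketFactory extends SSLSocketFactory {

    private final SSLSocketFactory delegate;

    public ClientAuthEnforcingSocketFactory(SSLSocketFactory delegate) {
        this.delegate = delegate;
    }

    @Override
    public Socket createSocket(String host, int port) throws IOException {
        return verified(delegate.createSocket(host, port));
    }

    @Override
    public Socket createSocket(String host, int port, InetAddress localHost, int localPort) throws IOException {
        return verified(delegate.createSocket(host, port, localHost, localPort));
    }

    @Override
    public Socket createSocket(InetAddress host, int port) throws IOException {
        return verified(delegate.createSocket(host, port));
    }

    @Override
    public Socket createSocket(InetAddress address, int port, InetAddress localAddress, int localPort) throws IOException {
        return verified(delegate.createSocket(address, port, localAddress, localPort));
    }

    @Override
    public Socket createSocket(Socket s, String host, int port, boolean autoClose) throws IOException {
        return verified(delegate.createSocket(s, host, port, autoClose));
    }

    @Override
    public String[] getDefaultCipherSuites() {
        return delegate.getDefaultCipherSuites();
    }

    @Override
    public String[] getSupportedCipherSuites() {
        return delegate.getSupportedCipherSuites();
    }

    // Force the handshake, then check whether a local (client) certificate was
    // actually sent; getLocalCertificates() is null if the server never asked for one.
    private Socket verified(Socket socket) throws IOException {
        SSLSocket ssl = (SSLSocket) socket;
        ssl.startHandshake();
        if (ssl.getSession().getLocalCertificates() == null) {
            ssl.close();
            throw new SSLException("Server did not request our client certificate; aborting");
        }
        return ssl;
    }
}

An instance wrapping the factory built from your keystore (or, for a plain sketch, (SSLSocketFactory) SSLSocketFactory.getDefault()) is then what gets passed to TLSClientParameters.setSSLSocketFactory(...).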

Java seems to accept certificate with ANY CN [duplicate]

This question already has answers here:
Writing a SSL Checker using Java
(2 answers)
Closed 5 years ago.
This question is NOT a duplicate of the question pointed to. Nothing in that question mentions the fact that TLS does not perform hostname verification by itself.
I have an ActiveMQ instance and a client in Java. The client uses JmsTemplate (org.springframework.jms.core.JmsTemplate) with the factory org.apache.activemq.ActiveMQSslConnectionFactory. I have created self-signed certificates and, with them, a trust store and keystore. The trust stores and keystores are read by both programs; I checked this by running both programs with
-Djavax.net.debug=all
Now my problem is that the client seems to completely ignore server hostname verification. The client connects to ActiveMQ using the URL:
ssl://localhost:61616?jms.useCompression=true
Now, I tried to check whether everything would fail as expected if I changed the CN on ActiveMQ's certificate, and well, it didn't go as expected. I changed the CN to, e.g.:
CN=google.com
or to:
CN=some.random.xxx333aaa.net.pp
but all these values seem to be fine with Java. Also note that there are no SANs (that is, subjectAltNames). What's more, I tried to connect to ActiveMQ with such a certificate installed on a different machine, and it all seems to work. Which is NOT what I want.
Also: I have finally uninstalled all Java versions and installed 1.8.0_144 using only the JDK installer (it installs both the JRE and the JDK), installed jce_policy-8 in both places, and did the same on the remote machine too.
If you examine RFC 2246 (TLS) and RFC 2818 (HTTPS) you will discover that hostname verification is part of HTTPS, not part of TLS. In TLS it is entirely up to the application to perform that authorization step.
So in fact my question is: how to force hostname verification?
See this answer.
Ok, I think I found an answer. Check this link:
https://issues.apache.org/jira/browse/AMQ-5443
and link mentioned in link above:
https://tersesystems.com/2014/03/23/fixing-hostname-verification/
It seems that TLS, contrary to what I thought, DOES NOT PERFORM HOSTNAME VERIFICATION. This is absolutely stunning, but it seems that this is exactly the case. If no one provides a better answer I'll accept my own.
EDIT: Also see this:
https://docs.oracle.com/javase/8/docs/technotes/guides/security/jsse/JSSERefGuide.html
and look specifically at this part:
Cipher Suite Choice and Remote Entity Verification
The SSL/TLS protocols define a specific series of steps to ensure a protected connection. However, the choice of cipher suite directly affects the type of security that the connection enjoys. For example, if an anonymous cipher suite is selected, then the application has no way to verify the remote peer's identity. If a suite with no encryption is selected, then the privacy of the data cannot be protected. Additionally, the SSL/TLS protocols do not specify that the credentials received must match those that peer might be expected to send. If the connection were somehow redirected to a rogue peer, but the rogue's credentials were acceptable based on the current trust material, then the connection would be considered valid.
When using raw SSLSocket and SSLEngine classes, you should always check the peer's credentials before sending any data. The SSLSocket and SSLEngine classes do not automatically verify that the host name in a URL matches the host name in the peer's credentials. An application could be exploited with URL spoofing if the host name is not verified.
Protocols such as HTTPS (HTTP Over TLS) do require host name verification. Applications can use HostnameVerifier to override the default HTTPS host name rules. See HttpsURLConnection for more information.
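For completeness, a minimal sketch of switching on JSSE's built-in verification for a raw socket, which is essentially what the links above describe (the host and port are placeholders; how you reach the underlying SSLSocket or SSLParameters from ActiveMQSslConnectionFactory is a separate question):

import javax.net.ssl.SSLParameters;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class HostnameVerificationSketch {
    public static void main(String[] args) throws Exception {
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        try (SSLSocket socket = (SSLSocket) factory.createSocket("localhost", 61616)) {
            SSLParameters params = socket.getSSLParameters();
            // "HTTPS" tells JSSE to match the peer's certificate against the host name we connected to
            params.setEndpointIdentificationAlgorithm("HTTPS");
            socket.setSSLParameters(params);
            socket.startHandshake(); // now fails if the certificate does not match the host name
        }
    }
}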

SMTP TLS certificate

I'm having some problems understanding how TLS/SSL is working for email.
I have some questions.
On my development machine, if I debug, the following code fails the first time around on the "sslSocket.startHandshake()" line, but if I try it again straight away it works fine.
The error message that I'm getting is: "Remote host closed connection during handshake".
When I deploy the same code to our staging environment and send an email the code is working fine first time.
Both the development and staging servers are in the same network and neither has any anti-virus programs running.
The only reason I can think of for why it does not work the first time around in the development environment is that I'm stepping through the code with the debugger, which slows things down.
Do you have any knowledge as to why I am receiving this error?
The code underneath is creating an SSL Socket. I'm curious to know if this code is enough for the connection with the mail server to be secure. Are these SSLSocketFactory classes dealing with certificates themselves?
2a) Or do I still need to specify a certificate somehow?
2b) Or is this code getting the certificate from the server and using the certificate to encrypt the data and send the encrypted data back and forth to the email server?
I know that it should work like it is described here:
RFC 3207 defines how SMTP connections can make use of encryption. Once a connection is established, the client issues a STARTTLS command. If the server accepts this, the client and the server negotiate an encryption mechanism. If the negotiation succeeds, the data that subsequently passes between them is encrypted.
2c) Is the code underneath doing this?
socket.setKeepAlive(true);
SSLSocket sslSocket = (SSLSocket) ((SSLSocketFactory) SSLSocketFactory.getDefault()).createSocket(
socket,
socket.getInetAddress().getHostAddress(),
socket.getPort(),
true);
sslSocket.setUseClientMode(true);
sslSocket.setEnableSessionCreation(true);
sslSocket.setEnabledProtocols(new String[]{"SSLv3", "TLSv1"});
sslSocket.setKeepAlive(true);
// Force handshake. This can throw!
sslSocket.startHandshake();
socket = sslSocket;
in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
out = new BufferedWriter(new OutputStreamWriter(socket.getOutputStream()));
On my development machine, if I debug, the following code fails the first time around on the "sslSocket.startHandshake()" line, but if I try it again straight away it works fine.
The error message that I'm getting is: "Remote host closed connection during handshake".
The only reason I can think of for why it does not work the first time around in the development environment is that I'm stepping through the code with the debugger, which slows things down.
If you just do startHandshake() again with the underlying socket closed it should never work. If you go back to doing the TCP connection (e.g. new Socket(host,port)) and the initial SMTP exchange and STARTTLS, then yes I would expect it to avoid whatever problem affected the previous connection.
Yes, the server timing out because of the delay while you were debugging is quite possible, but to be certain you need to check logs on the server(s).
The code underneath is creating an SSL Socket. I'm curious to know if this code is enough for the connection with the mail server to be secure. Are these SSLSocketFactory classes dealing with certificates themselves?
Indirectly, yes. SSLSocketFactory creates an SSLSocket linked to an SSLContext which includes a TrustManager which is normally loaded from a truststore file. Your code defaults to the default SSLContext which has a TrustManager loaded from the default truststore, which is the file jssecacerts if present and otherwise cacerts in the lib/security directory in the JRE you are running. If your JRE hasn't been modified (by you or anyone else authorized on your system), depending on your variant or packaging of Java the installed JRE usually has no jssecacerts and contains or links to a cacerts file that (initially) contains root certs for about a hundred 'well-known' or established certificate authorities like Symantec, GoDaddy, Comodo, etc.
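As an aside, you can see what that default trust material contains with a few lines of JSSE; a minimal sketch (the class name is just illustrative) that loads the same default truststore the default SSLContext would use and counts the trusted root CAs:

import java.security.KeyStore;
import javax.net.ssl.TrustManagerFactory;
import javax.net.ssl.X509TrustManager;

public class ListDefaultTrustAnchors {
    public static void main(String[] args) throws Exception {
        TrustManagerFactory tmf =
                TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init((KeyStore) null); // null means: use the JRE default truststore (jssecacerts/cacerts)
        X509TrustManager tm = (X509TrustManager) tmf.getTrustManagers()[0];
        System.out.println("Trusted root CAs: " + tm.getAcceptedIssuers().length);
    }
}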
2a) Or do I still need to specify a certificate somehow?
Since the handshake, when it does complete, is successful, obviously not.
2b) Or is this code getting the certificate from the server and using the certificate to encrypt the data and send the encrypted data back and forth to the email server?
Kind of/sort of/not quite. With some exceptions not applicable here, in an SSL/TLS handshake the server always provides its own certificate and usually intermediate or 'chain' certificates that link its cert to a trusted root cert (such as the abovementioned Symantec etc). The server cert is always used to authenticate the server, and is used, sometimes alone but often combined with other mechanisms (particularly ephemeral Diffie-Hellman, DHE, or its elliptic-curve variant ECDHE), to establish a set of symmetric keys which are then used to encrypt and authenticate the data in both directions. For a more complete explanation see the canonical question and (multi-part!) answer on security.SX: https://security.stackexchange.com/questions/20803/how-does-ssl-work/
2c) Is the code underneath doing this?
It is starting an SSLv3 or TLSv1 client-side session on an existing socket. I'm not sure what other question you have here.
You might be better off leaving out the setEnabledProtocols(). Sun/Oracle Java version 8, which is the only one now supported, supports TLS 1.0, 1.1 and 1.2 by default. 1.1 and especially 1.2 are definitely better than 1.0 and should definitely be offered, so that if the server supports them they get used. (Sun/Oracle 7 is more problematic; it implements 1.1 and 1.2 but does not enable them client side by default. There I would look at getSupportedProtocols() and, if 1.1 and 1.2 are supported but not enabled, enable them. But if possible I would just upgrade to 8. Other versions of Java, notably IBM, differ significantly in crypto details.)
SSLv3 should not be offered unless absolutely necessary; it is now badly broken by POODLE (search on security.SX for dozens of Qs about POODLE). I would try without it, and only if the server insists on it re-enable it temporarily, along with TLS 1.0 through 1.2 whenever possible, and simultaneously urge the server operator to upgrade so I can remove it again.
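For example, a minimal sketch of that Java 7 case (the protocol names are the standard JSSE ones; adjust to whatever getSupportedProtocols() actually reports on your JRE):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import javax.net.ssl.SSLSocket;

public final class EnableModernTls {
    // Enable every TLS version the runtime supports (but not SSLv3) on an
    // already-created SSLSocket, so the best mutually supported version is used.
    public static void enableAllTls(SSLSocket sslSocket) {
        List<String> protocols = new ArrayList<String>();
        for (String p : sslSocket.getSupportedProtocols()) {
            if (p.startsWith("TLSv1")) { // TLSv1, TLSv1.1, TLSv1.2 (and TLSv1.3 on newer JREs)
                protocols.add(p);
            }
        }
        sslSocket.setEnabledProtocols(protocols.toArray(new String[protocols.size()]));
        System.out.println("Offering: " + Arrays.toString(sslSocket.getEnabledProtocols()));
    }
}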

java 1.6 TLS1.2 support using proxy nginx/ squid solution issues

I have a legacy java web application that makes calls to an external webservice. The provider of that service is turning off TLS1.0 support. So, I am trying to see how the application can continue to work with the service.
The options I have seen are a) use the BouncyCastle JCE instead of the Java JCE (http://boredwookie.net/index.php/blog/how-to-use-bouncy-castle-lightweight-api-s-tlsclient/), which I guess requires a code change and recompile (we don't have the luxury of doing that), or b) use a proxy server (https://www.reddit.com/r/sysadmin/comments/48gzbi/proxy_solution_to_bump_tls_10_connection_to_tls_12/).
I have tried an nginx proxy - it doesn't seem to handle the switch between the incoming TLS1.0 and the TLS1.2 that the end server expects.
server {
    listen 443 ssl;
    server_name proxy.mydomain.com;

    ssl_certificate D:/apps/openssl/proxy.mydomain.com.cert;
    ssl_certificate_key D:/apps/openssl/proxy.mydomain.com.private;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:SEED:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!RSAPSK:!aDH:!aECDH:!EDH-DSS-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA:!SRP;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_pass https://fancyssl.hboeck.de/;
    }
}
This fails with a 502 Bad Gateway error since https://fancyssl.hboeck.de only supports TLS1.2, but it works with https://www.google.com, which supports TLS1.0.
I am doing this on Windows.
It's not TLSv1.2, it's lack of SNI leading to renegotiation.
First, I set up nginx (1.8.1/Windows) with a config like yours, except using my own key & cert and proxying to my own test server. It worked fine, connecting from a Java 6 requester with TLSv1.0 and to the server with TLSv1.2 (and even ECDHE-RSA-AES256GCM-SHA384, one of the 'best' ciphersuites), and returned pages just fine. I tried fancyssl.hboeck.de and got 502 like you.
With wireshark I saw that nginx does not send SNI (by default), and at least at the IPv4 address 46.4.40.249 (I don't have IPv6) that server apparently hosts more than one domain, because without SNI it provides a different (and expired!) certificate, for *.schokokeks.org, and after the first application data (the request) it sends an encrypted handshake (a renegotiation request -- which nginx does not honor). Testing with openssl s_client confirms that with SNI the server immediately sends the page but without it renegotiates first; repointing nginx at openssl s_server confirms that if the server requests renegotiation, receives no response, and closes, nginx treats that as 502.
I would guess that Apache is renegotiating because it realizes the requested Host is not covered by the certificate -- except that it again uses the 'wrong' certificate. I haven't tried to track that part down.
Google does support TLSv1.2 (and ECDHE-RSA-AESGCM) when I connect, but even without SNI doesn't renegotiate, presumably because it's such high volume nothing else runs on www.google.com servers and there's no ambiguity. My test server doesn't have vhosts so didn't need SNI.
The nginx documentation reveals a directive, proxy_ssl_server_name, which can be set to on to enable SNI; with that, proxying to this server works (see the snippet below).
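Concretely, assuming the config from your question, the only change should be the extra directive (valid at http, server or location level in nginx 1.7.0+):

    location / {
        proxy_pass https://fancyssl.hboeck.de/;
        proxy_ssl_server_name on;   # send SNI to the upstream server
    }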
FYI: several of the statements on that webpage are wrong, although its conclusion (if possible use TLSv1.2 with ECDHE or DHE and AES-GCM) is good.
Also, most of your ssl_ciphers string is useless, but you didn't ask about that.
ssl_ciphers HIGH:SEED:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!RSAPSK:!aDH:!aECDH:!EDH-DSS-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA:!SRP
HIGH is an excellent start.
SEED is useless in a server used (only) by a Java/JSSE client, because it's not implemented on the Java side. Even outside of Java it was pretty much used only in South Korea, where it was created as an alternative to DES or IDEA, and even there it is mostly obsoleted by ARIA, which is an alternative to AES -- but ARIA is not implemented by OpenSSL and hence not by nginx.
!aNULL is probably unneeded because JSSE disables 'anonymous' suites by default, but here it's worth keeping as defense in depth.
!eNULL does nothing; no eNULL suites are in HIGH, or DEFAULT, or even ALL. You can only get them explicitly or with the bizarre COMPLEMENTOFALL -- which you shouldn't.
!EXPORT !DES !RC4 do nothing; none of them are in HIGH. If instead you started from DEFAULT on older versions of OpenSSL, or from ALL, then they would be good.
!PSK is unneeded; nginx doesn't appear to configure for PSK and JSSE doesn't implement it anyway.
!RSAPSK is ignored because OpenSSL doesn't implement that key exchange, and if it did, those suites are already covered as above.
!aDH !aECDH are covered by !aNULL and thus do nothing.
!EDH-DSS-DES-CBC3-SHA is silly; there's no reason to exclude this one suite when you keep other DHE_DSS and 3DES suites.
!KRB5-DES-CBC3-SHA is ignored because OpenSSL doesn't implement Kerberos, and if it did, nginx wouldn't be configured for it; plus again it would be silly to exclude one suite while keeping similar ones.
!SRP is unneeded; like PSK nginx apparently doesn't configure and JSSE doesn't implement.
So: HIGH:!aNULL is all you need.
