Can I trust Java SecurityManager sandbox?

I'm writing a JavaFX2 application that accepts arbitrary code loaded from remote locations. For me, using a custom SecurityManager, ClassLoader and ProtectionDomain was the way to go. Unfortunately this seems to be the same setup that is used to sandbox applets, which has been the source of a lot of security exploits and has in turn persuaded people to fear the Java Web Plugin and remove it from their OS entirely.
Is the Java sandbox a secure environment in which to run untrusted code, or is it just the Java Web Plugin as a whole that is insecure?

The security manager provides your app. with exactly as much protection as it provided the plug-in. Which was, given the security bugs, 'not much'.
It currently plugs the known security bugs (AFAIU). But as in any complex plug-in there are probably more, yet to be discovered, or possibly to be introduced in new versions or new APIs.
So basically, your code should go somewhat beyond a standard security manager, black-listing entire packages and (if need be) providing utility methods through which to perform activity normally handled by those packages.
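To make that concrete, here is a minimal sketch (not a hardened implementation) of a SecurityManager that black-lists whole packages; the package names are illustrative assumptions, and a real list would be longer:

    public class BlacklistingSecurityManager extends SecurityManager {

        // Illustrative examples only - choose packages based on your own threat model.
        private static final String[] BLACKLISTED_PACKAGES = {
                "java.lang.reflect",   // e.g. deny reflective poking at internals
                "javax.script"         // e.g. deny nested script engines
        };

        @Override
        public void checkPackageAccess(String pkg) {
            // Keep the standard checks driven by the package.access security property.
            super.checkPackageAccess(pkg);
            for (String banned : BLACKLISTED_PACKAGES) {
                if (pkg.equals(banned) || pkg.startsWith(banned + ".")) {
                    throw new SecurityException("Access to package " + pkg + " is denied");
                }
            }
        }
    }

Install it with System.setSecurityManager(new BlacklistingSecurityManager()). Note that checkPackageAccess only has an effect when a class loader consults it; the JDK's application class loader does, and a custom ClassLoader (as in the question) should call it for every class it loads.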
But then, that advice is only the first point of what would be a 20+ point list, of which I might be able to name 2 or 3 of the possible things an app. might need to guard against when running untrusted code. Though that is not the question..
Is the Java sandbox a secure environment in which to run untrusted code..
No. Java security might provide a good starting point for securing against untrusted code, but it would need to be expanded in ways specific to the app., and combined with other elements, in order to be suited to the task required. Even then, there are the 'unknown security bugs' (in both the JRE and your own security efforts) to consider.

Related

How to run Apache Sling with an enabled SecurityManager?

Has anybody run Apache Sling with an enabled Java SecurityManager? That'd need a special java.policy file to allow the actions performed by all deployed bundles, and it'd be extremely helpful to have a basic version that already allows what's needed by the bundles provided with the basic Sling Starter, to which one could then add policies for additional deployed code.
I'd also be interested to hear whether employing the SecurityManager is infeasible in a Sling setting, perhaps due to its design properties (such as the ability to add JSPs to the JCR at runtime).
Background: if you run code from several tenants on one server, it might be necessary to separate their code from each other. While OSGi does have some mechanisms to separate bundles from each other, it'd be trivial for malicious code to use e.g. Java reflection to grab internal stuff from services provided by other bundles. An enabled security manager might at least make that much more difficult.
(I do realize that even with a security manager it's probably quite possible for malicious code to use bugs and design flaws to get access to resources of other users on the system, and that probably the only way to seriously separate code from different tenants is to use different servers. But at least one can try to make it hard.)
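For what it's worth, a bare-bones java.policy sketch of the kind the question asks about might look like the following; the codeBase URLs and the specific grants are illustrative assumptions, not a tested Sling Starter policy:

    grant codeBase "file:${sling.home}/launchpad/-" {
        // The launcher itself is trusted.
        permission java.security.AllPermission;
    };

    grant codeBase "file:${sling.home}/felix/-" {
        // Deployed bundles get a narrower set of permissions.
        permission java.io.FilePermission "${sling.home}${/}-", "read,write,delete";
        permission java.util.PropertyPermission "*", "read";
        permission java.net.SocketPermission "localhost:1024-", "connect,listen,accept,resolve";
    };

The JVM would then be started with -Djava.security.manager -Djava.security.policy==/path/to/sling.policy (the double '==' replaces the default policy instead of adding to it).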

What are common Java vulnerabilities?

What are common Java vulnerabilities that can be exploited to gain some sort of access to a system? I have been thinking about it recently, and haven't been able to come up with much of anything: integer overflow, maybe? A race condition - what does that give you?
I am not looking for things like "sql injection in a web app". I am looking for something analogous to the buffer overflow in C/C++.
Any security experts out there that can help out? Thanks.
Malicious Code injection.
Because Java (or any language using an interpreter at runtime) performs linkage at runtime, it is possible to replace the expected JARs (the equivalent of DLLs and SOs) with malicious ones at runtime.
This is a vulnerability, and it has been combated since the first release of Java using various mechanisms.
There are protections in place in the class loaders to ensure that java.* classes cannot be loaded from outside rt.jar (the runtime JAR).
Additionally, security policies can be put in place to ensure that classes loaded from different sources are restricted to performing only a certain set of actions - the most obvious example is that of applets. Applets are constrained by the Java security policy model from reading or writing the file system, etc.; signed applets can request certain permissions.
JARs can also be signed, and these signatures can be verified at runtime when they're loaded.
Packages can also be sealed to ensure that they come from the same code source. This prevents an attacker from placing into your package classes that are capable of performing 'malicious' operations.
If you want to know why all of this is important, imagine a JDBC driver injected into the classpath that is capable of transmitting all SQL statements and their results to a remote third party. Well, I assume you get the picture now.
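As a small illustration of the runtime signature verification mentioned above, here is a sketch of checking a signed JAR before trusting its classes; the file name is an illustrative assumption:

    import java.io.InputStream;
    import java.util.Enumeration;
    import java.util.jar.JarEntry;
    import java.util.jar.JarFile;

    public class JarSignatureCheck {
        public static void main(String[] args) throws Exception {
            // Opening with verify=true makes JarFile check signatures as entries are read.
            try (JarFile jar = new JarFile("driver.jar", true)) {
                Enumeration<JarEntry> entries = jar.entries();
                while (entries.hasMoreElements()) {
                    JarEntry entry = entries.nextElement();
                    if (entry.isDirectory()) {
                        continue;
                    }
                    // Fully reading an entry triggers verification; a tampered
                    // entry makes the read throw a SecurityException.
                    try (InputStream in = jar.getInputStream(entry)) {
                        byte[] buffer = new byte[8192];
                        while (in.read(buffer) != -1) { /* drain */ }
                    }
                    if (entry.getCodeSigners() == null && !entry.getName().startsWith("META-INF/")) {
                        System.out.println("Unsigned entry: " + entry.getName());
                    }
                }
            }
        }
    }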
After reading most of the responses I think your question has been answered in an indirect way, so I just want to point it out directly. Java doesn't suffer from the same problems you see in C/C++ because it protects the developer from these types of memory attacks (buffer overflow, heap overflow, etc.). Those things can't happen. Because there is this fundamental protection in the language, security vulnerabilities have moved up the stack.
They're now occurring at a higher level: SQL injection, XSS, DoS, etc. You could figure out a way to get Java to remotely load malicious code, but to do that you'd need to exploit some other vulnerability at the services layer to remotely push code into a directory and then trigger Java to load it through a class loader. Remote attacks are theoretically possible, but with Java they're more complicated to exploit. And often, if you can exploit some other vulnerability, why not just go after that and cut Java out of the loop? World-writable directories that Java code is loaded from could be used against you, but at that point is it really Java that's the problem, or your sysadmin, or the vendor of some other exploitable service?
The only vulnerabilities with remote-code potential I've seen in Java over the years have been in native code the VM loads: the libzip vulnerability, the GIF file parsing, etc. And that's only been a handful of problems, maybe one every 2-3 years. And again, the vulnerability is in native code loaded by the JVM, not in Java code itself.
As a language Java is very secure. Even the issues I discussed that can theoretically be attacked have hooks in the platform to prevent them; signing code thwarts most of this. However, very few Java programs run with a SecurityManager installed, partly because of performance and usability, but mainly because these vulnerabilities are very limited in scope at best. Remote code loading in Java hasn't risen to the epidemic levels that buffer overflows did in the late 90s/2000s for C/C++.
Java isn't bullet proof as a platform, but it's harder to exploit than the other fruit on the tree. And hackers are opportunistic and go for that low hanging fruit.
I'm not a security expert, but there are some modules in our company that we can't write in Java because it is so easy to decompile Java bytecode. We looked at obfuscation, but if you want real obfuscation it comes with a lot of problems (performance hit, loss of debug information).
One could steal our logic, replace the module with a modified version that returns incorrect results, etc.
So compared to C/C++, I guess this is one "vulnerability" that stands out.
We also have a software license mechanism built into our Java modules, but this can also be easily hacked by decompiling and modifying the code.
Including third-party class files and calling upon them basically means you are running insecure code. That code can do anything it wants if you don't have security turned on.

Does Java have a built-in Antivirus? Is it true?

Does Java have a built-in Antivirus?
One of my friends told me there is in the JVM itself - it's called the "sandbox". Is it true?
Java does have a security-related concept called "sandbox", but it works very differently from typical anti-virus products. The latter usually try to catch viruses via signatures or code analysis before they are executed.
The Java sandbox, on the other hand, allows you to run Java code while withholding from it access to system resources that could be used to do bad things, e.g. no access to any files.
However, only Java applets and Java Web Start applications run in a sandbox by default. Regular Java applications have full access to your system.
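To make the "withholding access" idea concrete, here is a minimal sketch of the building block behind the sandbox: running a piece of code with an empty-permission ProtectionDomain (assumes the default policy; the file path is only illustrative):

    import java.security.AccessControlContext;
    import java.security.AccessController;
    import java.security.Permissions;
    import java.security.PrivilegedAction;
    import java.security.ProtectionDomain;

    public class SandboxDemo {
        public static void main(String[] args) {
            // Regular applications have no security manager; install the default one.
            System.setSecurityManager(new SecurityManager());

            // An AccessControlContext backed by a ProtectionDomain with no permissions.
            AccessControlContext sandbox = new AccessControlContext(
                    new ProtectionDomain[] { new ProtectionDomain(null, new Permissions()) });

            try {
                // Code evaluated under this context has no permissions,
                // so even probing for a file is denied.
                AccessController.doPrivileged((PrivilegedAction<Boolean>) () ->
                        new java.io.File("/etc/passwd").exists(), sandbox);
            } catch (SecurityException denied) {
                System.out.println("Sandboxed code was denied file access: " + denied);
            }
        }
    }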
Doubtful. Perhaps he was referring to the fact that the JVM (somewhat) sandboxes execution of a Java program, to help prevent it from damaging the host OS.
No they do not have a built-in antivirus. Did he tell you this on April 1st?
To clear your doubt, sandbox is not an antivirus.
Does Java have an in-built antivirus?
No.
Java has a security model built-in that allows it to execute untrusted code. This model is called "the sandbox model".
It is not a virus-scanner. Instead, it limits the possibilities of untrusted code so that applets on a webpage do not have access to files on your computer's hard drive.
You can read more about Java's Security Architecture.
Java uses a class called SecurityManager to determine what a program can or cannot do, so in some sense it implements anti-exploit code, but not specifically anti-virus.
http://java.sun.com/j2se/1.4.2/docs/api/java/lang/SecurityManager.html
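To illustrate, this is the pattern the JDK's own libraries follow before a sensitive operation; it is only a sketch (Files.readAllBytes performs its own check anyway, so the explicit call here is purely illustrative):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class GuardedRead {

        // Asks the installed SecurityManager (if any) whether the calling code
        // may read the file; throws SecurityException if the policy says no.
        public static byte[] readConfig(String path) throws IOException {
            SecurityManager sm = System.getSecurityManager();
            if (sm != null) {
                sm.checkRead(path);
            }
            return Files.readAllBytes(Paths.get(path));
        }
    }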
Anti-virus in the usual sense of the word detects viruses in files and removes them. This is not built into Java.
No. What it does is run the program in an environment that is (somewhat) separated from the operating system, which should, in most cases, prevent malicious code from doing any damage. Sort of like running VMware - viruses and other malware have no influence on the host OS.
I heard garbage collection also acts as a handy anti-bacterial, making your applications 99.99% free from germs.
Wash after every use.
The closest thing in the JRE to literal "anti-virus" is the blacklisting feature for signed jars. If a signed jar is found to cause a security issue, it can be blocked. This has been designed for accidental security flaws rather than blocking deliberately malicious code. Also it is possible to revoke a certificate using a CRL (Certificate Revocation List) or OCSP (Online Certificate Status Protocol) if enabled. Conventional anti-virus is left to specialist anti-virus products, rather than trying to produce a half-baked alternative.
(Today's anti-virus products do more than just check for known viruses.)

Should I use Security Manager in Java web applications?

Is it sufficient to secure a Java web application with the rights of the user that is running the application server process, or is it reasonable to also use a SecurityManager with a suitable policy file?
I have usually done the former and not the latter, but some customers would like us to also use a SecurityManager that would explicitly grant permissions to every third-party component, to be sure there isn't any evil code lurking there.
I've seen some Servlet containers, like Resin, propose not using a SecurityManager so as not to slow things down. Any thoughts?
While I hate to ever recommend not using a security feature, it's my opinion that a SecurityManager is more intended for situations where untrusted or third-party code is executing in the JVM. Think applets, or a hosted, shared app server scenario. If you have complete control over the app server and are not running anybody else's code, I think it's redundant. Enabling the SecurityManager does have a significant performance impact in my experience.
There is no simple yes/no answer to your question, because it really depends: what do you want to secure, and what do you want to secure it from?
For example, I've used a SecurityManager to implement IP filtering and allow only whitelisted IP addresses to connect to my application. If you just want to disallow access to disk files, maybe running the application as a user with lesser privileges is a better solution.
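A rough sketch of that kind of IP filtering (not the actual code used; the whitelist and the choice to hook checkAccept are illustrative assumptions):

    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Set;

    public class IpFilterSecurityManager extends SecurityManager {

        private final Set<String> allowedHosts;

        public IpFilterSecurityManager(Set<String> allowedHosts) {
            this.allowedHosts = allowedHosts;
        }

        @Override
        public void checkAccept(String host, int port) {
            // Called whenever a ServerSocket accepts a connection from host:port.
            if (!allowedHosts.contains(host)) {
                throw new SecurityException("Connection from " + host + " rejected");
            }
            // Whitelisted hosts still go through the normal policy check,
            // so the policy must grant the matching SocketPermission.
            super.checkAccept(host, port);
        }

        public static void main(String[] args) {
            System.setSecurityManager(new IpFilterSecurityManager(
                    new HashSet<>(Arrays.asList("127.0.0.1"))));
        }
    }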
If you don't trust third party plugins at all, remember that once you allow execution of plugin code, that plugin can crash your application if it wants to even if you use SecurityManager. If your application loads plugins, maybe whitelisting plugin and checking the list before loading plugin is better solution.
If you decide to use it, you will take a performance hit (since the JVM will do more checks), but how fast it runs really depends on the code/configuration that does the checks. My IP whitelist was pretty fast since it involved only a single list lookup; if your checks include invoking a remote web service and accessing a database, you can slow things down a lot. On the other hand, even that should not matter if you have enough hardware and few concurrent users (in other words, if you can afford it).
Correctly configuring the SecurityManager in Java can be hard. For instance, if you do not restrict the security manager itself, one can bypass all security just by setting the security manager to null.
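A small sketch of that point: unless the manager protects itself, sandboxed code can simply call System.setSecurityManager(null). One common countermeasure is to deny the corresponding RuntimePermission:

    import java.security.Permission;

    public class SelfProtectingSecurityManager extends SecurityManager {

        @Override
        public void checkPermission(Permission perm) {
            // RuntimePermission("setSecurityManager") guards replacing or removing
            // the manager; deny it so untrusted code cannot null it out.
            if (perm instanceof RuntimePermission
                    && "setSecurityManager".equals(perm.getName())) {
                throw new SecurityException("Changing the security manager is not allowed");
            }
            // Everything else falls through to the normal policy-based check.
            super.checkPermission(perm);
        }
    }

(Denying it unconditionally also stops trusted code from swapping the manager, so a real implementation would usually distinguish the two.)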
Using a security manager only makes sense if your JVM will run untrusted code; otherwise it is a pain to set up, because you'll have to know beforehand what permissions you should set for each feature (e.g. RMI, sockets, I/O) and for each client.

Adding scripting security to an application

Let's say I have an existing application written in Java which I wish to add scripting support to. This is fairly trivial with Groovy (and just as trivial in .Net with any of the Iron range of dynamic languages).
As trivial as adding the support is, it raises a whole host of questions about script execution and security - and how to implement that security.
Has anyone come across any interesting articles/papers or have any insights into this that they'd like to share? In particular I'd be very interested in architectural things like execution contexts, script authentication, script signing and things along those lines... you know, the kind of stuff that prevents users from running arbitrary scripts they just happened to download which managed to screw up their entire application, whilst still allowing scripts to be useful and flexible.
Quick Edit
I've mentioned signing and authentication as different entities because I see them as different aspects of security.
For example, as the developer/supplier of the application, I distribute a signed script. This script is legitimately "data destructive" by design. Given its nature, it should only be run by administrators, and therefore the vast majority of users/system processes should not be able to run it. This being the case, it seems to me that some sort of security/authentication context is required, based on the actions the script performs and on who is running that script.
script signing
Conceptually, that's just reaching into your cryptographic toolbox and using the tools that exist. Have people sign their code, validate the signatures on download, and check that the originator of the signature is trusted.
The hard(er) question is what makes people worthy of trust, and who gets to choose that. Unless your users are techies, they don't want to think about that, and so they won't set good policy. On the other hand, you'll probably have to let the users introduce trust [unless you want to go for an iPhone AppStore-style walled garden, which it doesn't sound like].
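A bare-bones sketch of the "validate the signature on download" step; the file names and the choice of SHA256withRSA are illustrative assumptions:

    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.security.PublicKey;
    import java.security.Signature;
    import java.security.cert.CertificateFactory;
    import java.security.cert.X509Certificate;

    public class ScriptVerifier {

        // Returns true if sigFile is a valid signature of scriptFile under the
        // public key in certFile.
        public static boolean verify(String scriptFile, String sigFile, String certFile)
                throws Exception {
            byte[] script = Files.readAllBytes(Paths.get(scriptFile));
            byte[] signatureBytes = Files.readAllBytes(Paths.get(sigFile));

            // Load the publisher's certificate. Deciding whether this certificate
            // is *trusted* is the separate policy question discussed above.
            X509Certificate cert;
            try (InputStream in = Files.newInputStream(Paths.get(certFile))) {
                cert = (X509Certificate) CertificateFactory.getInstance("X.509")
                        .generateCertificate(in);
            }
            PublicKey publisherKey = cert.getPublicKey();

            Signature verifier = Signature.getInstance("SHA256withRSA");
            verifier.initVerify(publisherKey);
            verifier.update(script);
            return verifier.verify(signatureBytes);
        }
    }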
script authentication
How's this different from signing? Or, is your question just that: "what are the concepts that I should be thinking about here"?
The real question is of course: what do you want your users to be secured against? Is it only in-transit modification, or do you also want to make guarantees about the behavior of the scripts? Some examples might be: scripts can't access the file system, scripts can only access a particular subtree of the file system, scripts can't access the network, etc.
I think it might be possible [with a modest amount of work] to provide some guarantees about how scripts access the outside world if they were written in Haskell and the IO monad was broken into smaller pieces. Or, possibly, if you had the scripts run in their own opaque monad with access to a limited part of IO. Be sure to remove access to unsafe functions.
You may also want to look at the considerations that went into the Java (applet) security model.
I hope this gives you something to think about.
You may have a look at the documentation on security:
http://groovy.codehaus.org/Security
Basically, you can use the usual Java Security Manager.
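As a sketch of that combination (assuming the Groovy JSR-223 engine is on the classpath; the script and file path are only illustrative):

    import javax.script.ScriptEngine;
    import javax.script.ScriptEngineManager;

    public class ScriptSandbox {
        public static void main(String[] args) throws Exception {
            // Install the default security manager so scripts are subject to policy checks.
            System.setSecurityManager(new SecurityManager());

            ScriptEngine engine = new ScriptEngineManager().getEngineByName("groovy");

            // The Groovy runtime itself needs policy grants to work under a security
            // manager (e.g. RuntimePermission "createClassLoader"), but the compiled
            // script classes have their own code source with no file permissions,
            // so this read should be denied (a SecurityException, possibly wrapped
            // in a ScriptException).
            engine.eval("new File('/etc/passwd').text");
        }
    }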
And beyond this, there's a sample application showing how you can introspect the code by analyzing its AST (Abstract Syntax Tree) at compile-time, so that you can allow / disallow certain code constructs, etc. Here's an example of an arithmetic shell showing this in action:
http://svn.groovy.codehaus.org/browse/groovy/trunk/groovy/groovy-core/src/examples/groovyShell
