This follows on from this question about Groovy (a superset/modernisation of Java), where there is seemingly no information hiding and no encapsulation whatsoever.
But Java too, of course, has reflection, meaning that private, protected and package-private are essentially pointless, or worse: they create a false sense of security.
In Java, is there any way to enforce visibility of some kind, not necessarily the specific modifiers above, using a SecurityManager? I've only just started looking into the latter, and I can't see any obvious way of accomplishing something like that. But it would seem that some developers must ship code where some classes and methods do not have completely public visibility... so how is it done?
PS: in the Lucene package, with which I'm a bit familiar, I notice that quite a lot of classes turn out to be final (which has sometimes caused me some head-scratching...), but I'm fairly sure, although not certain, that reflection can be used to defeat that modifier too.
Can I write my classes to be setAccessible-proof regardless of SecurityManager configuration? ... Or am I at the mercy of whoever manages the configuration?
You can't and you most certainly are.
Anybody who has access to your code can configure their JVM and SecurityManager as they please. (more details below)
Is setAccessible legitimate? Why does it exist?
The Java core classes use it as an easy way to access stuff that has to remain private for security reasons. As an example, the Java Serialization framework uses it to invoke private object constructors when deserializing objects. Someone mentioned System.setErr, and it would be a good example, but curiously the System class methods setOut/setErr/setIn all use native code for setting the value of the final field.
Other obvious legitimate users are frameworks (persistence, web frameworks, dependency injection) that need to peek into the insides of objects.
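To make the "peeking into objects" concrete, here is a minimal sketch of what such frameworks do: read a private field via setAccessible. The class names are made up for illustration; under a restrictive SecurityManager the setAccessible call would throw a SecurityException instead.

```java
import java.lang.reflect.Field;

public class PeekDemo {
    static class Secret {
        private final String value = "hidden";   // not visible through normal access
    }

    public static void main(String[] args) throws Exception {
        Field f = Secret.class.getDeclaredField("value");
        f.setAccessible(true);                   // suppresses the access check; this is
                                                 // the call a SecurityManager can veto
        System.out.println(f.get(new Secret())); // reads the private field anyway
    }
}
```

This is exactly why access modifiers alone are not a security boundary: any code allowed to call setAccessible can do this.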
And finally...
Java access modifiers are not intended to be a security mechanism.
So what can I actually do?
You should take a deeper look into Security Providers section of the Java SE Security documentation:
Applications do not need to implement security themselves. Rather, they can request security services from the Java platform. Security services are implemented in providers.
The access control architecture in the Java platform protects access to sensitive resources (for example, local files) or sensitive application code (for example, methods in a class). All access control decisions are mediated by a security manager, represented by the java.lang.SecurityManager class. A SecurityManager must be installed into the Java runtime in order to activate the access control checks.
Java applets and Java™ Web Start applications are automatically run with a SecurityManager installed. However, local applications executed via the java command are by default not run with a SecurityManager installed. In order to run local applications with a SecurityManager, either the application itself must programmatically set one via the setSecurityManager method (in the java.lang.System class), or java must be invoked with a -Djava.security.manager argument on the command line.
I recommend you read further about this in the official security documentation:
https://docs.oracle.com/javase/7/docs/technotes/guides/security/overview/jsoverview.html
Related
I am trying to understand the drawbacks as mentioned in Java docs
Security Restrictions
Reflection requires a runtime permission which may not be present when running under a security manager.
What are the runtime permissions that reflection needs? What is security manager in this context? Is this drawback specific to Applets only?
Exposure of Internals
Since reflection allows code to perform operations that would be illegal in non-reflective code, such as accessing private fields and methods, the use of reflection can result in unexpected side-effects, which may render code dysfunctional and may destroy portability. Reflective code breaks abstractions and therefore may change behavior with upgrades of the platform.
How can reflection break abstraction? And how is it affected by upgrades of the platform?
Please help me in clarifying these. Thanks a lot.
First, you should always ask yourself why you need reflection in your code. Can you do the operation without reflection? Only if you can't should you resort to it. Reflection relies on meta-information about classes, variables and methods; this adds overhead, hurts performance and poses a security risk.
To understand the drawback of reflection in detail visit http://modernpathshala.com/Forum/Thread/Interview/310/what-are-the-drawbacks-of-reflection
Security "sandboxes" aren't limited to applets. Many other environments which permit less-than-completely-trusted "plug-in" code -- webservers, IDEs, and so on -- limit what the plug-ins can do to protect themselves from errors in the plug-in (not to mention deliberately malicious code).
A framework class called a dependency container analyzed the dependencies of a class. With this analysis, it was able to create an instance of the class and inject objects into the declared dependencies via Java reflection. This eliminated hard dependencies, so the class could be tested in isolation, e.g. by using mock objects. This was Dagger 1.
The main disadvantages of this process were twofold: first, reflection is slow in itself, and second, dependency resolution happened at runtime, which could lead to unexpected crashes.
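The reflection-based injection described above can be sketched in a few lines. This is a hypothetical toy container, not Dagger's actual code: the @Inject annotation, the Car/Engine classes and the inject() helper are all invented for illustration.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Field;

public class TinyContainer {
    // Hypothetical marker annotation, standing in for the framework's @Inject
    @Retention(RetentionPolicy.RUNTIME)
    @interface Inject {}

    static class Engine {}
    static class Car {
        @Inject private Engine engine;   // hard dependency replaced by injection
    }

    // Reflection-based injection: find annotated fields, force them accessible,
    // instantiate the field's type, and set it. This is the runtime resolution
    // step that makes such containers slow and able to fail only at runtime.
    static void inject(Object target) throws Exception {
        for (Field f : target.getClass().getDeclaredFields()) {
            if (f.isAnnotationPresent(Inject.class)) {
                f.setAccessible(true);   // bypasses 'private'
                f.set(target, f.getType().getDeclaredConstructor().newInstance());
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Car car = new Car();
        inject(car);
        System.out.println(car.engine == null ? "not injected" : "injected");
    }
}
```

Dagger 2 moved this whole resolution step to compile time, generating plain Java code instead, which removed both drawbacks.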
I'm interested in learning secure coding best practices (specifically for Java apps) and I'm reading OWASP's Secure Coding Practices checklist. Under their Memory Management section they state the following:
Avoid the use of known vulnerable functions (e.g., printf, strcat, strcpy, etc.).
I'm not a C/C++ developer, but this must mean that the above functions have security vulnerabilities in them. I ran a couple of searches for vulnerable Java methods and all I came up with was this CVE.
What Java SE/EE methods (if any) apply to this advisory from OWASP?
For C APIs, yes: those functions can cause unintentional memory corruption if their parameters are not carefully checked.
In Java, since all operations are automatically checked, this class of memory corruption exploit should not happen (barring bugs in the implementation).
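To illustrate the automatic checking: where a C buffer overrun silently corrupts adjacent memory, the equivalent out-of-bounds write in Java is detected by the runtime and raised as an exception the program can handle.

```java
public class BoundsDemo {
    public static void main(String[] args) {
        int[] buf = new int[4];
        try {
            buf[10] = 1;   // in C, a strcpy-style overrun here could silently
                           // corrupt memory; the JVM checks every array access
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("caught out-of-bounds write");
        }
    }
}
```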
Those are C functions that are particularly prone to buffer overflow and format string attacks.
Java doesn't typically have those problems, but the same rule of thumb applies -- don't trust your inputs.
Reflection & Serialization
Java's reflection APIs can be a source of vulnerabilities.
If an attacker can cause part of a string they give you to be treated as a class, method, or property name, then they can often cause your program to do things that you did not intend.
For example,
ObjectInputStream in = ...;
MyType x = (MyType) in.readObject();
will allow an attacker who controls content on in to cause the loading and initialization of any class on your CLASSPATH and allow calling any constructor of any serializable class on your CLASSPATH. For example, if you happen to have a JS or Python interpreter on your CLASSPATH, they may be able to get access to a String -> JavaScript/Python function class from where they might be able to gain access to more powerful methods via Java reflection.
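One mitigation for the deserialization problem above, on Java 9+ (backported to 8u121 as jdk.serialFilter), is a serialization filter that whitelists the classes a stream may mention. A minimal sketch; the "java.lang.*;!*" pattern here is just an example policy:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InvalidClassException;
import java.io.ObjectInputFilter;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

public class FilterDemo {
    static byte[] serialize(Object o) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(o);
        }
        return bos.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        // Allow only java.lang classes; reject everything else.
        ObjectInputFilter filter = ObjectInputFilter.Config.createFilter("java.lang.*;!*");

        // An Integer passes the filter...
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(serialize(42)))) {
            in.setObjectInputFilter(filter);
            System.out.println("allowed: " + in.readObject());
        }

        // ...while a class outside the whitelist is rejected before construction.
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(serialize(new java.util.ArrayList<>())))) {
            in.setObjectInputFilter(filter);
            in.readObject();
        } catch (InvalidClassException e) {
            System.out.println("rejected: java.util.ArrayList");
        }
    }
}
```

The key point: the filter runs before any class is loaded or any constructor invoked, closing the "load and initialize anything on the CLASSPATH" hole.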
javax.script
javax.script is available in Java 6 and allows converting strings into source code in an embedded scripting language. If untrusted inputs reach these sinks, they may be able to use the script engine's access to Java reflection to reach the file-system or shell to execute arbitrary user-ring instructions with the permissions of the current process's owner.
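The sink looks deceptively harmless. A sketch of the pattern, with a made-up userInput string standing in for attacker-controlled data (note that JDKs since 15 ship no JavaScript engine by default, so the engine lookup may return null):

```java
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;

public class ScriptDemo {
    public static void main(String[] args) throws Exception {
        ScriptEngine engine = new ScriptEngineManager().getEngineByName("javascript");
        if (engine == null) {
            System.out.println("no javascript engine available");
            return;
        }
        // Hypothetical attacker-controlled string: script engines can reach
        // straight into Java, so this reads a system property (it could just
        // as easily touch the file system or Runtime.exec).
        String userInput = "java.lang.System.getProperty('user.dir')";
        System.out.println(engine.eval(userInput));
    }
}
```

If untrusted strings must reach an engine, it needs to run inside a sandbox (restricted bindings plus a security policy), never with the process's full permissions.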
XML
Java is just as vulnerable to external entity attacks as other languages, whereby external entities in an XML input can be used to include content from URLs on the local network.
If you don't hook into java.net.SocketFactory or use a SecurityManager to filter outgoing connections then any XML parse method that does not let you white-list URLs that appear in DTDs is vulnerable.
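With the JDK's built-in DOM parser you can refuse DOCTYPE declarations outright, which shuts off external entities entirely. A sketch of the hardening, using a deliberately malicious inline document:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;

public class XmlDemo {
    public static void main(String[] args) throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        // Refuse DOCTYPE declarations entirely; no DTD means no external entities.
        dbf.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
        dbf.setXIncludeAware(false);
        dbf.setExpandEntityReferences(false);
        DocumentBuilder db = dbf.newDocumentBuilder();

        String evil = "<!DOCTYPE foo [<!ENTITY xxe SYSTEM \"file:///etc/passwd\">]>"
                    + "<foo>&xxe;</foo>";
        try {
            db.parse(new ByteArrayInputStream(evil.getBytes("UTF-8")));
        } catch (org.xml.sax.SAXParseException e) {
            System.out.println("rejected DOCTYPE");   // parser refuses before resolving anything
        }
    }
}
```

If your documents legitimately need DTDs, the weaker alternative is to disable only external general and parameter entities via the corresponding parser features.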
Runtime / ProcessBuilder
Also not Java specific, but Runtime and ProcessBuilder allow access to executables on the local file-system. Any attacker-controlled strings that reach these can potentially be used to elevate permissions.
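The classic mistake is concatenating user input into a shell command line. A sketch of the safer pattern (the userInput value is a hypothetical attacker string; the process is only constructed here, not started):

```java
import java.util.Arrays;

public class ExecDemo {
    public static void main(String[] args) {
        String userInput = "notes.txt; rm -rf /";   // hypothetical attacker-controlled string

        // BAD: new ProcessBuilder("sh", "-c", "cat " + userInput)
        //      lets the input grow into extra shell commands.

        // Safer: fixed program, input passed as a single argv element,
        // never interpreted by a shell. The ';' is now just file-name bytes.
        ProcessBuilder pb = new ProcessBuilder(Arrays.asList("cat", userInput));
        System.out.println(pb.command());
    }
}
```

Even with argv-style passing, the program name itself must never come from the attacker, and the input should still be validated against a whitelist where possible.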
Suppose I want to allow people to run simple console Java programs on my server without the ability to access the file system, the network or other IO, except via my own highly restricted API. But I don't want to get too deep into operating-system-level restrictions, so for the sake of the current discussion I want to consider code-level sanitization methods.
So suppose I try to achieve this restriction as follows. I will prohibit all "import" statements except for those explicitly whitelisted (let's say "import SanitizedSystemIO." is allowed while "import java.io." is not) and I will prohibit the string "java.*" anywhere in the code. So this way the user would be able to write code referencing File class from SanitizedSystemIO, but he will not be able to reference java.io.File. This way the user is forced to use my sanitized wrapper apis, while my own framework code (which will compile and run together with user's code, such as in order to provide the IO functionality) can access all regular java apis.
Will this approach work? Or is there a way to hack it to get access to the standard java api?
ETA: OK, first of all, it should of course be java.* strings, not system.* (I think in C#, basically...).
Second, ok, so people say, "use security manager" or "use class loader" approaches. But what, if anything, is wrong with the code analysis approach? One benefit of it to my mind is the sheer KISS simplicity - instead of figuring out all the things to check and sanitize in SecurityManager we just allow a small whitelist of functionality and block everything else. Implementation-wise this is a trivial exercise for people with minimal knowledge of java.
And to reiterate my original question, so can this be hacked? Is there some java language construct that would allow access to the underlying api despite such code restrictions?
In your shoes I'd rather run the loaded apps inside a custom ClassLoader.
Maybe I'm mistaken, but if he wants to allow limited access to IO through his own functions, wouldn't SecurityManager prevent those as well? With a custom ClassLoader, he could provide his SanitizedSystemIO while refusing to load the things he doesn't want people to load.
However, checking for strings inside code is definitely not the way to go.
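A minimal sketch of what that custom ClassLoader could look like: delegate only whitelisted package prefixes to the parent and refuse everything else. The "sandbox." prefix is a made-up stand-in for SanitizedSystemIO's package; a real loader would additionally have to define the untrusted classes' bytes itself in findClass.

```java
import java.util.Set;

public class WhitelistingLoader extends ClassLoader {
    // Hypothetical policy: only java.lang and the sanitized API are visible.
    private static final Set<String> ALLOWED_PREFIXES =
            Set.of("java.lang.", "sandbox.");

    @Override
    protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
        if (ALLOWED_PREFIXES.stream().noneMatch(name::startsWith)) {
            throw new ClassNotFoundException("blocked: " + name);
        }
        return super.loadClass(name, resolve);   // delegate approved names to the parent
    }

    public static void main(String[] args) throws Exception {
        WhitelistingLoader cl = new WhitelistingLoader();
        System.out.println(cl.loadClass("java.lang.String").getName()); // allowed
        try {
            cl.loadClass("java.io.File");
        } catch (ClassNotFoundException e) {
            System.out.println(e.getMessage());   // blocked at load time, not by string-matching source
        }
    }
}
```

Unlike source-level string checks, this blocks the class no matter how the code obtained its name (reflection, Class.forName with a computed string, and so on), as long as the untrusted code is actually loaded through this loader.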
You should look at the SecurityManager. Lots of JDK classes call it before performing their work to check whether the caller has the permission needed.
You can implement your own SecurityManager. Tutorial.
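A minimal sketch of such a subclass, here denying file reads. The policy shown is invented for illustration; the check is invoked directly rather than installed, since System.setSecurityManager is deprecated for removal since Java 17 (JEP 411) and a real policy would be far finer-grained.

```java
import java.io.FilePermission;
import java.security.Permission;

public class RestrictiveManager extends SecurityManager {
    @Override
    public void checkPermission(Permission perm) {
        // Example policy: deny all file reads, allow everything else.
        if (perm instanceof FilePermission && perm.getActions().contains("read")) {
            throw new SecurityException("file access denied: " + perm.getName());
        }
    }

    public static void main(String[] args) {
        RestrictiveManager sm = new RestrictiveManager();
        try {
            // JDK classes (e.g. FileInputStream's constructor) make exactly
            // this kind of call before touching the file system.
            sm.checkPermission(new FilePermission("/etc/passwd", "read"));
        } catch (SecurityException e) {
            System.out.println(e.getMessage());
        }
        // Installing it for real: System.setSecurityManager(sm);
    }
}
```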
Many Java frameworks allow class members used for injection to be declared non-public. For example, injected variables in Spring and EJB 3 may be private. JPA allows properties of a persistent class to be protected or package-private.
We know it's better to declare methods non-public if you can. That being said, if I'm not mistaken, allowing these frameworks to access non-public members only works with the default Java security manager. Doesn't it mean that custom code can also gain access to non-public member via reflection by calling setAccessible(), which would compromise security?
Which begs this question: What is the best practice when setting the access level for injection methods?
Typically a class needs to opt in to a persistence mechanism. For instance, Java serialisation requires a class to implement java.io.Serializable, and it is the responsibility of classes that implement Serializable to ensure that they are secure. Where a library allows poking at privates through an external configuration file, that file should not be trusted; reflection is really dangerous and its use is often messed up.
Of course if you do find a vulnerability, please report it to the appropriate group.
If you're running untrusted code in the same JVM as your application, and you're using the default security manager settings, then yeah, that could be a security hole. This is something you need to be aware of, but in practice, this situation is pretty rare.
I'm developing a system that allows developers to upload custom groovy scripts and freemarker templates.
I can provide a certain level of security at a very high level with the default Java security infrastructure, i.e. prevent code from accessing the filesystem or network; however, I also need to restrict access to specific methods.
My plan was to modify the Groovy and Freemarker runtimes to read Annotations that would either whitelist or blacklist certain methods, however this would force me to maintain a forked version of their code, which is not desirable.
All I essentially need to be able to do is prevent the execution of specific methods when called from Groovy or Freemarker. I've considered a hack that would inspect the call stack, but this would be a massive speed hit (and is quite messy).
Does anyone have any other ideas for implementing this?
You can do it by subclassing the GroovyClassLoader and enforcing your constraints within an AST visitor. This post explains how to do it: http://hamletdarcy.blogspot.com/2009/01/groovy-compile-time-meta-magic.html
Also, the code referenced there is in the samples folder of Groovy 1.6 installer.
You should have a look at the groovy-sandbox project from kohsuke. Also have a look at his blog post on this topic and what his solution addresses: sandboxing, but with a performance drawback.
OSGi is great for this. You can partition your code into bundles and set exactly what each bundle exposes, and to what other bundles. Would that work for you?
You might also consider the java-sandbox (http://blog.datenwerke.net/p/the-java-sandbox.html), a recently developed library that allows you to securely execute untrusted code from within Java.
Also see: http://blog.datenwerke.net/2013/06/sandboxing-groovy-with-java-sandbox.html