Are SAFE levels supported in JRuby? If not, are there other ways of safely running user-supplied code on a server?
AFAIK, they aren't supported.
The main problem is that they are very badly documented, so how are the JRuby developers supposed to provide a compatible implementation if nobody knows what a compatible implementation is?
Another reason not to waste time implementing $SAFE levels in JRuby is that the JVM's security mechanisms provide better protection anyway. That is also the answer to your second question: from the point of view of the JVM, your Ruby script is just another Java program, and it can be sandboxed and controlled just like any other Java program.
How to do that, however, is a question for a Java expert. I'm just a lowly Ruby hacker …
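For what it's worth, here is a rough sketch of what that can look like on the Java side. It assumes the JRuby jar is on the classpath and registers a JSR-223 script engine named "jruby"; in practice JRuby itself may need extra permissions granted in the policy file, and the SecurityManager API is deprecated in recent JDKs, so treat this as an illustration rather than a recipe.

    import javax.script.ScriptEngine;
    import javax.script.ScriptEngineManager;

    public class SandboxedRubyScript {
        public static void main(String[] args) throws Exception {
            // Install the default SecurityManager; the policy supplied with
            // -Djava.security.policy=... now governs everything below,
            // including the embedded Ruby script.
            System.setSecurityManager(new SecurityManager());

            ScriptEngine ruby = new ScriptEngineManager().getEngineByName("jruby");

            // File and network access from the script goes through the same
            // permission checks as any other Java code, so this read should
            // fail unless the policy explicitly grants it.
            ruby.eval("File.read('/etc/passwd')");
        }
    }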
Related
I want to keep only the java.util, java.io, java.nio and java.math packages and want to remove all other packages, such as java.sql, from my JDK.
How can I remove them?
So if I write a program that imports a removed package, it will give the error: package does not exist.
Use a SecurityManager instead of hacking the JDK
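In case it helps, here is a minimal sketch of that idea, assuming you only care about blocking java.sql at runtime. Exactly where the check fires depends on the class loader in use, and the SecurityManager API is deprecated in recent JDKs, so this is an illustration rather than a drop-in solution.

    // A SecurityManager that vetoes access to java.sql instead of
    // removing the package from the JDK itself.
    public class PackageBlockingSecurityManager extends SecurityManager {
        @Override
        public void checkPackageAccess(String pkg) {
            super.checkPackageAccess(pkg);
            if (pkg.equals("java.sql") || pkg.startsWith("java.sql.")) {
                throw new SecurityException("java.sql is not available here");
            }
        }

        public static void main(String[] args) throws Exception {
            System.setSecurityManager(new PackageBlockingSecurityManager());
            // The application class loader consults checkPackageAccess when
            // it resolves classes, so this lookup should now fail.
            Class.forName("java.sql.DriverManager");
        }
    }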
I'm going to give you the best answer I can.
Why you really shouldn't be doing what you want to do
When you're writing code, it is commonly agreed that you should develop it in a way that is extensible. That is, your code can be plugged into other applications, or it can be changed and added to, very easily. Now with that principle in mind, let's review what happens when you delete possible functionality from your program. Say you delete the SQL package, and in the future you want a backend database to provide some persistence in your program. You're screwed.
The idea of Java, in fact I'd go as far as to say the major advantage of Java, is its commonality, consistency and standardization of patterns. A getter is always a getter. A variable (that isn't a constant) starts with a lowercase letter. Classes have a standardized way of being structured. All these things make developing in Java quite intuitive.
The JDK is part of that consistency, and to edit it is to really impact one of the major points of Java. It sounds like you want to implement your program in a different, more compact language.
Finally, you have no idea how the client may want to extend your project in the future. If you want to get some repeat business from the client, and build a good reputation at the same time, you want to design your code with good design practice in mind.
There is no such tool, AFAIK.
Removing stuff from the Java libraries can be technically tricky, because it can be difficult to know whether your code might directly or indirectly use some class or method.
There are potentially "licensing issues" if you add or remove classes from a JRE installer, and ship it to other people.
Concerning your proposed use case.
If you are building this as a web application, then you are going to have a lot of difficulty cutting out classes that are not needed. A typical webapp server-side framework uses a lot of Java SE interfaces.
If you accepted and ran code from someone who wanted to bring down your service, they could do it using nothing but the Object class. (Hint: infinite loops and filling the heap.) Running untrusted code on your server is a bad idea. Period.
Think about the consequences for someone trying to run legitimate code on your server. Why shouldn't they be allowed to use library classes / methods? (I'd certainly be a bit miffed if I couldn't use "ordinary" library classes ...)
My advice would be to reconsider whether it is a good idea to implement such a service at all, given the risks and the difficulty you could have if your safeguards were ineffective. If you decide to proceed, I advise running the untrusted code within the JVM in a security sandbox. As a second level of defence, in case Java security is compromised, I'd recommend running the service "chrooted", or better still in an isolated virtual machine that can be turned off if you run into problems.
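As a sketch of that second layer, one approach is to never run the submitted code inside your own JVM at all, but in a short-lived child JVM with a small heap, a security manager and a hard time budget. The class path and class name below are made up, and the limits are arbitrary.

    import java.util.concurrent.TimeUnit;

    public class UntrustedCodeRunner {
        public static void main(String[] args) throws Exception {
            // Run the submitted code in its own JVM: a small heap so it cannot
            // exhaust ours, the default security manager so the policy applies,
            // and a hypothetical sandbox classpath containing only its classes.
            Process p = new ProcessBuilder(
                    "java", "-Xmx64m", "-Djava.security.manager",
                    "-cp", "/sandbox/classes", "UserSubmittedMain")
                    .inheritIO()
                    .start();

            // Kill it if it does not finish within the time budget; this is
            // the defence against infinite loops.
            if (!p.waitFor(10, TimeUnit.SECONDS)) {
                p.destroyForcibly();
                System.err.println("Untrusted code exceeded its time budget");
            }
        }
    }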
I am curious about what automatic methods may be used to determine whether a Java app running on a Windows PC is malware. (I don't really even know what exploits are available to such an app. Is there someplace I can learn about the risks?) If I have the source code, are there specific packages or classes that could be used more harmfully than others? Perhaps their use could suggest malware?
Update: Thanks for the replies. I was interested in knowing if this would be possible, and it basically sounds totally infeasible. Good to know.
If it's not even possible to automatically determine whether a program terminates (the halting problem), I don't think you'll get much leverage in automatically determining whether an app does "naughty stuff".
Part of the problem of course is defining what constitutes malware, but the majority is simply that deducing proofs about the behaviour of other programs is surprisingly difficult/impossible. You may have some luck spotting particular patterns, but on the whole you can't be confident (and I suspect it's provably impossible) that you've caught all possible attack vectors.
And in the general sphere, catching 95% of vectors isn't really worthwhile when the attackers simply concentrate on the remaining 5%.
Well, there's always the fundamental philosophical question: what is malware? Is it code that was intended to do damage, or at least code that doesn't do what it claims to? How do you plan to judge intent based on the libraries it uses?
Having said that, if you at least roughly know what the program is supposed to do, you can indeed find suspicious packages: things the program wouldn't normally need to access, like network connections when the program is meant to run as a desktop app. But then the network connection could just be part of an autoupdate feature. (Is autoupdate itself malware? Sometimes it feels like it is.)
Another indicator is if a program that ostensibly doesn't need any special privileges refuses to run in a sandbox. And the biggest threat is if it tries to load a native library when it shouldn't need one.
But all these only make sense if you know what the code is supposed to do. An antivirus package might use very similar techniques to viruses, the only difference is what's on the label.
Here is a general outline for how you can bound the possible actions your Java application can take. Basically, you are testing to see whether the Java application is 'inert' (can't take harmful actions) and is thus probably not malware.
This won't necessarily tell you malware or not, as others have pointed out. The app could still do annoying things like pop-up windows. Perhaps the best indication is to see whether the application is digitally signed by an author you trust; if not -- be afraid.
You can disassemble the class files to determine which Java APIs the application uses; you are looking for points where the java app uses the OS. Since java uses a virtual machine, there are well defined points where a java application could take potentially harmful actions -- these are the 'gateways' to various OS calls (for example opening a socket or reading a file).
It's difficult to enumerate all the APIs; different methods which execute the same OS action should require the same Permission, but Java's docs don't provide an exhaustive list.
Does the Java app use any native libraries? If so, it's a big red flag.
The JVM does not offer the ability to run arbitrary native code or use native system APIs; in particular it does not offer the ability to modify the registry (a typical action of PC malware). The only way a Java application can do these things is via native libraries. Typically there is no need for a normal application written in Java to use native code (unless it needs to use devices).
Check for System.loadLibrary() or System.load() or Runtime.loadLibrary() or Runtime.load(). This is how the VM loads native libraries.
Does it use the network or file system?
Look for use of java.io, java.net.
Does it make system calls (via Runtime.exec())?
You can check for the use of java.lang.Runtime.exec() or ProcessBuilder.start().
Does it try to control the keyboard / mouse?
You could also run the application in a restricted-policy JVM (the instructions/tools for doing this are not as simple as they should be) and see what fails (see Oracle's security tutorial). Note that disassembly is the only way to be sure: just because the app doesn't do anything harmful once doesn't mean it won't in the future.
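If you just want to see which of those "gateways" the app touches, without anything actually failing, one trick is an observe-only SecurityManager that logs every permission request instead of denying it. This is a sketch, not a hardened tool, and it only shows what the code does on this particular run, which is exactly the caveat above.

    import java.security.Permission;

    // Logs every permission request so you can see which OS "gateways"
    // (files, sockets, process execution, native libraries) the app touches.
    public class AuditingSecurityManager extends SecurityManager {
        @Override
        public void checkPermission(Permission perm) {
            System.err.println("requested: " + perm);
            // Observe only: nothing is thrown, so nothing is blocked.
        }

        @Override
        public void checkPermission(Permission perm, Object context) {
            checkPermission(perm);
        }
    }

Install it with System.setSecurityManager(new AuditingSecurityManager()) before handing control to the suspect code.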
This definitely is not easy, and I was surprised to find how many places one needs to look at (for example, several Java methods load native libraries, not just one).
I have a fairly sophisticated server-side application which frequently requires me to see what is going on with its internals to debug and fix problems.
I've therefore embedded a Beanshell instance, which I can telnet into (normally over an SSH tunnel), but I'm wondering if there is a better way.
A few limitations:
No readline support, which I can get around by using 'rlwrap' on telnet, but it's not ideal
Tab-completion of variables and methods would be really nice, but I haven't found a way to do this
Pre-defining variables (to access stuff I need to access frequently) doesn't seem to work; I have to pre-define functions instead
All in all it's rather clunky, although Beanshell has the nice advantage that it's a superset of Java, so nobody needs to learn another programming language to use it.
I'm wondering if others have any experience with facilitating remote debugging/administration via a scripting language (Beanshell or otherwise), perhaps someone has found a better approach.
One of our applications has a BeanShell interface that we connect to over JMX with RMI+SSL.
You can indeed pre-define variables by calling Interpreter#set() before actually invoking the Interpreter.
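For example (a minimal sketch using the bsh.Interpreter API; the exposed names are made up):

    import bsh.Interpreter;

    public class ShellBootstrap {
        public static void main(String[] args) throws Exception {
            Object server = new Object();   // stand-in for your real server object

            Interpreter bsh = new Interpreter();
            // Pre-define variables before the remote session starts.
            bsh.set("server", server);
            bsh.set("startTime", System.currentTimeMillis());

            // Scripts typed into the remote console can then refer to them directly.
            bsh.eval("print(\"uptime ms: \" + (System.currentTimeMillis() - startTime))");
        }
    }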
Other than that, I agree with you on the "readline"-like functionality. There is a project called java-readline (and BSH apparently ships with a plugin for it) that purports to provide this functionality, but I haven't investigated using it. It also requires a JNI library.
I've heard that people use Scala and Groovy to implement command line clients but as of now, I don't think the Scala API for embedding the interpreter is standardized yet. I also don't believe they support the "readline" features you're looking for.
Hi guys, does anyone know why the programming language C++ is used more widely in biometric security applications than the programming language Java? The answers that I have collected so far are 1) virtual compilers and 2) the OpenCV library for C++. Can anyone help with this question?
Maybe it's the hardware support: I wrote an app that uses a fingerprint sensor. The library support for the device is C++, so I wrote the app in C++. Now they have a .NET version, so my next app will be C#.
I don't know specifically about biometric applications, but in general when security is important Java can be a stumbling block. Depending on how the security requirements are written, they can cover things that one must do manually in C++, but which are done automatically by Java. This poses a problem because one would need to demonstrate that Java properly (and in a timely manner!) satisfies the requirement. It is a lot easier to show that these requirements are met in C++ code, because the code that meets the requirement is part of the program in question.
If the security person/requirements/customer make it clear that relying on Java for some security features is acceptable, then this is no big deal. We could go round and round about whether or not it is reasonable to rely on/trust Java to satisfy security requirements; it really just depends on the specific security needs.
I am willing to put money on the reason being simply that the access APIs for the hardware are written in C++. Most modern/higher-level languages are not going to easily communicate with hardware originally exposed through a C/C++ API.
On a somewhat related note, Vala has all the language features expected of a modern high-level language (and then some), but compiles to C source (and from there to a native binary), and can easily make use of any library written in C (not sure about C++). Check it out; I haven't used it much, but it's pretty cool.
Implementing a library in C++ provides a lot over Java. Once written, a C++ library can run on almost any platform (including embedded ones), and can be made available as a native import to a variety of other languages through tools like SWIG. Java can only run on something with enough speed and memory to run a JVM, and only other Java programs can include the code as a native import. For biometric applications especially, I think running on embedded systems would be a large concern, since you could build this into a small sensor.
The more glib answer would be no one wants to wait for your garbage collection cycle to launch the friggen missiles.
You could replace Java with any other language there. Probably it has more to do with the APIs and hardware.
Also, Java is more suited for web applications. It's not the best choice for desktop applications.
For some biometric applications, execution speed is crucial.
For instance, let's say you're doing facial recognition for a checkpoint, and Java takes twice the time to run the algorithm that a compiled language like C++ does. That means if you go with Java, either:
The checkpoint lines will be twice as long,
You'll have to pay to staff twice as many checkpoints, or
Your system will do half as good a job at recognizing faces
None of those are usually acceptable options, which makes using Java a non-starter.
How does Google App Engine sandbox work?
What would I have to do to create my own such sandbox (to safely allow my clients to run their apps on my engine without giving them the ability to format my disk drive)? Is it just class loader magic, bytecode manipulation or something?
You would probably need a combination of a restrictive classloader and a thorough understanding of the Java Security Architecture. You would probably run your JVM with a very strict SecurityManager specified.
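The class loader half of that might look roughly like this. It's only a sketch: the blocked list is made up, and reflection or code handed to another loader can still reach those classes unless the SecurityManager side backs it up.

    // Refuses to resolve classes from packages the sandbox does not allow.
    // Untrusted code is loaded through this loader, so its references to
    // blocked packages fail at resolution time.
    public class RestrictiveClassLoader extends ClassLoader {
        private static final String[] BLOCKED = { "java.io.", "java.net.", "java.sql." };

        public RestrictiveClassLoader(ClassLoader parent) {
            super(parent);
        }

        @Override
        protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
            for (String prefix : BLOCKED) {
                if (name.startsWith(prefix)) {
                    throw new ClassNotFoundException(name + " is blocked in this sandbox");
                }
            }
            return super.loadClass(name, resolve);
        }
    }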
In the Java case, I think it's mostly done by restricting the available libraries. Since Java doesn't have a pointer concept, and you can't upload natively compiled code (only JVM bytecode), you can't break out of the sandbox. Add some tight process scheduling, and you're done!
I guess the hardest part is picking the libraries, to make it useful while staying safe.
In the Python case, they had to modify the VM itself, because it wasn't designed with safety in mind. Fortunately, they have Guido himself to do it.
to safely allow my clients to run their apps on my engine without giving them the ability to format my disk drive
This can be easily achieved using the Java Security Manager. Refer to this answer for an example.