I have a Java project that uses the BlueCove library, which requires root privileges for certain actions that I need in my project. I should note that, despite being Java based, the project is for Linux only.
The project will have many functions that do not require root privileges, some of which will have to interact with the root privilege functions and some that will not.
Additionally, the project will execute programs such as hciconfig with user-supplied data under root privileges.
All this root activity has led me to be concerned about the security of my system. The target machine would be the user's own computer, and there is no intention of running this system on a public terminal, but security is still important because unknown external Bluetooth devices will be able to interact with the system.
So far my security measures have involved heavily filtering user input and paying very careful attention to every action that external Bluetooth devices can cause the system to perform, but I am growing increasingly unhappy with this.
What would people recommend? One thought would be to split the system into two or three modules: one containing the GUI and non-root backend, one containing the BlueCove root backend, and possibly a root wrapper for hciconfig and the other tools used.
I have noticed some programs, for example Apache, that once run "drop down" their privileges. How is this achieved and is this effective?
What Apache does is call the setuid system call (in libc), which, as you noted, effectively drops the privileges of the process. You can make the libc call via JNI or JNA.
This works very well, even for Java programs, except that once you go from root to non-root, you won't be able to perform any operations that require the elevated privileges. So the technique can only be used if all the privileged operations can be done up front, as Apache does.
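If you go the JNA route, a minimal sketch might look like the following (JNA 5's Native.load is assumed; older JNA versions use Native.loadLibrary instead, and the uid/gid of 1000 is just a placeholder for your unprivileged account):

    import com.sun.jna.Library;
    import com.sun.jna.Native;

    public class DropPrivileges {

        // Tiny libc binding; call setgid before setuid, otherwise the group
        // can no longer be changed once root is gone.
        private interface CLib extends Library {
            CLib INSTANCE = Native.load("c", CLib.class);
            int setgid(int gid);
            int setuid(int uid);
        }

        /** Call once all the root-only setup (sockets, hciconfig, ...) is done. */
        public static void dropTo(int uid, int gid) {
            if (CLib.INSTANCE.setgid(gid) != 0 || CLib.INSTANCE.setuid(uid) != 0) {
                throw new IllegalStateException("failed to drop privileges");
            }
        }
    }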
Another possibility is to divide your program into two processes: when launched, your program starts a helper process that keeps running as root, and the original process then demotes itself to non-root. The two processes can communicate over their stdin/stdout.
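A rough sketch of that arrangement: while the application is still root it starts a long-lived helper (a hypothetical bt-helper binary; the name and path are made up), keeps the helper's stdin/stdout as the command channel, and then drops its own privileges with the setuid technique above.

    import java.io.BufferedReader;
    import java.io.BufferedWriter;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.io.OutputStreamWriter;

    public class RootHelper {
        private final BufferedWriter toHelper;
        private final BufferedReader fromHelper;

        public RootHelper() throws IOException {
            // Started while we are still root, so the helper inherits root.
            Process p = new ProcessBuilder("/usr/local/libexec/bt-helper")
                    .redirectErrorStream(true)
                    .start();
            toHelper = new BufferedWriter(new OutputStreamWriter(p.getOutputStream()));
            fromHelper = new BufferedReader(new InputStreamReader(p.getInputStream()));
            DropPrivileges.dropTo(1000, 1000);   // demote this process (see the sketch above)
        }

        /** One request line in, one validated reply line back. */
        public synchronized String send(String request) throws IOException {
            toHelper.write(request);
            toHelper.newLine();
            toHelper.flush();
            return fromHelper.readLine();
        }
    }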
Related
I'm running a J2SE application that is somewhat trusted (Minecraft) but will likely contain completely untrusted (and likely even some hostile) plugins.
I'd like to create a plugin that can access the GPIO pins on the Raspberry Pi.
Every solution I've seen requires that such an app be given sudo superpowers because GPIO is accessed through direct memory access.
It looks like the correct solution is to supply a command-line option like this:
-Djava.security.policy=java.policy
which seems to default you to no permissions (even access to files and high ports), then add the ones your app needs back in with the policy file.
In effect you seem to be giving Java "sudo" powers and then trusting Java's security model to hand out only the appropriate powers to various classes. I'm guessing this makes the app safe to run with sudo. Is this correct?
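For reference, a minimal policy file for this kind of setup might look like the sketch below (the path and port range are placeholders); note that the policy only takes effect when a security manager is actually installed, for example by also passing -Djava.security.manager.

    // java.policy -- grant back only what the application actually needs
    grant {
        permission java.io.FilePermission "/home/pi/app/-", "read,write";
        permission java.net.SocketPermission "localhost:1024-", "connect,listen";
    };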
Funny that I've been using Java pretty much daily since 1.0 and never needed this before... You learn something new every day.
[Disclaimer: I'm not very convinced by the Java security model.]
The way I would solve this is to have the code that needs to access the hardware run as a separate privileged process, then have your Java application run as an unprivileged process and connect to the privileged process to have it perform certain actions on its behalf.
In the privileged process, you should treat each request with maximum distrust and check whether it is safe to execute. If you are afraid that other unprivileged processes might connect to the daemon too and make it execute commands it shouldn't, you could make its socket owned by a special group and setgid() the Java application to that group with a tiny wrapper written in C before it starts.
Unix domain sockets are probably the best choice, but if you want to chroot() the Java application, a TCP/IP socket might be needed.
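On the unprivileged side, the Java application's end of that conversation could look roughly like this (Java 16+ is assumed for Unix domain socket support; on older JDKs use a loopback TCP socket instead; the socket path and the request format are made up for illustration):

    import java.net.StandardProtocolFamily;
    import java.net.UnixDomainSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.SocketChannel;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Path;

    public class GpioClient {
        /** Sends one command to the privileged helper and returns its reply. */
        public static String request(String command) throws Exception {
            var address = UnixDomainSocketAddress.of(Path.of("/run/gpio-helper.sock"));
            try (SocketChannel ch = SocketChannel.open(StandardProtocolFamily.UNIX)) {
                ch.connect(address);
                ch.write(ByteBuffer.wrap((command + "\n").getBytes(StandardCharsets.UTF_8)));
                ByteBuffer reply = ByteBuffer.allocate(1024);
                ch.read(reply);
                reply.flip();
                return StandardCharsets.UTF_8.decode(reply).toString();
            }
        }
    }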
Sorry if the question is too open-ended or otherwise not suitable, but this is due to my lack of understanding about several pieces of technology/software, and I'm quite lost. I have a project with an existing Java Swing GUI, which runs MPI jobs on a local machine. However, it is desired to support running MPI jobs on HPC clusters (let's assume a Linux cluster with SSH access).

To be more specific, the main backend executable (Linux and Windows) that I need to, erm, execute uses a very simple master-slave system where all relevant output is performed by the master node only. Currently, to run my backend executable on multiple machines, I would simply copy all the necessary files to those machines (assuming no shared filesystem) and call "mpiexec" or "mpirun" as is usual practice. The output produced by the master needs to be read (or partially read) by my GUI.
The main problem as I see things is this: Where to run the GUI? Several options:
Local machine - the potential problem is needing to read data from the cluster back to the local machine (and also reading the stdout/stderr of the cluster processes) to display current progress to the user.
Login node - obvious problem of hogging precious resources, and in many cases this will be banned.
Compute node - sounds pretty dodgy, especially if the cluster has a queuing system (Slurm, Sun Grid Engine, etc.)! Also possibly banned.
Of these three options, the first seems the most reasonable, and also seems least likely to upset any HPC admin people, but is also the hardest to implement! There are multiple problems associated with that setup:
Passing data from the cluster to the local machine - because we're using a cluster, by definition we will probably generate large amounts of data, at least part of which the user wants to see! Also, how should this be done? I can see how to execute commands on the remote machine over SSH using JSch or similar, but if I'm currently logged in on the remote machine, how do I communicate information back to the local machine?
Displaying the backend's stdout/stderr on the local machine - similar to the above.
Dealing with peculiar aspects of individual clusters - the only way I see around that is to allow the user to write custom Slurm scripts or the like.
How to detect whether the backend computations have finished or failed - this problem interacts with any custom Slurm scripts written by the user.
Hopefully it should be clear from the above that I'm quite confused. I've had a look at Apache Camel, JSch, Ganymed SSH, Apache MINA, Netty, Slurm, Sun Grid Engine, Open MPI, MPICH, and PMI, but there's so much information that I think I need to ask for some help and advice. I would greatly appreciate any comments regarding these problems!
Thanks
================================
Edit
Actually, I just came across this: link which seems to suggest that if the cluster allows an "interactive"-mode job, then you can run a GUI from a compute node. However, I don't know much about this, nor do I know if this is common. I would be grateful for comments on this aspect.
You may be able to leverage the approach shown here: a ProcessBuilder is used to execute a command in the background of a SwingWorker, while the command's output is displayed in a suitable component. In the example, ls -l would become ssh username@host 'ls -l'. Use a JPasswordField as required.
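A rough sketch of that pattern, with placeholders for the host and command (key-based SSH authentication is assumed so nothing prompts for a password inside the worker):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.util.List;
    import javax.swing.JTextArea;
    import javax.swing.SwingWorker;

    class RemoteCommandWorker extends SwingWorker<Void, String> {
        private final JTextArea output;

        RemoteCommandWorker(JTextArea output) { this.output = output; }

        @Override
        protected Void doInBackground() throws Exception {
            ProcessBuilder pb = new ProcessBuilder("ssh", "user@cluster", "ls -l");
            pb.redirectErrorStream(true);                 // show stderr too
            Process p = pb.start();
            try (BufferedReader r = new BufferedReader(
                    new InputStreamReader(p.getInputStream()))) {
                String line;
                while ((line = r.readLine()) != null) {
                    publish(line);                        // hand lines to the EDT
                }
            }
            p.waitFor();
            return null;
        }

        @Override
        protected void process(List<String> lines) {
            lines.forEach(l -> output.append(l + "\n")); // runs on the EDT
        }
    }

Kick it off from the EDT with new RemoteCommandWorker(outputArea).execute().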
I am writing a server application with Java servlets, and at some point a Python script that was uploaded by a user has to be executed. Is it possible to create a process with restrictions, such as only being able to access a certain directory (probably using ProcessBuilder)?
I already had a look at pysandbox, but I am not sure if this alone is a safe enough measure when executing an unknown Python script.
All the script has to do is process a given String using certain libraries and return a String using the print function.
Is my approach correct or is there a better way to execute an unknown script?
As a foreword to my answer: whitelisting and blacklisting only go so far and have proven easily broken by the most determined of attackers. Don't bother with these styles of security.
About as safe as you are going to get is to use pypy-sandbox; it creates an OS-level sandbox and tries to isolate operations that could lead to nasty execution.
For real security you probably want something more like the following model.
On an SELinux host, fire up a virtual machine that also runs SELinux
Disable all ports except for SSH and ensure patches are up to date
Upload the code to a non-executable directory.
Chroot and ulimit all the things
Execute the code through pypy-sandbox
Destroy the machine when execution is complete
Or maybe I am just paranoid.
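Back on the Java side, the launch step under that kind of model might look roughly like the sketch below. The interpreter command, limits, paths, and timeout are placeholders (substitute your pypy-sandbox invocation), and setting the working directory is not real confinement on its own - that is what the chroot/VM layers above are for.

    import java.io.IOException;
    import java.io.OutputStreamWriter;
    import java.io.Writer;
    import java.nio.file.Path;
    import java.util.concurrent.TimeUnit;

    public class UntrustedScriptRunner {

        /** Runs the uploaded script with crude resource limits and a hard timeout. */
        public static String run(Path scriptDir, String input)
                throws IOException, InterruptedException {
            // ulimit caps CPU time, output file size and open files before
            // exec'ing the interpreter; swap "python3 script.py" for your
            // sandboxed runner of choice.
            ProcessBuilder pb = new ProcessBuilder("bash", "-c",
                    "ulimit -t 5 -f 1024 -n 32; exec python3 script.py");
            pb.directory(scriptDir.toFile());   // working directory only, not confinement
            pb.redirectErrorStream(true);

            Process p = pb.start();
            try (Writer stdin = new OutputStreamWriter(p.getOutputStream())) {
                stdin.write(input);             // hand the input String to the script
            }
            if (!p.waitFor(10, TimeUnit.SECONDS)) {
                p.destroyForcibly();            // kill runaway scripts
                throw new IOException("script timed out");
            }
            // Fine for small outputs; read the stream concurrently if the
            // script may print more than a pipe buffer's worth.
            return new String(p.getInputStream().readAllBytes());
        }
    }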
There is a client who would like to install a custom Java application on his business-owned computers. However, he doesn't want limited users to have the ability to close the application, even from Windows Task Manager.
The purpose of the application is to monitor some specific resources and do several tasks silently. The users of these computers will be aware of this software and what it does exactly.
I couldn't find a way to do this using the Java programming language! Is it possible, or is it mainly related to Windows users' permissions and capabilities?
You can't do such a thing from the program itself. It is more of a system-level thing. Try launching the JVM (java.exe) as a System process.
You could run it as a Windows service, started by the Administrator. That way, users won't be able to close the process as they do not have high enough privileges.
Refer to http://wrapper.tanukisoftware.com/doc/english/download.jsp for a fantastic wrapper that you can use.
For a short tutorial, look at http://wrapper.tanukisoftware.com/doc/english/launch-win.html
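To give a rough idea, the wrapper is driven by a wrapper.conf file along these lines; the class, jar, and service names below are placeholders, and the exact property names should be checked against the documentation for the version you download.

    wrapper.java.command=java
    wrapper.java.mainclass=org.tanukisoftware.wrapper.WrapperSimpleApp
    wrapper.java.classpath.1=../lib/wrapper.jar
    wrapper.java.classpath.2=../lib/monitor.jar
    wrapper.app.parameter.1=com.example.MonitorMain
    wrapper.ntservice.name=resourcemonitor
    wrapper.ntservice.displayname=Resource Monitor
    wrapper.ntservice.starttype=AUTO_START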
Having something unclosable and invisible sounds unethical, but I'd imagine it does have legitimate uses. If it is for something worthwhile, like protecting vulnerable people online, then the users should probably not have Task Manager access in the first place, which would prevent them from stopping the JVM. Consider user privileges rather than attempting to code around corners.
As part of our requirements, we need to upload some files to a database. Is there any way we can do a virus scan on those files before saving them to the database?
I personally use AVG Free as my anti-virus program on my Windows machine. It comes with a command-line scanning utility that allows you to scan files manually. This could easily be executed from Java code.
I am sure some of the other anti-virus applications also come with command-line versions of their scanners. Any of these would be easily executed from Java code.
If you are on a UNIX machine, you may want to question this requirement of virus scanning since UNIX viruses are very rare and the effective ones are not easily detectable by anti-virus software. The value of such a feature may be non-existent.
Your server is probably not at risk from viruses; however, you probably want to check the files anyway - it is entirely possible for a Windows-using client to upload an infected file, and another Windows-using client to download it and infect themselves. By checking for malware at the server, you could stop it from spreading - so the net result is positive, even if the malware doesn't attack your server directly.
If your server runs something UN*X-ish (Linux, BSD, ...), you may want to look at ClamAV, and its Java bindings, clamavj: these provide various scan capabilities (e.g. on-demand or automatic in a given location), even for different-platform malware (e.g. you can check for Windows viruses, even though your server runs Linux).
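If you go the ClamAV route, one straightforward option is to call its clamscan command-line tool from Java before persisting the upload; a hedged sketch (clamscan conventionally exits with 0 for a clean file and 1 when a signature matches, and anything else indicates a scan error):

    import java.io.File;

    public final class VirusScanner {
        /** Returns true if clamscan reports the file as clean. */
        public static boolean isClean(File upload) throws Exception {
            Process scan = new ProcessBuilder("clamscan", "--no-summary",
                    upload.getAbsolutePath()).inheritIO().start();
            int exit = scan.waitFor();
            if (exit == 0) return true;    // no signature matched
            if (exit == 1) return false;   // infected
            throw new IllegalStateException("clamscan failed with exit code " + exit);
        }
    }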