I have written a Java program that reads the EPC code of an RFID card when the card is presented to the reader.
Now I need a separate program that can terminate the one that is reading the card.
In my program I have written a function that stops reading the card, but I don't know how to invoke that function from another program to terminate the currently running one.
This question is pretty open-ended and not really Java-specific. There are many kinds of inter-process communication (IPC); take your pick.
A few options off the top:
Store the PID of the first process in a file when it starts; the second process can then stop it by sending a kill signal. This is lightweight because it doesn't require modifying the first process, but it is platform-specific and can't cross a machine boundary.
Have the first process act as some kind of socket-based server and the second process access it as a client. This is nice because it works over the network, you can take advantage of the existing protocol if you end up needing to expand the scope of the IPC (e.g. to add authentication, additional functions beyond just termination, etc.), and you can leverage existing clients (e.g. if you use HTTP, maybe you just use curl instead of building something in Java for the second process).
Use a message-passing platform like Akka, or some RPC library. If you want to stay inside the JVM and learn as little as possible about anything else, you might take one of these approaches, but you'd end up coupling the two processes together to an extent that doesn't sound necessary.
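As a sketch of the socket-based option, assuming the reader exposes some stop function (the name `stopReading()` here is hypothetical), the first process could listen on a control port and shut down when any client sends STOP:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;

// Minimal sketch: the reader process opens a "control port"; a second
// process connects and sends STOP. stopReading() stands in for whatever
// stop function your program already has.
public class ControlServer {
    private volatile boolean reading = true;

    public void stopReading() { reading = false; }  // your existing stop logic
    public boolean isReading() { return reading; }

    // Accept one client and shut the reader down if it sends STOP.
    public void listen(ServerSocket server) throws IOException {
        try (Socket client = server.accept();
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(client.getInputStream()))) {
            if ("STOP".equals(in.readLine())) {
                stopReading();
            }
        }
    }
}
```

The terminating program is then just a `new Socket(host, port)` that writes the line `STOP`, or even `echo STOP | nc host port` from a shell.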
Related
I have an idea for a game. The core mechanic is based on entering a script that controls your character, somewhat like Screeps. This makes multiplayer very easy to implement, as you only need to sync the initial world state and then everybody watches their characters run off.
I have two problems in going about this.
I want to ensure that every character gets a fair slice of time regardless of how loaded the server is. It won't do to say each character gets X ms of execution time, because the amount of work done in that time varies with the host's load and hardware. Regardless of whether I run the code on a Raspberry Pi or a top-of-the-line i7, I want it to execute the same amount of code.
I also want to ensure that, if playing with untrusted players, the submitted code from other clients can't consume excessive memory or access any resources it shouldn't be allowed to.
What's the best course of action to accomplish this?
An easy approach would be to have players program against some assembly standard I invent, then let each character run X ops per game tick. But I fear that would reduce accessibility, so I'd rather use a scripting language like Lua, JavaScript, or Python.
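The "X ops per tick" idea can be sketched as a toy, instruction-budgeted interpreter loop (the `Instruction` interface and the budget here are hypothetical, just illustrating the mechanism):

```java
import java.util.List;

// Toy sketch: an instruction-budgeted interpreter loop. Every tick runs
// at most OPS_PER_TICK instructions, so the amount of script work per
// tick is identical on any hardware, fast or slow.
public class ScriptRunner {
    public interface Instruction { void execute(); }

    public static final int OPS_PER_TICK = 1000;  // fixed, hardware-independent budget

    private final List<Instruction> program;
    private int pc = 0;  // program counter survives across ticks

    public ScriptRunner(List<Instruction> program) { this.program = program; }

    // Execute at most OPS_PER_TICK instructions; returns the number run.
    public int tick() {
        int executed = 0;
        while (executed < OPS_PER_TICK && pc < program.size()) {
            program.get(pc++).execute();
            executed++;
        }
        return executed;
    }
}
```

The same metering can sit under a scripting-language VM instead of a home-made assembly, as long as the VM lets you count and cap executed instructions.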
I'm running a J2SE application that is somewhat trusted (Minecraft) but will likely contain completely untrusted (and likely even hostile) plugins.
I'd like to create a plugin that can access the GPIO pins on the Raspberry Pi.
Every solution I've seen requires that such an app be given sudo superpowers, because GPIO is accessed through direct memory access.
It looks like the correct solution is to supply a command-line option like this:
-Djava.security.policy=java.policy
which seems to default you to no permissions (even access to files and high ports), then add the ones your app needs back in with the policy file.
In effect you seem to be giving Java "sudo" powers and then trusting Java's security model to hand out appropriate powers only to the right classes. I'm guessing this makes the app safe to run with sudo. Is this correct?
Funny that I've been using Java pretty much daily since 1.0 and never needed this before... You learn something new every day.
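For reference, a minimal `java.policy` sketch along these lines might look like the following (the jar path and the specific permissions are hypothetical; with a policy in force, anything not explicitly granted is denied):

```
grant codeBase "file:/home/pi/plugins/gpio-plugin.jar" {
    permission java.io.FilePermission "/dev/gpiomem", "read,write";
    permission java.net.SocketPermission "localhost:1024-", "listen";
};
```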
[Disclaimer: I'm not very convinced by the Java security model.]
The way I would solve this is to have the code that needs to access the hardware run as a separate privileged process, then have your Java application run as an unprivileged process and connect to the privileged process to have it perform certain actions on its behalf.
In the privileged process, you should check each request with maximum distrust to decide whether it is safe to execute. If you are afraid that other unprivileged processes might also connect to the daemon and make it execute commands it shouldn't, you could make its socket owned by a special group and have a tiny wrapper written in C setgid() the Java application to that group before it is started.
Unix domain sockets are probably the best choice, but if you want to chroot() the Java application, a TCP/IP socket might be needed.
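As a sketch of that "maximum distrust" check, assuming a hypothetical one-line protocol `SET <pin> <value>`, the privileged daemon might validate each request like this before touching the hardware:

```java
// Sketch of the "maximum distrust" check in the privileged daemon,
// assuming a hypothetical "SET <pin> <value>" line protocol: only an
// assumed whitelisted pin range and the values 0/1 are ever accepted.
public class GpioRequestValidator {
    public static boolean isValid(String request) {
        String[] parts = request.trim().split("\\s+");
        if (parts.length != 3 || !"SET".equals(parts[0])) {
            return false;
        }
        try {
            int pin = Integer.parseInt(parts[1]);
            int value = Integer.parseInt(parts[2]);
            // Assumed usable GPIO range 0..27; anything else is rejected outright.
            return pin >= 0 && pin <= 27 && (value == 0 || value == 1);
        } catch (NumberFormatException e) {
            return false;
        }
    }
}
```

Anything that fails validation is simply dropped; the privileged side never interprets free-form input from the unprivileged side.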
Long story short: I have a Java process that reads and writes data, and a C++ program that takes the data, processes it, and then needs to pass it back to Java so that Java can write it to a database.
The Java program pulls its data from Hadoop, so once the Hadoop job kicks off it gets flooded with data, but the actual processing (done by the C++ program) can't handle all the data at once, so I also need a way to control the flow. To complicate the problem (but simplify my work), I do the Java side and my friend does the C++ side, and we are trying to keep our programs as independent as possible.
That's the problem. I found Google Protocol Buffers, which seems like a good way to pass data between the programs, but I'm unsure how the Java program saving data can trigger the C++ program to process it, and then, when the C++ program saves its results, how the Java program will be triggered to store them (this is for one or a few records now, but we plan to process billions of records).
What is the best approach to this problem? Is there a simple way of doing this?
The simplest approach may be a TCP socket connection: the Java program sends the data it wants processed, and the C++ program sends back the results.
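A sketch of the Java side of such a connection, using length-prefixed framing so the C++ peer always knows where one record ends and the next begins (the framing scheme here is an assumption, not something either language mandates):

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Sketch: each record travels as a 4-byte big-endian length followed by
// the payload. The C++ side is assumed to frame its replies the same way.
public class RecordChannel {
    public static void sendRecord(DataOutputStream out, byte[] record) throws IOException {
        out.writeInt(record.length);  // DataOutputStream writes big-endian
        out.write(record);
        out.flush();
    }

    public static byte[] readRecord(DataInputStream in) throws IOException {
        int len = in.readInt();
        byte[] buf = new byte[len];
        in.readFully(buf);  // blocks until the whole record has arrived
        return buf;
    }
}
```

Flow control falls out naturally if the Java side waits for each reply before pushing the next record or batch; the payload bytes can be a serialized protocol buffer.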
Since you're going to want to scale this solution, I suggest using ZMQ.
Have your Java app still pull the data from Hadoop.
It will then in turn push the data out using a PUSH socket.
Here you will have as many C++ workers as needed, each accepting connections on a PULL socket and processing the data. This scales to as many processes/cores/machines as you need.
When each worker is finished, it pushes the results out on a PUSH socket to the 'storing' Java program, which accepts them on a PULL socket.
It looks something like the standard ZMQ divide-and-conquer example.
This scales to as many workers as necessary: your first Java program will block (while still processing) when no workers are available. As long as your final Java program is fast, you will see this scale really nicely.
The emitting and saving programs can even live in the same process; just use zmq_poll :)
Alright, so I'm writing a program that essentially batch-runs other Java programs for me (multiple times, with varying parameters, parallel executions, etc.).
So far the running part works great. Using ProcessBuilder's start() method (equivalent to Runtime.exec(), I believe), it creates a separate Java process and off it goes.
The problem is that I would like to be able to pause/stop these processes once they've been started. With simple threads this is generally easy to do, but the external process doesn't seem to have any built-in functionality for waiting/sleeping, at least not from an external point of view.
My question(s) is this: Is there a way to pause a java.lang.Process object? If not, does anyone know of any related exec libraries that do contain this ability? Barring all of that, is extending Process a more viable alternative?
My question(s) is this: Is there a way to pause a java.lang.Process object?
As you've probably discovered, there's no support for this in the standard API. Process for instance provides no suspend() / resume() methods.
If not, does anyone know of any related exec libraries that do contain this ability?
On POSIX-compliant operating systems such as GNU/Linux or Mac OS, you could make another system call (using Runtime.exec, ProcessBuilder, or some natively implemented library) to issue a kill command.
Using the kill command you can send signals such as SIGSTOP (to suspend a process) and SIGCONT (to resume it).
(You will need to get hold of the process id of the external program. There are plenty of existing questions and answers that cover this.)
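A sketch of that approach (POSIX only; `Process.pid()` requires Java 9+, and the process being signaled must belong to you):

```java
import java.io.IOException;

// POSIX-only sketch: suspend/resume an external process by shelling out
// to kill(1). Process.pid() (Java 9+) supplies the child's process id.
public class Suspender {
    // sig is the signal name without the leading dash, e.g. "STOP" or "CONT"
    public static void signal(long pid, String sig)
            throws IOException, InterruptedException {
        new ProcessBuilder("kill", "-" + sig, Long.toString(pid))
                .inheritIO()
                .start()
                .waitFor();
    }
}
```

Usage would look like `Suspender.signal(p.pid(), "STOP")` to pause a `Process p` you started, and `Suspender.signal(p.pid(), "CONT")` to resume it. Note a stopped process still counts as alive.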
You will need to create a system for sending messages between processes. You might do this by:
Sending signals, depending on OS. (As aioobe notes.)
Having one process occasionally check for presence/absence of a file that another process can create/delete. (If the file is being read/written, you will need to use file locking.)
Have your "main" process listen on a port, and when it launches the children, tell them (via a command-line argument) how to "phone home" as they start up. Both programs then alternate between doing work and checking for and handling messages.
From what you have described (all Java programs in a complex batch environment) I would suggest #3, TCP/IP communication.
While it certainly involves extra work, it also gives you the flexibility to send commands or information of whatever kind you want between different processes.
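For completeness, the file-presence option (#2) can be sketched like this; the sentinel path is whatever the two processes agree on, and the child checks it between units of work:

```java
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch of option 2: the controlling process creates a sentinel file to
// ask the child to pause, and deletes it to let the child resume.
public class PauseFile {
    public static boolean shouldPause(Path sentinel) {
        return Files.exists(sentinel);
    }

    // Block until the controller deletes the sentinel file again.
    public static void waitWhilePaused(Path sentinel) throws InterruptedException {
        while (shouldPause(sentinel)) {
            Thread.sleep(200);  // polling interval; tune to taste
        }
    }
}
```

This is crude compared to TCP/IP, but it needs no networking and works fine when the latency of a polling interval is acceptable.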
A Process represents a separate process running on the machine. Java definitely does not allow you to pause them through java.lang.Process. You can forcibly stop them using Process.destroy(). For pausing, you will need the co-operation of the spawned process.
What sorts of processes are these? Did you write them?
I'm trying to develop an application that, just before quitting, has to start a new daemon process that executes the main method of a class.
I require that after the main application quits, the daemon process must still be executing.
It is a Java stored procedure running on an Oracle DB, so I can't use Runtime.exec: I can't locate the Java class from the operating-system shell, because it's defined in database structures rather than in files on the file system.
In particular, the desired behavior during a remote database session should be that I can
call a first Java method that starts the daemon process and then returns, leaving the daemon running,
and then (with the daemon process up and session control back, because the last call terminated) consequently
call a method that communicates with the daemon process (which finally quits at the end of the communication).
Is this possible?
Thanks
Update
My exact need is to create and load (with the best possible performance) a big text file into the database, given that the host has no file-transfer services, from a Java (JDK 6) client application connecting to an Oracle 11gR1 DB using the JDBC 11g OCI driver.
I have already developed a working solution that calls a procedure which stores the LOB (large object) given as input into a file, but that method goes through too many intermediate structures, which I want to avoid.
So I thought about creating a ServerSocket on the DB with a first call and later connect to it and establish the data transfer with a direct and fast communication.
The problem is that the Java procedure that creates the ServerSocket can't return while leaving a thread/process listening on that socket, and the client, to be sure the ServerSocket has been created, can't run a separate thread to handle the rest of the job.
I hope this is clear.
I'd be surprised if this was possible. In effect you'd be able to saturate the DB Server machine with an indefinite number of daemon processes.
If such a thing is possible the technique is likely to be Oracle-specific.
Perhaps you could achieve your desired effect using database triggers, or other such event driven Database capabilities.
I'd recommend explaining the exact problem you are trying to solve: why do you need a daemon? My instinct is that trying to manage your daemon's lifecycle is going to get horribly complex. You may well need to deal with problems such as preventing two instances from being launched, unexpected termination of the daemon, and taking the daemon down when maintenance is needed. This sort of stuff can get really messy.
If, for example, you want to run some Java code every hour, then almost certainly there are simpler ways to achieve that. Operating systems and databases tend to have nice mechanisms for initiating work at desired times, so having a stored procedure called when you need it is probably a capability already present in your environment. Hence all you need to do is put your desired code in the stored procedure; there's no need to hand-craft the scheduling, initiation, and management. One significant advantage of this approach is that you end up using a technique that other folks in your environment already understand.
Writing the kind of code you're considering is very interesting and great fun, but in commercial environments it is often a waste of effort.
Make another jar for your other main class, and from within your main application call it using Runtime.getRuntime().exec(), which runs an external program (another JVM) executing your other main class.
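In ordinary (non-stored-procedure) Java, that approach might be sketched like this; the jar name is hypothetical, and note the answer below explains why this is unlikely to work inside Oracle:

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch: start a second JVM from the current one. The launched process
// is not waited on, so it can outlive the launcher. The jar would be
// whatever you package your other main class into.
public class DaemonLauncher {
    public static Process launch(String... jvmArgs) throws IOException {
        String java = System.getProperty("java.home") + "/bin/java";
        List<String> cmd = new ArrayList<>();
        cmd.add(java);              // reuse the current JVM's binary
        cmd.addAll(Arrays.asList(jvmArgs));
        return new ProcessBuilder(cmd).redirectErrorStream(true).start();
    }
}
```

For example, `DaemonLauncher.launch("-jar", "daemon.jar")` would start the second JVM and return immediately without waiting for it.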
The way you start subprocesses in Java is Runtime.exec() (or its more convenient wrapper, ProcessBuilder). If that doesn't work, you're SOL unless you can use native code to implement equivalent functionality (ask another question here to learn how to start subprocesses at the C++ level) but that would be at least as error-prone as using the standard methods.
I'd be startled if an application server like Oracle allowed you access either to the functionality of starting subprocesses or to loading native code; both can cause tremendous mischief, so untrusted code is barred from them. Looking over your edit, your best approach is going to be to rethink how you tackle the real problem, e.g. by using NIO to manage the sockets more efficiently (and try not to create extra files on disk; you'll just have to add elaborate code to clean them up…)