I'm trying to send a command to a minecraft server jar using /proc/{pid}/fd/0 but the server does not execute the command.
To replicate what I'm trying to do, you can do the following on a Debian-based machine (and possibly other Linux distributions as well).
What I use to test this:
Ubuntu 14.04
minecraft_server.jar (testing with 1.8)
OpenJDK Runtime Environment (installed with default-jre-headless)
First console:
$ java -jar minecraft_server.jar nogui
Response: [ ... server starts and waiting for input]
say hi
Response: [19:52:23] [Server thread/INFO]: [Server] hi
Second console:
Now when I switch to the second console, with the server still running in the first, I write:
echo "say hi2" >> /proc/$(pidof java)/fd/0
Everything looks fine until I switch back to the first console. I can see the text "say hi2", but the server hasn't recognized it. I can type another command in the first console, and it behaves as if the text entered from the second console never existed.
Why is this? And more importantly, how do I use /proc/{pid}/fd/0 in a proper way to send commands to a java jar file?
I don't know if this is some kind of Java thing that I'm not aware of, whether there is some flag I can use when starting the server, or whether the server jar itself is the problem.
I'm aware that you can use screen, tail -f, or some kind of server wrapper to accomplish this, but that's not what I'm after. I would like to send a command using this method in some way.
It's not a Java thing. What you are trying is simply not doable.
Test it like this:
Console1:
$ cat
This will basically echo anything you type on it as soon as you hit "return".
Console2: Find the process number of your cat command. Let's say it's NNN. Do:
$ echo Something > /proc/NNN/fd/0
Switch back to Console1. You'll see "Something" on the console output, but it's not echoed.
Why? Do
$ ls -l /proc/NNN/fd
And you may understand. All three descriptors, 0 for stdin, 1 for stdout and 2 for stderr are actually symbolic links, and all point to the same pseudoterminal slave (pts) device, which is the pts associated with your first terminal.
So basically, when you write to it, you actually write to the console output, not to its input. If you read from that file, you could steal some of the input that was supposed to go to the process in the first console (you are racing for this input). That's how a character device works.
The documentation for /proc says that:
/proc/[pid]/fd/
This is a subdirectory containing one entry for each file
which the process has open, named by its file descriptor, and
which is a symbolic link to the actual file. Thus, 0 is
standard input, 1 standard output, 2 standard error, and so
on.
So these are not the actual file descriptors opened by the process. They are just links to files (or in this case, character devices) with names that indicate which descriptor they are attached to in the given process. Their main duty is to tell you whether the process has redirected its file descriptors or has opened any new ones, and which resources they point to.
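You can check this for any process you own; for example, inspecting the current shell's own descriptors (a quick sketch, assuming a Linux /proc):

```shell
# Each entry under /proc/<pid>/fd is a symlink to the real file or device.
# $$ is the current shell's pid; for an interactive shell, 0, 1 and 2
# usually all resolve to the same pseudoterminal slave, e.g. /dev/pts/0.
ls -l /proc/$$/fd/0 /proc/$$/fd/1 /proc/$$/fd/2
readlink /proc/$$/fd/1
```

If the shell's output is redirected (to a pipe or a file), the symlink target shows that instead, which is exactly the point: these links report where each descriptor currently goes.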
But if you want an alternative way of doing this, you can use a fifo - a named pipe.
Create a fifo by doing:
$ mkfifo myfifo
Run your java program:
$ java -jar minecraft_server.jar nogui < myfifo
Open another console and write:
$ cat > myfifo
Now start typing things. Switch to the first console. You'll see your server executing your commands.
Mind your end-of-files, though. Several processes can write to the same fifo, but as soon as the last one closes it, your server will receive an EOF on its standard input.
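The EOF behaviour is easy to demonstrate: hold the write end open on a spare file descriptor, and the reader only sees EOF once that descriptor is closed (a minimal sketch; demo.fifo and out.txt are throwaway names):

```shell
mkfifo demo.fifo
# Reader in the background; it exits when the pipe delivers EOF
cat demo.fifo > out.txt &
reader=$!
# Hold a writer open on fd 3 so short-lived writers don't deliver EOF
exec 3> demo.fifo
echo "first"  > demo.fifo
echo "second" > demo.fifo
# Closing the last writer delivers EOF; the reader drains the pipe and exits
exec 3>&-
wait "$reader"
result=$(cat out.txt)
echo "$result"
rm demo.fifo out.txt
```

Without the `exec 3> demo.fifo` line, the first `echo` would close the pipe's only writer, `cat` would exit, and the second `echo` would block forever waiting for a reader.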
It is possible to get around the fact that a named pipe is 'closed' when a process ends. You can do this by keeping a file descriptor to the named pipe open in another process.
#! /bin/bash
# tac prints a file in reverse order (tac -> cat)
cmd="tac"
# create fifo called pipe
mkfifo pipe
# open pipe on the current process's file descriptor 3
exec 3<>pipe
bash -c "
# A child process inherits all file descriptors, so cmd is run from a sub-
# process. That lets us close fd 3 here so cmd does not inherit it, while
# fd 3 stays open in the parent process.
exec 3>&-
# start your cmd and redirect the named pipe to its stdin
$cmd < pipe
" &
# write data to pipe
echo hello > pipe
echo world > pipe
# short wait before tidy up
sleep 0.1
# done writing data, so close fd 3 on parent (this) process
exec 3>&-
# tac now knows it will receive no more data so it prints its data and exits
# data is unbuffered, so all data is received immediately. Try `cat` instead to see.
# clean up pipe
rm pipe
Related
I have an AppleScript which I am using to run a .jar file. The .jar file takes several inputs which were originally entered via the command line but now I enter into a .csv and get read into the .jar automatically. For unknown reasons, sometimes a number in the CSV is not read correctly leading to a NumberFormatException in the Java code. However, instead of breaking, my script continually tries to enter the invalid input in an infinite loop. Is there a way to amend my code so that when an error is raised by the .jar, the script stops?
Here is my current code:
on RunFile(jar_location)
do shell script "cd " & jar_location & " ; cat 'prompt.csv' | sh 'runScript.sh' 'WSO'"
end RunFile
After going over this in the comments, it's clear that the problem is that the .jar file is trying to be interactive, asking for some input at the cursor, and AppleScript's do shell script is not designed for that. AppleScript can get errors and output from the shell, but it cannot feed a response back to the shell, or tell whether a shell script is waiting for input.
If the .jar file cannot be operated in a non-interactive mode, then the only way for AppleScript to make sure the process ends is to grab its process id, wait a reasonable amount of time, and then send it a kill signal. That script would look like this:
on RunFile(jar_location)
set pid to do shell script "cd " & jar_location & " ; cat 'prompt.csv' | sh 'runScript.sh' 'WSO' &> /dev/null & echo $!"
-- wait 5 seconds, or whatever seems appropriate for the task to complete
delay 5
try
do shell script "kill " & pid
end try
end RunFile
The appended &> /dev/null & echo $! phrase detaches the shell script, allowing the AppleScript to move forward, and returns the process id of the process for later use. I've put the kill signal in a try block so that the script does not throw an error if the process has already exited normally.
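The detach-and-capture-pid idiom is plain shell and can be tried outside AppleScript (using `sleep` as a stand-in for the real job; note that `&>` is bash-specific, so in plain sh it is spelled `> /dev/null 2>&1`):

```shell
# Start a stand-in long-running job detached, and capture its pid via $!
sleep 300 > /dev/null 2>&1 &
pid=$!
echo "started pid $pid"
# Later: send the kill, ignoring failure if it already exited
# (the same role as the AppleScript try block)
kill "$pid" 2>/dev/null || true
```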
I'm trying to use the "top" command from a standalone java app. I basically want to run this:
top -p myProcessId -n 1 -b
My Java pseudocode:
Process process = Runtime.getRuntime().exec("top -p myProcessId -n 1 -b");
// read both:
process.getInputStream();
process.getErrorStream()
and the output I get is:
top: failed tty get
I'm not sure what that output means - any ideas how to use "top" correctly here? I'm on Ubuntu, FWIW.
Thanks
----- Update -------------
OK I was able to do this instead with:
ps -Ao pid,rsz,cmd
and then I read through the output lines for the process I'm interested in and take the resident memory value from there.
top doesn't use the ordinary standard output stream; it manipulates the terminal device directly. This means that you can't just read in its output like you can from most command-line programs. There are a few other programs that might do what you need (ps is a likely candidate), or you might be able to read the relevant information from /proc.
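A minimal sketch of the `ps` route for a single process (using this shell's own pid, `$$`, as the example; on Linux procps the `rss` column is the resident set size in kilobytes):

```shell
# Ask ps for just the RSS of one pid; the trailing '=' after the column
# name suppresses the header line, so the output is only the value.
rss_kb=$(ps -o rss= -p "$$")
echo "RSS: ${rss_kb} KB"
```

Because this writes to ordinary stdout, it can be read from `Runtime.exec()` without the tty problem that `top` has.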
I am fairly new to Amazon. I have a Java program which reads GBs of crawled data, and I am running it using the AWS Toolkit for Eclipse. The disadvantage is that I would have to keep my machine running for weeks to read the entire crawled dataset, and that is not possible. Apart from that, I can't download GBs of data to my local PC.
Is there any way that I can upload the jar to Amazon and have Amazon run it without involving my computer? I have heard of web crawlers running on Amazon for weeks without downloading data to the developer's machine, and without the developer having to keep his own machine on for months.
The feature I am asking is just like "job flows" in Amazon Elastic Map-Reduce. You upload the code, it runs it inside. It doesn't matter whether you keep "your" machine turned on or not.
You can run it with the nohup command on *nix:
nohup java -jar myjar.jar >> logfile.log 2>&1 &
This will run your jar file, directing the output [stderr and stdout] to logfile.log. The & is needed so that it runs in the background, freeing up the command line/shell.
!! EDIT !!
It's worth noting that the easiest way I've found for stopping the job once it's started is:
ps -ef | grep java
Returns ec2-user 19082 19056 98 18:12 pts/0 00:00:11 java -jar myjar.jar
Then kill 19082.
Note, you can tail -f logfile.log or other such derivatives [less, cat, head] to view the output from the jar.
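If `pgrep` is available, the find-and-kill step can be scripted without grepping `ps` output by hand (demonstrated with a stand-in `sleep` process rather than the real jar):

```shell
# Start a stand-in background job
sleep 12345 &
# pgrep -f matches against the full command line (like ps -ef | grep),
# and -n picks the newest match in case of duplicates
pid=$(pgrep -n -f "sleep 12345")
kill "$pid"
echo "killed $pid"
```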
Answer to question/comment
Hi. You can use System.out.println(), yes, and that'll end up in logfile.log. The part that makes that happen is the 2>&1, which means "redirect stream 2 into stream 1". In Unix speak that means redirect/pipe stderr into stdout (note it has to come after the >> logfile.log redirection for both streams to end up in the file). The >> logfile.log means "append output to logfile.log". As System.out.println() writes to stdout, it'll end up in logfile.log.
However, if your app is set up to use Log4j/commons-logging, then LOG.info("statement") will end up in the log file configured in log4j.properties. With that configuration, the only statements that will end up in logfile.log are those that are system generated (errors, Linux internal system messages) or written explicitly to stdout (i.e. System.out.println() statements).
I call a Java function using PHP. The code is:
exec('pushd d:\xampp\htdocs\file_excecute\class & java Autoingestion username password id Sales Daily Summary 20120902',$output,$return);
This code worked on a Windows machine but it is not working on a Linux server. The code is:
exec('pushd \var\www\domainname.com\itune_report\class & java Autoingestion username password id Sales Weekly Summary 20120901',$output,$return);
You are using the wrong kind of slash as a directory separator, but that may not be your only problem.
The output of the command appears in $output, since you use the exec(command, output, return) form.
However, this only gives you stdout. The shell will send error messages to stderr.
Unfortunately there isn't a version of exec() that reads stderr.
You can merge both outputs to $output by adding 2>&1 at the end of your shell command:
exec("mycommand 2>&1", $output, $return);
Look at $output, and you will either find the output of your successful command or error messages which you can use to work out why it didn't work.
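What `2>&1` buys you is easy to see in a plain shell before wiring it into PHP (a subshell that writes one line to each stream):

```shell
# Without 2>&1 the command substitution would capture only "out";
# with it, stderr is merged into stdout and both lines are captured.
merged=$( (echo out; echo err >&2) 2>&1 )
echo "$merged"
```

This prints both "out" and "err"; drop the `2>&1` and "err" goes to the terminal instead of the captured variable, which is exactly what happens to error messages in PHP's $output.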
If you want to write something more rigorous that treats stdout and stderr separately, you'll need to use proc_open() instead: PHP StdErr after Exec()
There are (perhaps insurmountable) difficulties when trying to execute sudo commands from a PHP script and from an external script called by PHP on SELinux enabled machines.
Make sure you use Linux directory path in your command
Linux won't let apache change the group id of the process by default.
You may need to use another solution, like make the PHP script deposit a file in a directory which is monitored by cron or inotify and which will call another script with root privileges.
It does not work on Linux as written. pushd is a cmd.exe builtin on Windows; bash has its own pushd, but PHP's exec() runs commands through /bin/sh, which may not provide it. The path on Linux must also use forward slashes, not backslashes, as separators.
This is an extremely strange situation, but I just cannot point out what I'm doing wrong.
I'm executing a big bunch of SQL scripts (table creation scripts, mostly). They are executed through Java, using sqlcmd. Here's the sqlcmd command I use.
sqlcmd -m 11 -S SERVER -d DB -U USER -P PASS -r0 -i "SCRIPT.sql" 2> "ERRORS.log" 1> NULL
Note: I use the -r0 and redirects to make sure only errors go into the log file. I chuck out all STDOUTs.
Now I execute this command in Java, using getRuntime.exec(), like this.
Runtime.getRuntime().gc();
strCmd = "cmd /c sqlcmd -m 11 -S SERVER -d DB -U USER -P PASS -r0 -i \"SCRIPT.sql\" 2> \"ERRORS.log\" 1> NULL"
Process proc = Runtime.getRuntime().exec(strCmd);
proc.waitFor();
Note: I use cmd /c so that the command runs in its own shell and exits gracefully. This also helps in immediately reading the error log to look for errors.
The Problem!
This command works perfectly when run by hand on the command prompt (i.e. the tables get created as intended). However, when executed through Java as shown, the scripts run, there are no errors, no exceptions, nothing in the logs. But when checking in SSMS, the tables aren't there!
Where do I even begin debugging this issue?
UPDATE: I'M A MORON
The return value from the getRuntime().exec method is 1. It should be 0, which denotes normal execution.
Any pointers on how to fix this?
UPDATE 2
I've looked at the process' ErrorStream, and this is what it has.
Sqlcmd: Error: Error occurred while opening or operating on file 2>
(Reason: The filename, directory name, or volume label syntax is
incorrect).
Looks like the path I'm passing is wrong. The error log goes into my profile directory, which is C:\Documents and Settings\my_username. Do the spaces in the path matter? I'm double-quoting them anyway!
Have a look at the exec method with an string array as parameter:
java.lang.Runtime.exec(String[] cmdArray)
The JavaDoc for this method says:
Executes the specified command and arguments in a separate process.
So the first item in the array is the command, and the remaining items are passed to it as arguments, e.g.,
Runtime.getRuntime().exec(new String[] {"cmd", "/c", "sqlcmd ... "});
After looking at your comment and the implementation of exec(String), it seems that the exec method passes the redirection operator > to cmd as a literal argument, because exec(String) splits the command string into an array using whitespace as the separator.
I don't have the privileges to post comments - which is what this is - but what if you try putting in a bogus user id for the DB? Does that cause a different execution path? Will it give you a Java error, or an auth error in your DB? Also, definitely tweak the user, not the password; learn from my experience that tweaking the password is a great way to get an account locked out!
The other thing - and this may be a shot in the dark - but what are the JRE and driver you're using? I believe there's a known issue with JRE 1.6.0.29 and the sqljdbc4 JAR. I have more details on this, but I'll have to post the link once I get to work.
Edit:
I know it's been established that the JRE/sqljdbc combo isn't your issue, but if folks search and find this, here is the link I spoke of above:
Driver.getConnection hangs using SQLServer driver and Java 1.6.0_29
First, enable logging or viewing of the command's output (since exec() returns 1); it should point to the likely cause of the issue.
Use proc.getInputStream() and proc.getErrorStream() and print their contents to a file or the console.