IOException on RandomAccessFile's readBytes function during file monitoring - java

I am monitoring a log file that another Java process is constantly writing to. Both processes (the monitoring application and the monitored application) run on Linux, specifically CentOS.
The problem is that every time I restart the monitored application, the monitoring application gets this error:
java.io.IOException: Input/output error
at java.io.RandomAccessFile.readBytes(Native Method)
at java.io.RandomAccessFile.read(RandomAccessFile.java:361)
at LogMonster.fileChanged(LogMonitor.java:57)
at FileMonitor.fireFileChangeEvent(FileMonitor.java:96)
at FileMonitor$FileMonitorTask.run(FileMonitor.java:128)
at java.util.TimerThread.mainLoop(Timer.java:512)
at java.util.TimerThread.run(Timer.java:462)
I keep a Map with the file name as key and a RandomAccessFile object as value, and I populate it as follows after adding this object as a listener:
monitor.addFileChangeListener(logMonitor, LogFileName, LogMonitor_Properties.getTimeDelay());
randomAccessFile_list.put(LogFileName, new RandomAccessFile(LogFileName, "r"));
An event is fired every time the file is modified, and it is within this event handler that I try to read from the RandomAccessFile after the monitored application has been restarted (it works fine before the restart).
The following line of code within the 'fileChanged' function is causing the error:
randomAccessFile_list.get(file.getAbsolutePath()).read(byteArray);
I use a bash script, 'go', to kill all running instances of the application and then restart it.
Contents of go:
cd /path/to/app
./kill
nohup ./app.run &
Contents of kill:
kill -9 $(lsof app.run| awk '{print $2}')
kill -9 $(lsof app.log| awk '{print $2}')
kill -9 $(lsof app.go| awk '{print $2}')
Contents of app.run:
./app.go >>app.log 2>&1
Contents of app.log:
Just text output of the application.
Contents of app.go:
. /path/to/some/other/location/setClassPath.go
export CLASSPATH=$CLASSPATH
echo $CLASSPATH
/usr/local/jdk1.6.0_27/bin/java -cp $CLASSPATH MyApp
I apologise for posting a question that looks exhausting before you've even read it, but I'm really at my wits' end and any help would be greatly appreciated.
Thanks in advance.

From the method names it appears you're using a polling library for file monitoring. It isn't actually opening the file; it just stats it every once in a while.
You're then also keeping a separate file handle open to the file in your map.
The library fires an event only when the modification time changes; that doesn't mean any new data was appended to the file. You then apparently attempt to read from your file handle and get an IOException.
There are a number of issues with this approach, but without seeing more of your code it's impossible to tell you exactly what the problem is. I'm guessing that the monitored process is truncating, deleting, or doing something else to the file when it restarts that invalidates your open file handle.
File monitoring like this is generally used when you want to reload an entire file (usually a properties file, or a document being edited), not for trying to do a "tail".
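If you do want tail-like behaviour, one option is to detect the stale handle and recover from it when the event fires. A minimal sketch; only randomAccessFile_list comes from your code, the class, method, and recovery policy are assumptions:

import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.HashMap;
import java.util.Map;

public class LogTailer {
    private final Map<String, RandomAccessFile> randomAccessFile_list =
            new HashMap<String, RandomAccessFile>();

    void readNewData(String path) throws FileNotFoundException {
        RandomAccessFile raf = randomAccessFile_list.get(path);
        try {
            if (raf.length() < raf.getFilePointer()) {
                raf.seek(0); // file was truncated in place: start from the top
            }
            byte[] buffer = new byte[(int) (raf.length() - raf.getFilePointer())];
            raf.readFully(buffer);
            // ... process buffer ...
        } catch (IOException e) {
            // The old handle points at a dead inode: reopen the recreated file.
            randomAccessFile_list.put(path, new RandomAccessFile(path, "r"));
        }
    }
}

The IOException branch is the important part: after your restart script deletes and recreates the log, the old descriptor can never become valid again, so the only fix is to open a new one.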

force tty creation for crontab job

I've been coding a little bash script which connects to several distant servers, then executes a java CLI program through a few expect instructions.
It goes like this:
bash script
  expect
    ssh to server using public keys
    expect ...
    expect ...
    log_file my_file (everything displayed on the screen is now redirected to my_file)
    expect ...
    log_file (closing my_file)
    exit
  exit
When I execute my script manually everything runs OK.
When I execute it through crontab, the file my_file is empty.
I found out that cron jobs don't have a tty attached and that PATH isn't the same as usual.
My question is: is there a way to force the creation/allocation of a tty for my cron job?
I've tried using the -t and -tt options with ssh, but no result.
Redirecting standard output at different levels of the script didn't work either.
Also, I can't install screen (which could have helped, maybe) and 'script' isn't writing anything either.
Thanks a bunch!
Check the cron log for errors, and make sure every command in the job is given by its full path: cron runs with a minimal PATH, so commands that resolve fine in your interactive shell may not be found at all under cron.

How to run a Jar in Amazon EC2?

I am fairly new to Amazon. I have a Java program which reads GBs of crawled data, and I run it using the AWS Toolkit for Eclipse. The disadvantage is that I would have to keep my machine running for weeks to read the entire crawl, and that is not possible. Apart from that, I can't download GBs of data to my local PC (because the program is the one reading it).
Is there any way I can upload the jar to Amazon and have Amazon run it without involving my computer? I have heard of web crawlers running on Amazon for weeks without downloading data onto the developer's machine, and without requiring the developer to keep his machine on for months.
The feature I am asking about is just like "job flows" in Amazon Elastic MapReduce: you upload the code, and it runs it inside. It doesn't matter whether you keep "your" machine turned on or not.
You can run it with the nohup command on *nix:
nohup java -jar myjar.jar >> logfile.log 2>&1 &
This will run your jar file, appending both stdout and stderr to logfile.log. (Note the order: the redirection to the file has to come before 2>&1.) The & is needed so that it runs in the background, freeing up the shell.
!! EDIT !!
It's worth noting that the easiest way I've found for stopping the job once it's started is:
ps -ef | grep java
Returns ec2-user 19082 19056 98 18:12 pts/0 00:00:11 java -jar myjar.jar
Then kill 19082.
Note, you can use tail -f logfile.log (or less, cat, head) to view the output from the jar.
Answer to question/comment
Hi. You can use System.out.println(), yes, and that'll end up in logfile.log. The part of the command that makes that happen is 2>&1, which means "redirect stream 2 into stream 1"; in Unix-speak, pipe stderr into stdout. We then specify >> logfile.log, which means "append output to logfile.log". As System.out.println() writes to stdout, it ends up in logfile.log.
However, if your app is set up to use Log4j/commons-logging, then LOG.info("statement"); will end up in the log file configured in log4j.properties. With that configuration, the only statements that end up in logfile.log are those that are system generated (errors, Linux internal system messages) or written explicitly to stdout (i.e. System.out.println() statements).
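For illustration, a minimal Log4j 1.x sketch (the class name is hypothetical) showing the two destinations side by side:

import org.apache.log4j.Logger;

public class CrawlerJob {
    // Routed according to log4j.properties, not the shell redirect.
    private static final Logger LOG = Logger.getLogger(CrawlerJob.class);

    public static void main(String[] args) {
        LOG.info("goes to the file configured in log4j.properties");
        System.out.println("goes to logfile.log via the shell redirect");
    }
}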

How to get rid of /tmp/.java_pid<number> files in Linux?

I noticed a lot of /tmp/.java_pid<...> files on my Linux machine. The file command says they are sockets. Assuming they are created by Java, I wonder why Java does not clean them up. How can I make Java clean them up, or simply not create them?
These files are created by the JVM to support debugging. It's part of the Attach API.
If you don't want java to create them then start java apps without debugging enabled.
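For context, a socket like /tmp/.java_pid<pid> appears when an attach client connects to that JVM. A minimal sketch of such a client, assuming tools.jar is on the classpath and a target pid is passed as the first argument:

import com.sun.tools.attach.VirtualMachine;

public class AttachDemo {
    public static void main(String[] args) throws Exception {
        // Attaching makes the target JVM start its attach listener,
        // which creates /tmp/.java_pid<pid>.
        VirtualMachine vm = VirtualMachine.attach(args[0]);
        System.out.println(vm.getSystemProperties().getProperty("java.version"));
        vm.detach();
    }
}

Tools like jstack and jmap use the same mechanism, which is why the sockets show up on machines where those are run.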
You can safely delete them if there isn't a jvm with the corresponding pid... a task that is eminently suitable for a cron job.
A little bit of bash:
for file in /tmp/.java_pid[0-9]*; do
    [ -d /proc/${file#*.java_pid} ] || rm -f "$file"
done
Pid files are generally where applications store their process id, so the user can easily kill the process afterwards. These applications should delete those files when they shut down.
I wouldn't worry about these files too much, unless you are seeing more and more of them and they don't get deleted; that might be a telltale sign that you have an application which is not shutting down correctly.

Starting and killing java app with shell script (Debian)

I'm new to UNIX. I want to start my Java app with a script like so:
#!/bin/sh
java -jar /usr/ScriptCheck.jar &
echo $! > /var/run/ScriptCheck.pid
This supposedly works: it does run the app and it does write the pid file. But when I try to stop the process with a different script, which contains this:
#!/bin/sh
kill -9 /var/run/ScriptCheck.pid
the console gives me this error:
bash: kill: /var/run/ScriptCheck.pid: arguments must be process or job IDs
My best guess is that I'm not writing the right code in the stop script; maybe I'm not reading the .pid file correctly.
Any help will be very much appreciated.
You're passing a file name as an argument to kill when it expects a number (a process id), so just read the process id from that file and pass it to kill:
#!/bin/sh
PID=$(cat /var/run/ScriptCheck.pid)
kill -9 $PID
A quick and dirty method would be:
kill -9 $(cat /var/run/ScriptCheck.pid)
Your syntax is wrong: kill takes a process id, not a file name. You also should not be using kill -9 unless you absolutely know what you are doing.
kill $(cat /var/run/ScriptCheck.pid)
or
xargs kill </var/run/ScriptCheck.pid
I think you need to read in the contents of the ScriptCheck.pid file (which I'm assuming has a single entry with the PID of the process on the first row).
#!/bin/sh
procID=0
while read line
do
    procID="$line"
done </var/run/ScriptCheck.pid
kill -9 "$procID"
I've never had to create my own pid; your question was interesting.
Here is a bash code snippet I found:
#!/bin/bash
PROGRAM=/path/to/myprog
$PROGRAM &
PID=$!
echo $PID > /path/to/pid/file.pid
You would have to have root privileges to put your file.pid into /var/run --referenced by a lot of articles -- which is why daemons have root privileges.
In this case, you need to put your pid some agreed upon place, known to your start and stop scripts. You can use the fact a pid file exists, for example, not to allow a second identical process to run.
The $PROGRAM & launches the program in the background.
If you want the program to hang around after your script exits, launch it with nohup, so it won't die when your script logs out.
I just checked: $! still returns the PID when the program is launched with nohup.
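If you'd rather not capture $! in the shell at all, the JVM can write its own pid file at startup. A sketch, assuming a HotSpot JVM, where the runtime name has the form "pid@hostname" (the class and path are illustrative):

import java.io.FileWriter;
import java.io.IOException;
import java.lang.management.ManagementFactory;

public class PidWriter {
    public static void writePidFile(String path) throws IOException {
        // On HotSpot the runtime MXBean name looks like "12345@hostname".
        String jvmName = ManagementFactory.getRuntimeMXBean().getName();
        String pid = jvmName.split("@")[0];
        FileWriter out = new FileWriter(path);
        try {
            out.write(pid);
        } finally {
            out.close();
        }
    }
}

Calling PidWriter.writePidFile("/var/run/ScriptCheck.pid") early in main gives the stop script the same file it expects.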

Thread dump not getting generated on JBoss

I used to generate thread dumps by running kill -QUIT, and they would land in the log file my server logs go to. When the file grew too large I removed it using rm and created a new file of the same name.
Now when I use kill -QUIT to take a thread dump, nothing gets written to the log file: it's empty.
Can anyone help?
The default JBoss startup scripts on Unix usually look something like:
nohup $JBOSS_HOME/bin/run.sh $JBOSS_OPTS >/dev/null 2>&1 &
This is unfortunate because it sends both stdout and stderr to /dev/null. Usually this is not a problem, because once log4j initializes, most application output goes to boot.log or server.log. Thread dumps and other low-level errors, however, get lost.
Your best bet is to change the startup script to redirect stdout and stderr to a file. Additionally, one thing that's often overlooked in the default setup is stdin: for daemon processes it's a best practice to redirect stdin from /dev/null. For example:
nohup $JBOSS_HOME/bin/run.sh $JBOSS_OPTS >> console-$(date +%Y%m%d).out 2>&1 < /dev/null &
Lastly, if you have a running process, you can use jstack, which ships with the JDK, to get a thread dump. It outputs to the console from which it's invoked. I prefer the output from kill -3, but jstack also lets you view native stack frames.
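If you can modify the application, a dump can also be produced from inside the JVM itself, independent of where the console is pointed. A minimal sketch using the standard Thread API (where you send the output is up to you):

import java.io.PrintStream;
import java.util.Map;

public class ThreadDumper {
    // Print a stack trace for every live thread to the given stream.
    public static void dumpAllThreads(PrintStream out) {
        for (Map.Entry<Thread, StackTraceElement[]> entry
                : Thread.getAllStackTraces().entrySet()) {
            out.println("Thread: " + entry.getKey().getName());
            for (StackTraceElement frame : entry.getValue()) {
                out.println("    at " + frame);
            }
        }
    }
}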
If this is on *nix: when you delete a file, any process that still has it open will keep writing to the old (now unlinked) file. The file is only really deleted once every file handle to it has been closed.
You would have to get the JVM to close and reopen the log file. I'm not sure this can be done without a restart.
If you go into JMX and find jboss.system:service=Logging,type=Log4jService, you can invoke its reconfigure method, which should cause log4j to reopen all of its log files. After that, kill -QUIT should work again.
