I used to generate thread dumps by running kill -QUIT, and they would show up in the log file where my server logs were. When the file grew too large, I removed it using rm and created a new file of the same name.
Now when I use kill -QUIT to take thread dumps, nothing gets written to the log file - it's empty.
Can anyone help?
The default JBoss startup scripts on Unix usually look something like:
nohup $JBOSS_HOME/bin/run.sh $JBOSS_OPTS >/dev/null 2>&1 &
This is unfortunate because it sends both stdout and stderr to /dev/null. Usually this is not a problem, because once log4j initializes, most application output goes to boot.log or server.log. However, thread dumps and other low-level errors get lost.
Your best bet is to change the startup script to redirect stdout and stderr to a file. Additionally, one thing that's overlooked in the default setup is redirecting stdin: for daemon processes it's a best practice to redirect stdin from /dev/null. For example:
nohup $JBOSS_HOME/bin/run.sh $JBOSS_OPTS >> console-$(date +%Y%m%d).out 2>&1 < /dev/null &
Lastly, if you have a running process, you can use jstack, which is included with the JDK, to get a thread dump. It outputs to the console from which it's invoked. I prefer the output from kill -3, but jstack also lets you view native stack frames.
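For example, a hedged invocation (the PID 12345 is hypothetical; find yours with jps or ps):
jstack -l 12345 > threaddump-$(date +%Y%m%d%H%M%S).txt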
If this is on *nix: when you delete a file, every process that still has it open will keep writing to the old (now unlinked) file. The file is only really deleted once all file handles to it are closed.
You would have to cause the JVM to close and re-open the log file. I'm not sure this can be done without a restart.
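A hedged way to confirm this state, assuming lsof is installed: deleted-but-still-open files have a link count of 0, which lsof can filter on:
lsof +L1 | grep server.log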
If you go into jmx and find jboss.system:service=Logging,type=Log4jService you can then invoke the reconfigure method which should cause log4j to reopen any of its log files. Then the kill -quit should work.
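For example, a hedged sketch using the twiddle.sh utility shipped in $JBOSS_HOME/bin (adjust the host/port options for your install):
./twiddle.sh invoke "jboss.system:service=Logging,type=Log4jService" reconfigure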
Every once in a while our catalina.out file gets very very large (yes, I will be implementing slf4j and logback in my applications to prevent this in the future). But for now, when I go to cycle the logs, I copy catalina.out to catalina.{date} and execute cat /dev/null > catalina.out. The problem is, tomcat will capture no further logs after I do that, until tomcat is restarted the next morning, and this is not ideal. Why does this happen? And is there a way to avoid it?
Easy as cake: echo > catalina.out. The file descriptor won't change and java can continue to write to that file.
The traditional way is to
cat /dev/null > catalina.out
It will clear the log file, and will not disrupt the processes that currently hold open file handles.
The better way is to avoid losing your logging information at all, by rotating the log file out. To do this, create or edit the file /etc/logrotate.d/tomcat and have its contents read:
/var/log/tomcat/catalina.out {
    copytruncate
    daily
    rotate 7
    compress
    missingok
    size 5M
}
Then run logrotate manually (as root) with the command
/usr/sbin/logrotate /etc/logrotate.conf
And you should have the log file rotated out daily, or if the size exceeds 5M, with the last seven logs kept for debugging purposes.
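To check the new rule before the next scheduled run, logrotate can be pointed straight at the configuration; -d does a dry run and -f forces an immediate rotation:
/usr/sbin/logrotate -d /etc/logrotate.d/tomcat
/usr/sbin/logrotate -f /etc/logrotate.d/tomcat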
You can truncate the file, which also makes logical sense, since truncating is essentially what you are trying to do:
truncate -s 0 catalina.out
FYI: Doing a cat /dev/null > file does not alter the inode of the file.
logs]$ls -i test.log
19794063 test.log
logs]$
logs]$cat /dev/null > test.log
logs]$ls -i test.log
19794063 test.log
Also, I had a separate command tailing live data into test.log while running these commands, and afterwards the tail to test.log still worked without a problem. This does not answer your question as to why it stopped working, but it helps rule out an inode change.
What you are looking for is log rotation. Since this requires low-level signal support while the process is running, you should do it indirectly. You can use this procedure to achieve it; this is also documented on the Tomcat Wiki.
Go to the folder /opt/tomcat/logs/ and type in the command line:
sudo sh -c 'echo > catalina.out'
(The sh -c wrapper matters: with a plain sudo echo > catalina.out, the redirection is performed by your own shell, not by root.)
This works without stopping Tomcat, and it will always work. You can also schedule this command on a daily or weekly basis.
If catalina.out is deleted after Tomcat is stopped, a new catalina.out will be created once Tomcat starts again, and that is totally safe.
But if you remove catalina.out while Tomcat is running, it will keep logging to the already-removed catalina.out (Tomcat still holds a reference to the file), so the space will not be released. You would have to restart the Tomcat server to release the space, which is not recommended.
Actually, you can clear the catalina.out file without stopping Tomcat with the command below (the sh -c wrapper is needed so the redirection runs with root's permissions):
sudo sh -c 'cat /dev/null > /opt/tomcat/apache-tomcat-9.0.37/logs/catalina.out'
To keep catalina.out small, either of the following approaches can be used:
Add the above command to a cron schedule and run it daily, weekly, monthly, or yearly as preferred (a sketch of such an entry follows this list).
Use the logrotate tool, a log-management command-line tool on Linux. It can rotate log files under different conditions; in particular, on a fixed schedule or once a file has grown to a certain size.
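A hedged example of the cron approach as an /etc/cron.d entry (the schedule and path are illustrative; the sh -c wrapper makes the redirection run with root's permissions):
0 3 * * 0 root sh -c 'cat /dev/null > /opt/tomcat/logs/catalina.out'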
Simple and straightforward solution:
echo "" > catalina.out
Note: if you need elevated permissions, wrap it as sudo sh -c 'echo "" > catalina.out'; the redirection itself runs in the calling shell.
I am fairly new to Amazon. I have a Java program which reads GBs of crawled data, and I am running it using the AWS Toolkit for Eclipse. The disadvantage here is that I would have to keep my machine running for weeks if I need to read all of the crawled data, and that is not possible. Apart from that, I can't download GBs of data to my local PC (because it is the one reading the data).
Is there any way I can upload the jar to Amazon, and have Amazon run it without involving my computer? I have heard of web crawlers running on Amazon for weeks without downloading data onto the developer's machine and without the developer keeping his own machine on for months.
The feature I am asking about is just like "job flows" in Amazon Elastic MapReduce: you upload the code, and it runs it inside. It doesn't matter whether you keep "your" machine turned on or not.
You can run it with the nohup command on *nix:
nohup java -jar myjar.jar >> logfile.log 2>&1 &
This will run your jar file, directing the output [stdout and stderr] to logfile.log. The & is needed so that it runs in the background, freeing up the command line / shell.
!! EDIT !!
It's worth noting that the easiest way I've found for stopping the job once it's started is:
ps -ef | grep java
which returns something like
ec2-user 19082 19056 98 18:12 pts/0 00:00:11 java -jar myjar.jar
Then kill 19082.
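A shorter, hedged alternative (assuming the procps pkill tool is available) matches on the full command line and kills the process in one step:
pkill -f myjar.jar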
Note, you can tail -f logfile.log or other such derivatives [less, cat, head] to view the output from the jar.
Answer to question/comment
Hi. You can use System.out.println(), yes, and that'll end up in logfile.log. Two parts of the command make that happen: >> logfile.log means "append stdout to logfile.log", and 2>&1 means "redirect stream 2 into stream 1" - in Unix speak, pipe stderr into stdout. Since System.out.println() writes to stdout, it ends up in logfile.log.
However, if your app is set up to use Log4j/commons-logging, then LOG.info("statement"); will end up in the log file configured in log4j.properties. With that configuration, the only statements that end up in logfile.log will be those that are either system generated (errors, Linux internal system messages) or written explicitly to stdout (i.e. System.out.println() statements).
So I have the following problem: I have a web service running inside a Tomcat7 server on Linux. The web service however has to execute some commands (mostly file operations such as copy and mount). Copy I've replaced with java.nio, but I don't think that there is a replacement for mount.
So I'm trying to execute shell commands out of my Tomcat Java process. Unfortunately it doesn't execute my commands. I've implemented the execution of shell commands in Java before. So my code should be correct:
Process pr = Runtime.getRuntime().exec("mount -o loop -t iso9660 <myimage> <mymountpoint>");
pr.waitFor();
<myimage> and <mymountpoint> are absolute paths, so no issues there either.
I've debugged my commands and they are working when executed on the console.
I've tried sending other commands. Simple commands such as id and pwd are working!
I've tried using /bin/bash -c "<command>", which didn't work.
I've tried executing a shell script, which executes the command, which didn't work.
I've tried escaping the spaces in my command, which didn't work.
So I've dug even deeper, and now I'm suspecting some Tomcat security policy (sandbox?), which prevents me from executing the command. Since security is no issue for me (it's an internal system, completely isolated from the outside world), I've tried a hack which became quite popular just recently:
System.setSecurityManager(null);
This didn't work either. I'm using Java 7 and Tomcat 7 on RHEL 6. Tomcat 7 is just extracted! I don't have any files in /etc/.. or any folder other than /opt/tomcat/, where I've extracted the zip from the Tomcat home page. I've searched the /opt/tomcat/conf folder for security settings, but all I could find was the file catalina.policy, and it didn't seem like I could set a security level for shell commands there.
Any ideas?
A few things:
System.setSecurityManager(null);
you have just killed the security of your application.
Yes, Tomcat is running as root. If I execute id I'm root as well.
Fix this immediately!
Now on to the question. You shouldn't have Tomcat executing anything like this; you need to defer it to a separate process, whether that be a shell script or another Java program. This should also remove what (I hope) was a dependency on root running Tomcat. It should be possible to perform this command as a non-privileged user that cannot log into the system normally. You would do this by configuring /etc/fstab and giving that same user the permissions to do it. From a pure security POV, the process that mounts should not be owned by the tomcat user, nor should the tomcat user ever be root. So to recap:
1) Stop running Tomcat as root
2) Create a separate process outside of the context of Tomcat to run this mount
3) Create a tomcat user, this user should not be able to log into the system nor should it be a privileged user (admin,super user, etc)
4) Create a process user, this user should be configured exactly as the tomcat user
5) Edit /etc/fstab, giving the process user the necessary permissions to mount correctly (a sketch follows this list).
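A hedged sketch of the /etc/fstab entry for step 5 (all paths are hypothetical): the user option allows a non-root user to mount this entry, so the unprivileged process user can simply run mount /mnt/myimage:
/opt/app/images/myimage.iso  /mnt/myimage  iso9660  loop,user,noauto  0  0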
It's generally a bad idea to use the single-string form of Runtime.exec. A better option is to use ProcessBuilder, and split up the arguments yourself rather than relying on Java to split them for you (which it does very naïvely).
ProcessBuilder pb = new ProcessBuilder("/bin/mount", "-o", "loop", /*...*/);
pb.redirectErrorStream(true); // equivalent of 2>&1
Process p = pb.start();
You say you're on RHEL, so do you have SELinux active? Check your logs and see if that's what's blocking you (I think it's audit.log you're looking for; it's been a few years since I've used SELinux). If this does turn out to be the problem then you should probably ask on Super User or Server Fault rather than SO...
I'm not sure if that's the problem you are having, but I've seen issues when Runtime.exec() is used without reading the associated output buffers. You can find a detailed explanation and potential solutions here. Reading the output and error streams can also help you figure out what's going on at the OS level when you run the command.
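For example, a hedged sketch of draining the merged output while the command runs (paths are hypothetical; imports needed are java.io.BufferedReader and java.io.InputStreamReader):
ProcessBuilder pb = new ProcessBuilder("/bin/mount", "-o", "loop",
        "/path/to/myimage.iso", "/mnt/mymountpoint");
pb.redirectErrorStream(true); // merge stderr into stdout
Process p = pb.start();
try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
    String line;
    while ((line = r.readLine()) != null) {
        System.out.println(line); // log whatever the OS reports
    }
}
int exit = p.waitFor(); // a non-zero exit code usually explains the failure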
I've recently had to do something like this from a Swing app.
You'll probably be able to pull it off with ProcessBuilder, as in Ian's answer, but I found that once things start to get complex, it's easier to write a shell script that does what you want, enabling you to pass as few parameters as possible. Then use ProcessBuilder to invoke the shell script.
If you're invoking anything that has more than really minimal output, you'll also have to read the output and error streams to keep the process from blocking when the output buffers fill, as it seems you are already doing.
I use sudo -S before the command, and give the tomcat7 user passwordless sudo in /etc/sudoers: tomcat7 ALL=(ALL) NOPASSWD:ALL
I am monitoring a log file of another Java process that is constantly writing to it. These two processes (the monitoring application and the monitored application) are running on the Linux distro CentOS.
The problem is that every time I restart the monitored application, the monitoring application seems to get this error:
java.io.IOException: Input/output error
at java.io.RandomAccessFile.readBytes(Native Method)
at java.io.RandomAccessFile.read(RandomAccessFile.java:361)
at LogMonster.fileChanged(LogMonitor.java:57)
at FileMonitor.fireFileChangeEvent(FileMonitor.java:96)
at FileMonitor$FileMonitorTask.run(FileMonitor.java:128)
at java.util.TimerThread.mainLoop(Timer.java:512)
at java.util.TimerThread.run(Timer.java:462)
I keep a Map with file name as key and RandomAccessFile object as value and I populate it as follows after adding this object as a listener:
monitor.addFileChangeListener(logMonitor, LogFileName, LogMonitor_Properties.getTimeDelay());
randomAccessFile_list.put(LogFileName, new RandomAccessFile(LogFileName, "r"));
An event is fired every time the file is modified, and it is within the eventFired function that I'm trying to read contents from the RandomAccessFile after the monitored application was restarted (before the restart it works fine).
The following line of code within the 'fileChanged' function is causing the error:
randomAccessFile_list.get(file.getAbsolutePath()).read(byteArray);
I use a bash script to kill all versions of the application and then restart it in a 'go' file.
Contents of go:
cd /path/to/app
./kill
nohup ./app.run &
Contents of kill:
kill -9 $(lsof app.run| awk '{print $2}')
kill -9 $(lsof app.log| awk '{print $2}')
kill -9 $(lsof app.go| awk '{print $2}')
Contents of app.run:
./app.go >>app.log 2>&1
Contents of app.log:
Just text output of the application.
Contents of app.go:
. /path/to/some/other/location/setClassPath.go
export CLASSPATH=$CLASSPATH
echo $CLASSPATH
/usr/local/jdk1.6.0_27/bin/java -cp $CLASSPATH MyApp
I apologise for posting a question that looks exhausting before you've even read it, but I'm really at my wits end and any help would be greatly appreciated.
Thanks in advance.
From the method names it appears you're using this for file monitoring. It actually isn't opening the file, it's just stat'ing it every once in a while.
You're then also keeping a separate file handle open to the file in your map.
The library fires an event only when the modified time changes - this doesn't mean any new data has been added to the file. You then apparently attempt to read from your file handle and get an IO exception.
There are a number of issues with this approach, but without seeing more of your code it's impossible to say exactly what the problem is. I'm guessing that the monitored process is truncating, deleting, or otherwise replacing the file when it restarts, which invalidates your open file handle.
File monitoring like this is generally used when you want to reload the entire file (usually a properties file, or a document being edited), not for trying to do a "tail".
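If you do want tail-like behavior that survives a restart, a hedged sketch (class and method names here are hypothetical, not from your code) is to track the read position and reopen the handle whenever the file shrinks or is replaced:
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

public class ReopeningTail {
    private final File file;
    private RandomAccessFile raf;
    private long position;

    public ReopeningTail(File file) throws IOException {
        this.file = file;
        this.raf = new RandomAccessFile(file, "r");
    }

    // Call this from the file-changed event handler.
    public void poll() throws IOException {
        if (file.length() < position) {
            // File was truncated or replaced (e.g. the app restarted):
            // close the stale handle and reopen the new file.
            raf.close();
            raf = new RandomAccessFile(file, "r");
            position = 0;
        }
        raf.seek(position);
        byte[] buf = new byte[4096];
        int n;
        while ((n = raf.read(buf)) > 0) {
            System.out.write(buf, 0, n); // hand the new bytes to your monitor
        }
        position = raf.getFilePointer();
    }
}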
We have a jboss application server running a webapp. We need to implement a "restart" button somewhere in the UI that causes the entire application server to restart. Our naive implementation was to call our /etc/init.d script with the restart command. This shuts down our application server then restarts it.
However, it appears that when the java process shuts down, the child process running the restart scripts dies as well, before getting to the point in the script where it starts the app server again.
We tried variations on adding '&' to the places where the scripts are called, but that didn't help. Is there some way to fire the script and die without killing the script process?
Try using the nohup command to run something from within the script that you execute via Java. That is, if the script that you execute from Java currently runs this:
/etc/init.d/myservice restart
then change it to do this:
nohup /etc/init.d/myservice restart
Also, ensure that you DO NOT have stdin, stdout, or stderr being intercepted by the Java process, as that could potentially cause problems. Thus, maybe try this (assuming bash or sh):
nohup /etc/init.d/myservice restart >/dev/null 2>&1 </dev/null &
Set your signal handlers in the restart script to ignore your signal with trap:
trap "" 2 # ignore SIGINT
trap "" 15 # ignore SIGTERM
After doing this, you'll need to kill your restart script with some other signal when needed, probably SIGKILL.
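A minimal sketch of how this looks at the top of the restart script (the service name is hypothetical):
#!/bin/sh
trap "" 2 15   # ignore SIGINT and SIGTERM so the dying JVM cannot take this script down
/etc/init.d/myservice stop
/etc/init.d/myservice start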