Best way to run a Perl script from a WebLogic Java EE application

I currently work on a WebLogic Java EE project where, from time to time, the application executes a Perl script to do some batch jobs. In the application the script is invoked as
Process p = Runtime.getRuntime().exec(cmdString);
Though this is a dangerous way to run it, it worked properly until we had a requirement to execute the script synchronously inside a for loop. After a couple of runs we get
java.io.IOException: Not enough space, probably because the OS is running out of virtual memory while exec-ing inside the for loop. As a result we are not able to run the script on the server at all.
I am desperately looking for a safer and better way to run the Perl script, one where we don't need to fork the parent process, or at least don't eat up all the swap space!
The spec is as follows:
Appserver - WebLogic 9.52
JDK - 1.5
OS - SunOS 5.10
Sun-Fire-T200

I've had something similar on a couple of occasions. Since the child process is a fork of the (very large) parent, it shares all of the parent's memory (using copy-on-write). What I discovered was that the kernel needs to be able to ensure that it could copy all of the memory pages before forking the child; on a 32-bit OS you run out of virtual address space really fast.
Possible solutions:
Use a 64-bit OS and JVM; this pushes the issue so far down the road that it no longer matters
Host your script in another process (such as an HTTPD) and poke it with an HTTP request to invoke it
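For illustration, here is a minimal sketch of the HTTP approach on the Java side, assuming the script is exposed through some web wrapper (the host and path are hypothetical):
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.net.URLConnection;

public class ScriptTrigger {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint: whatever wraps the Perl script (CGI, mod_perl, ...)
        URLConnection conn = new URL("http://batchhost/cgi-bin/myScript.pl").openConnection();
        BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
        String line;
        while ((line = in.readLine()) != null) {
            System.out.println(line); // relay the script's output, if any
        }
        in.close();
    }
}
The point is that the heavyweight WebLogic JVM never forks; the fork happens in the much smaller HTTPD process.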

Create a perl-server, which reads Perl scripts over the network and executes them one by one.
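A sketch of what the client side of such a perl-server could look like, assuming a plain line-oriented protocol (host, port, and protocol are all assumptions, not an existing tool):
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

public class PerlClient {
    public static void main(String[] args) throws IOException {
        // Assumed protocol: send one script path per line, read the output back
        Socket socket = new Socket("scripthost", 9999);
        PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
        BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
        out.println("/opt/scripts/myScript.pl");
        String line;
        while ((line = in.readLine()) != null) {
            System.out.println(line);
        }
        socket.close();
    }
}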

If you want to keep your code unchanged and have enough free disk space, you can just add a sufficiently large swap area to your OS.
Assuming you need 10 GB, here is how you do it with UFS:
mkfile 10g /export/home/10g-swap
swap -a /export/home/10g-swap
echo "/export/home/10g-swap - - swap - no -" >> /etc/vfstab
If you use ZFS, that would be:
zfs create -V 10gb rpool/swap1
swap -a /dev/zvol/dsk/rpool/swap1
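If you want to confirm the enlarged reservation capacity afterwards, swap -s (also used in a related answer below) summarizes allocated and available swap:
swap -s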
Don't worry about such a large swap; this won't have any performance impact, as the swap will only be used for virtual memory reservation, not paging.
Otherwise, as already suggested in previous replies, one way to avoid the virtual memory issue you experience would be to use a helper program, i.e. a small service that you contact through a network socket (or a higher-level protocol like ssh) and that executes the Perl script "remotely".
Note that the issue has nothing to do with a 32-bit or 64-bit JVM; it is just that Solaris doesn't overcommit memory, and this is by design.

Related

Running 2 instances of JBoss on 1 machine. Getting "not enough space" error running native command

We have a 64-bit JBoss instance that deploys an Axis web service, which is just a front end to run a native executable command. When the web service is called, it executes this native command. The 64-bit instance runs with 3 GB of memory.
We recently introduced a second instance of JBoss running on the same physical machine. It runs in 32-bit mode, because it has to run some 32-bit JNI code. This second instance of JBoss is bound to ports-01 so that it runs on 8180 (basically the default JBoss ports plus 100). This instance runs with 512 MB of memory.
Since introducing this second instance of JBoss, we are receiving "not enough space" error messages when the 64-bit instance tries to execute the native command. It's an IOException from Java, from the Unix forkAndExec call. Everything I read says this has something to do with swap file size. Using the Unix top command, it looks like the swap size never changes, and it is 3 GB. When we run the 64-bit instance first, there seem to be no issues, but if the 32-bit instance starts first, we get this error. I'm wondering if the two instances are competing for resources, or if we really are running out of swap space. I'm not sure if JBoss uses swap space and how much it uses, or does Java handle that?
I guess I'm looking for any ideas or suggestions for a solution to this problem. The main pattern I see is that if the 64-bit instance starts first, the native executable works fine, but if the 32-bit instance starts first, it has issues.
The OS handles swap space; Java has no idea about these things. Having any part of a Java process pushed out to swap is a very bad idea in any case.
I would make sure there is plenty of main memory left once these two programs are running (not just the heaps, but the total memory used by these processes).
It turned out to be a swap space issue after all. We had 8 GB of memory and 4 GB of swap. One server was using 800 MB of swap and the other 3.8 GB, which barely put us over our limit.
Instead of using the Unix top command, we had to use swap -s to view the available swap space, which was more accurate.
We created a temporary swap file and added it to the swap area on the server:
mkfile 10240M /opt/myswapfile
swap -a /opt/myswapfile
Now they seem to be working fine together.

How to Use posix_spawn() in Java

I've inherited a legacy application that uses ProcessBuilder.start() to execute a script on a Solaris 10 server.
Unfortunately, this script call fails due to a memory issue, as documented here.
Oracle's recommendation is to use posix_spawn() since, under the covers, ProcessBuilder.start() is using fork/exec.
I have been unable to find any examples of using posix_spawn() in Java (e.g., how to call "myScript.sh"), or even which packages are required.
Could you please, point me to a simple example on how to use posix_spawn() in Java?
Recent versions of Java 7 and 8 support posix_spawn internally.
Command-line option:
-Djdk.lang.Process.launchMechanism=POSIX_SPAWN
or enable it at runtime:
System.setProperty("jdk.lang.Process.launchMechanism", "POSIX_SPAWN");
I'm a little confused as to which Java version/OS combinations have this enabled by default, but I'm sure you could test and find out pretty quickly whether setting this option makes a difference.
For reference, to go back to the old fork method simply use
-Djdk.lang.Process.launchMechanism=fork
To prove whether this option is respected in your JVM version use
-Djdk.lang.Process.launchMechanism=dummy
and you will get an error the next time you exec. That way you know the JVM is receiving this option.
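Putting the runtime variant together, a minimal sketch (the script name is a placeholder; the property should be set before the first process is launched, since the JDK reads it only once):
public class SpawnDemo {
    public static void main(String[] args) throws Exception {
        // Set before any Runtime.exec()/ProcessBuilder.start() call
        System.setProperty("jdk.lang.Process.launchMechanism", "POSIX_SPAWN");
        Process p = new ProcessBuilder("/bin/sh", "myScript.sh").inheritIO().start();
        System.exit(p.waitFor());
    }
}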
An alternative, which does not require JNI, is to create a separate "process spawner" application. I would probably have this application expose an RMI interface, and create a wrapper object that is a drop-in replacement for ProcessBuilder.
You might also want to consider having this "spawner" application be the thing that starts your legacy application.
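For instance, the remote contract for such a spawner could be as small as this (the interface and method names are hypothetical):
import java.rmi.Remote;
import java.rmi.RemoteException;

public interface ProcessSpawner extends Remote {
    // Runs the command in the small helper JVM and returns its exit code
    int run(String[] command) throws RemoteException;
}
Because the helper JVM stays small, its fork/exec only needs to reserve a small amount of virtual memory, side-stepping the problem entirely.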
You will need to familiarize yourself with JNI first. Learn how to call out into a native routine from Java code. Once you do, you can look at this example and see if it helps with your issue. Of particular interest to you is:
if ((RC = posix_spawn(&pid, spawnedArgs[0], NULL, NULL, spawnedArgs, NULL)) != 0) {
    printf("Error while executing posix_spawn(), return code from posix_spawn() = %d\n", RC);
}
A much simpler solution would be to keep your code unchanged and simply add more virtual memory to your server.
e.g.:
mkfile 2g /somewhere/swap-1
swap -a /somewhere/swap-1
Edit: To clarify, as the link in the question is now broken:
The question is about a system running out of virtual memory due to the JVM being forked. E.g., assuming the JVM uses 2 GB of virtual memory, an extra 2 GB of virtual memory is required for the fork to succeed on Solaris. There is no paging involved here, just memory reservation. Unlike the Linux kernel, which by default overcommits memory, Solaris makes sure allocated memory is backed by either RAM or swap. As there is not enough swap available, fork fails. Enlarging the swap allows the fork to succeed without any performance impact. Just after the fork, the exec "unreserves" this 2 GB of virtual memory and reverts to a situation identical to the posix_spawn one.
See also this page for an explanation about memory allocation under Solaris and other OSes.

Problems with jetty crashing intermittently

I'm having problems with Jetty crashing intermittently; I'm using Jetty 6.1.24.
I'm running a Neo4j Spring MVC webapp. Jetty will stay running for approximately 1 hour, and then I have to restart it. It is running on a small Amazon EC2 instance, Debian, with 1.7 GB of RAM.
I start Jetty using java -Xmx900m -server -jar start.jar
I am connecting to the server using PuTTY. When Jetty crashes, the PuTTY session disconnects, so I cannot see what error caused the crash.
I would like to be able to see whether the error is generated by Spring; I'm not sure how to log the output from the Spring app with Jetty. Or is it Jetty itself, or a memory issue? What would be the best way to monitor Jetty? I cannot recreate this on my local machine running Windows. What do you think would be the best way to approach this? Thanks
This isn't really a programmer question; perhaps it'll be moved over to ServerFault.
You didn't specifically state which operating system you're using, but I'm hazarding a guess at some Linux distribution. You have two options for figuring out what's wrong:
Start your session in screen. Screen will live for as long as the actual machine is powered on, until you reboot the operating system (or you exit screen).
You start screen like this:
screen
and you get a new prompt where you can start your program (cd foo, start Jetty, etc.). When you're happy and you need to go somewhere, you can detach from the screen by hitting CTRL+A and then CTRL+D. You'll drop back to where you were before you invoked screen.
To get back to the screen, type screen -R, which resumes an existing screen. You should see Jetty again.
The nice thing is that if you lose the connection (or close PuTTY by accident, or whatever), you can use screen -list to get a list of running screens, forcibly detach one with -D, and reattach it to your current PuTTY session with -R. No harm done!
Use nohup. Nohup more or less detaches the process you're running from the console, so none of its output comes to the terminal. You start your program in the normal fashion, but prefix the command with nohup.
For example:
nohup ls -l &
The output of ls -l is stored in nohup.out.
When you say crash, do you mean the JVM segfaults and disappears? If that's the case, I'd check that you aren't exhausting the machine's available memory. Java on Linux will crash when system memory gets so low that the JVM cannot allocate up to its configured maximum. For example, say you've set the max JVM memory to 500 MB, of which it's using 250 MB at the moment, but the Linux OS only has 128 MB available; this produces unstable results and the JVM will segfault.
On Windows the JVM is better behaved in this scenario and throws OutOfMemoryError when the system is running low on memory.
Validate how much system memory is available around the time of your crashes.
Verify if other processes on your box are eating up a lot of memory. Turn off anything that could be competing with the JVM.
Run jconsole and connect it to your JVM. That will tell you how memory is being used in your JVM process and give you a history to look back through when it does crash.
Eliminate any native code you might be loading into the JVM when doing this type of testing.
I believe Jetty has some native code to do high-volume request processing. Make sure that's not being used. You want to isolate the crashes to Java and NOT some strange native lib. If you take out the native stuff and find it works, then you have your answer as to what's causing it. If it continues to crash, then it very well could be what I'm describing.
You can force the JVM to allocate all of its memory at startup with -Xms900m; that ensures the JVM doesn't fight with other processes for memory. Once it has the full -Xmx amount allocated, it won't crash. This is not a solution, but it is an easy way to test the theory.
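For example, based on the start command from the question:
java -Xms900m -Xmx900m -server -jar start.jar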
When you start java, redirect both outputs (stdout and stderr) to a file:
Using Bash:
java -Xmx900m -server -jar start.jar > stdout.txt 2> stderr.txt
After the crash, inspect those files.
If the crash is due to a signal (like SEGV, a segmentation fault), there should be a dump file written by the JVM at the location where you started java. For the Sun VM (HotSpot), it's something like hs_err_pid12121.log (here 12121 is the process ID).
PuTTY disconnecting STRONGLY hints that the server is running out of memory and starts shutting down processes left and right. It is probably your Jetty instance growing too big.
The easiest thing to do now is to add 1-2 GB more swap space and try again. Also note that you can use jvisualvm to attach to the Jetty instance and get runtime information directly.

How to control the memory usage of processes spawned by a JVM

I am coding an application that creates JVMs and needs to control the memory usage of the processes spawned by the JVM.
You can connect to a JVM process using JMX to get information about its memory status and allocations, and also to provoke garbage collection. But you first need to enable JMX monitoring of your JVM: http://java.sun.com/j2se/1.5.0/docs/guide/management/agent.html.
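As a rough sketch of the monitoring side, assuming the target JVM was started with remote JMX enabled (the host and port 9010 are placeholders):
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class MemoryProbe {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:9010/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        MBeanServerConnection server = connector.getMBeanServerConnection();
        MemoryMXBean memory = ManagementFactory.newPlatformMXBeanProxy(
                server, ManagementFactory.MEMORY_MXBEAN_NAME, MemoryMXBean.class);
        System.out.println("Heap usage: " + memory.getHeapMemoryUsage());
        memory.gc(); // provoke a garbage collection remotely
        connector.close();
    }
}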
I assume that you are talking about non-Java "processes" spawned using Runtime.exec(...) etc.
The answer is that this is OS specific and not something that the standard Java libraries support. But if you were going to do this in Linux (or UNIX) I can think of three approaches:
Have Java spawn the command via a shell wrapper script that uses the ulimit builtin to reduce the memory limits, then execs the actual command (see the sketch after this list); see man 1 ulimit.
Write a little C command that does the same as the shell wrapper. This will have less overhead than the wrapper script approach.
Try to do the same with JNI and a native code library. Not recommended because you'd probably need to replicate the behavior of Process and ProcessBuilder, and that could be very difficult.
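A minimal sketch of the wrapper idea, driven from Java rather than a separate script (the 512000 KB cap and the command are example values; ulimit -v availability depends on the shell):
import java.io.IOException;

public class LimitedExec {
    public static Process run(String command) throws IOException {
        // Cap the child's virtual memory, then replace the shell with the real command
        return new ProcessBuilder("/bin/sh", "-c",
                "ulimit -v 512000; exec " + command).inheritIO().start();
    }

    public static void main(String[] args) throws Exception {
        Process p = run("perl myScript.pl");
        System.exit(p.waitFor());
    }
}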
If by 'control' you mean 'limit to a known upper bound', then you can simply pass
-Xms<lower_bound>
and
-Xmx<upper_bound>
in the VM's arguments when you spawn the process. See the appropriate setting here.

Setting java to use one cpu

I have an application that has a license for a set number of CPUs, and I want to be able to set the number of CPUs that Java runs on to 1 before the check is done. I am running Solaris and have looked at pbind, but I thought that if I started the application and then used pbind, it would have checked the license before pbind had set the number of CPUs that Java could use.
Does anyone know a way of starting an application with a set number of CPUs on Solaris?
It is a workaround, but using Solaris 10 you could set up a zone with a single CPU available and then run the application inside that zone.
If you want to do testing without running the full application, this bit of Java is most likely what they are using to get the number of CPUs:
Runtime runtime = Runtime.getRuntime();
int nrOfProcessors = runtime.availableProcessors();
A full example here.
This isn't a complete solution, but it might be enough to develop into one. There's definitely a point at which the java process exists (and thus can be controlled by pbind) but hasn't yet run the code that performs the processor check. If you could pause the launch of the application itself until pbind had done its work, this should be OK (assuming the pbind idea works from the CPU-checking point of view).
One way to do this that should definitely pause the JVM at an appropriate place is the socket attach for remote debuggers and starting with suspend mode. If you pass the following arguments to the java invocation:
-Xdebug -Xrunjdwp:transport=dt_socket,address=8000,suspend=y,server=y
then the JVM will pause after starting the java process but before executing the main class, until a debugger/agent is attached to port 8000.
So perhaps it would be possible to use a wrapper script to start the program in the background with these parameters, sleep for a second or so, use pbind to set the number of processors to one for the java process, then attach and detach some agent on port 8000 (which is enough to get Java to proceed with execution).
Flaws or potential hiccups in this idea would be whether running in debug mode notably affects the performance of your app (it doesn't seem to have a big impact in general), whether you can drive some kind of no-op JDWP agent from the command line, and whether you're able to open ports on the machine. It's not something I've tried to automate before (though I've used something broadly similar in a manual way to increase the niceness of a Java process before letting it loose), so there might be other issues I've overlooked.
I think the most direct answer to your question is to use pbind to bind the running shell process, and then start Java from that shell. According to the man page, the effects of pbind are inherited by processes created from a bound process. Try this:
% pbind -b 0 $$
% java ...
Googling around, I found that you are right: pbind binds processes to processors.
More info and examples at: http://docs.sun.com/app/docs/doc/816-5166/pbind-1m?a=view
