I wrote a Java SE program which deals with a really huge dataset of matrices (10^12 matrices). I am generating them through an iterator and saving special ones (which satisfy certain criteria) either to the Java heap or to a database. So, I understand that it's going to take a lot of time (probably a few days). In order to manage all this, I decided to run the program on a computer which is not at my apartment and has access to the Internet. I want to monitor the running process of the program (it is very important for me to know: is everything OK with the program? Is the program still running?).
My question is: how can I check that my program is still running (for this purpose I want to use the computer at my apartment and the Internet)?
Maybe my program should periodically post messages (via the Java I/O API) to websites (Google Docs and so on). Thanks in advance for all your responses.
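Something like this is what I have in mind, as a rough sketch (the endpoint URL is hypothetical; any service I control that accepts a POST would do):

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    public class Heartbeat {
        // Hypothetical endpoint I control; replace with a real service.
        private static final String ENDPOINT = "http://example.com/heartbeat";

        public static void post(String status) {
            try {
                HttpURLConnection conn = (HttpURLConnection) new URL(ENDPOINT).openConnection();
                conn.setRequestMethod("POST");
                conn.setDoOutput(true);
                try (OutputStream out = conn.getOutputStream()) {
                    out.write(status.getBytes(StandardCharsets.UTF_8));
                }
                conn.getResponseCode(); // actually send the request; ignore the reply
                conn.disconnect();
            } catch (Exception e) {
                e.printStackTrace(); // a failed heartbeat must not kill the computation
            }
        }
    }

The matrix loop would then call Heartbeat.post("processed " + count) every few million iterations; if the messages stop arriving, I know something is wrong.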
Run your application with JPDA enabled. This way you can connect to it remotely, examine threads, etc.
JPDA has a number of other advantages as well, for example hot code replace.
The specification for JPDA is here; the most important part is that you have to pass a few JVM options on startup. It will open a port, and you can use Eclipse or NetBeans to attach to it from anywhere on the net. You have to make sure that the opened port is accessible through firewalls (local and network).
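For reference, the debug agent is enabled with a startup option along these lines (the port is just an example; pick one your firewall allows):

    java -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=8000 -jar myapp.jar

With suspend=n the program starts running immediately instead of waiting for a debugger to attach.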
I would go for VisualVM & remote JMX. Set up the server box to allow remote JMX connections. With VisualVM you'll be able to connect to the JVM and check the activity.
You can also set up MBeans to keep stats on the running process, which you'll be able to check with VisualVM as well.
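Enabling remote JMX is done with startup flags roughly like these (shown for a Unix shell; disabling authentication and SSL like this is only sensible on a trusted network, so for an Internet-facing box you would want both enabled):

    java -Dcom.sun.management.jmxremote \
         -Dcom.sun.management.jmxremote.port=9010 \
         -Dcom.sun.management.jmxremote.authenticate=false \
         -Dcom.sun.management.jmxremote.ssl=false \
         -jar myapp.jar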
First you should try a remote desktop connection. Here's a link to the MS documentation for Windows 7 Remote Desktop.
If the remote computer is Linux/Unix and your program is a console application, ssh + screen is the old, tried and tested solution for leaving interactive console programs running and accessible from anywhere, while you frolic in the real world.
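The workflow looks roughly like this (session name and jar are placeholders):

    ssh user@remote-host
    screen -S matrices            # start a named screen session
    java -jar matrix-search.jar   # launch the program inside it
    # detach with Ctrl-A d and log out; later, from any ssh login:
    screen -r matrices            # reattach and see the live console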
Related
I have a java program that should run on a Windows machine. It should run "forever", i.e. when the JVM or the program crashes, it should be restarted. When the computer is restarted it should also be restarted.
I saw advice to wrap the program as a "Windows service", but the tools I found seem to be either costly, complicated or outdated.
Can somebody describe me a straightforward way to achieve the desired behaviour?
For the part where you want to start the program after a restart, you can create a simple batch (.bat) file and put that file in the Startup folder.
You can also use the same file for restarting the program when it crashes: use the tasklist command to check whether your Java program is running and, if it is not, just start the program.
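A rough sketch of such a batch file (the window title and jar path are placeholders; telling your java.exe apart from any other running one is the fiddly part):

    @echo off
    rem Start the app (with a known window title) if it is not already running.
    tasklist /FI "WINDOWTITLE eq MyJavaApp" | find "java.exe" >nul
    if errorlevel 1 start "MyJavaApp" java -jar C:\apps\myapp.jar

Run it from the Startup folder and, say, every few minutes via Task Scheduler to get the crash-restart behaviour.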
Windows batch scripting in general is worth a look; it's one of the best things you can get, and lets you do almost anything on Windows without paying for anything.
Yet Another Java Service Wrapper is a tool that easily wraps your Java program into a Windows service. Just start the program, note down the PID and enter it into the wrapper (a rough command sequence follows after these notes). Two things, which are probably universal to services, should be noted:
For connection to the network, you need to specify an account with the necessary rights.
Connected network drives are not available.
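With YAJSW the sequence is roughly the following, run from the bat directory of its distribution (script names and layout may differ between versions, so treat this as a sketch):

    rem 1234 = PID of the already-running program to wrap
    genConfig.bat 1234
    rem review the generated conf\wrapper.conf, then:
    installService.bat
    startService.bat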
As I indicated in another post, I'm having trouble with some SPIN constructors taking an excessive amount of time to execute on quite limited data. I thought I'd take a different approach and see if I can profile the execution of the constructors to gain insight into where specifically they are spending that time.
How do I go about profiling the execution of constructors under RDF4J Server? I'm instantiating via SPARQL update (INSERT DATA) queries. Here's the System Information on RDF4J workbench:
I've attempted to profile the Tomcat server under which the RDF4J Server runs using jvisualvm.exe, but I have not gained much insight. Ideally, I'd like to get down to the class/method level within RDF4J so that I can post a more detailed request for help on my slow execution problem or perhaps fix my queries to be more efficient themselves.
So here's the version of Java Visual VM:
RDF4J is running under Apache Tomcat 8.5.5:
I can see overview information on Tomcat:
I can also see the monitor tab and threads:
HOWEVER, what I really want to see is the profiler so that I can see where my slow queries are spending so much time. That hangs on Calibration since I don't have the profiler calibrated for Java 1.8.
This "attempting to connect" box persists indefinitely. Cancelling it leads to the "Performing Calibration" message, which doesn't actually do anything and is a dead-end hang requiring Java VisualVM to be killed.
After killing the Java Visual VM and restarting and looking at Options-->Profiling-->Calibration Data, I see that only Java 7 has calibration data.
I have tried switching Tomcat over to running on Java 7, and that did work:
The profiler did come up with Tomcat:
However, when I tried to access the RDF4J workbench while Tomcat ran on Java 7, I could not get the workbench running:
So, I'm still stuck. It would appear that RDF4J requires Tomcat running under Java 1.8, not 1.7. I can't profile under Java 1.8.
I have seen other posts on this problem with Java VisualVM, but the one applicable solution seems to be to bring everything up in a development environment (e.g. Eclipse) and dynamically invoke the profiler at a debugger breakpoint once the target code is running under Java 1.8. I'm not set up to do that with Tomcat and RDF4J and would need pointers. My intention was not to become a Tomcat or RDF4J contributor (because my tasking doesn't allow that... I wouldn't be paid for the time), but rather to get a specific handle on what's taking so long for my SPIN constructor(s) in terms of RDF4J server classes and then ask for help from the RDF4J developer community on GitHub.
Can Java VisualVM calibration be bypassed? Could I load a calibration file or directory somewhere for Java VisualVM to use instead of trying to measure calibration data which fails? I'm only interested in the relative CPU loading of classes, not absolute metrics, and I don't need to compare to measurements on other machines.
Thanks.
I'm running a J2SE application that is somewhat trusted (Minecraft) but will likely contain completely un-trusted (and likely even some hostile) plugins.
I'd like to create a plugin that can access the GPIO pins on the Raspberry Pi.
Every solution I've seen requires that such an app be given sudo superpowers, because GPIO is accessed through direct memory access.
It looks like the correct solution is to supply a command-line option like this:
-Djava.security.policy=java.policy
which seems to default you to no permissions (even access to files and high ports); you then add the ones your app needs back in with the policy file.
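As a minimal sketch, such a policy file might look like the following; the codeBase path, device paths, and jar name are assumptions for illustration only. Note that the policy only takes effect if a security manager is actually installed, hence the -Djava.security.manager flag:

    // java.policy (sketch): trusted code may touch the GPIO devices, nothing else
    grant codeBase "file:/opt/minecraft/trusted/*" {
        permission java.io.FilePermission "/dev/mem", "read,write";
        permission java.io.FilePermission "/sys/class/gpio/-", "read,write";
    };

And the launch command would then be roughly:

    sudo java -Djava.security.manager -Djava.security.policy=java.policy -jar server.jar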
In effect you seem to be giving Java "sudo" powers and then trusting Java's security model to only give appropriate powers out to various classes. I'm guessing this makes the app safe to run with sudo -- is this correct?
Funny that I've been using Java pretty much daily since 1.0 and never needed this before... You learn something new every day.
[Disclaimer: I'm not very convinced by the Java security model.]
The way I would solve this is to have the code that needs to access the hardware run as a separate privileged process, then have your Java application run as an unprivileged process and connect to the privileged process to have it perform certain actions on its behalf.
In the privileged process, you should check each request with maximum distrust as to whether it is safe to execute. If you are afraid that other unprivileged processes might connect to the daemon too and make it execute commands it shouldn't, you could make its socket owned by a special group and setgid() the Java application to that group via a tiny wrapper written in C before it is started.
Unix domain sockets are probably the best choice, but if you want to chroot() the Java application, a TCP/IP socket might be needed.
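To make the shape of this concrete, here is a rough sketch of the unprivileged (Java) side. The port and the one-line text protocol are invented for the example; a plain TCP socket is used since Unix domain sockets need either Java 16+ or a native library, and the real daemon would of course validate every command:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.Socket;

    // Unprivileged side: ask the privileged daemon to perform a GPIO action.
    public class GpioClient {
        public static String request(String command) throws Exception {
            try (Socket s = new Socket("127.0.0.1", 7070);
                 PrintWriter out = new PrintWriter(s.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()))) {
                out.println(command);   // e.g. "SET 17 HIGH"
                return in.readLine();   // daemon replies "OK" or "DENIED"
            }
        }
    }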
Sorry if the question is too open-ended or otherwise not suitable, but this is due to my lack of understanding about several pieces of technology/software, and I'm quite lost. I have a project with an existing Java Swing GUI which runs MPI jobs on a local machine. However, it is desired to support running MPI jobs on HPC clusters (let's assume a Linux cluster with ssh access).

To be more specific, the main backend executable (Linux and Windows) that I need to, erm, execute uses a very simple master-slave system where all relevant output is performed by the master node only. Currently, to run my backend executable on multiple machines, I would simply need to copy all necessary files to the machines (assuming no shared file space) and call "mpiexec" or "mpirun" as is usual practice. The output produced by the master needs to be read in (or partially read in) by my GUI.
The main problem as I see things is this: Where to run the GUI? Several options:
Local machine - potential problem of needing to read data from the cluster back to the local machine (and also reading stdout/stderr of the cluster processes) to display current progress to the user.
Login node - obvious problem of hogging precious resources, and in many cases this will be banned.
Compute node - sounds pretty dodgy, especially if the cluster has a queuing system (Slurm, Sun Grid Engine, etc.)! Also possibly banned.
Of these three options, the first seems the most reasonable, and also seems least likely to upset any HPC admin people, but is also the hardest to implement! There are multiple problems associated with that setup:
Passing data from the cluster to the local machine - because we're using a cluster, by definition we will probably generate large amounts of data, which the user wants to see at least part of! Also, how should this be done? I can see how to execute commands on a remote machine via ssh using JSch or similar (see the sketch after this list), but if I'm currently logged in on the remote machine, how do I communicate information back to the local machine?
Displaying stdout/stderr of the backend on the local machine. Similar to the above.
Dealing with peculiar aspects of individual clusters - the only way I see around that is to allow the user to write custom Slurm scripts or suchlike.
How to detect whether backend computations have finished/failed - this problem interacts with any custom Slurm scripts written by the user.
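For reference, the JSch route I mentioned looks roughly like this; the host, user, password and the squeue call are placeholders (real clusters and schedulers differ, and key-based authentication would be preferable to a hard-coded password):

    import com.jcraft.jsch.ChannelExec;
    import com.jcraft.jsch.JSch;
    import com.jcraft.jsch.Session;

    import java.io.BufferedReader;
    import java.io.InputStreamReader;

    public class ClusterPoll {
        public static void main(String[] args) throws Exception {
            JSch jsch = new JSch();
            Session session = jsch.getSession("user", "login.cluster.example", 22);
            session.setPassword("secret");                     // demo only; use key auth
            session.setConfig("StrictHostKeyChecking", "no");  // demo only
            session.connect();

            ChannelExec channel = (ChannelExec) session.openChannel("exec");
            channel.setCommand("squeue -u user");              // ask Slurm for job status
            BufferedReader out = new BufferedReader(new InputStreamReader(channel.getInputStream()));
            channel.connect();
            for (String line; (line = out.readLine()) != null; ) {
                System.out.println(line);                      // feed this into the GUI instead
            }
            channel.disconnect();
            session.disconnect();
        }
    }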
Hopefully it should be clear from the above that I'm quite confused. I've had a look at Apache Camel, JSch, Ganymed SSH-2, Apache MINA, Netty, Slurm, Sun Grid Engine, Open MPI, MPICH, PMI, but there's so much information that I think I need to ask for some help and advice. I would greatly appreciate any comments regarding these problems!
Thanks
================================
Edit
Actually, I just came across this: link, which seems to suggest that if the cluster allows an "interactive"-mode job, then you can run a GUI from a compute node. However, I don't know much about this, nor do I know whether this is common. I would be grateful for comments on this aspect.
You may be able to leverage the approach shown here: a ProcessBuilder is used to execute a command in the background of a SwingWorker, while the command's output is displayed in a suitable component. In the example, ls -l would become ssh username@host 'ls -l'. Use a JPasswordField as required.
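A condensed sketch of that pattern, where the host and command are placeholders and key-based ssh authentication is assumed (so no password prompt appears):

    import javax.swing.JTextArea;
    import javax.swing.SwingWorker;
    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.util.List;

    // Runs a remote command via ssh and streams its output into a JTextArea.
    class RemoteTailWorker extends SwingWorker<Integer, String> {
        private final JTextArea view;
        RemoteTailWorker(JTextArea view) { this.view = view; }

        @Override protected Integer doInBackground() throws Exception {
            ProcessBuilder pb = new ProcessBuilder("ssh", "user@host", "tail -f job.log");
            pb.redirectErrorStream(true);               // merge stderr into stdout
            Process p = pb.start();
            try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
                for (String line; (line = r.readLine()) != null; ) publish(line);
            }
            return p.waitFor();
        }

        @Override protected void process(List<String> lines) {
            for (String line : lines) view.append(line + "\n"); // runs on the EDT
        }
    }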
We have a curious problem with our java processes dying.
The application doesn't stacktrace, or write anything to the logs, the process just randomly dies. It's a heavily used application, but the problem only appears about once a month.
We're currently looking into using Process Monitor but any other suggestions would be welcome.
Edit:
It's a distributed Java application, running on Weblogic with an in-house web framework (Yes, this is a terrible idea, but it's been running for eight years), connecting to Oracle.
Out of Memory?
Our logs would catch java.lang.OutOfMemoryError, according to Brian Agnew.
Write crashes to a log? I don't think Java ever gets the chance; the death is happening at the process level, rather than Java exiting.
Can you wrap it in some shell script that captures the log files (stdout/stderr) and the exit code (which should give some indication as to how it died)? On JVM exit you can also capture machine-level stats using WMI.
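A minimal wrapper sketch along those lines (paths and names are placeholders):

    @echo off
    rem Capture stdout/stderr and record the exit code with a timestamp.
    java -jar C:\apps\server.jar >> C:\logs\stdout.log 2>> C:\logs\stderr.log
    echo %date% %time% exited with code %errorlevel% >> C:\logs\exits.log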
If the VM itself is crashing, it'll leave behind an hs_err_pid... file that contains stack traces and machine-level debug info. You can then use that to diagnose the VM issue. See this blog entry for further information.
If the problem is related to the app's behaviour, it may be worth looking at JConsole, although from your description of the issue, this sounds much more like a low level VM issue.
(I assume you're on the latest VM for your Java version number etc.)
You can use a Linux Nagios server to monitor the health of your Windows machines and services! Have a look at: nagios-monitoring-windows.
If you have such problems with your Java app, you should test it and debug it! Applications shouldn't die without a trace! Look for log files! Which vendor is the app from? Or is it written in-house? Try enforcing a different Log4j/logger/debug level. Monitor your system with Cacti etc. to narrow down the possibilities for such a crash. Talk to the software vendor.
Is enough memory available? Maybe the app runs out of memory? Is it a standalone Java process or a Java process inside a Tomcat/JBoss server?
Have you written the crash times down to a log? Do they occur at scattered times, or at roughly regular intervals?
VisualVM is a new tool which makes monitoring Java applications easier:
https://visualvm.dev.java.net/description.html
"VisualVM is a tool that provides detailed information about Java applications while they are running. It provides an intuitive graphical user interface that allows you to easily see information about multiple Java applications."