Java - AttachNotSupportedException: Unable to open socket file: HotSpot VM not loaded

When attempting to attach an agent jar file to another Java process, I came across this exception:
com.sun.tools.attach.AttachNotSupportedException:
Unable to open socket file: target process not responding or HotSpot VM not loaded
I was running Linux with Oracle JDK 8u101; however, after answering this question I realized the OS does not matter for the cause of this problem.
Edit:
Answer:
If you encounter this problem, the reason it occurred for me was that I was launching a program from a JVM other than the default JVM specified for the system.
For example:
Program A (the launcher) is running on JVM-1 (JDK_8_1 for example, or JDK_8_1/jdk/jre).
Program A (the launcher) creates a process with java -jar programB.jar
Program B (the target) is running on the system's default JVM, JVM-2 (JDK_8_2 for example, or JDK_8_2/jre).
Program A (the launcher) CANNOT attach to Program B (the target), because the JVM that Program A (the launcher) is running on does not match the JVM that Program B (the target) is running on, and so the attach fails with
com.sun.tools.attach.AttachNotSupportedException:
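For reference, a minimal sketch of the attach step the launcher performs, using the Attach API (com.sun.tools.attach) from the JDK's tools.jar; the pid and agent jar path are placeholders:

import com.sun.tools.attach.VirtualMachine;

public class AgentInjector {
    public static void main(String[] args) throws Exception {
        String pid = args[0];      // pid of Program B (the target)
        String agentJar = args[1]; // path to the agent jar to load
        // Throws AttachNotSupportedException when the target does not respond,
        // e.g. because launcher and target run on mismatched JVMs.
        VirtualMachine vm = VirtualMachine.attach(pid);
        try {
            vm.loadAgent(agentJar);
        } finally {
            vm.detach();
        }
    }
}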

Common reasons for this problem:
Attach socket /tmp/.java_pid1234 has been removed (e.g. by a scheduled job that periodically cleans up /tmp); a quick way to check for this is sketched after this list.
Target JVM is started with -XX:+DisableAttachMechanism option.
Garbage Collection or other long VM operation (e.g. Heap Dump) is in progress.
JVM cannot reach safepoint within attach timeout. This happens rarely, and the problem is typically intermittent.
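The first reason is easy to rule out from the launcher side. A minimal check, assuming you know the target pid (note the socket file is only created once the target's attach listener has been initialized, so its absence alone is not conclusive):

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class AttachSocketCheck {
    public static void main(String[] args) {
        String pid = args[0]; // pid of the target JVM
        Path socket = Paths.get("/tmp/.java_pid" + pid);
        if (Files.exists(socket)) {
            System.out.println("Attach socket present: " + socket);
        } else {
            System.out.println("Attach socket missing (cleaned /tmp, -XX:+DisableAttachMechanism, "
                    + "or attach listener not yet initialized): " + socket);
        }
    }
}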

Problem: different user executing jcmd
It can happen that the user calling jcmd is different from the user running the process.
Example:
user invoking jcmd as root
user running JVM as fancyUser
Solution:
On Linux, try to run jcmd as the same user that the target process runs as. When the users differ, you will get exactly this error.
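For example, if the target JVM (pid 1234) runs as fancyUser:
sudo -u fancyUser jcmd 1234 Thread.print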
Problem: AppArmor
When AppArmor is enabled for the running JVM instance and restricts its system calls, it may be that opening the attach socket connection is blocked.
Solution:
Change the AppArmor profile for the process.

Related

What is meant by stopping the JVM with exit()?

Can there be many JVMs per operating system, or is there only one JVM per operating system? I also read that with "Runtime.exit()" we stop the execution of a JVM?
I'm a bit confused because I've always thought that the JVM is a machine that never stops working, always awake and waiting to be called, for example by "java App.class".
The JVM is an abstract concept, and there may be many running instances per Operating System. The implementation is usually through the Java Runtime Environment. And when exit is called, the runtime stops. The JVM can certainly stop. And (on most computers) it has to be explicitly started.
I've always thought that the JVM is a machine that never stops working, always awake waiting to be called, for example by "java App.class"
No, that is not how the JVM works. It's not a background process that is waiting to execute your Java programs. There is not always just one JVM running on one computer.
Whenever you start a Java program, a new JVM is started. When you have multiple Java programs running at the same time, you have multiple JVMs running. Each program is running in its own JVM.
System.exit() stops the JVM that is running the current Java application. It has no effect on other Java programs running on the same machine.
If you start a java app (directly or indirectly with the java shell command), then an instance of the JVM is created and started.
When the application finishes (either by reaching the end or through System.exit()), the JVM instance stops.
Of course, you can have multiple Java apps running simultaneously. Each will be in its own JVM instance.
Can there be many JVMs per operating system, or is there only one JVM per operating system?
You can do either. You can run a JVM for each command, or you can use an application server to run your Java applications. (You can have more than one application server.)
I also read that with "Runtime.exit()" we stop the execution of a JVM?
This triggers the JVM to shut down. After it is called, the process does some final work, such as running registered shutdown hooks (see the sketch at the end of this answer).
I'm a bit confused because I've always thought that the JVM is a machine that never stops working,
It can be used that way. For example, Scala has a daemon compiler which is used to compile Scala programs.
always awake waiting to be called, for example by "java App.class".
When you run any program (Java or not), this always starts a new process. The only time this doesn't happen is for built-in shell commands, e.g. set.
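A small, self-contained sketch of that shutdown behaviour: the registered hook runs when System.exit() is called, and only the JVM instance running this program stops.

public class ExitDemo {
    public static void main(String[] args) {
        // Registered hooks run when this JVM shuts down (end of main or System.exit).
        Runtime.getRuntime().addShutdownHook(new Thread(
                () -> System.out.println("shutdown hook: this JVM instance is stopping")));
        System.out.println("doing some work in this JVM instance");
        System.exit(0); // stops only the JVM running this program, not any other JVM
    }
}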

Foolproof detection of OutOfMemory in Java

TL;DR: Is there a foolproof (!) way I can detect from my master JVM that my slave JVM spawned via 2 intermediate scripts has experienced an OutOfMemory error on Linux?
Long version:
I'm running some sort of application launcher. Basically it receives some input and reacts by spawning a slave Java application to process said input. This happens via a Python script (to correctly handle remote kill commands for it), which in turn calls a Bash script (generated by Gradle; it sets up the classpath) to actually spawn the slave.
The slave contains a worker thread and a monitor thread to make callbacks to a remote host for status updates. If status updates fail to occur for a set amount of time, the slave gets killed by the launcher. The reason for it not responding CAN be an OutOfMemoryError, however it can also be other reasons. I need to differentiate an OutOfMemoryError of the slave from some other error which caused it to stop working.
I don't just want to monitor memory usage and say once it reaches like 90% "ok that's enough". It may very well be that the GC succeeds in cleaning up sufficiently for the workload to finish. I only want to know if it failed to clean up and the JVM died because not enough memory could be freed.
What I have tried:
Use the -XX:OnOutOfMemoryError flag as a JVM option for the slave, which calls a script that in turn creates an empty flag file. I then check from the launcher for the existence of the flag file if the slave died. Worked like a charm on Windows; did not work at all on Unix, because there is a funky bug which causes the command executed via the flag to require the exact same amount of Xmx the slave has used. See https://bugs.openjdk.java.net/browse/JDK-8027434 for the bug. => Solution discarded because the slave needs the entire memory of the machine.
try{ longWork(); } catch (OutOfMemoryError e) { createOomFlagFile(); System.exit(100); } This does work in some cases. However there are also cases where this does not happen and the monitor thread simply stops sending status updates. No exception occurs, no OOM flag file gets created. I know from SSHing onto the machine though that Java is eating all the memory available on the system and the whole system is slow.
Is there some (elegant) foolproof way to detect this which I am missing?
You shouldn't wait for the OutOfMemoryError. My suggestion is that you track memory consumption from the master application via Java management beans and issue warnings when memory consumption gets critical. I have never done that myself, so I cannot be more precise about how to do it, but maybe you will figure it out, or others here can provide a solution.
Edit: this is the respective MXBean http://docs.oracle.com/javase/7/docs/api/java/lang/management/MemoryMXBean.html
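A minimal sketch of that suggestion, assuming it runs inside the JVM being watched (monitoring the slave from the master would additionally require a JMX connection to the slave's MemoryMXBean); the 90% threshold and polling interval are arbitrary placeholders:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class HeapWatcher {
    public static void main(String[] args) throws InterruptedException {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        while (true) {
            MemoryUsage heap = memory.getHeapMemoryUsage();
            double usedFraction = (double) heap.getUsed() / heap.getMax();
            if (usedFraction > 0.9) {
                // Report to the master, e.g. via the existing status callback.
                System.err.printf("heap critical: %.0f%% of max in use%n", usedFraction * 100);
            }
            Thread.sleep(5000);
        }
    }
}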

How to start an external jar in the same JVM and still be able to terminate current main()

I'm fairly new to Java, so my question could be total BS, but I am faced with the challenge of creating an updater wrapper (a Java app) for an existing Java application, and I cannot get my head wrapped around it. The issue I'm facing is that we don't want to spin up a second and a third JVM each time we have to start a new jar, but step 5's termination is blocked by the thread of the Updater, which was started from the classloader of Updater.jar.
Concept:
1) Current version of App gets triggered to check for updates
2) Starts Updater.jar
3) OnUpdateAvailable -> closes incoming connections and saves application state and its objects
4) Sends ReadyForUpdate signal to Updater on a socket
5) Current application terminates
6) Updater replaces Application executable and resources
7) Updater launches new version of Application.jar with an argument to resume its previous state
8) Updater waits for Application to initialize successfully
9) Updater terminates
10) New version of Application is up and running
Question: Is there a way to use the same virtual machine to first start the updater and then the new version of the application from the updater, or should we go ahead with the separate JVMs?
java version: 1.7.0_05-icedtea
OpenJDK 64-Bit Server VM (build 23.0-b21, mixed mode)
I think that it would be far simpler to do this with a separate virtual machine and a wrapper script.
Your problem at step 5 is that it requires the application to "shut down" without calling System.exit and without running the shutdown hooks:
This could require major re-engineering of your application code-base.
Even after you've done that, you still have the problem that old application state is likely to "linger" via objects loaded using the old classloader. Getting rid of that could be really difficult. And if you don't then:
you've got a permgen leakage,
you may have a non-permgen memory leaks (e.g. via statics in classes that have been replaced), and
you may have leakage of application state that could cause problems for the "relaunched" application.
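A rough sketch of the separate-JVM route (jar names are taken from the question above; the exact arguments are assumptions): the running application spawns the updater in its own process and then exits normally, so shutdown hooks run and its old classes disappear with the process.

import java.io.IOException;

public class UpdateLauncher {
    public static void startUpdaterAndExit() throws IOException {
        // Spawn the updater in its own JVM so this one can shut down completely.
        new ProcessBuilder("java", "-jar", "Updater.jar")
                .inheritIO()
                .start();
        // Normal shutdown: hooks run, and the updater is free to replace
        // Application.jar and relaunch it (steps 6 and 7 above).
        System.exit(0);
    }
}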

What does the Java virtual machine do while executing multiple Java applications?

By reading this article, I know that each Java application runs in its own Java Virtual Machine instance. So if I execute the following commands ("java -jar test1.jar", "java -jar test2.jar"), I will get two processes in the system. And if each command uses the default heap size, for example 256M, the total memory cost is 512M, is that right?
Also I have other questions:
Is the Java Virtual Machine a daemon process that starts up with the system?
When I execute "java -jar test1.jar", it will create an instance of the Java Virtual Machine and then execute the main function. Does that mean every running Java application is a sub-thread or sub-process of the Java Virtual Machine?
Is each running Java application isolated, so that other applications cannot get variables, methods, constants, etc. from it?
If one running Java application crashes, will it affect the other running Java applications?
PS: I googled and got lots of different answers and was totally confused. Can anyone help me with these kinds of questions, or go into more depth on the Java Virtual Machine, for example how it works?
The JVM is a standard process, just like any other. As such there's no implicit communication or state sharing between the two. Each will have their own heap, threads etc. If you kill one it won't affect the other.
What will get shared are the code pages of the JVM itself. The kernel is intelligent enough to identify the same binary (any binary -not just the JVM) running twice and reuse the image. This only applies to the actual binary code - not its state. See here for more info re. Linux.
The JVM isn't a daemon process, but could be started upon system startup as a Windows service or Unix/Linux process (via /etc/init.d scripts). This is how you'd (say) run a web service written in Java when a machine is booted up.
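A small way to see this for yourself, as a sketch: the RuntimeMXBean's name is typically "pid@hostname", so running this class twice via two separate java commands prints two different pids, each with its own heap limit.

import java.lang.management.ManagementFactory;

public class WhoAmI {
    public static void main(String[] args) {
        // Each `java ...` invocation is its own OS process with its own heap.
        System.out.println("JVM instance: " + ManagementFactory.getRuntimeMXBean().getName());
        System.out.println("Max heap: " + Runtime.getRuntime().maxMemory() / (1024 * 1024) + " MB");
    }
}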
1) No, but there are ways to launch Java applications as services with wrappers (Google for "Java service").
2) Yes.
3) You can use communication between processes (e.g. HTTP). But there are no shortcuts just because both processes run on a JVM.
4) No.
To the OS, a JVM looks like an ordinary user application. Each JVM instance is individual.
No. A JVM is a normal process like any other, but you can run it as a daemon process.
Yes. A Java application runs on the JVM just like your application runs on the OS.
Yes. Each JVM instance is individual, but they can communicate with other JVMs through the network, RMI, ...
It depends. Normally they are individual, but if a JVM crash causes the OS to crash, other JVMs will be affected.

Problems with jetty crashing intermittently

I'm having problems with Jetty crashing intermittently; I'm using Jetty 6.1.24.
I'm running a neo4j Spring MVC webapp. Jetty will stay running for approx 1 hour and then I have to restart it. It is running on a small Amazon EC2 instance, Debian with 1.7 GB of RAM.
I start Jetty using java -Xmx900m -server -jar start.jar
I am connecting to the server using PuTTY; when Jetty crashes the PuTTY session disconnects, and I cannot see what error caused the crash.
I would like to be able to see if it is an error generated by Spring, but I'm not sure how to log the output from the Spring app with Jetty. Or if it is Jetty or a memory issue, what would be the best way to monitor Jetty? I cannot recreate this on my local machine running Windows. What do you think would be the best way to approach this? Thanks
This isn't really a programmer question; perhaps it'll be moved over to ServerFault.
You didn't specifically state which operating system you're using, but I'm hazarding a guess at some Linux distribution. You have two options for figuring out what's wrong:
Start your session in screen. Screen will live for as long as the actual machine is powered on, until you reboot the operating system (or you exit screen).
You start screen like this:
screen
and you get a new prompt where you can start your program (cd foo, start jetty, etc.). When you're happy and you just need to go somewhere, you can detach the screen by hitting CTRL+A and then CTRL+D. You'll drop back to the place you were before you invoked screen.
To get back to seeing the screen, you type screen -R, which means resume an existing screen. You should see jetty again.
The nice thing is that if you lose the connection (or you close PuTTY by accident or whatever), you can use screen -list to get a list of running screens, and then forcibly detach them with -D and reattach them to the current PuTTY session with -R, no harm done!
Use nohup. Nohup more or less detaches the process you're running from the console, so none of its output comes to the terminal. You start your program in the normal fashion, but you add the word nohup to your command.
For example:
nohup ls -l &
After ls -l is complete, your output is stored in nohup.out.
When you say crash, do you mean the JVM segfaults and disappears? If that's the case, I'd check and make sure you aren't exhausting the machine's available memory. Java on Linux will crash when system memory gets so low that the JVM cannot allocate up to its maximum memory. For example, you've set the max JVM memory to 500MB, of which it's using 250MB at the moment, but the Linux OS only has 128MB available. This produces unstable results and the JVM will segfault.
On windows the JVM is more well behaved in this scenario and throws OutOfMemoryError when the system is running low on memory.
Validate how much system memory is available around the time of your crashes.
Verify if other processes on your box are eating up a lot of memory. Turn off anything that could be competing with the JVM.
Run jconsole and connect it to your JVM. That will tell you how memory is being used in your JVM process and give you a history to look back through when it does crash.
Eliminate any native code you might be loading into the JVM when doing this type of testing.
I believe Jetty has some native code to do high volume request processing. Make sure that's not being used. You want to isolate the crashes to Java and NOT some strange native lib. If you take out the native stuff and find it works then you have your answer as to what's causing it. If it continues to crash then it very well could be what I'm describing.
You can force the JVM to allocate all the memory at startup with -Xms900m; that can make sure the JVM doesn't fight with other processes for memory. Once it has the full Xmx amount allocated, it won't crash. Not a solution, but you can easily test it this way.
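For example, based on the start command from the question:
java -Xms900m -Xmx900m -server -jar start.jar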
When you start java, redirect both outputs (stdout and stderr) to a file:
Using Bash:
java -Xmx900m -server -jar start.jar > stdout.txt 2> stderr.txt
After the crash, inspect those files.
If the crash is due to a signal (like SEGV, a segmentation fault), there should be a file dumped by the JVM at the location from which you started java. For the Sun VM (HotSpot), it's something like hs_err_pid12121.log (here 12121 is the process ID).
PuTTY disconnecting STRONGLY hints that the server is running out of memory and starting to shut down processes left and right. It is probably your Jetty instance growing too big.
The easiest thing to do now is to add 1-2 GB more swap space and try again. Also note that you can attach jvisualvm to the Jetty instance to get runtime information directly.
