I am using Java VisualVM 1.4 to do JDBC profiling for my application. I need to do the following:
1. Start profiling
2. Take a snapshot
3. Clear the profiling results and go back to step 1
I need to perform this loop around 10 times, and I am doing all of the work on my local machine.
I can do the steps successfully twice; after that, either VisualVM freezes or the Java process I am profiling freezes, and my only solution is to restart both processes and do the work again.
I read the VisualVM logs and noticed the following exception:
java.io.IOException: Can not attach to current VM
at jdk.attach/sun.tools.attach.HotSpotVirtualMachine.<init>(HotSpotVirtualMachine.java:75)
at jdk.attach/sun.tools.attach.VirtualMachineImpl.<init>(VirtualMachineImpl.java:58)
at jdk.attach/sun.tools.attach.AttachProviderImpl.attachVirtualMachine(AttachProviderImpl.java:58)
at jdk.attach/com.sun.tools.attach.VirtualMachine.attach(VirtualMachine.java:207)
at com.sun.tools.visualvm.attach.AttachModelImpl.getVirtualMachine(AttachModelImpl.java:124)
Caused: java.io.IOException: Can not attach to current VM
at com.sun.tools.visualvm.attach.AttachModelImpl.getVirtualMachine(AttachModelImpl.java:126)
I am using JDK 8u152 for both VisualVM and the Java process being profiled.
When I start a Java program with high memory usage ("-Xmx52g") from a shell, everything works well. However, if I start the same program with the same command and the same user via CRON, I get a java.lang.OutOfMemoryError after just a few seconds.
Additionally, CRON isn't able to do anything until I kill the blocked Java program. No matter which cron job is started, it always ends up with "(CRON) error (can't fork)" in syslog. After killing the Java program, all new cron jobs work fine again.
The problem only occurs with Ubuntu 16.04; all older versions worked very well. Is this a bug or a new security feature? I didn't find any information about this issue, so I hope someone can help.
You need more RAM (or a smaller heap size) to launch it.
http://javarevisited.blogspot.com/2011/09/javalangoutofmemoryerror-permgen-space.html
I'm trying to run and debug the utilities from sun.jvm.hotspot.tools and sun.jvm.hotspot.utilities (like JMap.java) in order to better understand what is going on.
Unfortunately I get stuck very early with the following error message and don't even get to debug a lot:
Attaching to process ID 5144, please wait...
Error attaching to process: Timed out while attempting to connect to debug server (please start SwDbgSrv.exe)
It seems like for whatever reason the tools are trying to connect to a "debug server" running on port 27000.
In the documentation of sun.jvm.hotspot.tools.HeapDumper.java I found the following:
This tool is used by the JDK jmap utility to dump the heap of the
target process/core as a HPROF binary file. It can also be used as a
standalone tool if required.
So I (maybe naively) assumed that jmap.exe somehow uses that, but I have never had this kind of problem when creating a heap dump using jmap. I never needed to start another process first.
Any ideas what I need to do to run all those tools directly from my dev env?
Thanks
The sun.jvm.hotspot.* tools are part of the HotSpot Serviceability Agent.
I assume you use JDK 6 on Windows, because the debug server is no longer required since JDK 7. In earlier versions you had to start SwDbgSrv.exe in order to use the Serviceability Agent.
Some built-in JDK utilities (jmap, jstack) have two execution modes: cooperative and forced. In the normal cooperative mode these tools use the Dynamic Attach Mechanism to connect to the target VM. The requested command is then executed directly by the target VM from within the target process.
The forced mode (jmap -F, jstack -F) behaves quite differently. The tool suspends the target process and then reads the process memory using Serviceability Agent. The command is executed in the tool's process while the target VM is paused. This is what sun.jvm.hotspot.* utilities do.
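For illustration, here is a minimal sketch of the cooperative route using the public Attach API (jdk.attach module on JDK 9+, tools.jar on older JDKs); the PID and class name are placeholders, not anything from the question:

import com.sun.tools.attach.VirtualMachine;

public class AttachDemo {
    public static void main(String[] args) throws Exception {
        // Dynamic Attach: ask the target JVM (placeholder PID "12345") to cooperate.
        VirtualMachine vm = VirtualMachine.attach("12345");
        try {
            // The target VM itself answers the request, e.g. reporting its properties.
            System.out.println(vm.getSystemProperties().getProperty("java.version"));
        } finally {
            vm.detach();
        }
    }
}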
We have trouble with Tomcat 5.5, which stops at night on our production servers (Linux CentOS 4.8), and we have no idea why it stops...
There is nothing in Tomcat's catalina.out or in any of the application logs.
We tried different things to find out why the server stops:
configure Tomcat to be able to generate a core dump
instrument the System.exit() method with Javassist to find out whether it was called
add a shutdown hook to the JVM (with Runtime.getRuntime().addShutdownHook()); a minimal sketch of such a hook is shown below
None of them worked: we got no core dump, and neither System.exit() nor the shutdown hook was called.
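For reference, the hook we registered looked roughly like this (hypothetical standalone class; in our case the hook was installed from the webapp's startup code). A shutdown hook runs on a normal exit or SIGTERM, but never on SIGKILL or a native JVM crash, which is consistent with it never firing here:

public class ShutdownHookDemo {
    public static void main(String[] args) throws Exception {
        Runtime.getRuntime().addShutdownHook(new Thread() {
            @Override
            public void run() {
                // We expected to see this line in the logs if the JVM exited normally.
                System.err.println("JVM is shutting down (shutdown hook fired)");
            }
        });
        Thread.sleep(60000); // keep the JVM alive long enough to observe the hook
    }
}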
My conclusions are:
The JVM is not terminated properly but crashes without leaving any log.
Any idea, or any log to read, to find out why Tomcat stops?
1) Make sure you know where stderr is redirected and check if anything got printed there.
2) Check the memory limits on Tomcat and how much free memory the system has. Review the Linux system logs under /var/log to see if anything suspicious happened during that time. For example, the kernel can kill a process (almost) without a trace if the system is running low on memory.
We've run 5.5 in production for years and never had any unexplained shutdowns, FWIW.
This worked for me.
As suggested in other answers here, I checked the system logs in /var/log/messages, but got permission denied. So I used the dmesg command instead and found this in the output:
"Out of memory: Kill process 14606 (java) score 106 or sacrifice child".
In the output I also noticed that free swap memory was 0 KB. I ran the top command to confirm it. So somehow there was high memory usage, which caused the OS to kill my Tomcat process.
After spending hours, I finally found the reason.
ps -ef | grep tomcat showed that there were several Tomcat processes running for the same application. It seems that earlier Tomcat shutdowns had not completed successfully, and for some reason the processes were not killed even after the shutdown, which was causing the high memory usage.
So I killed all running Tomcat processes using kill, and the swap memory was freed.
I started Tomcat again and it worked fine. :)
Tomcat 7 has an option inside Catalina to prevent System.exit() calls, or something similar: http://ci.apache.org/projects/tomcat/tomcat7/docs/security-manager-howto.html .
Maybe there's a similar option for version 5.5. Check the documentation.
There are options to redirect the output to the same console that you used to start Tomcat. This information is redirected to the logs when you run on Unix-based systems; on Windows, it remains in the console if not redirected.
Most probably there is a StackOverflowError. This is typical behavior of Tomcat when that happens: for example, you're serializing beans with cyclic dependencies to JSON or XML (but without handling the cycles).
Every time I have had this issue (several times), this has been the cause. All other stops are usually logged properly (like OutOfMemoryError, etc.).
This type of stop leaves no trace anywhere.
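To illustrate the kind of cycle meant here, a contrived sketch (made-up class names, with a naive recursive dump standing in for a real JSON/XML mapper that does not break cycles):

public class CycleDemo {
    static class Parent { Child child; }
    static class Child  { Parent parent; }

    // Naive recursive "serialization": never detects the cycle.
    static String dump(Object o) {
        if (o instanceof Parent) return "Parent{child=" + dump(((Parent) o).child) + "}";
        if (o instanceof Child)  return "Child{parent=" + dump(((Child) o).parent) + "}";
        return String.valueOf(o);
    }

    public static void main(String[] args) {
        Parent p = new Parent();
        Child c = new Child();
        p.child = c;
        c.parent = p;                 // cyclic dependency
        System.out.println(dump(p)); // recurses until StackOverflowError
    }
}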
In Java profiling, it seems like all (free) roads nowadays lead to the VisualVM profiler included with JDK6. It looks like a fine program, and everyone touts how you can "attach it to a running process" as a major feature. The problem is, that seems to be the only way to use it on a local process. I want to be able to start my program in the profiler, and track its entire execution.
I have tried using the -Xrunjdwp option described in how to profile application startup with visualvm, but between the two transport methods (shared memory and server), neither is useful for me. VisualVM doesn't seem to have any integration with the former, and VisualVM refuses to connect to localhost or 127.0.0.1, so the latter is no good either. I also tried inserting a simple read of System.in into my program to insert a pause in execution, but in that case VisualVM blocks until the read completes, and doesn't allow you to start profiling until after execution is under way. I have also tried looking into the Eclipse plugin but the website is full of dead links and the launcher just crashes with a NullPointerException when I try to use it (this may no longer be accurate).
Coming from C, this doesn't seem like a particularly difficult task to me. Am I just missing something or is this really an impossible request? I'm open to any kinds of suggestions, including using a different (also free) profiler, and I'm not averse to the command line.
Consider using HPROF and opening the data file with a tool like HPjmeter - or just reading the resulting text file in your favorite editor.
Command used: javac -J-agentlib:hprof=heap=sites Hello.java
SITES BEGIN (ordered by live bytes) Fri Oct 22 11:52:24 2004
percent          live          alloc'ed  stack class
rank   self  accum     bytes objs     bytes  objs trace name
1 44.73% 44.73% 1161280 14516 1161280 14516 302032 java.util.zip.ZipEntry
2 8.95% 53.67% 232256 14516 232256 14516 302033 com.sun.tools.javac.util.List
3 5.06% 58.74% 131504 2 131504 2 301029 com.sun.tools.javac.util.Name[]
4 5.05% 63.79% 131088 1 131088 1 301030 byte[]
5 5.05% 68.84% 131072 1 131072 1 301710 byte[]
HPROF is capable of presenting CPU usage, heap allocation statistics,
and monitor contention profiles. In addition, it can also report
complete heap dumps and states of all the monitors and threads in the
Java virtual machine.
The best way to solve this problem without modifying your application is to not use VisualVM at all. As far as other free options are concerned, you could use either Eclipse TPTP or the NetBeans profiler, or whatever comes with your IDE.
If you can modify your application to suspend its state while you set up the profiler in VisualVM, it is quite possible to do so using the VisualVM Eclipse plugin. I'm not sure why you are getting the NullPointerException, since it appears to work on my workstation. You'll need to configure the plugin by providing the path to the jvisualvm binary and the path of the JDK; this is done in the VisualVM configuration dialog at Windows -> Preferences -> Run/Debug -> Launching -> VisualVM Configuration.
You'll also need to configure your application to start with the VisualVM launcher, instead of the default JDT launcher.
All application launches from Eclipse will now result in VisualVM tracking the new local JVM automatically, provided that VisualVM is already running. If you do not have VisualVM running, the plugin will launch VisualVM, but it will also continue running the application.
From the previous sentence it is evident that having the application halt in the main() method before performing any processing is quite useful. But that is not the main reason for suspending the application. Apparently, VisualVM and its Eclipse plugin do not allow automatically starting the CPU or memory profilers. This means the profilers have to be started manually, which is why the application needs to be suspended.
Additionally, it is worth noting that adding the flags -Xrunjdwp:transport=dt_socket,address=8000,server=y,suspend=y to the JVM startup will not help you suspend the application and set up the profilers in the case of VisualVM. These flags are meant for profilers that can actually connect to the open port of the JVM using the JDWP protocol. VisualVM does not use this protocol, so you would have to connect to the application using JDB or a remote debugger; but that would not resolve the problem of profiler configuration, as VisualVM (at least as of Java 6 update 26) does not allow you to configure the profilers on a suspended process, since it simply does not display the Profiler tab.
This is now possible with the startup profiler plugin for VisualVM.
The advice about -Xrunjdwp is incorrect. It just enables the debugger, and with suspend=y it waits for a debugger to attach. Since VisualVM is not a debugger, it does not help you. However, inserting a read from System.in or a Thread.sleep() will pause the startup and allow VisualVM to attach to your application. Be sure to read Profiling with VisualVM 1 and Profiling with VisualVM 2 to better understand the profiler settings. Note also that instead of the profiler you can use the 'Sampler' tab in VisualVM, which is more suitable for profiling an entire Java program's execution. As others mentioned, you can also use the NetBeans Profiler, which directly supports profiling of application startup.
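A minimal sketch of such a pause (hypothetical class name): the read blocks until you press Enter, giving you time to attach VisualVM and start the profiler or sampler before the real work begins.

public class PauseAtStartup {
    public static void main(String[] args) throws Exception {
        System.out.println("Attach VisualVM now, then press Enter to continue...");
        System.in.read();  // blocks here until Enter is pressed
        // ... the code you actually want to profile starts here ...
    }
}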
I'm having problems with Jetty crashing intermittently; I'm using Jetty 6.1.24.
I'm running a Neo4j Spring MVC webapp. Jetty will stay running for approximately 1 hour and then I have to restart it. It is running on a small Amazon EC2 instance (Debian, 1.7 GB of RAM).
I start Jetty using java -Xmx900m -server -jar start.jar
I connect to the server using PuTTY; when Jetty crashes the PuTTY session disconnects, so I cannot see what error caused the crash.
I would like to be able to see whether the error is generated by Spring, but I'm not sure how to log the output from the Spring app with Jetty. Or, if it is Jetty or a memory issue, what would be the best way to monitor Jetty? I cannot recreate this on my local machine running Windows. What do you think would be the best way to approach this? Thanks
This isn't really a programmer question; perhaps it'll be moved over to ServerFault.
You didn't specifically state which operating system you're using, but I'm hazarding a guess at some Linux distribution. You have two options for figuring out what's wrong:
Start your session in screen. Screen will live for as long as the actual machine is powered on, until you reboot the operating system (or you exit screen).
You start screen like this:
screen
and you get a new prompt where you can start your program (cd foo, start Jetty, etc.). When you're happy and you need to go somewhere, you can detach the screen by hitting CTRL+A and then CTRL+D. You'll drop back to the place you were before you invoked screen.
To get back to seeing the screen, type screen -R, which resumes the existing screen. You should see Jetty again.
The nice thing is that if you lose the connection (or close PuTTY by accident, or whatever), you can use screen -list to get a list of running screens, forcibly detach them with -D, and reattach them to the current PuTTY session with -R; no harm done!
Use nohup. Nohup more or less detaches the process you're running from the console, so none of its output comes to the terminal. You start your program in the normal fashion, but you add the word nohup to your command.
For example:
nohup ls -l &
After ls -l is complete, your output is stored in nohup.out.
When you say crash, do you mean the JVM segfaults and disappears? If that's the case, I'd check and make sure you aren't exhausting the machine's available memory. Java on Linux will crash when system memory gets so low that the JVM cannot allocate up to its maximum memory. For example, you've set the maximum JVM memory to 500 MB, of which it's using 250 MB at the moment, but the Linux OS only has 128 MB available. This produces unstable results and the JVM will segfault.
On Windows the JVM is better behaved in this scenario and throws an OutOfMemoryError when the system is running low on memory.
Validate how much system memory is available around the time of your crashes.
Verify if other processes on your box are eating up a lot of memory. Turn off anything that could be competing with the JVM.
Run jconsole and connect it to your JVM. That will tell you how memory is being used in your JVM process and give you a history to look back through when it does crash (a simple in-process logging alternative is sketched at the end of this answer).
Eliminate any native code you might be loading into the JVM when doing this type of testing.
I believe Jetty has some native code to do high volume request processing. Make sure that's not being used. You want to isolate the crashes to Java and NOT some strange native lib. If you take out the native stuff and find it works then you have your answer as to what's causing it. If it continues to crash then it very well could be what I'm describing.
You can force the JVM to allocate all of its memory at startup with -Xms900m; that can make sure the JVM doesn't fight with other processes for memory. Once it has the full Xmx amount allocated, it won't crash. This is not a solution, but you can easily test the theory this way.
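If you cannot keep jconsole attached to the box, here is a minimal sketch (hypothetical class name) that periodically logs the JVM's own heap figures. Note that it only shows the JVM's view; free system memory still has to be checked with OS tools. The same loop could also run in a background thread inside the webapp.

public class MemoryLogger {
    public static void main(String[] args) throws InterruptedException {
        Runtime rt = Runtime.getRuntime();
        while (true) {
            long max   = rt.maxMemory()   / (1024 * 1024); // the -Xmx ceiling
            long total = rt.totalMemory() / (1024 * 1024); // heap currently reserved
            long free  = rt.freeMemory()  / (1024 * 1024); // unused part of 'total'
            System.err.printf("heap MB: max=%d total=%d used=%d%n", max, total, total - free);
            Thread.sleep(10000); // log every 10 seconds
        }
    }
}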
When you start java, redirect both outputs (stdout and stderr) to a file:
Using Bash:
java -Xmx900m -server -jar start.jar > stdout.txt 2> stderr.txt
After the crash, inspect those files.
If the crash is due to a signal (like SEGV, a segmentation fault), there should be a crash dump file written by the JVM in the directory where you started java. For the Sun VM (HotSpot), it's something like hs_err_pid12121.log (here 12121 is the process ID).
PuTTY disconnecting STRONGLY hints that the server is running out of memory and is shutting down processes left and right. It is probably your Jetty instance growing too big.
The easiest thing to do now is to add 1-2 GB more swap space and try again. Also note that you can use jvisualvm to attach to the Jetty instance and get runtime information directly.