I have a Wildfly-8.0.0.Final server running in standalone mode on a Windows machine.
To configure the JVM memory allocation I edited %JBOSS_HOME%\bin\standalone.conf.bat, adding the following:
set "JAVA_OPTS=-Xms512M -Xmx2048M -XX:MaxPermSize=512M"
In the administration console under Runtime > Platform > JVM I noticed that memory looks fine (after some time it is released), but the number of threads increases with each client connection.
For the other server configuration I kept the default values.
At server startup the thread counts are: live 60, daemon 20, but after a few hours I found live 400, daemon 360. I'm not an expert, but this looks wrong to me. Is it? How can I limit the number of threads?
I also noticed that in the administration console under Profile > Core > Threads, both "Thread Pools" and "Thread Factories" are empty.
With the information you provide it's nearly impossible to know what's going on.
When you have problems with threads, what you should do is profile your application. You could have a misconfigured thread pool (anything from a Quartz scheduler pool to a database connection pool) causing trouble without your knowledge.
IDE Approach
With modern IDEs you should be able to see all the running threads in debug mode. If you can't find a way to start your application in debug mode from the IDE, you could try remote debugging.
Thread dump
The JDK has a very nice tool called jstack that you can use to get a snapshot of the stack traces of all the threads in a Java application.
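A minimal example, assuming <pid> is the process id of your Java application (jps will list the candidates):
jps -l
jstack <pid> > threads.txt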
Professional Profiler
You can try either the JDK's VisualVM or one of the many paid profilers out there. A profiler can give you more information about the threads (when they are created, when they die, how their state changes, and so on).
I guess you have to set the following parameter in the start script:
-Dorg.jboss.server.bootstrap.maxThreads
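For example, in standalone.conf.bat this could be appended to the existing JAVA_OPTS line; the limit of 100 is only illustrative, so pick a value that suits your load:
set "JAVA_OPTS=%JAVA_OPTS% -Dorg.jboss.server.bootstrap.maxThreads=100"
Note this property caps the thread pool used during server bootstrap; whether it helps depends on which pool is actually growing in your case.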
Related
How do I troubleshoot/optimize CPU usage in a Spring Boot application? Are the allocated resources sufficient for an application with a total user base of around 300k? The application isn't heavy at all; it just calls third-party APIs, does the necessary checks, and returns the response.
How do I identify the exact code that is using more resources than it normally should? I read somewhere that one way is to take the process id from the top command, get a thread dump, and look up the hexadecimal value of the thread id that is using the most CPU. This wasn't easily achievable, as some of the suggested commands didn't work. I would appreciate any help or suggestions.
Thanks in advance.
[Screenshot: htop output during the high CPU usage]
[Screenshot: htop output when the system is normal]
The process of collecting thread stacks is no different for a Spring Boot app; before a Boot app is containerized, it is still a jar. If you suspect that your application is what is contributing to the high CPU, run your jar, attach a profiler to it, and trace the code contributing to the high CPU under load. If you can't do that, take a thread dump of the running jar/java process and use any free or open-source tool to analyze the trace. This second approach works for the containerized application as well.
Follow these steps to take a thread dump of a Java/Boot app running inside a Docker container (use jps first to find the JVM's <pid>):
docker exec -it <containerName> jps
docker exec -it <containerName> jstack <pid> > someFile.txt
Take multiple snapshots for better visibility and comparison.
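The top-to-thread-dump mapping mentioned in the question works roughly like this on Linux; <java-pid>, <tid>, and <hex-tid> are placeholders:
top -H -p <java-pid>
printf '%x\n' <tid>
jstack <java-pid> | grep -A 20 'nid=0x<hex-tid>'
The first command lists per-thread CPU usage, the second converts the hot thread's id to hex, and the third finds that thread's stack trace by its nid field in the dump.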
If you have not added the JMX options to the JVM command line, do that to begin with:
-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=10000
-Dcom.sun.management.jmxremote.rmi.port=10000
-Dcom.sun.management.jmxremote.local.only=false
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
Then, on your local machine, start "jmc" (Java Mission Control) from your JDK bin folder and connect to your Spring Boot server.
You will then be able to see all the threads and enable both CPU load and thread lock tracking on all active threads.
Be aware, though, that the options above open the JVM up to unauthenticated access, so keep the port safe.
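To quickly check that the JMX port is reachable, you can also point jconsole (also in the JDK bin folder) at it; the host below is a placeholder and the port matches the flags above:
jconsole <server-host>:10000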
Next, if your JVM misbehaves, send it "kill -3" (SIGQUIT, not SIGHUP), which tells the JVM to print a full thread dump to stdout. Note that for memory analysis with the Eclipse MAT tool you need a heap dump rather than a thread dump.
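A heap dump that Eclipse MAT can open can be captured with jmap, assuming <pid> is the JVM's process id:
jmap -dump:live,format=b,file=heap.hprof <pid>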
Another way is to install Jolokia into your server, which gives you additional ways to retrieve the same information over HTTP.
I am new to JBoss server-level configuration. Our application is hosted on a Windows server and runs through PowerShell. Once a day PowerShell hangs and we have to restart the JBoss server.
Heap memory:
-Xms1024m
-Xmx8192m
The machine has 32 GB RAM.
Please let me know if you require any other configuration details to identify the issue. Help us identify the root cause.
One possible reason: clicking inside the Windows PowerShell window makes PowerShell wait, and the cascade effect makes all the threads wait, finally resulting in a hung server.
Reason:
Clicking inside the Windows PowerShell window, knowingly or unknowingly, stops output to the console and also to the log file. This can block the thread that is trying to write the log, and hence other threads end up waiting behind it.
Solution:
Temporary:
Once any key is pressed in the Windows PowerShell window, console and log output resume, and hence all the waiting threads are released.
Permanent:
Uncheck the QuickEdit Mode and/or Insert properties of the Windows PowerShell window. This will not have any impact on the application, as we don't need those features enabled for our PowerShell window.
Check this link to tune the performance of JBoss AS 7:
Performance tuning
I am running a long-lived Java program on a remote Ubuntu server, where I have root access. After some time, usage on some CPU cores goes up to 100%. The logs show nothing suspicious and the application still works, but with reduced throughput.
How can I debug the JVM so that I can find out the cause of this, while it's still running?
One option is VisualVM, which is included in the JDK starting with Java 1.6. I have found it useful in some situations in the past.
You may connect to local applications or remote applications.
To connect to a remote app, run jstatd on your remote server, and then run VisualVM locally and enter your server's IP address. You should be provided with a list of running Java applications including the one you wish to explore. If you have any trouble listing your application, good documentation is available at the VisualVM website.
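jstatd refuses to start without a security policy, so a common (and deliberately permissive) setup, suitable only for trusted networks and for JDK 8 or earlier where jstatd's code lives in tools.jar, is to create a jstatd.policy file containing:
grant codebase "file:${java.home}/../lib/tools.jar" {
    permission java.security.AllPermission;
};
and then start the daemon with:
jstatd -J-Djava.security.policy=jstatd.policy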
Connect to the process with jvisualvm
This tool will allow you to connect to the running process and view all of the threads and their state. This could show you which thread is the culprit merely by looking at which one is awake all the time. You can do a thread dump to see the stack trace for each thread and see what each thread is doing.
It's a very powerful tool for just this kind of debugging. It is distributed with the JDK only, so you will need more than just the JVM runtime installed to have access. Be sure you install the same version of the JDK that the JVM is running.
You will need to have your X display forwarded for this to work.
If you want to see the stack traces on Linux, just issue kill -SIGQUIT <java-program-pid>. That is one way to see where the code is executing.
In Java profiling, it seems like all (free) roads nowadays lead to the VisualVM profiler included with JDK6. It looks like a fine program, and everyone touts how you can "attach it to a running process" as a major feature. The problem is, that seems to be the only way to use it on a local process. I want to be able to start my program in the profiler, and track its entire execution.
I have tried using the -Xrunjdwp option described in how to profile application startup with visualvm, but between the two transport methods (shared memory and server), neither is useful for me. VisualVM doesn't seem to have any integration with the former, and VisualVM refuses to connect to localhost or 127.0.0.1, so the latter is no good either. I also tried inserting a simple read of System.in into my program to insert a pause in execution, but in that case VisualVM blocks until the read completes, and doesn't allow you to start profiling until after execution is under way. I have also tried looking into the Eclipse plugin but the website is full of dead links and the launcher just crashes with a NullPointerException when I try to use it (this may no longer be accurate).
Coming from C, this doesn't seem like a particularly difficult task to me. Am I just missing something or is this really an impossible request? I'm open to any kinds of suggestions, including using a different (also free) profiler, and I'm not averse to the command line.
Consider using HPROF and opening the data file with a tool like HPjmeter - or just reading the resulting text file in your favorite editor.
Command used: javac -J-agentlib:hprof=heap=sites Hello.java
SITES BEGIN (ordered by live bytes) Fri Oct 22 11:52:24 2004
          percent          live        alloc'ed   stack class
 rank   self  accum     bytes  objs     bytes  objs trace name
    1  44.73% 44.73%  1161280 14516   1161280 14516 302032 java.util.zip.ZipEntry
    2   8.95% 53.67%   232256 14516    232256 14516 302033 com.sun.tools.javac.util.List
    3   5.06% 58.74%   131504     2    131504     2 301029 com.sun.tools.javac.util.Name[]
    4   5.05% 63.79%   131088     1    131088     1 301030 byte[]
    5   5.05% 68.84%   131072     1    131072     1 301710 byte[]
HPROF is capable of presenting CPU usage, heap allocation statistics, and monitor contention profiles. In addition, it can also report complete heap dumps and the states of all the monitors and threads in the Java virtual machine.
The best way to solve this problem without modifying your application is to not use VisualVM at all. As far as other free options are concerned, you could use either Eclipse TPTP or the NetBeans profiler, or whatever comes with your IDE.
If you can modify your application to suspend its state while you set up the profiler in VisualVM, it is quite possible to do so, using the VisualVM Eclipse plugin. I'm not sure why you are getting the NullPointerException, since it appears to work on my workstation. You'll need to configure the plugin by providing the path to the jvisualvm binary and the path of the JDK; this is done by visiting the VisualVM configuration dialog at Window -> Preferences -> Run/Debug -> Launching -> VisualVM Configuration.
You'll also need to configure your application to start with the VisualVM launcher, instead of the default JDT launcher.
All application launches from Eclipse, will now result in VisualVM tracking the new local JVM automatically, provided that VisualVM is already running. If you do not have VisualVM running, then the plugin will launch VisualVM, but it will also continue running the application.
Inferring from the previous sentence, it is evident that having the application halt in the main() method before performing any processing is quite useful. But that is not the main reason for suspending the application. Apparently, VisualVM and its Eclipse plugin do not allow for automatically starting the CPU or memory profilers. This means those profilers have to be started manually, which is why the application needs to be suspended.
Additionally, it is worth noting that adding the flags -Xrunjdwp:transport=dt_socket,address=8000,server=y,suspend=y to the JVM startup will not help you, in the case of VisualVM, to suspend the application and set up the profilers. These flags are meant for profilers that can actually connect to the open port of the JVM using the JDWP protocol. VisualVM does not use this protocol, so you would have to connect to the application using JDB or a remote debugger; but that would not resolve the problem of profiler configuration, as VisualVM (at least as of Java 6 update 26) does not allow you to configure the profilers on a suspended process, since it simply does not display the Profiler tab.
This is now possible with the startup profiler plugin to VisualVM.
The advice about -Xrunjdwp is incorrect. It just enables the debugger, and with suspend=y it waits for a debugger to attach. Since VisualVM is not a debugger, it does not help you. However, inserting a System.in read or a Thread.sleep() will pause the startup and allow VisualVM to attach to your application. Be sure to read Profiling with VisualVM 1 and Profiling with VisualVM 2 to better understand the profiler settings. Note also that instead of profiling, you can use the 'Sampler' tab in VisualVM, which is more suitable for profiling an entire Java program execution. As others mentioned, you can also use the NetBeans Profiler, which directly supports profiling of application startup.
I've been running Tomcat 5.5 with Java 1.4 for a while now with a huge webapp. Most of the time it runs fine, but sometimes it will just hang, with no exception generated and no apparent way of getting it to run again other than restarting Tomcat. The Tomcat instance is allowed a gigabyte of memory on the heap, but rarely exceeds 300 MB. Has anyone else run into this issue, and is there a solution for it?
For clarification: I determined how much memory it is using via Task Manager and via Eclipse (I've also tried running it outside of Eclipse, but eventually get the same problem, though it takes a little longer). With Eclipse, I look at the memory allocated via its little (optional) memory pane and at the amount allocated to javaw.exe via Task Manager. I use the Sysdeo(?) Tomcat plugin for Eclipse.
For any JVM process, force a thread dump. In Windows, this can be done with CTRL-BREAK, I believe, in the console window.
In *nix, it is almost always "kill -3 jvm-pid".
This may show if you have threads waiting on db connection pool/thread pool, etc.
Another thing to check is how many connections you currently have to the JVM; either use netstat or a Sysinternals utility such as TCPView (google it).
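For example, on Windows you can list established connections on your server's port with netstat; <port> is a placeholder for whatever your connector listens on:
netstat -ano | findstr :<port>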
Also, try running with the verbose:gc JVM flag. For Sun's JVM, run like "java -verbose:gc". This will show your garbage collections. If it is collecting a lot (full collections especially), then you probably have a memory leak. Full collections are costly, especially on large heaps like that.
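A minimal sketch, assuming you start Tomcat via catalina.bat on a Sun JVM of that era (the extra -XX flags add per-collection detail and timestamps to the GC output):
set CATALINA_OPTS=-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps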
How are you determining that only 300mb are being used?
It sounds like you're hitting a deadlock.
If you can reproduce it in a dev environment then try attaching a debugger once it's happened. Take a look at your threads and see if you have any deadlocks.
If you can't get a debugger to attach you should be able to generate a thread dump, as Dustin pointed out.
Try increasing the logging sensitivity for the Tomcat application server.
http://tomcat.apache.org/tomcat-5.5-doc/logging.html
You can increase the sensitivity to FINEST or ALL for most of them for a few days and see if that helps you catch anything.
I agree with creating multiple thread dumps and viewing them through this: Thread Dump Analyzer