I have a shared environment with 3 VMs, each running 2 JBoss AS 5 instances (6 instances in total). On these instances we have more than 15 applications deployed, all of them Java based. Lately we have been seeing high CPU usage on one of the VMs, and when we run 'top' on it, java is the process with the highest CPU utilization. But as I mentioned, this VM hosts more than 15 Java applications, and we don't know which one is consuming the CPU cycles.
Can someone please help me on this?
The PID will only tell you which JBoss server the misbehaving application is running on.
If you want to know which application is causing the problem, I recommend installing JProfiler or some other profiling tool.
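Short of a full profiler, the usual recipe is to run `top -H` to find the hot thread's LWP id, convert it to hex, and match it against the `nid=0x...` field in `jstack` output; the stack trace then points at the application. The same attribution can be done from inside the JVM with `ThreadMXBean`. Below is a minimal sketch of that idea; the class and method names are made up for illustration, and it assumes thread CPU-time measurement is supported on your platform:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class HotThreads {
    // Return the name of the live thread that has consumed the most CPU time.
    // The thread name usually identifies the application or component
    // (e.g. an app-specific worker pool vs. a generic http connector thread).
    static String hottestThread() {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        long maxCpu = Long.MIN_VALUE;
        String name = null;
        for (long id : mx.getAllThreadIds()) {
            long cpu = mx.getThreadCpuTime(id); // nanoseconds; -1 if unsupported
            ThreadInfo info = mx.getThreadInfo(id);
            if (info != null && cpu > maxCpu) {
                maxCpu = cpu;
                name = info.getThreadName();
            }
        }
        return name;
    }

    public static void main(String[] args) {
        System.out.println("Hottest thread: " + hottestThread());
    }
}
```

Running this periodically and diffing the per-thread CPU times shows which threads, and hence usually which deployed application, are burning the cycles.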
I've asked for help on this before, here, but the issue still exists and the previously accepted answer doesn't explain it. I've also read almost every article and SO thread on this topic, and most point to application leaks or modest overhead, neither of which I believe explains what I'm seeing.
I have a fairly large web service (application alone is 600MB), which when left alone grows to 5GB or more as reported by the OS. The JVM, however, is limited to 1GB (Xms, Xmx).
I've done extensive memory testing and have found no leaks whatsoever. I've also run Oracle's Java Mission Control (basically the JMX console) and verified that actual JVM use is only about 1GB. So that means about 4GB are being consumed by Tomcat itself, native memory, or the like.
I don't think JNI, etc., is to blame, as this particular installation has been mostly unused. All it's been doing is periodically checking the database for work requests and periodically monitoring its resource consumption. Also, this hasn't been a problem until recently, after years of use.
The JMX Console does report a high level of fragmentation (70%). But can that alone explain the additional memory consumption?
Most importantly, though, the question is not so much why this is happening, but how I can fix or configure things so that it stops happening. Any suggestions?
Here are some of the details of the environment:
Windows Server 2008 R2 64-bit
Java 1.7.0.25
Tomcat 7.0.55
JvmMx, JvmMs = 1000 (1GB)
Thread count: 60
Perm Gen: 90MB
Also seen on:
Windows 2012 R2 64-bit
Java 1.8.0.121
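Since Mission Control already confirms the heap is only ~1GB, the gap must be native: thread stacks, direct buffers, memory-mapped files, or allocator fragmentation like the 70% figure suggests. On Java 8+ you can get a native breakdown by starting with `-XX:NativeMemoryTracking=summary` and running `jcmd <pid> VM.native_memory summary`. As a first sanity check from inside any JVM, the committed heap and non-heap sizes can be read via `MemoryMXBean`; everything the OS reports beyond their sum is outside the JVM's own bookkeeping. A minimal sketch (the class name is made up):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class JvmMemoryReport {
    // "Committed" is what the JVM has actually reserved from the OS for this
    // pool; comparing heap + non-heap committed against Task Manager's number
    // shows how much is being held natively.
    static long committedMB(MemoryUsage usage) {
        return usage.getCommitted() / (1024 * 1024);
    }

    public static void main(String[] args) {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        System.out.println("heap committed:     " + committedMB(mem.getHeapMemoryUsage()) + " MB");
        System.out.println("non-heap committed: " + committedMB(mem.getNonHeapMemoryUsage()) + " MB");
    }
}
```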
My project is based on the Spring Framework (Java). The WAR file of my application is about 38MB. I hosted my application on a VPS with 1GB of RAM. Within a few days I noticed that all the RAM was being exhausted.
I then extended the RAM by 1GB. Now a single WAR file is running on 2GB of RAM under Tomcat. After 2-3 days I checked again: the 2GB of RAM is also exhausted, showing around 80 to 90 percent usage.
Currently the system is under development and no one is using the application, yet all the RAM is getting used.
Is that normal behavior, or is there an issue?
Or do I need to change any settings?
Can anyone tell me how much RAM a normal Java project uses?
I checked the VPS RAM usage with the 'free -m' command. It shows the -/+ buffers/cache line as 557 used and 1444 free.
The Mem values are 2001 total, 1736 used, 265 free, 38 shared, 130 buffers, 1048 cached.
In addition to endless loops, check for memory leaks and issues related to not releasing resources such as DB connections. Refer to similar issues reported by the community, like the ones below:
Why is this Java program taking up so much memory?
How to reduce Spring memory footprint
http://www.toptal.com/java/hunting-memory-leaks-in-java
In my opinion, 1GB of RAM is normally enough for a small Java application. You should look into your code for endless loops or schedulers that run forever.
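As a concrete illustration of the "not releasing resources" point above: any resource acquired outside a try-with-resources block can leak when an exception skips the close call. The sketch below uses a file reader for the sake of being runnable standalone, but the same pattern is the standard fix for JDBC `Connection`/`Statement`/`ResultSet` leaks (the class and method names here are made up):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class ResourceDemo {
    // try-with-resources guarantees close() runs even if readLine() throws,
    // so the underlying handle is never leaked. Without it, an exception
    // between open and close leaves the resource dangling.
    static String readFirstLine(Path path) throws IOException {
        try (BufferedReader reader = Files.newBufferedReader(path)) {
            return reader.readLine();
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("demo", ".txt");
        Files.write(tmp, "hello".getBytes());
        System.out.println(readFirstLine(tmp));
        Files.delete(tmp);
    }
}
```

Note, though, that `free -m` output like the one quoted in the question can be misleading: the -/+ buffers/cache line (557 used, 1444 free) is the number that reflects memory actually claimed by processes, since Linux deliberately uses spare RAM for disk cache.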
I am developing a web based application.
The computer where I write the code has 4 core Intel i5 4440 3.10 GHz processor.
The computer where I deploy the application has 8 core Intel i7 4790K 4.00 GHz processor.
One of the tasks that needs to be calculated is very heavy, so I decided to use the Java executor framework.
I have this:
ExecutorService executorService = Executors.newFixedThreadPool(8);
and then I add 30 tasks at once.
On my development machine the result was calculated in 3 seconds (it used to take 20 seconds when I used only one thread), whereas on the server machine it took 16 seconds (about the same as it used to take with only one thread).
As you can guess, I am quite confused and have no idea why the calculation is so much slower on the server machine.
Does anyone know why the faster processor does not benefit from the multithreaded algorithm?
It is hard to guess the root cause without more evidence. Could you:
profile the running application on the server machine?
connect to the server machine with JConsole and look at the threading info?
My guess is that the server machine is under heavy load (maybe from other applications or background threads?). Or maybe your server user/Java application is allowed to use only one core?
I would start with top (on Linux) or Task Manager (on Windows) to find out whether the server is under load when you run your application. Profiling/JMX monitoring adds overhead, but it will let you find out how many threads are actually used.
A final note: is the server using the same architecture (32/64-bit), operating system, and major/minor Java version as the development machine?
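One more thing worth ruling out: the pool is hard-coded to 8 threads, but what matters is how many processors the JVM actually sees on the server (which can be restricted by the OS, by virtualization, or by processor affinity). A sketch along these lines, sizing the pool from `Runtime.availableProcessors()`, both reports that number and exercises the pool (the class name and the dummy summation task are made up for illustration):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PoolSizing {
    // Size the pool from what the JVM actually sees rather than hard-coding 8.
    // If this prints 1 on the server, the "only one core allowed" guess is right.
    static long runTasks(int nTasks) throws Exception {
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);
        List<Future<Long>> results = new ArrayList<>();
        for (int i = 0; i < nTasks; i++) {
            results.add(pool.submit(() -> {
                long sum = 0;                          // stand-in for the heavy task
                for (int j = 0; j < 1_000_000; j++) sum += j;
                return sum;
            }));
        }
        long total = 0;
        for (Future<Long> f : results) total += f.get();
        pool.shutdown();
        return total;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("processors seen by JVM: " + Runtime.getRuntime().availableProcessors());
        System.out.println("total: " + runTasks(30));
    }
}
```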
I am new to Java web application development. I just wrote my first hello world application using the NetBeans 7.3 IDE. When the application is launched, it keeps loading for more than 30 minutes. I don't think that this is usual. Is there a way around this problem? Your help will be appreciated.
Assuming that your app is a simple Hello World (without external dependencies) potential performance issues can be caused by:
Network connection. 30 s sounds like a timeout. Some third-party libraries used in web apps refer by default to Internet URI resources, such as XML schemas for validation.
JVM swapping due to insufficient free memory. This can hurt your JVM's performance.
JVM running out of permanent space. If your JVM is HotSpot, look for error messages in the log. In this case, just increase the -XX:MaxPermSize Java argument (256m is a good starting point).
To help diagnosis:
- Capture a sequence of thread dumps (kill -3 PID on Linux/UNIX or Ctrl+Break on Windows), 3 to 5 of them every 5 seconds, to see what is hanging the JVM (network access to schemas, file system access, ...). Tools like Samurai can help you detect threads stuck on the same line of code.
- Check your system's free memory.
- Check your logs for memory errors (stdout, stderr).
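When signaling the process is awkward (for example a JVM launched by an IDE, or running as a Windows service), the same thread dump can be taken programmatically through `ThreadMXBean`. A minimal sketch (the class and method names are made up); drop it into any JVM you can run code in:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;

public class ThreadDump {
    // Programmatic equivalent of kill -3 / Ctrl+Break: one stack trace per
    // live thread. Passing (false, false) skips lock/synchronizer details,
    // which keeps the call cheap.
    static String dump() {
        StringBuilder sb = new StringBuilder();
        for (ThreadInfo info : ManagementFactory.getThreadMXBean()
                .dumpAllThreads(false, false)) {
            sb.append(info.toString());
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.print(dump());
    }
}
```

As with kill -3, take several dumps a few seconds apart; a thread sitting on the same line in every dump is the one hanging the startup.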
Hi, I'm trying to test my Java app on Solaris SPARC and I'm getting some weird behavior. I'm not looking for flame wars; I'm just curious to know what is happening or what is wrong...
I'm running the same JAR on Intel and on the T1000, and while on the Windows machine I'm able to get 100% CPU utilization (Performance Monitor), on the Solaris machine I can only get 25% (prstat).
The application is a custom server app I wrote that uses netty as the network framework.
On the Windows machine I'm able to reach just above 200 requests/responses a second, including full business logic and access to outside third parties, while on the Solaris machine I get about 150 requests/responses at only 25% CPU.
One can only imagine how many more requests/responses I could get out of the SPARC if I could make it use its full power.
The servers are...
Windows 2003 SP2 64-bit, 8GB, 2.39GHz Intel 4-core
Solaris 10.5 64-bit, 8GB, 1GHz 6-core
Both using JDK 1.6u21.
Any ideas?
The T1000 uses a multi-core CPU, which means it can run multiple threads simultaneously. 100% utilization means all cores are running at 100%; if your application uses fewer threads than there are cores, it cannot keep all the cores busy and therefore cannot use 100% of the CPU.
Without any code, it's hard to help out. Some ideas:
Profile the Java app on both systems, and see where the difference is. You might be surprised. Because the T1 CPU lacks out-of-order execution, you might see performance lacking in strange areas.
As Erick Robertson says, try bumping up the number of threads to the number of virtual cores reported via prstat, NOT the number of regular cores. The T1000 uses UltraSparc T1 processors, which make heavy use of thread-level parallelism.
Also, note that you're using the latest-gen Intel processors and old Sun ones. I highly recommend reading Developing and Tuning Applications on UltraSPARC T1 Chip Multithreading Systems and Maximizing Application Performance on Chip Multithreading (CMT) Architectures, both by Sun.
This is quite an old question now, but we ran across similar issues.
An important fact to notice is that the Sun T1000 is based on the UltraSPARC T1 processor, which has only a single FPU shared by its 8 cores.
So if your application does a lot of, or even some, floating-point calculation, this may become an issue, as the FPU will become the bottleneck.