WebLogic 10.3.2 Performance issues - java

If I deploy to my local machine, the application runs 4 times faster than when it is deployed to our Sun application server. I'm not getting any memory errors, and it does not matter how many sessions I have going. It just seems like every request waits before it runs. If I plug my machine into the same port as the server, it still runs faster, so I'm guessing it's a WebLogic setting.
My startup specifics:
weblogic.Server
Virtual Machine: Java HotSpot(TM) 64-Bit Server VM version 14.3-b01
Vendor: Sun Microsystems Inc.
Name:
Uptime: 15 days 21 hours 18 minutes
Process CPU time: 16 days 16 hours 19 minutes
JIT compiler: HotSpot 64-Bit Server Compiler
Total compile time: 12 minutes
Live threads: 69
Peak: 71
Daemon threads: 68
Total threads started: 51,573
Current classes loaded: 27,654
Total classes loaded: 33,709
Total classes unloaded: 6,055
Current heap size: 1,324,827 kbytes
Maximum heap size: 1,867,776 kbytes
Committed memory: 1,730,240 kbytes
Pending finalization: 0 objects
Garbage collector: Name = 'PS MarkSweep', Collections = 153, Total time spent = 37 minutes
Garbage collector: Name = 'PS Scavenge', Collections = 21,402, Total time spent = 40 minutes
Operating System: SunOS 5.10
Architecture: sparcv9
Number of processors: 48
Committed virtual memory: 3,013,032 kbytes
Total physical memory: 66,879,488 kbytes
Free physical memory: 21,949,936 kbytes
Total swap space: 20,932,024 kbytes
Free swap space: 20,932,024 kbytes
VM arguments: 
-Dweblogic.nodemanager.ServiceEnabled=true
-Dweblogic.security.SSL.ignoreHostnameVerification=false
-Dweblogic.ReverseDNSAllowed=false
-Xms1024m -Xmx2048m
-XX:PermSize=256m -XX:MaxPermSize=512m

Try doing a memory analysis. One way is to take a heap dump during a transaction and perform an analysis on it.
A good way to go about this is to use a tool like JRockit Mission Control or Oracle VisualVM. These let you see what happens in your JVM in real time: you can connect to the running JVM, load your application, and watch information like heap usage and GC activity. They have many more features and are definitely something a developer should have.
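For example (a minimal sketch; the dump path and <pid> are placeholders, not from the question), a heap dump can be captured from the running HotSpot JVM with the JDK's jmap tool and then opened in VisualVM:
jmap -dump:live,format=b,file=/tmp/weblogic-heap.hprof <pid>
jvisualvm    # then File > Load... and open the .hprof dump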
EDIT:
I have updated the link for mission control. Thanks Klara!

Related

Accounting for Java memory consumption

We are running a Java Spring Boot application on AWS. The platform we use is Tomcat 8.5 with Java 8 running on 64-bit Amazon Linux/3.3.6. The machines have 4GB of memory. We run this Java application with the JVM args -Xmx and -Xms set to 1536m. The problem we are facing is that these instances quite frequently go into a warning state due to 90%+ memory usage. Now we are trying to account for memory usage process by process.
To start with, we just ran the top command on these machines. Here is part of the output.
top - 11:38:13 up 4:39, 0 users, load average: 0.90, 0.84, 0.90
Tasks: 101 total, 1 running, 73 sleeping, 0 stopped, 0 zombie
Cpu(s): 31.8%us, 3.7%sy, 5.6%ni, 57.2%id, 0.3%wa, 0.0%hi, 1.5%si, 0.0%st
Mem: 3824468k total, 3717908k used, 106560k free, 57460k buffers
Swap: 0k total, 0k used, 0k free, 300068k cached

PID  USER   PR NI VIRT  RES  SHR S %CPU %MEM TIME+     COMMAND
2973 tomcat 20 0  5426m 2.2g 0   S 37.1 60.6 173:54.98 java
As you can see, Java is taking 2.2GB of memory. We have set -Xmx to 1.5GB. Although we are aware that with -Xmx we are only restricting the heap, we wanted to analyse where exactly this extra 0.7GB is going. Towards that end, we decided to use NewRelic. Here is the graph of non-heap memory usage.
The total non-heap memory usage we could see comes to around ~200MB. So with this 200MB plus the 1.5GB heap, we expect the total memory consumed by Java to be 1.7GB. This 1.7GB figure is also confirmed by the NewRelic graphs below:
As I mentioned earlier, the top command is telling us that Java is taking 2.2GB of memory. However, we could only account for 1.7GB using NewRelic. How can we reconcile this extra 0.5GB of memory?
There's more than what you see on NewRelic's non-heap memory usage graph.
For example, there are also thread stacks, which can occupy up to 1MB per thread.
There's a JVM feature called Native Memory Tracking that you can use to track some of the non-heap memory usage.
There can still be native allocations that aren't tracked at all.
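For example (a sketch; the heap flags simply mirror the ones mentioned in the question, and <pid> is a placeholder), Native Memory Tracking is enabled at startup and then queried with jcmd:
java -XX:NativeMemoryTracking=summary -Xms1536m -Xmx1536m -jar app.jar
jcmd <pid> VM.native_memory summary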
I suggest you look at these excellent resources from @apangin:
Java using much more memory than heap size (or size correctly Docker memory limit)
Memory footprint of a Java process by Andrei Pangin: https://www.youtube.com/watch?v=c755fFv1Rnk&list=PLRsbF2sD7JVqPgMvdC-bARnJ9bALLIM3Q&index=6

Java process memory growing indefinitely. Memory leak?

We have a Java process running on Solaris 10 serving about 200-300 concurrent users. The administrators have reported that the memory used by the process increases significantly over time. It reaches 2GB in a few days and never stops growing.
We have dumped the heap and analysed it using Eclipse Memory Profiler, but weren't able to see anything out of the ordinary there. The heap size was very small.
After adding memory stat logging to our application, we found a discrepancy between the memory usage reported by the "top" utility, used by the administrators, and the usage reported by MemoryMXBean and the Runtime API.
Here is an output from both.
Memory usage information
From the Runtime library
Free memory: 381MB
Allocated memory: 74MB
Max memory: 456MB
Total free memory: 381MB
From the MemoryMXBean library.
Heap Committed: 136MB
Heap Init: 64MB
Heap Used: 74MB
Heap Max: 456MB
Non Heap Committed: 73MB
Non Heap Init: 4MB
Non Heap Used: 72MB
Current idle threads: 4
Current total threads: 13
Current busy threads: 9
Current queue size: 0
Max threads: 200
Min threads: 8
Idle Timeout: 60000
PID USERNAME NLWP PRI NICE SIZE RES STATE TIME CPU COMMAND
99802 axuser 115 59 0 2037M 1471M sleep 503:46 0.14% java
How can this be? The top command reports so much more usage. I was expecting RES to be close to heap + non-heap.
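(For context, figures like the ones above are typically obtained with code along these lines; the question's actual logging code isn't shown, so this is only a sketch:)
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

public class MemoryStats {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // Runtime view of the heap (values converted to MB).
        System.out.println("Free memory: " + rt.freeMemory() / (1024 * 1024) + "MB");
        System.out.println("Allocated memory: "
                + (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024) + "MB");
        System.out.println("Max memory: " + rt.maxMemory() / (1024 * 1024) + "MB");

        // MemoryMXBean view of heap and non-heap usage.
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        System.out.println("Heap: " + mem.getHeapMemoryUsage());
        System.out.println("Non-heap: " + mem.getNonHeapMemoryUsage());
    }
}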
pmap -x, however, reports most of the memory in the heap:
Address Kbytes RSS Anon Locked Mode Mapped File
*102000 56 56 56 - rwx---- [ heap ]
*110000 3008 3008 2752 - rwx---- [ heap ]
*400000 1622016 1621056 1167568 - rwx---- [ heap ]
*000000 45056 45056 45056 - rw----- [ anon ]
Can anyone please shed some light on this? I'm completely lost.
Thanks.
Update
This does not appear to be an issue on Linux.
Also, based on Peter Lawrey's response, the "heap" reported by pmap is the native heap, not the Java heap.
I have encountered a similar problem and found a resolution:
Solaris 11
JDK10
REST application using HTTPS (jetty server)
There was a significant increase of c-heap (observed via pmap) over time
I decided to do some stress tests with libumem.
So I started the process with
UMEM_DEBUG=default UMEM_LOGGING=transaction LD_PRELOAD=libumem.so.1
and stressed the application with HTTPS requests.
After a while I connected to the process with mdb.
In mdb I used the command ::findleaks and it showed this as a leak:
libucrypto.so.1`ucrypto_digest_init
So it seems that the JCA (Java Cryptography Architecture) implementation OracleUcrypto has some issues on Solaris.
The problem was resolved by updating the $JAVA_HOME/conf/security/java.security file:
I changed the priority of OracleUcrypto to 3 and the SUN implementation to 1:
security.provider.3=OracleUcrypto
security.provider.2=SunPKCS11 ${java.home}/conf/security/sunpkcs11-solaris.cfg
security.provider.1=SUN
After this the problem disappeared.
This also explains why there is no problem on Linux, since there are different implementations of the JCA providers in play.
In garbage-collected environments, holding on to unused pointers prevents the GC from doing its job and effectively amounts to a leak. It's really easy to accidentally keep pointers around.
A common culprit is hashtables. Another is arrays or vectors which are logically cleared (by setting the reuse index to 0) but where the actual contents of the array (above the use index) are still pointing to something.
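A minimal Java sketch of that second pattern (the class and field names here are illustrative, not from the answer):
import java.util.Arrays;

class ReusableBuffer {
    private final Object[] items = new Object[1024]; // fixed capacity; growth omitted for brevity
    private int size = 0;

    void add(Object o) { items[size++] = o; }

    // "Logically" cleared: size is reset, but the slots above it still reference
    // the old objects, so the GC cannot reclaim them.
    void clearLeaky() { size = 0; }

    // Properly cleared: null out the used slots so the old contents become unreachable.
    void clear() {
        Arrays.fill(items, 0, size, null);
        size = 0;
    }
}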

limit amount of RAM the JVM will be allocated

I am running TWS from Interactive Brokers in Parallels on a Mac. When I use the cloud-based link or the stand-alone application, TWS takes up 99% of the available CPU. Is there a way I can limit the amount of RAM the JVM will be allocated?
I have 4 GB of memory allocated to the Parallels VM. TWS is taking up about 433K of memory.
I added -Xmx300M -Xms300M to the command line for starting TWS, but this did nothing. When I start up, it is still consuming 99% of the CPU and has 400K of memory allocated.
I found the problem to be with Parallels. I was using Parallels 8. When I used TWS on a stand-alone Windows machine, the CPU usage never exceeded 50%.
I created a new VM with Parallels 8, and the CPU usage was still 99%.
I upgraded to Parallels 11 and created a new VM and installed TWS. Now it takes less than 10%.

High Object Copy times resulting in long garbage collection pauses with G1GC

I have a Java app running in a standalone JVM. The app listens for data on one or more sockets, queues the data, and has scheduled threads pulling the data off the queue and persisting it. The data is wide, over 700 data elements per record, though all of the data elements are small Strings, Integers, or Longs.
The app runs smoothly for periods of time, sometimes 30 minutes to an hour, but then we experience one or more long garbage collection pauses. The majority of the pause time is spent in the Object Copy time. The sys time is also high relative to the other collections.
Here are the JVM details:
java version "1.7.0_03"
Java(TM) SE Runtime Environment (build 1.7.0_03-b04)
Java HotSpot(TM) 64-Bit Server VM (build 22.1-b02, mixed mode)
Here are the JVM options:
-XX:MaxPermSize=256m -XX:PermSize=256m -Xms3G -Xmx3G -XX:+UseG1GC -XX:-UseGCOverheadLimit
The process is taskset to 4 cores (all on the same socket), but is barely using 2 of them. All of the processes on this box are pinned to their own cores (with 0 and 1 unused). The machine has plenty of free memory (20+GB) and top shows the process using 2.5GB of RES memory.
Here is some of the gc log output...
[Object Copy (ms): 2090.4 2224.0 2484.0 2160.1 1603.9 2071.2 887.8 1608.1 1992.0 2030.5 1692.5 1583.9 2140.3 1703.0 2174.0 1949.5 1941.1 2190.1 2153.3 1604.1 1930.8 1892.6 1651.9
[Eden: 1017M(1017M)->0B(1016M) Survivors: 7168K->8192K Heap: 1062M(3072M)->47M(3072M)]
[Times: user=2.24 sys=7.22, real=2.49 secs]
Any ideas on why the Object Copy time and sys time are so high and how to rectify it? There are numerous garbage collections in the log with nearly identical Eden/Survivors/Heap sizes that are only taking 10 or 20 ms.
3GB is not a large heap and the survivor size is also small. Is anything else running on those cores? How much garbage are you generating, and how often is it collecting? You may want to try it without G1GC as well.
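As a rough sketch of that last suggestion (the flag choice is illustrative and should be validated against this 1.7.0_03 build), -XX:+UseG1GC could be swapped for the throughput or CMS collector while keeping the rest of the options:
-XX:MaxPermSize=256m -XX:PermSize=256m -Xms3G -Xmx3G -XX:+UseParallelGC -XX:+UseParallelOldGC -XX:-UseGCOverheadLimit
or
-XX:MaxPermSize=256m -XX:PermSize=256m -Xms3G -Xmx3G -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:-UseGCOverheadLimit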
Do you need 3 gigabytes of heap? Garbage-collection pauses get longer as a heap gets bigger, since (though less frequent) there is more work to do when GC is finally needed.
It appears you're locking the heap at 3 GB by setting a minimum. If the app doesn't need 3 GB, this is going to force it to use that much anyway, and result in a humongous pause when that 3 GB finally does need to be collected.
I spent quite some time tuning Eclipse IDE for responsiveness, and found very early on that compact heap-sizes had better 'low pause' characteristics than large.
Apart from JVM heap settings, you can make sure your code 'nulls out' data elements & collection items as these are discarded.
This is standard practice in the java.util Collections package, but perhaps you have code that could benefit from it. The 700-wide records could especially be a candidate for this practice, which helps to simplify things for the GC and enable better cleanup from the 'minor GC' sweeps.

Sun JVM Committed Virtual Memory High Consumption

We have a production Tomcat (6.0.18) server which runs with the following settings:
-server -Xms7000M -Xmx7000M -Xss128k -XX:+UseFastAccessorMethods
-XX:+HeapDumpOnOutOfMemoryError -Dcom.sun.management.jmxremote.port=7009
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false -verbose:gc -XX:+PrintGCDetails
-XX:+PrintGCTimeStamps
-Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager
-Djava.util.logging.config.file=/opt/apache-tomcat-6.0.18/conf/logging.properties
-agentlib:jdwp=transport=dt_socket,address=8000,server=y,suspend=n
-Djava.endorsed.dirs=/opt/apache-tomcat-6.0.18/endorsed
-classpath :/opt/apache-tomcat-6.0.18/bin/bootstrap.jar
java version "1.6.0_12"
Java(TM) SE Runtime Environment (build 1.6.0_12-b04)
Java HotSpot(TM) 64-Bit Server VM (build 11.2-b01, mixed mode)
After running for some time, we see (via JConsole) the following memory consumption:
Current heap size: 3 034 233 kbytes
Maximum heap size: 6 504 832 kbytes
Committed memory:  6 504 832 kbytes
Pending finalization: 0 objects
Garbage collector: Name = 'PS MarkSweep', Collections = 128, Total time spent = 16 minutes
Garbage collector: Name = 'PS Scavenge', Collections = 1 791, Total time spent = 17 minutes
Operating System: Linux 2.6.26-2-amd64
Architecture: amd64
Number of processors: 2
Committed virtual memory: 9 148 856 kbytes
Total physical memory:  8 199 684 kbytes
Free physical memory:     48 060 kbytes
Total swap space: 19 800 072 kbytes
Free swap space: 15 910 212 kbytes
The question is: why do we have so much committed virtual memory? Note that the max heap size is ~7Gb (as expected, since Xmx=7G).
top shows the following:
31413 root 18 -2 8970m 7.1g 39m S 90 90.3 351:17.87 java
Why does the JVM need an additional 2Gb of virtual memory? Can I get the non-heap memory distribution just like in JRockit (http://blogs.oracle.com/jrockit/2009/02/why_is_my_jvm_process_larger_t.html)?
Edit 1: Perm is 36M.
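(As an aside, on Linux the per-mapping breakdown behind that extra virtual memory can at least be inspected with pmap; <pid> is a placeholder:)
pmap -x <pid>    # look for large [anon] mappings outside the Java heap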
It seems that this problem was caused by a very high number of page faults the JVM had. Most likely, when Sun's JVM experiences a lot of page faults it starts to allocate additional virtual memory (I still don't know why), which may in turn increase I/O pressure even more, and so on. As a result we got very high virtual memory consumption and periodic hangs (up to 30 minutes) on full GC.
Three things helped us to get stable work in production:
Decreasing the Linux kernel's tendency to swap (for a description, see "What Is the Linux Kernel Parameter vm.swappiness?") helped a lot. We have vm.swappiness=20 on all Linux servers which run heavy background JVM tasks (see the example after this list).
Decrease the maximum heap size value (-Xmx) to prevent excessive pressure on the OS itself. We now use a 9GB value on 12GB machines.
And last but very important: code profiling and optimization of memory-allocation bottlenecks to eliminate allocation bursts as much as possible.
That's all. Now servers work very well.
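A minimal sketch of the swappiness change from the first point above (run as root; the value is the one mentioned in the answer):
sysctl -w vm.swappiness=20                      # applies immediately
echo 'vm.swappiness = 20' >> /etc/sysctl.conf   # persists across reboots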
-Xms7000M -Xmx7000M
That, to me, is saying to the JVM "allocate 7GB as an initial heap size with a maximum of 7GB".
So the process will always appear as 7GB to the OS, because that's what the JVM has asked for via the Xms flag.
What it's actually using internally is what is being reported as the current heap size, which is much less than 7GB. Normally you set a high Xms when you want to prevent slowdowns due to excessive garbage collection. When the JVM hits a (JVM-defined) percentage of memory in use, it'll do a quick garbage collection. If this fails to free up memory, it'll try a more thorough collection. Finally, if this fails and the max memory defined by Xmx hasn't been reached, it'll ask the OS for more memory. All of this takes time and can really be noticeable on a production server; allocating the memory in advance keeps it from happening.
You might want to try hooking up JConsole to your JVM and looking at the memory allocation... Maybe your Perm space is taking this extra 2GB... The heap is only a portion of what your VM needs to stay alive...
I'm not familiar with jconsole, but are you sure the JVM is using the extra 2Gb? It looks to me like it's the OS or other processes that bring the total up to 9Gb.
Also, a common explanation for a JVM using significantly more virtual memory than the -Xmx param allows is the use of memory-mapped files (MappedByteBuffer), or of a library that uses MappedByteBuffer.
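A minimal sketch of that last point (the file name and size are illustrative): a memory-mapped file adds to the process's virtual/committed memory as seen by top, but not to the -Xmx-limited Java heap.
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MappedFileDemo {
    public static void main(String[] args) throws Exception {
        // Map 1 GB of a file into the process address space. The mapping shows up
        // in the OS's virtual-memory figures (top VIRT, pmap) but is outside the Java heap.
        try (RandomAccessFile raf = new RandomAccessFile("/tmp/data.bin", "rw");
             FileChannel ch = raf.getChannel()) {
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, 1L << 30);
            buf.put(0, (byte) 1); // touching pages also pulls them into resident memory
        }
    }
}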
