JBoss EAP 6.2 server crash due to out of memory - java

We are facing the issue below in our production environment, but it is not reproducible in the QA environment. Please help to resolve this issue.
We have a Java application deployed on a JBoss EAP 6.2 cluster (domain controller setup) that uses HornetQ as the messaging system, with four queues and one topic consumed by MDB and MDP listeners. The application runs fine with good server health for about a week, after which the server crashes with an out of memory error. I took a heap dump and analysed it with MAT; the finding is that many instances of ClientSessionFactoryImpl (around 173,329 instances occupying 707 MB of heap memory) are created and never garbage collected.
Problem Suspect 1
173,329 instances of "org.hornetq.core.client.impl.ClientSessionFactoryImpl", loaded by "org.jboss.modules.ModuleClassLoader # 0x760004438" occupy 741,790,528 (83.13%) bytes. These instances are referenced from one instance of "java.util.HashMap$Entry[]", loaded by "<system class loader>"
Keywords
org.jboss.modules.ModuleClassLoader # 0x760004438
org.hornetq.core.client.impl.ClientSessionFactoryImpl
java.util.HashMap$Entry[]
OS: RHEL 6.8
Server: JBoss EAP 6.2
Java: JDK 1.6.0_24
MAT Screenshot
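For reference, one common way to accumulate this many ClientSessionFactoryImpl instances (a hedged sketch of a typical anti-pattern, not necessarily the root cause here) is application code that creates a new JMS connection per send and never closes it: with HornetQ, each connection created from the connection factory is typically backed by its own ClientSessionFactoryImpl, so unclosed connections keep those factories reachable. The class and resource names below are hypothetical.

    // Hypothetical illustration: unclosed HornetQ/JMS connections keep their
    // underlying ClientSessionFactoryImpl instances alive on the heap.
    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;

    public class OrderSender {                      // hypothetical helper class

        private final ConnectionFactory cf;         // e.g. injected via @Resource
        private final Queue queue;

        public OrderSender(ConnectionFactory cf, Queue queue) {
            this.cf = cf;
            this.queue = queue;
        }

        // Leaky variant: a new connection (and its ClientSessionFactoryImpl) per call, never closed.
        public void sendLeaky(String body) throws Exception {
            Connection con = cf.createConnection();
            Session session = con.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);
            producer.send(session.createTextMessage(body));
            // missing con.close() -> the factory behind this connection stays reachable
        }

        // Safer variant: always close the connection (JDK 6, so a finally block instead of try-with-resources).
        public void sendSafely(String body) throws Exception {
            Connection con = cf.createConnection();
            try {
                Session session = con.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(queue);
                producer.send(session.createTextMessage(body));
            } finally {
                con.close();                        // closes the session and releases the underlying factory
            }
        }
    }

For in-container code on EAP, the pooled connection factory (java:/JmsXA) is usually the safer choice, since closed connections are returned to the pool instead of new factories being created.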

Related

apache tomcat NioChannel Leak suspect

Hi all, we are facing the following leak in our application and are not able to identify why we are getting it:
19,862,152 bytes (19.73%) of the Java heap is used by 200 instances of org/apache/coyote/RequestInfo
- Contained under org/apache/tomcat/util/net/NioEndpoint holding 29,220,608 bytes at 0xac587f30
Current version of Tomcat: 9.0.58
Java version: 11
Spring Boot: 2.5.9
We tried upgrading Tomcat to 10.0.16, but we are still observing the leak.
We also get the leak suspects below when we run the app for 10 minutes under a load of 5k rpm.
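One way to check whether the RequestInfo retention actually grows over time is to compare heap dumps taken before and after a load run in MAT. Below is a minimal sketch using the standard HotSpot diagnostic MXBean (the class name and output path are just examples); the same dump can also be taken externally with jmap.

    // Minimal sketch: programmatically dump the heap of the current JVM so that
    // snapshots taken before and after a load run can be compared in MAT.
    import com.sun.management.HotSpotDiagnosticMXBean;
    import java.lang.management.ManagementFactory;

    public class HeapDumper {

        public static void dump(String path) throws Exception {
            HotSpotDiagnosticMXBean mxBean = ManagementFactory.newPlatformMXBeanProxy(
                    ManagementFactory.getPlatformMBeanServer(),
                    "com.sun.management:type=HotSpotDiagnostic",
                    HotSpotDiagnosticMXBean.class);
            mxBean.dumpHeap(path, true);            // true = only live (reachable) objects
        }

        public static void main(String[] args) throws Exception {
            dump("/tmp/after-load.hprof");          // example path
        }
    }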

Heavy load produces possible memory leak in netty when using ejb-remoting in wildfly

We have two WildFly 16 servers running on Linux, the first with JDK 11.0.2 and the second with JDK 8.
WildFly 1 has a remote outbound connection to WildFly 2 which is used for HTTP remoting. This is necessary because WildFly 2 has to run on 32-bit Java 8.
When we perform a load test, after 100,000 requests from WildFly 1 to WildFly 2 the response time increases steadily.
A heap dump analysis of WildFly 2 using MAT gives us some information about the problem: the heap dump shows a lot of io.netty.buffer.PoolChunk instances that use about 73% of the memory.
It seems the inbound buffers are not cleaned up properly.
WildFly 2 does not recover when the load stops.
Is there any workaround or setting to avoid this?

High CPU consumption when Tomcat is configured as a Windows service

I have deployed a Java application on a Tomcat server, and Tomcat is configured as a Windows service on one of my VMs.
Our VMs are Windows servers with 64 GB RAM and 8-core 2.4 GHz Intel Xeon processors.
Below are the software details and JVM args configured.
JDK 1.7.0_67
Tomcat 7.0.90
JVM args for Tomcat :
-Xms2g -Xmx40g -XX:PermSize=1g -XX:MaxPermSize=2g
But we are still getting this issue; could anyone please help?
You can enable JMX (a technology for monitoring Java applications) by adding the -Dcom.sun.management ..... JVM options to the startup script, then connect to your application via JConsole with the JTop plugin, which shows the top CPU-consuming threads. See: https://arnhem.luminis.eu/top-threads-plugin-for-jconsole/
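If attaching JConsole to the Windows service is awkward, roughly the same information can be gathered in-process with the standard ThreadMXBean API, which is the same data the JTop plugin displays. A minimal sketch (the class name is hypothetical; it could also be invoked from a diagnostic servlet):

    // Minimal sketch: list the threads of the current JVM with the highest accumulated CPU time.
    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadInfo;
    import java.lang.management.ThreadMXBean;
    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.Comparator;
    import java.util.List;

    public class TopCpuThreads {

        private static class Sample {
            final String name;
            final long cpuNanos;
            Sample(String name, long cpuNanos) {
                this.name = name;
                this.cpuNanos = cpuNanos;
            }
        }

        public static void main(String[] args) {
            ThreadMXBean bean = ManagementFactory.getThreadMXBean();
            List<Sample> samples = new ArrayList<Sample>();
            for (long id : bean.getAllThreadIds()) {
                long cpu = bean.getThreadCpuTime(id);   // -1 when CPU timing is unsupported or the thread is gone
                ThreadInfo info = bean.getThreadInfo(id);
                if (cpu > 0 && info != null) {
                    samples.add(new Sample(info.getThreadName(), cpu));
                }
            }
            Collections.sort(samples, new Comparator<Sample>() {  // JDK 7 friendly, no lambdas
                public int compare(Sample a, Sample b) {
                    return a.cpuNanos < b.cpuNanos ? 1 : (a.cpuNanos > b.cpuNanos ? -1 : 0);
                }
            });
            for (int i = 0; i < samples.size() && i < 10; i++) {
                System.out.printf("%-60s %6d ms%n", samples.get(i).name, samples.get(i).cpuNanos / 1000000L);
            }
        }
    }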

OpenShift WildFly 10 restarts automatically due to Out Of Memory error

Our OpenShift production app restarts automatically once a day due to a heap space error, even though I haven't configured a message broker in my application. Through my logs I found that ActiveMQ is hitting the error while trying to create a new thread.
Should we explicitly disable ActiveMQ?
01:34:12,417 ERROR [org.apache.activemq.artemis.core.client] (Thread-114 (ActiveMQ-remoting-threads-ActiveMQServerImpl::serverUUID=54cb8ef9-d17a-11e5-b538-af749189a999-28800659-992791)) AMQ214017: Caught unexpected Throwable: java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:714)
at java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:950)
at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1357)
at org.apache.activemq.artemis.utils.OrderedExecutorFactory$OrderedExecutor.execute(OrderedExecutorFactory.java:85)
at org.apache.activemq.artemis.core.remoting.impl.invm.InVMConnection.write(InVMConnection.java:163)
at org.apache.activemq.artemis.core.remoting.impl.invm.InVMConnection.write(InVMConnection.java:151)
at org.apache.activemq.artemis.core.protocol.core.impl.ChannelImpl.send(ChannelImpl.java:259)
at org.apache.activemq.artemis.core.protocol.core.impl.ChannelImpl.send(ChannelImpl.java:201)
at org.apache.activemq.artemis.core.protocol.core.ServerSessionPacketHandler.doConfirmAndResponse(ServerSessionPacketHandler.java:579)
at org.apache.activemq.artemis.core.protocol.core.ServerSessionPacketHandler.access$000(ServerSessionPacketHandler.java:116)
at org.apache.activemq.artemis.core.protocol.core.ServerSessionPacketHandler$1.done(ServerSessionPacketHandler.java:561)
at org.apache.activemq.artemis.core.persistence.impl.journal.OperationContextImpl.executeOnCompletion(OperationContextImpl.java:161)
at org.apache.activemq.artemis.core.persistence.impl.journal.JournalStorageManager.afterCompleteOperations(JournalStorageManager.java:666)
at org.apache.activemq.artemis.core.protocol.core.ServerSessionPacketHandler.sendResponse(ServerSessionPacketHandler.java:546)
at org.apache.activemq.artemis.core.protocol.core.ServerSessionPacketHandler.handlePacket(ServerSessionPacketHandler.java:531)
at org.apache.activemq.artemis.core.protocol.core.impl.ChannelImpl.handlePacket(ChannelImpl.java:567)
at org.apache.activemq.artemis.core.protocol.core.impl.RemotingConnectionImpl.doBufferReceived(RemotingConnectionImpl.java:349)
at org.apache.activemq.artemis.core.protocol.core.impl.RemotingConnectionImpl.bufferReceived(RemotingConnectionImpl.java:331)
at org.apache.activemq.artemis.core.remoting.server.impl.RemotingServiceImpl$DelegatingBufferHandler.bufferReceived(RemotingServiceImpl.java:605)
at org.apache.activemq.artemis.core.remoting.impl.invm.InVMConnection$1.run(InVMConnection.java:171)
at org.apache.activemq.artemis.utils.OrderedExecutorFactory$OrderedExecutor$ExecutorTask.run(OrderedExecutorFactory.java:100)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
You could take a look at this comment on the WildFly OpenShift cartridge project's open issues.
To summarize, some users have experienced memory and performance issues with this cartridge's default settings.
This is particularly due to the fact that the OpenShift WildFly cartridge enables the Java EE 7 Full Profile by default (via a custom standalone.xml configuration file, not standalone-full.xml as in the default WildFly standalone configuration), which does not work well on small gears due to their limitations.
But a lot of Java EE 7 users actually only need the Web Profile and do not need to enable all the Full Profile specs in their application.
So by enabling only the Java EE 7 Web Profile features and disabling the Full Profile specific ones, such as the messaging subsystem, you can make this cartridge work fine on a small gear.
See also this other comment for solution details and this table which lists differences between Java EE 7 profiles.
If you are running this on a small gear, that is your issue. WildFly 10 uses quite a bit of memory just by itself. You should run it on a medium or large gear. You can also try changing the amount of memory on the gear that is available to the JVM: https://developers.openshift.com/en/wildfly-jvm-memory.html
Try to add -Dactivemq.artemis.client.global.thread.pool.max.size=20.
The default global thread pool used by the Artemis client is 500 threads, but the small gears have a limit of 250 threads. I had a similar problem when I started 8 WildFly instances on my Linux machine, which allowed 4096 threads per user: the next day there was always java.lang.OutOfMemoryError: unable to create new native thread. I observed that Artemis constantly creates new threads until it reaches 500.
When you run WildFly with the standalone-full.xml configuration, the JMS subsystem is enabled. WildFly 10.0.0.Final initializes the default Artemis thread pool with 500 threads; in future versions this will be replaced with a configurable thread pool.
With WildFly 10.0.0.Final, the simple way to tell Artemis to initialize a smaller number of threads (as Andrsej Szywala says) is with a command line parameter at startup, like this:
sh standalone.sh -c standalone-full-ha.xml -Dactivemq.artemis.client.global.thread.pool.max.size=30
You can read more in my post on the JBoss forum:
https://developer.jboss.org/thread/268397
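To see why the 500-thread default collides with a small gear's limit, here is a toy, self-contained sketch (the numbers are illustrative, and the pool here only mimics the shape of the Artemis client pool): a pool allowed to grow to 500 threads fails with the same "unable to create new native thread" error on a host whose per-user thread limit is lower.

    // Toy sketch: a thread pool that grows on demand up to 500 threads can exceed
    // a per-user OS thread limit (e.g. ~250 on a small gear) and fail with the same
    // OutOfMemoryError shown in the stack trace above.
    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.SynchronousQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class NativeThreadLimitDemo {

        public static void main(String[] args) {
            ThreadPoolExecutor pool = new ThreadPoolExecutor(
                    0, 500, 60L, TimeUnit.SECONDS, new SynchronousQueue<Runnable>());
            final CountDownLatch block = new CountDownLatch(1);
            try {
                for (int i = 0; i < 500; i++) {
                    pool.execute(new Runnable() {
                        public void run() {
                            try {
                                block.await();          // keep every worker thread alive
                            } catch (InterruptedException ignored) {
                            }
                        }
                    });
                }
                System.out.println("All 500 workers started (the per-user thread limit is high enough).");
            } catch (OutOfMemoryError e) {
                // On a host limited to roughly 250 threads per user this is reached well before 500.
                System.err.println("Hit the native thread limit: " + e.getMessage());
            } finally {
                block.countDown();
                pool.shutdownNow();
            }
        }
    }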

WebLogic Server memory and swap space are not getting freed

I have developed a Fusion Web Application on Oracle ADF.
Now I have deployed this application on a 64-bit WebLogic server on Linux and tested it using JMeter.
Initially I tested this application with 50 users, then with 100 and 500 users.
The problem is that server memory and swap space are not getting freed.
Usage keeps increasing even when I run the 10-user test multiple times.
