Hi all, we are facing the following leak in our application and are not able to identify why we are getting it:
19,862,152 bytes (19.73 %) of Java heap is used by 200 instances of org/apache/coyote/RequestInfo
-Contained under org/apache/tomcat/util/net/NioEndpoint holding 29,220,608 bytes at 0xac587f30
Current version of Tomcat: 9.0.58
Java version: 11
Spring Boot: 2.5.9
We tried upgrading Tomcat to 10.0.16, but we are still observing the leaks.
We also get the leak suspects below when we run the app for 10 minutes with a load of 5k rpm.
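As a side note on the numbers quoted above: 200 is also Tomcat's default processor cache size, and each cached processor keeps its RequestInfo reachable from the endpoint, so MAT will often flag these objects even when they are being reused rather than leaked. A minimal sketch, assuming the standard Spring Boot server.tomcat.* properties apply to this setup, of how the cache could be capped in application.properties to see whether the retained size stops growing:

# application.properties (illustrative value, not a recommendation)
# Limits how many Processor objects Tomcat keeps cached; each cached
# processor retains a RequestInfo, which is what the leak report counts.
server.tomcat.processor-cache=50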
We have a Windows Server 2016 VM with 8192 MB of RAM and six cores, running RunDeck 3.4.1 with Tomcat 9 from the rundeck.war file. Lately we've been seeing a couple of issues crop up.

First, RunDeck keeps user login sessions open well past the 30-minute idle limit configured in Tomcat. Second, RunDeck does not respond, or is extremely sluggish, once standby memory leaves less than 400 MB of 'free memory', as if it never gets access to the standby cache/queue, or its priority is so low that it can't get access to it. When a job fails this problem gets even worse, but it also happens on successful jobs.

This is causing our server to become unresponsive multiple times a day, and so far the only way to free it is to manually release sessions in Tomcat and/or to reboot the server completely. In the RunDeck profile I have set the JVM options with export RDECK_JVM="$RDECK_JVM -Xmx2048m -Xms512m -XX:MaxMetaspaceSize=512m -server".
According to the official documentation, those parameters (Xmx, Xms, and MaxMetaspaceSize) need to be defined in the setenv.bat file; take a look.
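For example, a minimal setenv.bat in Tomcat's bin directory carrying the same values you quoted from the RunDeck profile might look like this (the values themselves are just the ones from your question, not recommendations):

rem %CATALINA_BASE%\bin\setenv.bat
rem CATALINA_OPTS is picked up by catalina.bat at startup
set "CATALINA_OPTS=-Xms512m -Xmx2048m -XX:MaxMetaspaceSize=512m -server"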
We have two WildFly 16 servers running on Linux: the first with JDK 11.0.2, the second with JDK 8.
WildFly 1 has a remote outbound connection to WildFly 2 which is used for HTTP remoting. This is necessary because WildFly 2 has to run with 32-bit Java 8.
When we perform a load test, the response time increases steadily after about 100,000 requests from WildFly 1 to WildFly 2.
A heap dump analysis of WildFly 2 using MAT gives us some information about the problem: the heap dump shows a lot of io.netty.buffer.PoolChunk instances that use about 73% of the memory.
It seems the inbound buffers are not cleaned up properly.
WildFly 2 does not recover when the load stops.
Is there any workaround or setting to avoid this?
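Not an answer, but one experiment that can help narrow this down: Netty's pooled allocator can be switched off or shrunk with the standard io.netty.allocator.* system properties, which shows whether the retained PoolChunk memory is pool growth or a genuine buffer leak. A sketch, assuming the Netty version bundled with WildFly 16 honors these properties and that they are added to WildFly 2's startup options (e.g. in bin/standalone.conf):

# diagnostic only: bypass the pooled allocator entirely (expect more GC pressure)
JAVA_OPTS="$JAVA_OPTS -Dio.netty.allocator.type=unpooled"
# or keep pooling but use smaller chunks than the default
# JAVA_OPTS="$JAVA_OPTS -Dio.netty.allocator.maxOrder=9"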
We are facing the issue below in the production environment, but it is not reproducible in the QA environment. Please help resolve this issue.
We have a Java application deployed in a JBoss EAP 6.2 cluster set up with a domain controller, using HornetQ as the messaging system. It has 4 queues and one topic, and uses MDB and MDP listeners for the queues and topic. The application works fine with good server health for a week, but after a week the server crashes with an out-of-memory error. I took a heap dump and analysed it with MAT, and the finding is that many instances of ClientSessionFactoryImpl (around 173,329 instances occupying 707 MB of heap memory) are created and are not getting garbage collected.
Problem Suspect 1
173,329 instances of "org.hornetq.core.client.impl.ClientSessionFactoryImpl", loaded by "org.jboss.modules.ModuleClassLoader # 0x760004438" occupy 741,790,528 (83.13%) bytes. These instances are referenced from one instance of "java.util.HashMap$Entry[]", loaded by "<system class loader>"
Keywords
org.jboss.modules.ModuleClassLoader # 0x760004438
org.hornetq.core.client.impl.ClientSessionFactoryImpl
java.util.HashMap$Entry[]
OS: RHEL 6.8
Server: JBoss EAP 6.2
Java: JDK 1.6_24
MAT Screenshot
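For what it's worth, the most common way to end up with ClientSessionFactoryImpl instances that are never collected is creating a new JMS connection per message (each HornetQ connection is backed by its own session factory) and never closing it. Below is a hedged Java sketch of the pattern worth checking for in the sending/listener code; the class, JNDI name, and queue name are made up for illustration:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.naming.InitialContext;

public class QueueSender {

    // Illustrative pattern only; JNDI and queue names are placeholders.
    // Each HornetQ JMS Connection is backed by a ClientSessionFactoryImpl,
    // so a send path that creates a Connection per call and never closes it
    // accumulates session factories exactly as the MAT report shows.
    public void send(String text) throws Exception {
        ConnectionFactory cf = (ConnectionFactory) new InitialContext()
                .lookup("jms/RemoteConnectionFactory");
        Connection connection = cf.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(session.createQueue("exampleQueue"));
            producer.send(session.createTextMessage(text));
        } finally {
            // Closing the connection closes its sessions and releases the
            // underlying ClientSessionFactoryImpl; skipping this is the usual
            // cause of the leak described above.
            connection.close();
        }
    }
}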
Our OpenShift production app restarts automatically once a day due to a heap space error, even though I haven't configured a message broker in my application. From my logs I found that ActiveMQ is trying to create a new thread and hitting the error.
Should we explicitly disable ActiveMQ?
01:34:12,417 ERROR [org.apache.activemq.artemis.core.client] (Thread-114 (ActiveMQ-remoting-threads-ActiveMQServerImpl::serverUUID=54cb8ef9-d17a-11e5-b538-af749189a999-28800659-992791)) AMQ214017: Caught unexpected Throwable: java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:714)
at java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:950)
at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1357)
at org.apache.activemq.artemis.utils.OrderedExecutorFactory$OrderedExecutor.execute(OrderedExecutorFactory.java:85)
at org.apache.activemq.artemis.core.remoting.impl.invm.InVMConnection.write(InVMConnection.java:163)
at org.apache.activemq.artemis.core.remoting.impl.invm.InVMConnection.write(InVMConnection.java:151)
at org.apache.activemq.artemis.core.protocol.core.impl.ChannelImpl.send(ChannelImpl.java:259)
at org.apache.activemq.artemis.core.protocol.core.impl.ChannelImpl.send(ChannelImpl.java:201)
at org.apache.activemq.artemis.core.protocol.core.ServerSessionPacketHandler.doConfirmAndResponse(ServerSessionPacketHandler.java:579)
at org.apache.activemq.artemis.core.protocol.core.ServerSessionPacketHandler.access$000(ServerSessionPacketHandler.java:116)
at org.apache.activemq.artemis.core.protocol.core.ServerSessionPacketHandler$1.done(ServerSessionPacketHandler.java:561)
at org.apache.activemq.artemis.core.persistence.impl.journal.OperationContextImpl.executeOnCompletion(OperationContextImpl.java:161)
at org.apache.activemq.artemis.core.persistence.impl.journal.JournalStorageManager.afterCompleteOperations(JournalStorageManager.java:666)
at org.apache.activemq.artemis.core.protocol.core.ServerSessionPacketHandler.sendResponse(ServerSessionPacketHandler.java:546)
at org.apache.activemq.artemis.core.protocol.core.ServerSessionPacketHandler.handlePacket(ServerSessionPacketHandler.java:531)
at org.apache.activemq.artemis.core.protocol.core.impl.ChannelImpl.handlePacket(ChannelImpl.java:567)
at org.apache.activemq.artemis.core.protocol.core.impl.RemotingConnectionImpl.doBufferReceived(RemotingConnectionImpl.java:349)
at org.apache.activemq.artemis.core.protocol.core.impl.RemotingConnectionImpl.bufferReceived(RemotingConnectionImpl.java:331)
at org.apache.activemq.artemis.core.remoting.server.impl.RemotingServiceImpl$DelegatingBufferHandler.bufferReceived(RemotingServiceImpl.java:605)
at org.apache.activemq.artemis.core.remoting.impl.invm.InVMConnection$1.run(InVMConnection.java:171)
at org.apache.activemq.artemis.utils.OrderedExecutorFactory$OrderedExecutor$ExecutorTask.run(OrderedExecutorFactory.java:100)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
You could take a look at this comment on the WildFly OpenShift cartridge project's open issues.
To summarize, some users have experienced memory and performance issues with this cartridge's default settings.
This is largely because the OpenShift WildFly cartridge enables the Java EE 7 Full Profile by default (with a custom standalone.xml configuration file, not the standalone-full.xml used in the default WildFly standalone configuration), which does not work well on small gears due to their limitations.
But many Java EE 7 users actually only use the Web Profile and do not need to enable all of the Full Profile specs in their application.
So by enabling only the Java EE 7 Web Profile features and disabling the Full Profile-specific ones, such as the messaging subsystem, you can make this cartridge work fine on a small gear.
See also this other comment for solution details and this table which lists differences between Java EE 7 profiles.
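As an illustration of that approach (the subsystem and extension names below are the WildFly 10 ones, messaging-activemq; other versions may differ), the messaging subsystem could be removed with jboss-cli.sh roughly like this:

$JBOSS_HOME/bin/jboss-cli.sh --connect \
  --commands="/subsystem=messaging-activemq:remove,/extension=org.wildfly.extension.messaging-activemq:remove,:reload"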
If you are running this on a small gear, that is your issue. WildFly 10 uses quite a bit of memory just by itself. You should run it on a medium or large gear. You can also try changing the amount of memory on the gear that is available to the JVM: https://developers.openshift.com/en/wildfly-jvm-memory.html
Try adding -Dactivemq.artemis.client.global.thread.pool.max.size=20.
The default global thread pool size used by the Artemis client is 500, but small gears have a limit of 250 threads. I had a similar problem when I started 8 WildFly instances on my Linux machine, which allowed 4096 threads per user. The next day there was always a java.lang.OutOfMemoryError: unable to create new native thread. I observed that Artemis constantly creates new threads until it reaches 500.
When you run WildFly with the standalone-full.xml configuration, you have the JMS subsystem enabled. WildFly 10.0.0.Final has a default Artemis thread pool initialized with 500 threads. In future versions this will be changed to use a custom thread pool.
With WildFly 10.0.0.Final, the simple way to tell Artemis to initialize a smaller number of threads (as Andrsej Szywala says) is with a command-line parameter at startup, like this:
sh standalone.sh -c standalone-full-ha.xml -Dactivemq.artemis.client.global.thread.pool.max.size=30
You can read more in my post on the JBoss forum:
https://developer.jboss.org/thread/268397
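If you prefer not to pass the flag on every startup, the same system property from the command above can instead be appended to JAVA_OPTS in bin/standalone.conf (30 is just the example value used above):

# bin/standalone.conf
JAVA_OPTS="$JAVA_OPTS -Dactivemq.artemis.client.global.thread.pool.max.size=30"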
I have developed a Fusion Web Application on Oracle ADF.
I have deployed this application on a WebLogic server on 64-bit Linux and tested it using JMeter.
Initially I tested the application with 50 users, then with 100 and 500 users.
The problem is that server memory and swap space are not being freed.
Usage keeps increasing even when I run the 10-user test multiple times.
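Not a fix, but it may help to first confirm whether the growth is inside the JVM heap or in native/OS memory before tuning anything. A quick check with the standard JDK tools (the PID below is a placeholder for the WebLogic server process) could look like:

# GC and heap occupancy, sampled every 5 seconds
jstat -gcutil <pid> 5000
# histogram of live objects after a full GC, to see what is actually accumulating
jmap -histo:live <pid> | head -30

If the heap stays flat while resident memory and swap keep growing, the growth is outside the Java heap (native memory, threads, or the OS simply keeping freed pages cached).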