I am running Zookeeper Server v3.6.3
On my ZooKeeper cluster, we currently have 5 servers configured. I have a tool configured to use ZooKeeper as a lock manager for our applications. The lock manager can be very active in short bursts (1M+ znodes created/deleted within 30 minutes). I have been monitoring ZK's transaction logs, and I can see confirmation that each znode that is created is also deleted.
Problem:
Occasionally during those short bursts, ZooKeeper hits its JVM heap space limit, goes down, and must be restarted. I went to ZK's AdminServer, specifically the 'monitor'/mntr endpoint, to look at the cluster's performance stats. What I found is a breakdown of statistics for each znode (5 different entries), including znodes that have already been deleted. I found documentation saying that ZK keeps a commit log of the previous 500 transactions by default, but what is being stored here is far more than 500 now-removed znodes (more on the order of hundreds of thousands).
In addition, it has happened to me before that this 'monitor' endpoint itself triggers the OOM error, due to the sheer volume of data ZooKeeper keeps around.
My question is: why and where is ZooKeeper maintaining these statistics for deleted znodes, and how can I limit the amount of information being stored about them?
We have an Ignite setup (apache-ignite-2.13.0-1, Zulu Java 11.0.13, RHEL 8.6) with 3 server nodes and ~20 clients joining the topology as client nodes. The client application additionally connects via JDBC. The application is from a 3rd-party vendor, so I don't know what they are doing internally.
For some time now, one of the 3 servers has consistently been logging a huge number of these warnings:
[12:40:41,446][WARNING][tcp-disco-ip-finder-cleaner-#7-#62][TcpDiscoverySpi] Failed to ping node [nodeId=null]. Reached the timeout 60000ms. Cause: Connection refused (Connection refused)
It did not always do that; Ignite and the application were updated multiple times, and at some point these warnings started showing up.
I don't understand what this means. All the nodes I see in the topology with ignitevisor have a nodeId set, but here it is null. All server nodes and clients have full connectivity between each other on all high ports. All expected nodes are shown in the topology.
So what is this node with nodeId=null? How can I find more about where that comes from?
Regards,
Sven
Wrapping it up,
the message was introduced in 2.11 in order to provide additional logging around communication and networking.
The warning itself just means that a node might not be accessible from the current one, i.e. we can't ping that node. That is normal in many cases, and you can ignore this warning.
The implementation seems to be flawed: the message should be logged only the first time instead of producing a flood of duplicates. Also, this kind of logging used to be at DEBUG level, whereas now it has been raised to the more severe WARN for no good reason.
There is an open ticket for an improvement.
Issue
Start an Ignite node (with clientMode set to false, i.e. an embedded server node) and put some data (10k entries/values) into a cache with a very small expiration time (~20s) and eager TTL enabled.
Each time the expiration thread runs, it should remove all the entries that have expired, but after a few runs this thread no longer removes all of them; some expired entries stay in memory and are never removed by subsequent executions.
That means we have expired data sitting in memory, which is something we want to avoid.
Can you please confirm whether this is a real issue or just a misuse/misconfiguration of my setup?
Thanks for your feedback.
Test
I've tried three different setups: full local mode (embedded server) on macOS, a remote server using one node in Docker, and a remote cluster using 3 nodes in Kubernetes.
To reproduce
Git repo: https://github.com/panes/ignite-sample
Run MyIgniteLoadRunnerTest.run() to reproduce the issue described on top.
(Global setup: writing 10k entries of 64 octets each with a TTL of 10s)
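For reference, a cache configuration matching the setup above can be sketched as follows. This is a minimal sketch, not the code from the linked repo; the cache name `ttlCache` is illustrative, and it assumes a plain embedded Ignite node with default discovery:

```java
import java.util.concurrent.TimeUnit;
import javax.cache.expiry.CreatedExpiryPolicy;
import javax.cache.expiry.Duration;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;

public class TtlExpirySketch {
    public static void main(String[] args) {
        // Embedded server node (clientMode=false by default).
        try (Ignite ignite = Ignition.start()) {
            CacheConfiguration<Integer, byte[]> cfg = new CacheConfiguration<>("ttlCache");
            // Eager TTL: a background thread proactively removes expired entries
            // instead of purging them lazily on access.
            cfg.setEagerTtl(true);
            // Every entry expires a fixed time after creation.
            cfg.setExpiryPolicyFactory(
                CreatedExpiryPolicy.factoryOf(new Duration(TimeUnit.SECONDS, 10)));
            IgniteCache<Integer, byte[]> cache = ignite.getOrCreateCache(cfg);
            for (int i = 0; i < 10_000; i++)
                cache.put(i, new byte[64]); // 64-octet values, as in the setup above
        }
    }
}
```

With this configuration, all 10k entries should disappear once the TTL elapses; entries that remain visible afterwards are the leftovers described in the issue.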
It seems to be a known issue. Here's the link to track it: https://issues.apache.org/jira/browse/IGNITE-11438. It is slated for inclusion in the Ignite 2.8 release. As far as I know, it has already been released as part of GridGain Community Edition.
In my application, I've noticed that HornetQ 2.4.1 has been piling up message journal files (sometimes into the thousands). I'm using HornetQ via JMS queues, and we're on WildFly 8.2. Normally, when starting the server instance, HornetQ has 3 messaging journals and a lock file.
The pile-up of message journal files has caused issues when restarting the server; we'll see a log entry that states:
HQ221014: 54% loaded
When removing the files, the server loads just fine. I've experimented some, and it appears as though messages in these files have already been processed, but I'm not sure why they continue to pile up over time.
Edit 1: I've found this link that indicates we're not acknowledging messages. However, we create the session in auto-acknowledge mode, like so: `connection.createSession(false, Session.AUTO_ACKNOWLEDGE)`, so the messages should be acknowledged automatically.
I'll continue looking for a solution.
I've come to find out that this has been caused (for one reason or another; I currently believe it has something to do with server load or network hangs) by failures in the afterDelivery() call. I'm addressing this by not hitting that queue so often. It's not elegant, but it serves my purpose.
See following HornetQ messages I found in the logs:
HQ152006: Unable to call after delivery
javax.transaction.RollbackException: ARJUNA016053: Could not commit transaction.
    at org.jboss.as.ejb3.inflow.MessageEndpointInvocationHandler.afterDelivery(MessageEndpointInvocationHandler.java:87)
HQ222144: Queue could not finish waiting executors. Try increasing the thread pool size
HQ222172: Queue jms.queue.myQueue was busy for more than 10,000 milliseconds. There are possibly consumers hanging on a network operation
We have a Weblogic server running several apps. Some of those apps use an ActiveMQ instance which is configured to use the Weblogic XA transaction manager.
Now after about 3 minutes after startup, the JVM triggers an OutOfMemoryError. A heap dump shows that about 85% of all memory is occupied by a LinkedList that contains org.apache.activemq.command.XATransactionId instances. The list is a root object and we are not sure who needs it.
What could cause this?
We had exactly the same issue on WebLogic 12c and activemq-ra: XATransactionId instances were created continuously, overloading the server.
After more than 2 weeks of debugging, we found that the problem was caused by the WebLogic Transaction Manager trying to recover pending ActiveMQ transactions by calling the recover() method, which returns the IDs of transactions that appear not to have completed and therefore have to be recovered. WebLogic's call to this method always returned the same number n of transaction IDs, and each call caused the creation of n XATransactionId instances.
After some investigation, we found that WebLogic by default stores its transaction logs (TLOGs) in the filesystem, and this can be changed to persist them in a database. We suspected a problem with the TLOGs being on the filesystem, so we tried moving them to the DB, and it worked! Our server has now been running for more than 2 weeks without any restart, and memory is stable because no XATransactionId instances are created apart from the necessary amount ;)
I hope this will help you and keep us informed if it worked for you.
Good luck !
To be honest, it sounds like you're receiving a ton of JMS messages and either not consuming them or, if you are, your consumer is not acknowledging the messages when it is not in auto-acknowledge mode.
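To illustrate the acknowledgment point, here is a minimal consumer sketch, not taken from the question's application: in CLIENT_ACKNOWLEDGE mode a message is only removed from the journal once acknowledge() is called, whereas AUTO_ACKNOWLEDGE does this on successful receipt. The connection and queue are assumed to be obtained elsewhere (e.g. via JNDI on WildFly), and the processing step is hypothetical:

```java
import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;

public class ClientAckSketch {
    // Hypothetical consumer: `connection` and `queue` are assumed to be
    // looked up elsewhere (e.g. via JNDI on WildFly).
    static void consumeOne(Connection connection, Queue queue) throws Exception {
        Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
        try {
            MessageConsumer consumer = session.createConsumer(queue);
            Message message = consumer.receive(5_000); // wait up to 5 seconds
            if (message != null) {
                // process(message); // hypothetical business logic
                // Without this call the message stays in the journal and
                // will be redelivered after the session closes.
                message.acknowledge();
            }
        } finally {
            session.close();
        }
    }
}
```

If a consumer like this never reaches acknowledge() (for example because processing throws), the unacknowledged messages accumulate in the journal files.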
Check your JMS queue backlog. There may be a queue with a high backlog that the server is trying to read. These messages may have been corrupted due to a crash.
The best option is to delete the backlog in the JMS queue, or to back it up to some other queue.
I'm facing a DatabaseLessLeasing issue. Ours is a middleware application; we don't have any database, and our application runs on WebLogic Server. We have 2 servers in one cluster. Both servers are up and running, but we use only one server to do the processing. When the primary server fails, the whole server and its services migrate to the secondary server. This is working fine.
But we had one issue at the end of last year: our secondary server's hardware went down and the secondary server was not available. We got the issue below. When we went to Oracle, they suggested adding one more server, or a highly available database to hold the cluster leasing information that identifies the master server. As of now we don't have that option, as adding a new server means a budget issue and the client is not ready for it.
Our WebLogic configuration for the cluster is:
one cluster with 2 managed servers
cluster messaging mode is Multicast
Migration Basis is Consensus
load algorithm is Round Robin
This is the log I found:
Critical Health BEA-310006 Critical subsystem DatabaseLessLeasing has failed. Setting server state to FAILED. Reason: Server is not in the majority cluster partition
Critical WebLogicServer BEA-000385 Server health failed. Reason: health of critical service 'DatabaseLessLeasing' failed
Notice WebLogicServer BEA-000365 Server state changed to FAILED
Note: I remember one thing: the server was not down when this happened. Both servers were running, but all of a sudden the server tried to restart and was unable to; the restart failed. I saw the status showing as failedToRestart, and the application went down.
Can anyone please help me on this issue.
Thank you
Consensus leasing requires a majority of servers to continue functioning. Any time there is a network partition, the servers in the majority partition will continue to run while those in the minority partition will fail since they cannot contact the cluster leader or elect a new cluster leader since they will not have the majority of servers. If the partition results in an equal division of servers, then the partition that contains the cluster leader will survive while the other one will fail.
Owing to the above functionality, if automatic server migration is enabled, the servers are required to contact the cluster leader and renew their leases periodically. Servers will shut themselves down if they are unable to renew their leases, and the failed servers will then be automatically migrated to the machines in the majority partition.
The server which got partitioned (and is not part of the majority cluster) will go into the FAILED state. This behavior is in place to avoid split-brain scenarios, where there are two partitions of a cluster and both think they are the real cluster.
When a cluster gets segmented, the largest segment survives and the smaller segment shuts itself down. When servers cannot reach the cluster master, they determine whether they are in the larger partition. If they are, they elect a new cluster master; if not, they all shut down when their leases expire.
Two-node clusters are problematic in this case: when the cluster gets partitioned, which partition is the largest? When the cluster master goes down in a two-node cluster, the remaining server has no way of knowing if it is in the majority or not. In that case, if the remaining server is the cluster master, it will continue to run; if it is not the master, it will shut down.
Usually this error shows up when there are only 2 managed servers in one cluster.
To solve this kind of issue, create another server: since the cluster has only 2 nodes, either server will fall out of the majority cluster partition if it loses connectivity or drops cluster broadcast messages, and in that scenario there are no other servers left in the cluster.
For consensus leasing, it is always recommended to create a cluster with at least 3 nodes; that way you can ensure some stability.
In that scenario, even if one server falls out of the cluster, the other two still function correctly as they remain in the majority cluster partition. The third one will rejoin the cluster, or will eventually be restarted.
In a scenario where you have only 2 servers as part of the cluster, one falling out from the cluster will result in both the servers being restarted, as they are not a part of the majority cluster partition; this would ultimately result in a very unstable environment.
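The majority arithmetic behind the 2-node vs. 3-node recommendation can be sketched in a few lines (illustrative helper names, not a WebLogic API; this ignores the cluster-master tie-breaker described above):

```java
public class ConsensusQuorumSketch {
    // Smallest number of servers that constitutes a majority partition.
    static int quorum(int clusterSize) {
        return clusterSize / 2 + 1;
    }

    // Can a partition with `alive` reachable servers keep running?
    static boolean survives(int clusterSize, int alive) {
        return alive >= quorum(clusterSize);
    }

    public static void main(String[] args) {
        // 2-node cluster: one survivor is below quorum(2) = 2, so it fails.
        System.out.println(survives(2, 1)); // false
        // 3-node cluster: two survivors meet quorum(3) = 2, so they keep running.
        System.out.println(survives(3, 2)); // true
    }
}
```

This is why a 3-node cluster tolerates the loss of one server, while a 2-node cluster cannot.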
Another possible scenario is a communication issue between the managed servers. Look out for messages like "lost .* message(s)" [in case of unicast it is something like "Lost 2 unicast message(s)."]. This may be caused by temporary network issues.
Make sure that the node manager for the secondary node in the clustered migration configuration is up and running.