KahaDB error "No space left on Disk". How to solve this? - java

I am using ActiveMQ, and KahaDB is ActiveMQ's default message store. But it keeps growing in size and eventually runs out of disk space. Even when all the messages are acknowledged, it still grows and continuously creates new log files in its data store.
I have not set any KahaDB-related properties; it is using the defaults.
broker = new BrokerService();
TransportConnector connector = new TransportConnector();
connector.setUri(new URI("tcp://localhost:61616"));
broker.addConnector(connector);
broker.start();
These are the only properties I have set on the broker. Can someone please tell me which KahaDB properties I can set to avoid this error?

The KahaDB journals and index files stick around for a number of reasons, some of which aren't always readily apparent, so you need to do some debugging to see what is keeping the log files around. It can be as simple as one unacknowledged message holding an entire log file, and in some cases later log files that track the acks for the other messages in that log.
The ActiveMQ site has a good article on investigating this, so you can see what is keeping the logs alive in your case. It is also a good idea to use the latest release, because fixes land along the way that prevent logs from sticking around when they shouldn't.
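If you prefer to stay in the embedded Java API shown in the question, the KahaDB adapter can also be tuned so that journal files are smaller and the cleanup task runs more often, which limits how much space a single stuck message can pin. A minimal sketch; the directory path and the numbers are only illustrative assumptions, not recommendations:
import java.io.File;
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.store.kahadb.KahaDBPersistenceAdapter;

BrokerService broker = new BrokerService();
KahaDBPersistenceAdapter kahaDb = new KahaDBPersistenceAdapter();
kahaDb.setDirectory(new File("activemq-data/kahadb")); // example data directory
kahaDb.setJournalMaxFileLength(16 * 1024 * 1024);      // smaller than the 32 MB default, so files are reclaimed at a finer granularity
kahaDb.setCleanupInterval(10000);                      // run the journal cleanup task more often than the 30 s default (ms)
broker.setPersistenceAdapter(kahaDb);
None of this removes the root cause, though; an unacknowledged message or a long-running transaction will still keep its journal file alive.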

Related

ActiveMQ migration to another server with the same version

I have ActiveMQ with a huge storeUsage (~100 GB). Since it was about to be exhausted, I changed the value of storeUsage and tried to restart, but now the ActiveMQ server is not starting.
What could the issue be? There is nothing more in the log, even in debug mode.
Can I just back up KahaDB, delete all the files inside it and restart?
Please suggest what to do; it's a production server and this is a big issue.
Thanks,
Arvind
If you delete KahaDB, the contents of your queues and topics will be gone. If you do not care about them, delete it. It will most likely render your broker usable again.
You should have tried to get rid of old messages in the first place. They use a lot of resources. A message broker is not a database.
You could also try to restore your kahadb on a much more powerful machine.
I assume monitoring of your production environment was faulty or non-existent. So you should invest in that area, too.
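For completeness, the storeUsage value mentioned in the question is the broker's persistent store cap; when the broker is embedded in Java it can be raised roughly like this (a minimal sketch, the 200 GB figure is just an example and must of course fit on the disk):
import org.apache.activemq.broker.BrokerService;

BrokerService broker = new BrokerService();
// Raise the persistent store limit so the existing KahaDB data fits under it.
broker.getSystemUsage().getStoreUsage().setLimit(200L * 1024 * 1024 * 1024); // 200 GB, example value
broker.start();
In a standalone installation the same thing is done with the storeUsage element inside systemUsage in activemq.xml.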

OutOfMemoryError due to a huge number of ActiveMQ XATransactionId objects

We have a Weblogic server running several apps. Some of those apps use an ActiveMQ instance which is configured to use the Weblogic XA transaction manager.
About 3 minutes after startup, the JVM triggers an OutOfMemoryError. A heap dump shows that about 85% of all memory is occupied by a LinkedList containing org.apache.activemq.command.XATransactionId instances. The list is a root object and we are not sure who needs it.
What could cause this?
We had exactly the same issue on WebLogic 12c and activemq-ra. XATransactionId instances were created continuously, causing server overload.
After more than two weeks of debugging, we found that the problem was caused by the WebLogic Transaction Manager trying to recover some pending ActiveMQ transactions by calling recover(), which returns the ids of transactions that appear to be incomplete and need to be recovered. Each call by WebLogic always returned the same non-zero number n of ids, and that caused the creation of n XATransactionId instances.
After some investigation, we found that WebLogic by default stores its transaction logs (TLOGs) on the filesystem, and that this can be changed to persist them in a database. We suspected a problem with the TLOGs being on the filesystem, switched them to the database, and it worked! Our server has now been running for more than two weeks without a restart and memory is stable, because no XATransactionId instances are created apart from the necessary amount. ;)
I hope this helps; keep us informed if it works for you.
Good luck!
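If you want to look at those pending transactions yourself, the same recover() call the transaction manager makes can be issued directly against the broker. A minimal sketch with the standard JMS/JTA APIs; the broker URL is an assumption:
import javax.jms.XAConnection;
import javax.jms.XASession;
import javax.transaction.xa.XAResource;
import javax.transaction.xa.Xid;
import org.apache.activemq.ActiveMQXAConnectionFactory;

ActiveMQXAConnectionFactory factory = new ActiveMQXAConnectionFactory("tcp://localhost:61616");
XAConnection connection = factory.createXAConnection();
XASession session = connection.createXASession();
XAResource resource = session.getXAResource();

// Prepared-but-unfinished transactions; these are the ids the transaction
// manager keeps turning into XATransactionId instances on every recovery pass.
Xid[] pending = resource.recover(XAResource.TMSTARTRSCAN | XAResource.TMENDRSCAN);
System.out.println("Pending XA transactions: " + pending.length);

connection.close();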
To be honest, it sounds like you're getting a ton of JMS messages and either not consuming them or, if you are, your consumer is not acknowledging them when it is not in auto-acknowledge mode.
Check your JMS queue backlog. There may be a queue with a high backlog that the server is trying to read. These messages may have been corrupted due to a crash.
The best option is to delete the backlog in the JMS queue, or back it up to another queue.
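To check the backlog without the web console, the queue depth can be read over JMX. A minimal sketch assuming ActiveMQ 5.8+ MBean names, a broker named localhost, JMX on port 1099 and a queue called MY.QUEUE (older 5.x releases use a different object-name pattern):
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
JMXConnector jmxc = JMXConnectorFactory.connect(url);
MBeanServerConnection mbs = jmxc.getMBeanServerConnection();

ObjectName queue = new ObjectName(
        "org.apache.activemq:type=Broker,brokerName=localhost,destinationType=Queue,destinationName=MY.QUEUE");
Long queueSize = (Long) mbs.getAttribute(queue, "QueueSize"); // messages still sitting on the queue
System.out.println("Backlog: " + queueSize);

jmxc.close();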

Separate storage for ActiveMQ Scheduled/Delayed messages

Recently we had to drop all the KahaDB files in order to get a production ActiveMQ server up and running. We waited for 45 minutes but it did not come up, so we were forced to drop all the data.
The pending messages were not an issue, but there were a number of scheduled jobs that required rescheduling.
Anticipating similar situations in the future, I was wondering: is it possible to store that information in a separate store, or maybe in a different KahaDB instance using mKahaDB?
Thanks
Carlos
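As far as I know, the scheduler store for delayed/scheduled messages is already kept in its own directory next to KahaDB rather than inside it, and that directory can be pointed somewhere else. A minimal sketch with the embedded Java broker API; the path is only an example, and you should verify this against your ActiveMQ version:
import java.io.File;
import org.apache.activemq.broker.BrokerService;

BrokerService broker = new BrokerService();
broker.setSchedulerSupport(true);                                      // enable scheduled/delayed delivery
broker.setSchedulerDirectoryFile(new File("/var/activemq-scheduler")); // example path, e.g. on a separate volume
broker.start();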

Hornetq large messages filling up disk space

I have configured HornetQ to handle large messages, but the large-messages directory grows too big if the client is not up, consuming all the disk space.
Please help me figure out how to delete or clean the large-messages folder after a message has been delivered.
Also, the paging directory is not being created in jboss/server/default/data/hornetq.
The messages should be removed after they are consumed.
If they are filling up, it means your clients are not acking the messages.
What version are you using? I would also update, as we have fixed a few bugs around large message delivery.
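If the clients turn out to be the problem, it is worth double-checking how the consuming session acknowledges. A minimal generic JMS sketch of explicit acknowledgement in CLIENT_ACKNOWLEDGE mode; the connection and queue name are placeholders:
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;

// 'connection' comes from your existing HornetQ ConnectionFactory (placeholder)
Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
Queue queue = session.createQueue("example.queue");      // placeholder queue name
MessageConsumer consumer = session.createConsumer(queue);

Message message = consumer.receive(5000);
if (message != null) {
    // ... process the (possibly large) message ...
    message.acknowledge();  // without this, delivered large-message files stay on disk
}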
Stop the instance, clear the messaginglargemessages and messagingjournal directories, then start the instance again.

Detecting ActiveMQ flow control

I have a production system that uses ActiveMQ (5.3.2) to send messages from server A to server B. A few weeks ago, the system inexplicably started taking 10+ seconds to send a message. After a reboot of the producer, the system worked fine.
After investigation, I'm pretty sure this is due to producer flow control. (I have a fairly standard activemq setup). The day before this happened (for other reasons) my consumer software had been acting erratically and had even stopped accepting connections for a while. So I'm guessing this triggered this. (It does puzzle me that the requests were still being throttled a day later).
Question -- how can I confirm that the requests were being throttled? I took a heap dump of the server -- is there data in memory I can look for?
Edit: I've found the following:
WireFormatNegotiator.tcpNoDelayEnabled=false for one of three WireFormatNegotiator instances in the memory. I'm trying to figure out what sets this.
And second (and more important), is there a way I can use JMX to tell if the messages are being throttled? I'd like to set up a Nagios alert to let me know if this happens in the future. What property should I check for with JMX?
You can configure the broker so that the producer client gets a javax.jms.ResourceAllocationException, which can then be detected/logged, etc. Just set one of the following in the broker's systemUsage configuration...
<systemUsage>
<systemUsage sendFailIfNoSpaceAfterTimeout="3000">
...OR...
<systemUsage sendFailIfNoSpace="true">
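On the producer side that broker setting surfaces as an exception from send(), which is easy to log or alert on. A minimal sketch; producer, message and log are placeholders from your existing code:
import javax.jms.JMSException;
import javax.jms.ResourceAllocationException;

try {
    producer.send(message);
} catch (ResourceAllocationException e) {
    // Thrown back to the client when the broker hits its usage limits and
    // sendFailIfNoSpace / sendFailIfNoSpaceAfterTimeout is configured.
    log.error("Send refused by broker flow control / usage limits", e);
} catch (JMSException e) {
    log.error("Send failed", e);
}
In newer broker versions, the broker and destination MBeans also expose a MemoryPercentUsage attribute you can poll (e.g. from Nagios) to see how close you are to the limits that trigger flow control.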
