Hornetq large messages filling up disk space - java

I have configured HornetQ to handle large messages, but the large-messages directory grows too big when the client is not up, eventually consuming all the disk space.
Please help me figure out how to delete or clean the large-messages folder after the messages have been delivered.
Also, the paging directory is not being created in jboss/server/default/data/hornetq.

The messages should be removed after they are consumed.
If they are filling up the disk, it means your clients are not acking the messages.
What version are you using? I would also update to a newer version, as we have fixed a few bugs around large message delivery.
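For what it's worth, the usual culprit is a consumer that never acknowledges. A minimal JMS sketch of an explicitly acknowledging consumer follows; the connection factory and queue are assumed to come from your own JNDI setup, and the names here are illustrative, not taken from your configuration:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;

// Sketch: drain a queue with explicit acknowledgment so the broker can delete
// delivered messages (and their large-message files) from disk.
static void drainQueue(ConnectionFactory factory, Queue queue) throws Exception {
    Connection connection = factory.createConnection();
    try {
        connection.start();
        Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(queue);
        Message message;
        while ((message = consumer.receive(1000)) != null) {
            // ... process the message ...
            message.acknowledge(); // without this the broker keeps the message around
        }
    } finally {
        connection.close();
    }
}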

Stop the instance, clear everything under messaginglargemessages and messagingjournal, then start the instance again.

Related

KahaDB error "No space left on Disk". How to solve this?

I am using ActiveMQ. KahaDB is ActiveMQ's default message store, but it keeps growing and eventually the disk runs out of space. Even if all the messages are acknowledged, it still grows in size and continuously creates new log files in its data store.
I have set no properties related to KahaDB, it is using the default properties.
import java.net.URI;
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.broker.TransportConnector;

// Embedded broker started with default (KahaDB) persistence settings
BrokerService broker = new BrokerService();
TransportConnector connector = new TransportConnector();
connector.setUri(new URI("tcp://localhost:61616"));
broker.addConnector(connector);
broker.start();
These are the only properties I have set on the broker. Can someone please tell me which properties I can set on KahaDB to avoid this error?
The KahaDB journals and index files stick around for a number of reasons, some of which aren't readily apparent, so you need to do some debugging and see what is keeping the log files around. It can be as simple as one unacknowledged message holding an entire log file alive, and in some cases a later log file that tracks the acks for the messages in an earlier one.
The ActiveMQ site has a nice article on investigating this, so you can see what, in your case, is keeping things in the logs alive. It is also a good idea to use the latest release, because fixes go in along the way to prevent logs from sticking around when they shouldn't.
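If you do want to configure the store itself rather than rely on the defaults, a rough sketch of setting a couple of KahaDB knobs on the embedded broker is below. It assumes the KahaDBPersistenceAdapter setters named here are available in your ActiveMQ version, and the values are examples, not recommendations:

import java.io.File;
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.store.kahadb.KahaDBPersistenceAdapter;

// Illustrative sketch: configure the KahaDB store explicitly on the embedded broker.
BrokerService broker = new BrokerService();
KahaDBPersistenceAdapter kahaDB = new KahaDBPersistenceAdapter();
kahaDB.setDirectory(new File("activemq-data/kahadb"));
kahaDB.setJournalMaxFileLength(16 * 1024 * 1024); // smaller journal files can be reclaimed sooner
kahaDB.setCleanupInterval(30000);                  // check for unreferenced journal files every 30s
broker.setPersistenceAdapter(kahaDB);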

ActiveMQ migration to another server with the same version

I have an ActiveMQ instance with a huge storeUsage (~100 GB). Since it was about to be exhausted, I changed the storeUsage value and tried to restart, but now the ActiveMQ server is not starting.
What could the issue be? There is nothing more in the log, even in debug mode.
Can I just back up KahaDB, delete all the files inside it, and restart?
Please suggest what to do; it's a prod server and we have a big issue.
Thanks,
Arvind
If you delete KahaDB, the contents of your queues and topics will be gone. If you do not care about them, delete it. It will most likely render your broker usable again.
You should have tried to get rid of old messages in the first place. They use a lot of resources. A message broker is not a database.
You could also try to restore your KahaDB on a much more powerful machine.
I assume monitoring of your production environment was faulty or non-existent, so you should invest in that area, too.
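For reference, raising the store limit on an embedded broker looks roughly like the sketch below; for a standalone broker the same setting lives under <systemUsage>/<storeUsage> in activemq.xml. The value is an example only:

import org.apache.activemq.broker.BrokerService;

// Example only: raise the persistent store limit so the broker does not block
// producers when the (oversized) KahaDB store hits the old limit.
BrokerService broker = new BrokerService();
broker.getSystemUsage().getStoreUsage().setLimit(120L * 1024 * 1024 * 1024); // 120 GB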

OutOfMemoryError due to a huge number of ActiveMQ XATransactionId objects

We have a Weblogic server running several apps. Some of those apps use an ActiveMQ instance which is configured to use the Weblogic XA transaction manager.
Now, about 3 minutes after startup, the JVM throws an OutOfMemoryError. A heap dump shows that about 85% of all memory is occupied by a LinkedList containing org.apache.activemq.command.XATransactionId instances. The list is a root object and we are not sure who needs it.
What could cause this?
We had exactly the same issue on WebLogic 12c and activemq-ra: XATransactionId instances were created continuously, overloading the server.
After more than two weeks of debugging, we found that the problem was caused by the WebLogic transaction manager trying to recover pending ActiveMQ transactions by calling recover(), which returns the IDs of transactions that appear to be incomplete and need to be recovered. WebLogic's call to this method always returned the same non-zero number n, and that caused the creation of n XATransactionId instances each time.
After some investigation, we found that WebLogic stores its transaction logs (TLOGs) on the filesystem by default, and that this can be changed so they are persisted in the DB. We suspected the problem lay in the TLOGs being on the filesystem, switched them to the DB, and it worked! Our server has now run for more than two weeks without a restart, and memory is stable because no XATransactionId instances are created apart from the necessary amount.
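For context, this is roughly the recovery call involved; a minimal sketch using the standard javax.transaction.xa API, where the method names come from the JTA spec and the surrounding code is purely illustrative:

import javax.transaction.xa.XAException;
import javax.transaction.xa.XAResource;
import javax.transaction.xa.Xid;

// Illustrative: what the transaction manager's recovery scan looks like.
// If the same in-doubt Xids are reported on every scan, objects for them
// keep piling up on the broker/resource-adapter side.
static void recoveryScan(XAResource resource) throws XAException {
    Xid[] inDoubt = resource.recover(XAResource.TMSTARTRSCAN | XAResource.TMENDRSCAN);
    for (Xid xid : inDoubt) {
        // A real scan would commit or roll back each branch once the outcome
        // is known from the TLOG, e.g. resource.rollback(xid).
        System.out.println("in-doubt branch, formatId=" + xid.getFormatId());
    }
}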
I hope this helps; keep us informed if it works for you.
Good luck!
To be honest, it sounds like you're getting a ton of JMS messages and either not consuming them or, if you are consuming them outside auto-acknowledge mode, not acknowledging them.
Check your JMS queue backlog. There may be a queue with a high backlog that the server is trying to read; those messages may have been corrupted by some crash.
The best option is to delete the backlog in the JMS queue, or back it up to another queue.
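A quick way to inspect a backlog without consuming it is a JMS QueueBrowser; a minimal sketch, with the connection factory and queue assumed to come from your environment (JNDI lookup, etc.):

import java.util.Enumeration;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Queue;
import javax.jms.QueueBrowser;
import javax.jms.Session;

// Sketch: count messages sitting on a queue without removing them.
static int countBacklog(ConnectionFactory factory, Queue queue) throws Exception {
    Connection connection = factory.createConnection();
    try {
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        QueueBrowser browser = session.createBrowser(queue);
        int count = 0;
        Enumeration<?> messages = browser.getEnumeration();
        while (messages.hasMoreElements()) {
            messages.nextElement();
            count++;
        }
        return count;
    } finally {
        connection.close();
    }
}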

Separate storage for ActiveMQ Scheduled/Delayed messages

Recently we had to drop all the KahaDB files in order to get a production ActiveMQ server up and running. We waited 45 minutes but nothing came up, so we were forced to drop all the data.
The pending messages were not an issue, but there were a number of scheduled jobs that required rescheduling.
Anticipating similar situations in the future, I was wondering whether it is possible to store that information in a different store, or maybe in a different KahaDB using mKahaDB?
Thanks
Carlos
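As far as I know, the scheduler (delayed/scheduled message) store is already kept apart from the message KahaDB, in a scheduler directory under the broker's data dir, and on an embedded broker you can point it at its own location. A sketch, assuming the BrokerService setters named below exist in your version and with an example path:

import java.io.File;
import org.apache.activemq.broker.BrokerService;

// Sketch only: keep the job-scheduler store on a separate path so dropping
// the message store (KahaDB) does not take the scheduled jobs with it.
BrokerService broker = new BrokerService();
broker.setSchedulerSupport(true);
broker.setSchedulerDirectoryFile(new File("/var/activemq/scheduler")); // ideally a separate disk/partition
broker.start();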

JMS queue is full

My Java EE application sends JMS messages to a queue continuously, but sometimes the consuming application stops receiving them. This makes the JMS queue very large, or even full, which brings down the server.
My server is JBoss or WebSphere. Do the application servers provide a strategy for removing "timed out" JMS messages?
What is the strategy for handling a large JMS queue? Thanks!
With any asynchronous messaging you must deal with the "fast producer/slow consumer" problem. There are a number of ways to deal with this.
1. Add consumers. With WebSphere MQ you can trigger a queue based on depth. Some shops use this to add new consumer instances as queue depth grows; then, as queue depth begins to decline, the extra consumers die off. In this way, consumers can be made to scale automatically to accommodate changing loads. Other brokers generally have similar functionality.
2. Make the queue and underlying file system really large. This method attempts to absorb peaks in workload entirely in the queue, which is, after all, what queuing was designed to do in the first place. The problem is that it doesn't scale well, and you must allocate disk that 99% of the time will be almost empty.
3. Expire old messages. If the messages have an expiry set, you can cause them to be cleaned up. Some JMS brokers will do this automatically, while on others you may need to browse the queue in order to cause the expired messages to be deleted. The problem with this is that not all messages lose their business value and become eligible for expiry. Most fire-and-forget messages (audit logs, etc.) fall into this category.
4. Throttle back the producer. When the queue fills, nothing can put new messages to it. In WebSphere MQ the producing application then receives a return code indicating that the queue is full. If the application distinguishes between fatal and transient errors, it can stop and retry.
The key to successfully implementing any of these is that your system be allowed to provide "soft" errors that the application will respond to. For example, many shops will raise the MAXDEPTH parameter of a queue the first time they get a QFULL condition. If the queue depth exceeds the size of the underlying file system the result is that instead of a "soft" error that impacts a single queue the file system fills and the entire node is affected. You are MUCH better off tuning the system so that the queue hits MAXDEPTH well before the file system fills but then also instrumenting the app or other processes to react to the full queue in some way.
But no matter what else you do, option #4 above is mandatory. No matter how much disk you allocate, how many consumer instances you deploy, or how quickly you expire messages, there is always a possibility that your consumer(s) won't keep up with message production. When this happens your producer app should throttle back, or raise an alarm and stop, or do anything other than hang or die. Asynchronous messaging is only asynchronous up to the point that you run out of space to queue messages. After that your apps are synchronous and must gracefully handle that situation, even if that means to (gracefully) shut down.
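As an illustration of option #4, the producer can treat a "queue full" error as transient and back off before retrying. A rough JMS sketch with made-up retry parameters; the exact exception you get for a full queue depends on the broker:

import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageProducer;

// Sketch: retry a send with backoff instead of hanging or dying when the
// broker reports a transient failure such as a full queue. Retry counts and
// delays here are arbitrary examples.
static void sendWithBackoff(MessageProducer producer, Message message) throws Exception {
    int maxAttempts = 5;
    long delayMillis = 1000;
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
        try {
            producer.send(message);
            return;
        } catch (JMSException transientError) {
            if (attempt == maxAttempts) {
                throw transientError; // give up: raise an alarm / stop producing
            }
            Thread.sleep(delayMillis);
            delayMillis *= 2; // exponential backoff
        }
    }
}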
Sure!
http://download.oracle.com/docs/cd/E17802_01/products/products/jms/javadoc-102a/index.html
The JMS expiration mechanism does exactly what you want: set a time-to-live with MessageProducer#setTimeToLive(long), and the provider fills in JMSExpiration when the message is sent.
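In code that means setting the time-to-live on the producer; a minimal sketch, with the session and producer assumed to be created from your existing connection and queue:

import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;

// Sketch: give messages a time-to-live so the broker can discard them once
// they expire instead of letting the queue grow without bound.
static void sendExpiring(Session session, MessageProducer producer, String text) throws JMSException {
    producer.setTimeToLive(60000L); // JMSExpiration = send time + 60s
    TextMessage message = session.createTextMessage(text);
    producer.send(message);
}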
