How can I get this external JMS client to terminate?

I'm working through the 'Simple Point-to-Point Example' section of the Sun JMS tutorial (sender source, receiver source), using Glassfish as my JMS provider. I've set up the QueueConnectionFactory and Queue in the Glassfish admin UI and added the relevant JARs to my classpath, and the receiver is receiving the messages sent by the sender.
However, neither sender nor receiver terminate. The main thread exits normally (after successfully calling queueConnection.close()) but two non-daemon threads are left hanging around:
iMQReadChannel-0
imqConnectionFlowControl-0
It seems (from this java.net thread) that the reason is that queueConnection.close() just returns the connection to the pool, rather than really closing it. I can't find any way to tell the pool to shut down, so the only option I'm left with is System.exit(), which feels wrong.
I've tried setting the minimum pool size to 0, the maximum pool size to 1 and the idle timeout to 10 seconds but it seems to make no difference. Even when I just lookup the connection factory and don't ask for a connection, these two threads are still started and don't terminate.
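For reference, my sender is essentially the tutorial code; a trimmed sketch (the JNDI names jms/MyQueueConnectionFactory and jms/MyQueue are just the ones I happened to create in the admin UI):

import javax.jms.*;
import javax.naming.InitialContext;

public class SimpleSender {
    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext();
        // JNDI names are the resources I created in the Glassfish admin UI
        QueueConnectionFactory factory =
                (QueueConnectionFactory) ctx.lookup("jms/MyQueueConnectionFactory");
        Queue queue = (Queue) ctx.lookup("jms/MyQueue");

        QueueConnection connection = factory.createQueueConnection();
        try {
            QueueSession session =
                    connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
            QueueSender sender = session.createSender(queue);
            sender.send(session.createTextMessage("Hello"));
        } finally {
            connection.close(); // main thread returns normally after this...
        }
        // ...but iMQReadChannel-0 and imqConnectionFlowControl-0 keep the JVM alive
    }
}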
Any help much appreciated!

Why don't you simply terminate with a System.exit(0)? Given the sample, the current behavior is correct (a Java program terminates when all non-daemon threads end).
Maybe you can get the samples to shut down properly by playing with the client library's properties (idle time, etc.), but it seems others (http://www.nabble.com/Simple-JMS-Client-doesn%27t-quit-td15662753.html) have run into the very same problem (and, anyway, I still don't understand what the point would be).
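To make the termination rule concrete (nothing JMS-specific here), a tiny illustration: a non-daemon thread keeps the JVM alive after main() returns, a daemon thread does not.

public class DaemonDemo {
    public static void main(String[] args) {
        Thread t = new Thread(new Runnable() {
            public void run() {
                try { Thread.sleep(60000); } catch (InterruptedException e) { }
            }
        });
        // with setDaemon(true) the JVM exits as soon as main returns;
        // remove it and the JVM waits the full minute, just like the iMQ* threads
        t.setDaemon(true);
        t.start();
    }
}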

Good news for us: it has been closed as "Will not fix".
http://java.net/jira/browse/GLASSFISH-1429?focusedCommentId=85555&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_85555

Related

ActiveMQ stops receiving messages after servers are left idle for a few hours

I've been browsing the forums for the last few days and tried almost everything I could find, but without any luck.
The situation is: inside our Java web application we have ActiveMQ 5.7 (I know it's very old; we will eventually upgrade to a newer version, but for various reasons that's not possible right now). We have only one broker and multiple consumers.
When I start the servers (I have tried with 2, 3, 4 and more servers), everything is OK. The servers are communicating with each other, and QUEUE messages are consumed instantly. But when I leave the servers idle (for example, to finally catch some sleep ;) ), that is no longer the case. Messages are stuck in the database and are not being consumed. The only way to get them delivered is to restart the server.
Part of my configuration (we keep it in a properties file; this is the actual state, though I have tried many different combinations):
BrokerServiceURI=broker:(tcp://0.0.0.0:{0})/{1}?persistent=true&useJmx=false&populateJMSXUserID=false&useShutdownHook=false&deleteAllMessagesOnStartup=false&enableStatistics=true
ConnectionFactoryURI=failover://({0})?initialReconnectDelay=100&timeout=6000
ConnectionFactoryServerURI=tcp://{0}:{1}?keepAlive=true&soTimeout=100&wireFormat.cacheEnabled=false&wireFormat.tightEncodingEnabled=false&wireFormat.maxInactivityDuration=0
BrokerService.startAsync=true
BrokerService.networkConnectorStartAsync=true
BrokerService.keepDurableSubsActive=false
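For reference, once the placeholders are substituted, the client side builds its connection factory roughly like this (the host and port below are made up):

import javax.jms.Connection;
import org.apache.activemq.ActiveMQConnectionFactory;

public class FactoryWiring {
    public static Connection connect() throws Exception {
        // {0}/{1} placeholders filled in with made-up values for illustration
        String server = "tcp://broker-host:61616?keepAlive=true&soTimeout=100"
                + "&wireFormat.cacheEnabled=false&wireFormat.tightEncodingEnabled=false"
                + "&wireFormat.maxInactivityDuration=0";
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                "failover://(" + server + ")?initialReconnectDelay=100&timeout=6000");
        Connection connection = factory.createConnection();
        connection.start();
        return connection;
    }
}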
Do you have a clue?
I can't tell you the exact reason from the description above, but I can list a few checks that are fresh in my mind. Please confirm whether the following apply to you:
Can you check the consumer connections?
Are the consumer sessions still active?
If all the consumer connections are up, then check the thread dump to see whether the active consumer threads (I'm assuming you created consumer threads; correct me if I'm wrong) are in RUNNING or WAITING state because of some other thread in the server. (This happened to me: all the consumers were active, but another thread held a lock on the Logger while posting a message to Slack, so the consumers sat in WAITING state.)
Check the dispatch queue size for each consumer, check the prefetch of each consumer, and then compare the dispatch queue size with the prefetch (a sketch of lowering the prefetch follows this list).
Is there a JMSXGroupID you are allotting to each message?
Can you tell a little more about your consumer/producer/broker configurations?
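On the prefetch point above: if a consumer is sitting on prefetched messages without dispatching them, lowering the prefetch is a common mitigation. A hedged sketch of setting it on the client side (the value 1 is only an example):

import javax.jms.Connection;
import org.apache.activemq.ActiveMQConnectionFactory;

public class LowPrefetchConsumer {
    public static Connection connect(String failoverUri) throws Exception {
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(failoverUri);
        // limit how many messages the broker pushes to each consumer ahead of acks;
        // with the default (1000) a stalled consumer can sit on a large backlog
        factory.getPrefetchPolicy().setQueuePrefetch(1);
        Connection connection = factory.createConnection();
        connection.start();
        return connection;
    }
}

ActiveMQ also supports a consumer.prefetchSize destination option on the queue name if changing the factory isn't convenient.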

My application gets stuck when the DB is down

I'm using Hibernate and Tomcat JDBC connection pool to use my MySQL DB.
When the DB is down for any reason, my application gets stuck.
For example, my REST resources (built with Jersey) stop serving requests.
I'm also using Quartz for scheduled tasks, and they aren't running either.
When I start my DB again, everything goes back to normal.
I don't even know where to start looking; does anyone have an idea?
Thanks
What must be happening is that your application is receiving requests but hitting an exception while establishing a database connection. Check the logs.
Try a flow where you are not doing any database operation; it should work fine.
When the application has hung, get a thread dump of the JVM; this will tell you the state of each thread, and rather than guessing at the cause you'll have concrete evidence.
Having said that, and going with the guess work approach, a lot comes down to how your connection pool is configured and the action your code takes when it receives the SQLException. If the application is totally hung, then my first port of call would be to find out if the db accessing threads are in a WAIT state, or even deadlocked. The thread dump will help you to determine if that is the case.
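As a concrete example of the kind of pool settings worth reviewing, here is a sketch using Tomcat's JDBC pool API; the values and connection details are illustrative, not recommendations:

import org.apache.tomcat.jdbc.pool.DataSource;
import org.apache.tomcat.jdbc.pool.PoolProperties;

public class PoolConfig {
    public static DataSource dataSource() {
        PoolProperties p = new PoolProperties();
        p.setUrl("jdbc:mysql://db-host:3306/mydb");    // illustrative URL
        p.setDriverClassName("com.mysql.jdbc.Driver");
        p.setUsername("app");
        p.setPassword("secret");
        p.setMaxActive(20);
        p.setMaxWait(10000);              // fail after 10s instead of blocking forever
        p.setValidationQuery("SELECT 1"); // detect a dead DB when borrowing
        p.setTestOnBorrow(true);
        p.setRemoveAbandoned(true);       // reclaim connections stuck in application code
        p.setRemoveAbandonedTimeout(60);
        DataSource ds = new DataSource();
        ds.setPoolProperties(p);
        return ds;
    }
}

The points most relevant to the hang described above are maxWait (so borrowers fail fast instead of blocking indefinitely) and validation (so dead connections are detected when handed out).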
See "kill -3 to get java thread dump" for how to take a thread dump.
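If sending kill -3 is awkward in your environment, the same information can be captured from inside the JVM with the standard management API; a minimal sketch:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadDumper {
    public static void dump() {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        // true, true = include locked monitors and locked synchronizers
        for (ThreadInfo info : mx.dumpAllThreads(true, true)) {
            System.out.print(info); // thread name, state (RUNNABLE/WAITING/BLOCKED) and stack
        }
    }
}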

How to kill a hung connection in the JVM from outside the JVM (Ubuntu box)?

I have an application that runs on JBoss 4.2.2 with JDK 1.6. The program has a bug: it doesn't set an HTTP connection timeout when it opens a connection. So when the third-party side has issues, the connection hangs forever, which makes the thread hang as well, and soon we run out of threads. However, due to the release cycle we can't put a fix in immediately. I was wondering whether there is a way to terminate a network connection from outside the JVM, so the thread can be released back to the thread pool. I potentially have a lot of connections open to the same third-party site, so it would be nice to figure out the problem connection and kill just that one.
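(For context, the fix we will eventually ship is simply to set timeouts where the connection is opened, roughly as below; until then we need something from outside the JVM. The class and URL here are made up.)

import java.net.HttpURLConnection;
import java.net.URL;

public class ThirdPartyCall {
    public static int call(String url) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setConnectTimeout(5000); // give up connecting after 5 seconds
        conn.setReadTimeout(10000);   // give up waiting for data after 10 seconds
        try {
            return conn.getResponseCode();
        } finally {
            conn.disconnect();
        }
    }
}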
Thanks,
While searching for a question of my own, I came across what seems to be a great tutorial on how to externally kill a thread.
http://www.rhcedan.com/2010/06/22/killing-a-java-thread/
You can grep the output of netstat and kill the connection using tcpkill, and run this using cron.
However this cannot be more than a very temporary solution.
This ServerFault Q & A may be relevant. It explains that tcpkill will only work if there is active traffic on the connection.
(This is because ... apparently ... tcpkill works by sending a TCP RESET packet. In order for this to work it needs to know the correct sequence number, and it can only figure this out by examining other packets for the session.)

Tomcat maxthreads, am I doing something wrong?

In the catalina.out I saw this message appearing in the log:
Maximum number of threads (200) created for connector with address null and port 80
Does this mean a process is hogging something or do I need to just increase the thread size?
After restarting Tomcat, the log was spammed with this message:
"SEVERE: The web application [/MyServlet] is still processing a request that has yet to finish. This is very likely to create a memory leak. You can control the time allowed for requests to finish by using the unloadDelay attribute of the standard Context implementation."
Is there a way to solve my situation?
Yes, it sounds like you've got some request handler which never completes. Each time it's invoked, it'll basically soak up another thread, until you've run out of threads in the pool.
You need to work out which request is failing to complete, and fix the code. If you can take a dump of the stacks of all threads, it's likely to become clear which requests are failing to complete.
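To make the failure mode concrete, a hypothetical handler like this (names invented) pins a worker thread for as long as the remote side stays silent; 200 such requests and the pool is exhausted:

import java.io.IOException;
import java.net.URL;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class SlowProxyServlet extends HttpServlet {
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // no connect/read timeout: if the backend never answers,
        // this Tomcat worker thread stays blocked here indefinitely
        int firstByte = new URL("http://backend.example.com/report")
                .openConnection().getInputStream().read();
        resp.getWriter().println(firstByte);
    }
}

In a thread dump, such requests show up as many HTTP worker threads parked in the same read call, which points you at the handler to fix.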

closing a socket channel asynchronously

I have a single-threaded non-blocking socket IO server written in Java using nio.
When I have finished writing to a connection, I want to close it.
Does the closing of the channel mean blocking until all buffered writes have been acknowledged by the recipient?
It would be useful to know if, when asynchronously closing, it succeeded or not, but I could live with any errors in the closing being ignored.
Is there any way to configure this, e.g. with setSoLinger() (and what settings would be appropriate?)
(A general discussion beyond Java, about Linux and other OSes in this respect, would be useful too.)
Closing in non-blocking mode is non-blocking.
You could put the channel into blocking mode, set a positive linger timeout, and close, and that would block for up to the linger timeout while the socket send buffer was being emptied, but alas Java doesn't throw an exception if the linger timeout expires, so you can't know whether all the data has gone. I reported this bug ten or more years ago and it came back 'will not fix' because of compatibility concerns. If you can wait until Java 7 comes out, I believe the NIO.2 stuff has this fixed (I certainly requested it), but who knows when that will be?
And even if you have all that, all you know is that the data was sent. You don't know anything about it being received or processed by the recipient application. If you need that you have to build it into your application protocol.
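If you do want the block-until-sent-or-linger-expires behaviour described above, the pre-NIO.2 version looks roughly like this (the linger value is illustrative, and per the caveat you still get no exception when it expires):

import java.nio.channels.SocketChannel;

public class LingeringClose {
    public static void close(SocketChannel channel) throws Exception {
        // cancel the channel's SelectionKey first, otherwise configureBlocking(true)
        // throws IllegalBlockingModeException while it is still registered
        channel.configureBlocking(true);        // switch back to blocking mode
        channel.socket().setSoLinger(true, 10); // let close() block for up to 10 seconds
        channel.close();                        // waits while the send buffer drains
    }
}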
I'm not sure what really happens but I know that close() includes flush() (except in PrintStream and PrintWriter...).
So my approach would be to add the connections to close to a queue and process that queue in a second thread (including error handling).
I understand that your server is single-threaded, but a second thread doesn't cost that much, the complexity of the problem is low, and the solution will be easy to understand and maintain.
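A sketch of that approach (class and method names are made up): the IO thread hands finished channels to a queue, and a small closer thread drains it and deals with errors.

import java.nio.channels.SocketChannel;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class ChannelCloser implements Runnable {
    private final BlockingQueue<SocketChannel> toClose =
            new LinkedBlockingQueue<SocketChannel>();

    // called from the single-threaded IO loop; never blocks it
    public void closeLater(SocketChannel channel) {
        toClose.add(channel);
    }

    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            try {
                SocketChannel channel = toClose.take(); // wait for work
                try {
                    channel.close();                    // may block; that's fine on this thread
                } catch (Exception e) {
                    // log and ignore: you said you can live with close errors being dropped
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }
}

Start it once with new Thread(closer, "channel-closer").start() and call closeLater() from the selector loop when you finish writing to a connection.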
