I have a distributed system application that uses JBoss as an application server. I have a client application that serves as a simulation engine. When a client starts up, it sends a registration message (a JMS message) to the server, and a field is set in the database. When the server starts up, it sends a message (on a topic) to all clients to check that they are alive. Clients that are alive read the message and send a response to the server (on a queue) saying that they are alive.
If the user closes the client normally, the client sends a message to the server saying "I will unregister." The server then unregisters it; this is done on the database side.
If the user closes the client abnormally (kills it), the client cannot send the unregistration message to the server, so the server does not know that the client is no longer alive. This causes inconsistency in my application. So I need a way to detect that a client subscribed to a topic is no longer subscribed.
The server sends a message to the topic to check that clients are alive:
@Schedule(hour = "*", minute = "*", second = "30", persistent = false)
public void sendNodeStatusRequest() {
    Message msg = MessageFactory.createStatusRequestMessage();
    publishNodeMessage(msg);
}
After a while, the server shows the following logs. Can I catch this warning from Java?
07:17:00,698 WARN [org.hornetq.core.protocol.core.impl.RemotingConnectionImpl] Connection failure has been detected: Did not receive ping from /127.0.0.1:61888. It is likely the client has exited or crashed without closing its connection, or the network between the server and client has failed. The connection will now be closed. [code=3]
07:17:00,698 WARN [org.hornetq.core.server.impl.ServerSessionImpl] Client connection failed, clearing up resources for session 4e4e9dc6-153e-11e7-80fa-742b62812c29
To me, the whole point of a messaging system is decoupled communication. The sender (the server, in your case) sends its messages to the topic without actually knowing who will receive them. Clients come and go, and they should be able to read a message for as long as it (still) resides in the topic.
Now, from your question I understand that the server keeps track of all the connected clients by receiving the message back on a dedicated queue.
So I'm asking myself: maybe something is wrong with the design here.
Let me propose a slightly different implementation.
The server should not be aware of any client; at most (because your system seems to work this way) it should know that clients A, B, and C are alive right now, and only because those clients passed that knowledge to the server.
Why not just have the clients send a "keep-alive" message to the server queue every, say, minute (or less, depending on your needs), without any prior message from the server?
The message can include some client identifier, and probably a timestamp if one isn't added by the infrastructure.
The server then just receives these messages and keeps an in-memory list of the available clients along with the last time each of them sent something.
If a client disconnects "gracefully", it can send a special message to the server like "I'm client A; consider me disconnected". Otherwise (abnormal termination, network outage, whatever), it just won't send anything, and the server will have a separate process that checks the list for stale clients; if it finds any, it knows something went wrong. A sketch of this bookkeeping follows.
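Something like this on the server side (a minimal sketch; the class name, method names, and the 90-second staleness threshold are all illustrative, not from your code):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ClientRegistry {

    // Three missed 30-second keep-alives = presumed dead (arbitrary choice).
    private static final long STALE_AFTER_MS = 90_000;

    private final Map<String, Long> lastSeen = new ConcurrentHashMap<>();

    // Called by the JMS listener for every keep-alive message.
    public void keepAlive(String clientId) {
        lastSeen.put(clientId, System.currentTimeMillis());
    }

    // Called when a client announces a graceful disconnect.
    public void unregister(String clientId) {
        lastSeen.remove(clientId);
    }

    // Run periodically (e.g. from an @Schedule method) to evict dead clients.
    public void evictStaleClients() {
        long cutoff = System.currentTimeMillis() - STALE_AFTER_MS;
        lastSeen.forEach((clientId, seenAt) -> {
            if (seenAt < cutoff && lastSeen.remove(clientId, seenAt)) {
                // Client went silent: unregister it in the database here.
                System.out.println("Client " + clientId + " presumed dead");
            }
        });
    }
}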
If you still want to stick with the JMS way of doing it, you can try sending the message synchronously, meaning the producer waits until it hears from the consumer. More information here: http://docs.oracle.com/javaee/6/tutorial/doc/bncfa.html
I'm using channel adapters (not gateways) to send data with MessagingTemplate's sendAndReceive from a Spring Integration server to a connected non-Spring client (or just telnet).
After the client receives the data, at some later point I want it to reply with data to the server and resolve that waiting sendAndReceive. I still want to be able to send other data to the server.
How will sendAndReceive detect a response? Right now I can send whatever I want to the server, and it treats it as a new incoming message.
Is there a predefined way, like prefixing a message ID, or do I have to implement it manually by interpreting the incoming messages and somehow "resolving" the sendAndReceive blocker?
MessagingTemplate.sendAndReceive is based on the TemporaryReplyChannel, which is placed in the MessageHeaders; afterwards, some AbstractReplyProducingMessageHandler just uses that header to send the reply back.
Yes, the sending thread is blocked waiting for the reply on that TemporaryReplyChannel.
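In miniature, the round trip looks something like this (a sketch of the mechanics, not the framework internals; `requestChannel` and the inbound `request` message are assumed to exist, and the cast works when the header holds an actual channel rather than a channel name):

import org.springframework.integration.core.MessagingTemplate;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.support.MessageBuilder;

// Caller side: blocks here until somebody sends to the TemporaryReplyChannel
// that sendAndReceive stashed in the outgoing message's headers.
MessagingTemplate template = new MessagingTemplate();
Message<?> reply = template.sendAndReceive(requestChannel,
        MessageBuilder.withPayload("ping").build());

// Handler side: "replying" just means sending to the channel found in the
// replyChannel header, which is exactly what unblocks the caller above.
MessageChannel replyChannel =
        (MessageChannel) request.getHeaders().getReplyChannel();
replyChannel.send(MessageBuilder.withPayload("pong").build());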
Hope that can help you a bit.
The rest of your comment, regarding TCP/IP, isn't clear to me yet...
The design of my current STOMP client process is as follows:
1. Open a STOMP connection (send a CONNECT frame)
2. Subscribe to a feed (send a SUBSCRIBE frame)
3. Loop to continually receive the feed:
while (true) {
    connection.begin("txt1");                  // start transaction "txt1"
    StompFrame message = connection.receive(); // blocks until a frame arrives
    System.out.println("message headers: " + message);
    LOG.info(message.getBody());
    connection.ack(message, "txt1");           // ack inside the transaction
    connection.commit("txt1");
}
My problem with this process is that I get
java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)...
and I think the cause of this is mostly that the feed I am subscribed to delivers information more slowly at certain times (I normally get this error when the weekend comes, on holidays, or in the evenings).
I have been reading up on this here, and I think it would help with my problem. However, I'm not sure how to incorporate it into the current layout of my STOMP client. Would I have to send a CONNECT header within Step 3?
I am currently using ActiveMQ to create my STOMP client, if that helps.
In the STOMP spec we have:
Regarding the heart-beats themselves, any new data received over the network connection is an indication that the remote end is alive. In a given direction, if heart-beats are expected every <n> milliseconds:
the sender MUST send new data over the network connection at least every <n> milliseconds
if the sender has no real STOMP frame to send, it MUST send a single newline byte (0x0A)
if, inside a time window of at least <n> milliseconds, the receiver did not receive any new data, it CAN consider the connection as dead
because of timing inaccuracies, the receiver SHOULD be tolerant and take into account an error margin
Would that mean my client would need to send a newline byte every n seconds?
The STOMP server you are connected to has timed out your connection due to inactivity.
Provided the server supports STOMP version 1.1 or newer, the easiest solution for your client is to include a heart-beat instruction in the header of your CONNECT frame, such as "0,10000". This tells the server that you cannot send heart-beats, but you want it to send one every 10 seconds. This way you don't need to implement them yourself, and the server will keep the connection active by sending them to you.
Of course, the server will have its own requirements of the client. In your comment it responds to your request with "1000,0". This indicates that it will send a heart-beat every 1000 milliseconds, and it expects you to send one every 0 milliseconds, 0 meaning none at all. So your job will be minimal.
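For example, with ActiveMQ's StompConnection you can hand-roll the CONNECT frame to carry the heart-beat header (a sketch; the host, port, and version header are illustrative, and you should check the CONNECTED frame the broker returns for the values it actually negotiated):

import org.apache.activemq.transport.stomp.StompConnection;

StompConnection connection = new StompConnection();
connection.open("localhost", 61613);
// "0,10000": we send no heart-beats, but we expect one from the server every
// 10000 ms, so an idle subscription no longer times the socket out.
connection.sendFrame("CONNECT\n"
        + "accept-version:1.1\n"
        + "heart-beat:0,10000\n"
        + "\n"
        + "\u0000");
String connected = connection.receiveFrame(); // should be a CONNECTED frame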
According to this config page on the ActiveMQ site, the connection.sendTimeout property is:
Time to wait on Message Sends for a Response, default value of zero indicates to wait forever. Waiting forever allows the broker to have flow control over messages coming from this client if it is a fast producer or there is no consumer such that the broker would run out of memory if it did not slow down the producer. Does not affect Stomp clients as the sends are ack'd by the broker. (Since ActiveMQ-CPP 2.2.1)
I'm having difficulty interpreting what this means (and what the sendTimeout property really is/what it does):
What is a "Message Sends" object?
Why would ActiveMQ be waiting for a response? Isn't it on the server-side of a JMS connection? Shouldn't it be waiting for a request?
What does it actually timeout? When should it be used?
Thanks in advance!
The timeout affects the send of a Message from the client to the Broker. When a send is not async, the client waits for the Broker to return a response indicating that the Message was received and added to the message store. In some cases this can block for a long time if the Broker has engaged producer flow control because one of its preset memory limits has been reached. If the client application can't tolerate a long wait on send, it can configure this timeout so that MessageProducer::send doesn't block indefinitely.
Messages are sent in synchronous mode either because the Connection was configured with alwaysSyncSend=true or because the MessageProducer is sending with the delivery mode set to Persistent.
In general this setting shouldn't be needed if you've configured your Broker with limits that match your use case.
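If you do need it, the Java client exposes the equivalent option (a minimal sketch; the broker URL and the 3000 ms value are illustrative, so check your client version's connection-configuration docs):

import org.apache.activemq.ActiveMQConnectionFactory;

// With a send timeout set, a producer blocked by broker flow control stops
// waiting after 3000 ms instead of blocking forever.
ActiveMQConnectionFactory factory =
        new ActiveMQConnectionFactory("tcp://localhost:61616");
factory.setSendTimeout(3000);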
I'm a learner, so please be patient and clear. I am writing an echo client with Java sockets (DatagramSocket).
After the client sends a message to the echo server, the server deliberately sends 1-10 copies of the message back to simulate message duplication in UDP.
However, my code only ever receives the first of those messages, never the full number sent by the server. My receive code looks like this:
socket.receive(receivePacket);
How would I put my client into a state where I can enter a string to echo, say "Hi", have it sent to the server, and then receive all the replies? I am assuming that they all make it back to the client (I am testing this on my local machine, so there will be no loss).
Call socket.receive again to receive additional packets, and set a timeout so the client waits a reasonable amount of time before deciding the server has sent all its packets.
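For example (a sketch, assuming `socket` is your client's DatagramSocket; DatagramPacket and SocketTimeoutException come from java.net, and the 2-second timeout and buffer size are arbitrary):

socket.setSoTimeout(2000); // stop waiting after 2 seconds of silence
byte[] buf = new byte[1024];
try {
    while (true) {
        DatagramPacket packet = new DatagramPacket(buf, buf.length);
        socket.receive(packet); // blocks until a packet arrives or we time out
        System.out.println("Echo reply: "
                + new String(packet.getData(), 0, packet.getLength()));
    }
} catch (SocketTimeoutException e) {
    // No packet within 2 seconds: assume the server has sent all its copies.
}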
I have a "server" application receiving messages from a JMS queue. And client applications which create a temp queue, and then send a message to the server, setting the JMSReplyTo header to the temp queue.
The server replies back to the client using the temp queue. However the server has a lot of replies back to the client all sent over the temp queue for a long period of time.(The replies are specific to that client, and are not interesting to anyone else)
How can my server detect if the client disconnected - so I can stop sending messages over that particular temp queue ? Or am I trying to do things with JMS I shouldn't ?
With ActiveMQ, you can cast your temporary queue to a Destination and then interrogate it, e.g.
if (dest.getConsumers().size() < 1) {
    // No more consumers on this destination, so kill it.
}
Or, from the destination, get the DestinationStatistics and then the queue depth from getMessages(); if it's greater than n, kill the temp queue.
Well, posting to that queue should fail, since the queue should no longer exist once the client is gone: a temporary queue is only supposed to exist while the session that created it exists.
So I don't see a need to be notified when the client is gone (which you can't do via JMS anyway), as the attempt to send a reply message will in fact indicate this.
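A sketch of that failure-driven cleanup (assuming `producer`, `session`, and the client's `replyTo` temp queue already exist; ActiveMQ typically surfaces a send to a deleted temporary destination as a JMSException on send, e.g. an InvalidDestinationException, though the exact type is broker-specific):

try {
    producer.send(replyTo, session.createTextMessage("next update"));
} catch (javax.jms.JMSException clientGone) {
    // The temp queue is gone, so the client is gone: stop sending on this
    // destination and clean up any per-client state.
}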