There is a cluster of Tomcats; each Tomcat node generates "tasks" which can be performed by any other node. I'd prefer each task to be performed by the node that created it.
I thought it would be a good idea to use an embedded broker for each Tomcat and configure it as a store-and-forward network. The problem is that a node can go down, and its tasks/messages should then be performed by another Tomcat instead of waiting for the original one to come back up.
On the other hand, when using a master/slave cluster, how do I prioritize the node that sent the message?
How can this be configured in ActiveMQ?
The higher priority of a local consumer should be the default behaviour. From the ActiveMQ docs:
ActiveMQ uses Consumer Priority so that local JMS consumers are always
higher priority than remote brokers in a store and forward network.
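As a rough sketch (not a definitive setup), an embedded broker wired into a store-and-forward network could look something like the following; the broker names, hosts, and ports are placeholders for your own topology, and setDecreaseNetworkConsumerPriority makes remote consumers rank below local ones:

    import org.apache.activemq.broker.BrokerService;
    import org.apache.activemq.network.NetworkConnector;

    public class EmbeddedBrokerNode {
        public static void main(String[] args) throws Exception {
            // Embedded broker running inside one Tomcat node
            BrokerService broker = new BrokerService();
            broker.setBrokerName("tomcat-node-1");
            broker.setPersistent(true);
            broker.addConnector("tcp://0.0.0.0:61616");

            // Store-and-forward link to the other Tomcat nodes
            NetworkConnector network = broker.addNetworkConnector(
                    "static:(tcp://tomcat-node-2:61616,tcp://tomcat-node-3:61616)");
            network.setDuplex(true);
            // Remote (networked) consumers get a lower priority than local ones
            network.setDecreaseNetworkConsumerPriority(true);

            broker.start();
        }
    }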
However, you will not really achieve what you want. If one Tomcat node goes down, so does its embedded ActiveMQ broker (along with any messages still stored on that instance). A message will not automatically get copied to all other brokers.
But you also ask about a master/slave cluster. Do you intend to have a network of brokers, a master/slave setup, or a combination of both?
We are connecting to Hazelcast cluster using Java clients from multiple nodes.
HazelcastClient.newHazelcastClient(cfg)
We need our EntryEvictedListener to be executed only once per cluster.
By default it is executed on all connected clients.
I found how to achieve this with embedded Hazelcast (Time Based Eviction in Hazelcast), but it looks like
map.addLocalEntryListener(...)
is not available for clients.
So is there any way to execute the eviction listener only once per cluster when using a client?
Unfortunately not. Your listener would need to run on a cluster node, since the local listener is tied directly to the underlying partitioning scheme. What do you want to do on the evict event? Maybe you can achieve it differently.
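For reference, a minimal sketch of the embedded-member variant, assuming Hazelcast 3.x (the map name and key/value types are placeholders): the local listener fires only for entries owned by that member, so each eviction is reported exactly once across the cluster.

    import com.hazelcast.config.Config;
    import com.hazelcast.core.EntryEvent;
    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.core.IMap;
    import com.hazelcast.map.listener.EntryEvictedListener;

    public class EvictionListenerNode {
        public static void main(String[] args) {
            // Full cluster member (not a client), so local listeners are available
            HazelcastInstance member = Hazelcast.newHazelcastInstance(new Config());
            IMap<String, String> map = member.getMap("sessions");

            // Only invoked for entries owned by this member
            map.addLocalEntryListener(new EntryEvictedListener<String, String>() {
                @Override
                public void entryEvicted(EntryEvent<String, String> event) {
                    System.out.println("Evicted locally: " + event.getKey());
                }
            });
        }
    }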
I want to set up multiple JMS nodes (brokers) which host multiple topics. Recently I discovered the failover feature (http://activemq.apache.org/failover-transport-reference.html#FailoverTransportReference-BrokersideOptionsforFailover), which allows consumers to be distributed among all the broker nodes and redirects them if the target node fails.
I'm new to JMS and to ActiveMQ, and perhaps my question will sound stupid, but anyway:
I wonder if ActiveMQ supports distributed topics from the producer's point of view, so that when a producer publishes a message it appears in the cluster rather than on a single cluster node (the one the producer publishes to). The reason I'm interested in this kind of feature is that I'm afraid that if this single node (where the producer publishes messages) fails, the producer will not be able to publish messages until the node is up again. It would be much more reliable if the producer could publish a message to the cluster (just as consumers use the failover feature), so that if the original topic holder node is down, the message is simply redirected to other broker nodes. I've been looking for examples but was unable to find any. Could anybody give a hint as to whether ActiveMQ supports this kind of feature? Thanks
Yes: you combine the failover: scheme to provide client-side reconnection, and use a network of brokers on the server side to distribute the messages to consumers on other nodes in the cluster.
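On the producer side, a minimal sketch might look like this (broker hosts and the topic name are placeholders): the failover: URI makes the client reconnect to another listed broker if the one it is attached to goes down, while the network of brokers forwards the published messages to consumers on other nodes.

    import javax.jms.Connection;
    import javax.jms.MessageProducer;
    import javax.jms.Session;

    import org.apache.activemq.ActiveMQConnectionFactory;

    public class FailoverPublisher {
        public static void main(String[] args) throws Exception {
            // Client reconnects to any of the listed brokers on failure
            ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                    "failover:(tcp://broker1:61616,tcp://broker2:61616)?randomize=true");

            Connection connection = factory.createConnection();
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(session.createTopic("events"));
            producer.send(session.createTextMessage("hello"));
            connection.close();
        }
    }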
I am going to implement a Kafka cluster consisting of 3 machines: one for ZooKeeper and the other 2 as brokers. I have about 6 consumer machines and about a hundred producers.
Now, if one of the brokers fails, data loss is avoided thanks to the replication feature. But what if ZooKeeper fails and the same machine cannot be started again? I have several questions:
1. I noticed that even after the ZooKeeper failure, producers continued to push messages to the designated broker, but the messages could no longer be retrieved by consumers, because the consumers got unregistered. So in this case, is data lost permanently?
2. How can the ZooKeeper IP be changed in the broker config at runtime? Will the brokers have to be shut down to change the ZooKeeper IP?
3. Even if a new ZooKeeper machine is somehow brought into the cluster, would the previous data be lost?
Running only one instance of ZooKeeper is not fault-tolerant, and the behavior cannot be predicted. According to the HBase reference, you should set up an ensemble with at least 3 servers.
Have a look at the official documentation page: ZooKeeper clustered setup.
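A rough sketch of such an ensemble, assuming three hypothetical hosts zk1, zk2, and zk3: every ZooKeeper server gets the same zoo.cfg (plus a myid file containing its own id), and the Kafka brokers list all three hosts in zookeeper.connect so they survive the loss of any single ZooKeeper node.

    # zoo.cfg (identical on all three ZooKeeper servers)
    tickTime=2000
    initLimit=5
    syncLimit=2
    dataDir=/var/lib/zookeeper
    clientPort=2181
    server.1=zk1:2888:3888
    server.2=zk2:2888:3888
    server.3=zk3:2888:3888

    # Kafka broker server.properties
    zookeeper.connect=zk1:2181,zk2:2181,zk3:2181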
I have a JMS implementation based on JBoss (to be precise, JBossMQ on JBoss 4.2). There are 5 clusters, each cluster having a few nodes. One node in each cluster acts as the master node. Out of the 5 clusters, one cluster is supposed to publish messages to a persistent topic, and the other 4 clusters consume those messages. The publishing and consuming is done only by the master node of each cluster.
I want to devise a mechanism where the publisher knows that a message was consumed by all the subscribers, or a subscriber knows that it has consumed all the messages produced by the publisher. How can this be achieved?
In principle, you use a JMS system precisely so that you don't have to care about that; you only configure it the way you need. You could save state information in a shared resource such as a database, but I wouldn't do it. It is better to use the monitoring features of the JMS system to track that. If your application really needs to know about the successful processing of a message, you could have a queue where processing acknowledgements go back to the sender.
For HornetQ, which you might use with JBoss, you'll find an example of a clustered topic here.
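The acknowledgement-queue idea could look roughly like this in plain JMS (the queue name and subscriber identifier are made up for the example): after processing a topic message, each subscribing master node sends a small confirmation to a queue that the publisher listens on, correlated by the original message ID.

    import javax.jms.JMSException;
    import javax.jms.Message;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.jms.TextMessage;

    public class AckingSubscriber {
        // Called by a subscriber after it has successfully processed a topic message
        public void acknowledgeProcessing(Session session, Message processed) throws JMSException {
            Queue ackQueue = session.createQueue("processing.acks");
            MessageProducer ackProducer = session.createProducer(ackQueue);

            TextMessage ack = session.createTextMessage("processed");
            // Lets the publisher match the ack to the original message
            ack.setJMSCorrelationID(processed.getJMSMessageID());
            ack.setStringProperty("subscriber", "cluster-2-master");
            ackProducer.send(ack);
            ackProducer.close();
        }
    }

The publisher can then count acknowledgements per correlation ID until all four consuming clusters have reported back.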
I'm facing a design issue in which I would like to have only one JMS producer sending messages to two consumers. There are only two servers, and the producer will start generating messages that will be load balanced (with round robin) to both consumers.
In the hypothetical case of one server failing, I do have a mechanism so that a new producer will be activated on the remaining server. But what will happen to the messages that were being processed on the server that went down?
Will they be reassigned to the remaining server, and thus be processed by the remaining consumer, or will they be lost?
If the latter is true, there will be another problem. The producer creates messages based on files in a NAS, so when a server goes down, the newly activated producer will start creating messages based on the contents of the NAS, which may duplicate messages (but that case is handled). The problem is that if the server that goes down is not the server with the active producer, then when that server comes back up it will have no messages to consume, and no messages will replace the ones that were lost.
How can I achieve a design so that no messages are lost?
Note: When one server goes down, the journal and bindings are lost.
Once a message is transferred to a particular node, it belongs to that node.
If a node goes down, you would have to reactivate that node with its journal, and the message state would be recovered from disk. You could eventually have messages being redistributed if a node no longer has consumers (that will depend on the redistribution configuration, of course).
Or the best approach would be to have a backup node for each node.
We have been advising the use of collocated topologies, where each VM has an active instance and a backup instance for the other server. That way, each live server also carries a backup configuration. That's being improved in 2.4.0 as we speak, since it requires a lot of manual configuration at the moment.
So, in summary, either:
- restart the node with its journal, or
- configure backup nodes (see the sketch below).
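As a rough sketch of the backup-node option (element values and paths are placeholders, and the exact settings depend on your HornetQ version and on whether you use shared-store or replication), the backup server's hornetq-configuration.xml would mark it as a backup pointing at the same shared journal as the live server:

    <!-- hornetq-configuration.xml on the backup server (shared-store pair) -->
    <configuration xmlns="urn:hornetq">
       <backup>true</backup>
       <shared-store>true</shared-store>
       <!-- must point at the same shared storage the live server uses -->
       <journal-directory>/shared/journal</journal-directory>
       <bindings-directory>/shared/bindings</bindings-directory>
    </configuration>

With that pair in place, the backup takes over the journal when the live server dies, instead of the messages waiting on a node that is down.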