I'm using Qpid Proton (proton-j-0.13.0) to send messages over AMQP to an ActiveMQ 5.12.0 queue. On a development machine, where ActiveMQ and the Java program run on the same machine, this is working fine. On a test environment, where ActiveMQ is running on a separate server, we see the send() method hang in 15 to 20 percent of the cases. The CPU also remains around 100% while the send() method hangs. When the send() succeeds, it completes within 0.1 seconds.
Statements to perform a send are similar to this:
final Messenger messenger = Messenger.Factory.create();
messenger.start();
messenger.put(message); // one message of 1 KByte
messenger.send(1);
messenger.stop();
I'm aware Messenger.send(int n) is a blocking method. However, I don't know why it would block my calls. I can add a timeout and try to resend the message, but that's a workaround instead of a proper solution.
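For what it's worth, a minimal sketch of that timeout-and-retry workaround (assuming Messenger.setTimeout() applies to blocking calls and that a timed-out send() raises Proton's TimeoutException; the 5-second timeout and the retry count are arbitrary):

final Messenger messenger = Messenger.Factory.create();
messenger.setTimeout(5000); // fail blocking calls after 5 seconds instead of hanging
messenger.start();
messenger.put(message);
for (int attempt = 1; attempt <= 3; attempt++) { // arbitrary retry count
    try {
        messenger.send(1);
        break; // send succeeded
    } catch (TimeoutException e) {
        LOGGER.warn("send() timed out on attempt " + attempt, e);
    }
}
messenger.stop();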
Statements to receive the sent messages from ActiveMQ are similar to this:
this.messenger = Messenger.Factory.create();
this.messenger.start();
this.messenger.subscribe(this.address);
while (this.isRunning) {
    try {
        this.messenger.recv(1);
        while (this.messenger.incoming() > 0) {
            final Message message = this.messenger.get();
            this.messageListener.onMessage(message);
        }
    } catch (final Exception e) {
        LOGGER.error("Exception while receiving messages", e);
    }
}
Am I missing something simple, being a Qpid newbie? Could this be configuration in ActiveMQ? Is it normal to add a timeout and retry? Any help to resolve this would be appreciated.
I have a problem when reading messages from multiple JMS queues in a single transaction using the WebLogic JMS client (wlthin3client.jar) from WebLogic 11g (WebLogic Server 10.3.6.0). I am trying to first read one message from queue Q1 and then, if this message satisfies some requirements, read another message (if available at that time) from queue Q2.
I expect that after committing the transaction both messages should disappear from Q1 and Q2. In case of rollback - messages should remain in both Q1 and Q2.
My first approach was to use an asynchronous queue receiver to read from Q1 and then synchronously read from Q2 when it is needed:
void run() throws JMSException, NamingException {
    QueueConnectionFactory cf = (QueueConnectionFactory) ctx.lookup(connectionFactory);
    // create connection and session
    conn = cf.createQueueConnection();
    session = conn.createQueueSession(true, Session.SESSION_TRANSACTED);
    Queue q1 = (Queue) ctx.lookup(queue1);
    // setup async receiver for Q1
    QueueReceiver q1Receiver = session.createReceiver(q1);
    q1Receiver.setMessageListener(this);
    conn.start();
    // ...
    // after messages are processed
    conn.close();
}
@Override
public void onMessage(Message q1msg) {
    QueueReceiver q2receiver = null;
    try {
        q2receiver = session.createReceiver(queue2);
        if (shouldReadFromQ2(q1msg)) {
            // synchronous receive from Q2
            Message q2msg = q2receiver.receiveNoWait();
            process(q2msg);
        }
        session.commit();
    } catch (JMSException e) {
        e.printStackTrace();
    } finally {
        try {
            if (q2receiver != null) {
                q2receiver.close();
            }
        } catch (JMSException e) {
            e.printStackTrace();
        }
    }
}
Unfortunately, even though I issue a session.commit(), the message from Q1 remains uncommitted. It is in the receive state until the connection or receiver is closed. Then it seems to be rolled back, as it gets the delayed state.
Other observations:
The Q1 message is correctly committed if Q2 is empty and there is nothing to read from it.
The problem does not occur when I use the synchronous API in a similar, nested way for both Q1 and Q2. So if I use q1Receiver.receiveNoWait() everything is fine.
If I use the asynchronous API in a similar, nested way for Q1 and Q2, then only the Q1 message listener is called and the commit works on Q1. But the Q2 message listener is not called at all and Q2 is not committed (the message is stuck in receive / delayed).
Am I misusing the API somehow? Or is this a WLS JMS bug? How can I combine reading from multiple queues with the asynchronous API?
It turns out this is a WLS JMS bug, 28637420.
The bug status says it is fixed, but I wouldn't rely on this - a WLS 11g patch with this fix doesn't work (see bug 29177370).
Oracle says that this happens because two different delivery mechanisms (synchronous messages vs asynchronous messages) were not designed to work together on the same session.
The simplest way to work around the problem is to just use the synchronous API (polling) for cases where you need to work on multiple queues in a single session. I decided on this approach.
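A minimal sketch of that polling approach (untested; it reuses the session, connection, and the shouldReadFromQ2 / process helpers from the question, and `running` is a hypothetical shutdown flag):

QueueReceiver q1Receiver = session.createReceiver(q1);
QueueReceiver q2Receiver = session.createReceiver((Queue) ctx.lookup(queue2));
conn.start();
while (running) {
    Message q1msg = q1Receiver.receiveNoWait();
    if (q1msg != null) {
        if (shouldReadFromQ2(q1msg)) {
            process(q2Receiver.receiveNoWait()); // may be null if Q2 is empty
        }
        session.commit(); // commits both receives in one transaction
    }
}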
Another option suggested by Oracle is to use UserTransactions with two different sessions: one session for the async consumer and another session for the synchronous consumer. I didn't test that, though.
I am working on implementing Akka Alpakka for consuming from and producing to ActiveMQ queues, in Java. I can consume from the queue successfully, but I haven't yet been able to implement application-level message acknowledgement.
My goal is to consume messages from a queue and send them to another actor for processing. When that actor has completed processing, I want it to be able to control the acknowledgement of the message in ActiveMQ. Presumably this would be done by sending a message to another actor that can do the acknowledgement, calling an acknowledge function on the message itself, or some other way.
In my test, 2 messages are put into the AlpakkaTest queue, and then this code attempts to consume and acknowledge them. However, I don't see a way to set the ActiveMQ session to CLIENT_ACKNOWLEDGE, and I don't see any difference in behavior with or without the call to m.acknowledge(). Because of this, I think messages are still being auto-acknowledged.
Does anyone know the accepted way to configure ActiveMQ sessions for CLIENT_ACKNOWLEDGE and manually acknowledge ActiveMQ messages in Java Akka systems using Alpakka?
The relevant test function is:
ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory("tcp://0.0.0.0:2999"); // An embedded broker running in the test.
Source<Message, NotUsed> jmsSource = JmsSource.create(
    JmsSourceSettings.create(connectionFactory)
        .withQueue("AlpakkaTest")
        .withBufferSize(2)
);
Materializer materializer = ActorMaterializer.create(system); // `system` is an ActorSystem passed to the function.
try {
    List<Message> messages = jmsSource
        .take(2)
        .runWith(Sink.seq(), materializer)
        .toCompletableFuture().get(4, TimeUnit.SECONDS);
    for (Message m : messages) {
        System.out.println("Found Message ID: " + m.getJMSMessageID());
        try {
            m.acknowledge();
        } catch (JMSException jmsException) {
            System.out.println("Acknowledgement Failed for Message ID: " + m.getJMSMessageID() + " (" + jmsException.getLocalizedMessage() + ")");
        }
    }
} catch (InterruptedException | ExecutionException | TimeoutException | JMSException e) {
    e.printStackTrace();
}
This code prints:
Found Message ID: ID:jmstest-43178-1503343061195-1:26:1:1:1
Found Message ID: ID:jmstest-43178-1503343061195-1:27:1:1:1
Update: The acknowledgement mode is configurable in the JMS connector since Alpakka 0.15. From the linked documentation:
Source<Message, NotUsed> jmsSource = JmsSource.create(JmsSourceSettings
    .create(connectionFactory)
    .withQueue("test")
    .withAcknowledgeMode(AcknowledgeMode.ClientAcknowledge())
);

CompletionStage<List<String>> result = jmsSource
    .take(msgsIn.size())
    .map(message -> {
        String text = ((ActiveMQTextMessage) message).getText();
        message.acknowledge();
        return text;
    })
    .runWith(Sink.seq(), materializer);
As of version 0.11, Alpakka's JMS connector does not support application-level message acknowledgment. Alpakka internally creates a Session with the CLIENT_ACKNOWLEDGE mode here and acknowledges each message here in the internal MessageListener. The API does not expose these settings for overriding.
There is an open ticket that discusses enabling downstream acknowledgement of queue-based sources, but that ticket has been inactive for a while.
Currently you cannot prevent Alpakka from acknowledging the messages at the JMS level. However, that doesn't preclude you from adding a stage to your stream that sends each message to an actor for processing and uses the actor's replies as backpressure signals. The Akka Streams documentation describes how to do this with either a combination of mapAsync and ask or with Sink.actorRefWithAck. For example, to use the former:
Timeout askTimeout = Timeout.apply(4, TimeUnit.SECONDS);
jmsSource
    .mapAsync(2, msg -> ask(processorActor, msg, askTimeout))
    .runWith(Sink.seq(), materializer);
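For completeness, a rough sketch of what the processorActor behind that ask could look like (hypothetical; all it has to do is reply to each message so mapAsync's future completes and demand flows upstream):

static class ProcessorActor extends AbstractActor {
    @Override
    public Receive createReceive() {
        return receiveBuilder()
            .match(Message.class, msg -> {
                // do the application-level processing here, then reply
                // so the stream stage can proceed to the next message
                getSender().tell("processed", getSelf());
            })
            .build();
    }
}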
(Side note: In the related Streamz project, there is a recently opened ticket to allow application-level acknowledgement. Streamz is the replacement for the old akka-camel module and, like Alpakka, is built on Akka Streams. Streamz also has a Java API and is listed in the Alpakka documentation as an external connector.)
Looking at the source code for the Alpakka JmsSourceStage, it already acknowledges each incoming message for you (and its session is a CLIENT_ACKNOWLEDGE session). From what I can tell from the source, there is no mode that allows you to do the acknowledgement of messages yourself.
You can view the source code for Alpakka here.
I am using durable subscriptions with RabbitMQ STOMP (documentation here). As per the documentation, when a client reconnects (subscribes) with the same id, it should get all the queued-up messages. However, I am not able to get anything back, even though the messages are queued up on the server side. Below is the code that I am using:
RabbitMQ Version : 3.6.0
Client code:
var sock;
var stomp;
var messageCount = 0;

var stompConnect = function() {
    sock = new SockJS(options.url);
    stomp = Stomp.over(sock);
    stomp.connect({}, function(frame) {
        debug('Connected: ', frame);
        console.log(frame);
        var id = stomp.subscribe('<url>' + options.source + "." + options.type + "." + options.id, function(d) {
            console.log(messageCount);
            messageCount = messageCount + 1;
        }, {'auto-delete' : false, 'persistent' : true, 'id' : 'unique_id', 'ack' : 'client'});
    }, function(err) {
        console.log(err);
        debug('error', err, err.stack);
        setTimeout(stompConnect, 10);
    });
};
Server Code:
public class WebSocketConfig extends AbstractWebSocketMessageBrokerConfigurer {

    @Override
    public void configureMessageBroker(final MessageBrokerRegistry config) {
        config.enableStompBrokerRelay("<endpoint>", "<endpoint>").setRelayHost(host)
            .setSystemLogin(username).setSystemPasscode(password).setClientLogin(username)
            .setClientPasscode(password);
    }

    @Override
    public void registerStompEndpoints(final StompEndpointRegistry registry) {
        registry.addEndpoint("<endpoint>").setAllowedOrigins("*").withSockJS();
    }
}
Steps I am executing:
Run the script on the client side; it sends a subscribe request.
A queue gets created on the server side (with a name like stomp-subscription-*), all the messages are pushed into the queue, and the client is able to stream them.
Kill the script; this results in disconnection. Server logs show that the client is disconnected and messages start getting queued up.
Run the script again with the same id. It somehow manages to connect to the server; however, no message is returned from the server. The message count on that queue remains the same (also, the RabbitMQ admin console doesn't show any consumer for that queue).
After 10 seconds, the connection gets dropped and the following gets printed in the client logs:
Whoops! Lost connection to < url >
The server also shows the same messages (i.e. client disconnected). As shown in the client code, it tries to re-establish the connection after 10 seconds, and then the same cycle repeats.
I have tried the following things:
Removed the 'ack' : 'client' header. This results in all the messages getting drained out of the queue; however, none reaches the client. I added this header after going through this SO answer.
Added d.ack(); in the subscription callback, before incrementing messageCount. This results in an error on the server side, as it tries to ack the message after the session is closed (due to disconnection).
Also, in some cases, when I reconnect and the number of queued-up messages is less than 100, I am able to get all the messages. However, once it crosses 100, nothing happens (not sure whether this has anything to do with the problem).
I don't know whether the problem exists on the server or the client end. Any inputs?
Finally, I was able to find (and fix) the issue. We are using nginx as a proxy, and it had proxy_buffering set to on (the default value); have a look at the documentation here.
This is what it says:
When buffering is enabled, nginx receives a response from the proxied server as soon as possible, saving it into the buffers set by the proxy_buffer_size and proxy_buffers directives.
Due to this, the messages were getting buffered (delayed), causing the disconnection. We tried bypassing nginx and it worked fine; we then disabled proxy buffering, and it seems to be working fine now, even with the nginx proxy.
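For reference, the change boils down to something like this in the nginx config (the location and upstream names here are made up; adjust to your setup):

location /stomp/ {
    proxy_pass http://backend;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_buffering off;   # do not buffer (delay) proxied responses
}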
I am getting started with the Finagle library in Java and trying to bring up a couple of basic HTTP services which communicate in JSON.
Call these the master and slave services.
Master service has the following logic:
It runs a thread on start up which sends command requests to the slave
It listens for error/success reports from slave
The slave server's logic is this:
For each command it receives, it immediately sends an ack.
Then it starts a thread to perform a task specified by the command.
It sends the result of the job (or error) back to master in JSON.
I have the following code:
HttpMuxer muxService = new HttpMuxer().withHandler("/", new MasterService());
ListeningServer server = Http.serve(new InetSocketAddress("localhost", 8000), muxService);

// This method runs a Function0 closure in a Future pool.
// It sends requests to the slave, and exits after the commands are sent.
sendCommands();

try {
    Await.ready(server);
} catch (TimeoutException | InterruptedException e) {
    e.printStackTrace();
}
My question is this:
Now, ideally the job could take a few seconds to complete. But in case of errors, the slave could almost immediately send a message to the master with an error report. Once I call sendCommands(), I can expect the slave to attempt contacting the master at any moment.
Is the server up and listening with just the call to Http.serve()? Or does this happen in the Await.ready() call?
I am assuming the latter, and putting a Thread.sleep() in the thread spawned by sendCommands(). Is this required?
Also, is there an altogether better way to cleanly start this command-issuing thread at the master?
Okay, I figured this out after some testing.
The call to Http.serve() starts the server on the given port; incoming requests are correctly handled by the MasterService.
The Await.ready() call just keeps the main thread alive. The server will respond even if you replace it with a Thread.sleep(n).
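In other words, annotating the code from the question (no Thread.sleep() is needed before sendCommands()):

ListeningServer server = Http.serve(new InetSocketAddress("localhost", 8000), muxService);
// The server is already accepting connections here, on Finagle's own threads,
// so the slave can reach the master before Await.ready() is called.
sendCommands();
// Await.ready() merely parks the main thread so the JVM doesn't exit;
// it is not what makes the server start listening.
Await.ready(server);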
So I've made a client/server pair in Java using RMI. The point is to send messages from the client to the server and print out how many messages we've received and how many we've lost (I'm comparing it to UDP, so don't ask why I'm expecting to lose any with RMI).
So anyway, I've written the code and everything seems to work fine: I bind to the server and send the messages, and the server receives them all and outputs the results.
The only problem is, after I've sent the last message from the client, it throws a RemoteException, and I have no idea why.
My only guess is that the server has shut down first, so it's letting me know I can't contact the server any more (i.e. my iRMIServer variable is now invalid).
The funny thing is, I thought the client would shut down first, because it terminates after sending the messages. The reason it might not be shutting down first is that it has to wait for a reply (ACK) from the server to confirm receipt of the message?
Maybe in this window, while the server is replying to let us know everything is OK, the server shuts down and we can't connect to it again.
The code for sending the messages at the client end is as follows:
try {
    iRMIServer = (RMIServerI) Naming.lookup(urlServer);
    // Attempt to send messages the specified number of times
    for (int i = 0; i < numMessages; i++) {
        MessageInfo msg = new MessageInfo(numMessages, i);
        iRMIServer.receiveMessage(msg);
    }
} catch (MalformedURLException e) {
    System.out.println("Error: Malformed hostname.");
} catch (RemoteException e) {
    System.out.println("Error: Remote Exception.");
} catch (NotBoundException e) {
    System.out.println("Error: Not Bound Exception.");
}
So it is sending the messages from 0-999 if I select 1000 messages to be sent.
After printing out the results, the server calls System.exit() straight away, which could cause it to terminate early without completing the appropriate responses to the client?
If you can help I'd be greatly appreciative, and if you need any more info I'd be happy to provide.
Thanks in advance.
You can't shut down the server in the middle of a remote method. The server has to send back an OK or exception status, or a return value if the method has one, and that's what your client is failing on when trying to receive it. You have to schedule the shutdown to execute a bit later.
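A minimal sketch of deferring the shutdown on the server side (the one-second delay is arbitrary, just enough for the in-flight reply to reach the client):

// Instead of calling System.exit() inside the remote method,
// let the call return first and exit shortly afterwards.
new java.util.Timer().schedule(new java.util.TimerTask() {
    @Override
    public void run() {
        System.exit(0);
    }
}, 1000);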