Check message source in JMS - WebLogic 12c - Java

In our application we have configured a JMS queue in WebLogic, and we send messages as BytesMessage after converting the object to a byte array.
We read messages from the queue asynchronously using the onMessage method.
Since we send the messages as bytes messages, on the receiving end we cast the Message object to BytesMessage. But for some of the messages we get a ClassCastException.
We are not able to find out where these messages are coming from or how to stop them. We also set the JMSType header to identify the messages sent by us, but for these messages JMSType comes back as null.
Does anyone have any idea how to fix this?
Exception:
weblogic.jms.common.ObjectMessageImpl cannot be cast to javax.jms.BytesMessage
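Not an answer to where the rogue messages originate, but a defensive listener sketch (class name and logging choices are illustrative, not from the original code) that avoids the ClassCastException and records the headers that can help trace the producer, such as the concrete message class, JMSType, and destination:
import javax.jms.BytesMessage;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;

public class SafeBytesMessageListener implements MessageListener {

    @Override
    public void onMessage(Message message) {
        try {
            if (message instanceof BytesMessage) {
                BytesMessage bytesMessage = (BytesMessage) message;
                byte[] payload = new byte[(int) bytesMessage.getBodyLength()];
                bytesMessage.readBytes(payload);
                // ... deserialize payload into the application object ...
            } else {
                // Unexpected message type: log the headers that may identify the producer.
                System.err.println("Skipping non-BytesMessage: class=" + message.getClass().getName()
                        + ", JMSType=" + message.getJMSType()
                        + ", destination=" + message.getJMSDestination()
                        + ", messageId=" + message.getJMSMessageID());
            }
        } catch (JMSException e) {
            e.printStackTrace();
        }
    }
}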

Related

How to handle incoming 'messageMediaPoll' messages?

Hi, I am writing a Telegram client using https://github.com/rubenlagus/TelegramApi that listens to incoming messages, but I noticed that some relevant messages carried in 'Polls' could not be read.
After some debugging, it appears that when such a message is received, the incoming 'MessageMedia' part of the deserialized TLMessage is mapped to messageMediaUnsupported#9f84f49e.
According to the documentation, this means it is 'not supported by current client version'.
Indeed, I could see no implementation for the messageMediaPoll message media in the org.telegram.api.message.media package, and I could add one. But how do I get the server to consider my client as valid for receiving such media?
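There is no answer here to the server-side part of the question, but a small guard like the one below at least flags unsupported media instead of processing it blindly. The class names and the getMedia()/getClassId() accessors follow the TelegramApi library's usual TL naming and are assumptions on my part; the constructor id is the one quoted above:
import org.telegram.api.message.TLMessage;
import org.telegram.api.message.media.TLAbsMessageMedia;

public class MediaGuard {

    // Constructor id quoted above for messageMediaUnsupported.
    private static final int MESSAGE_MEDIA_UNSUPPORTED = 0x9f84f49e;

    // Returns true when the media could not be deserialized by the current
    // layer (e.g. a poll arriving at a client that does not implement it).
    public static boolean isUnsupportedMedia(TLMessage message) {
        TLAbsMessageMedia media = message.getMedia();
        return media != null && media.getClassId() == MESSAGE_MEDIA_UNSUPPORTED;
    }
}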

i.grpc.internal.AbstractClientStream - Received data on closed stream meaning

I have a Spring Boot application (v2.2.10.RELEASE) that subscribes to multiple Pub/Sub topics, pulls messages asynchronously, and sends them somewhere else. I am not using Spring Cloud GCP, just the native Google client libraries.
This is my subscriber setup:
// Instantiate an asynchronous message receiver.
MessageReceiver receiver =
    (PubsubMessage message, AckReplyConsumer consumer) -> {
      messages.add(message);
      consumer.ack();
    };

Subscriber subscriber = Subscriber.newBuilder(subscriptionName, receiver)
    .setParallelPullCount(2)
    .setFlowControlSettings(flowControlSettings)
    .setCredentialsProvider(credentialsProvider)
    .setExecutorProvider(executorProvider)
    //.setChannelProvider()
    .build();
With high traffic and big messages (2-4 KB) I encounter this INFO message:
[grpc-default-worker-ELG-1-1] INFO i.grpc.internal.AbstractClientStream - Received data on closed stream
First of all, I don't fully understand what that means. All I noticed was that when this happens, the number of duplicate deliveries increases. So I assumed it means that Pub/Sub tried to reach the subscriber with some messages but the subscriber, for some reason, was not ready, so Pub/Sub tries to deliver the messages again, and hence more duplicates. Is that right?
Would this problem be solved by using a TransportChannelProvider in the subscribers? My understanding of the (poorly written) documentation is that this will create a new channel for delivery when the current in-use channel is closed, and hence get rid of the previous log message.
If yes, how do I define the channel target string? And where can I find a NameResolver-compliant URI for the ManagedChannel? The snippet I mean is this:
private TransportChannelProvider getChannelProvider() {
    // usePlaintext() replaces the deprecated usePlaintext(true) variant.
    ManagedChannel channel = ManagedChannelBuilder.forTarget(target).usePlaintext().build();
    return FixedTransportChannelProvider.create(GrpcTransportChannel.create(channel));
}
I am pretty new to GCP, so sorry if my question is not coherent enough.
Using a custom TransportChannelProvider won't solve this type of issue. This is more likely an issue deeper down in the stack, e.g., at the gRPC level. There have been some open issues for this type of error [1, 2].
With regard to why it is causing duplicates, it is possible that the messages are getting delivered via a stream that is already closed (which aligns with the error message) because they were trapped in a lower-level buffer at the gRPC layer and therefore ended up being duplicates of messages that were subsequently delivered and processed via another stream. This could be a version of the issue discussed in the documentation around large backlogs of small messages. There was a fix for this issue in v1.109.0 of the Java client library, so if you are using a version older than that, it is worth updating.
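For example, with Maven that could mean bumping the Pub/Sub client to at least that release (a sketch; substitute whatever newer patch version fits your dependency management):
<dependency>
  <groupId>com.google.cloud</groupId>
  <artifactId>google-cloud-pubsub</artifactId>
  <version>1.109.0</version>
</dependency>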
If duplicates continue to be an issue, it would be best to reach out to support with the name of your subscription and the message IDs of some of the duplicate messages so that they can look at the delivery patterns for those messages and further diagnose if these redeliveries are unexpected.

AmazonMQ/ActiveMQ Message RedeliveryPolicy MaximumRedeliveries Ignored

I have configured the ActiveMQ redelivery plugin as follows (with a maximum of 4 redeliveries):
<redeliveryPlugin fallbackToDeadLetter="true" sendToDlqIfMaxRetriesExceeded="true">
  <redeliveryPolicyMap>
    <redeliveryPolicyMap>
      <defaultEntry>
        <redeliveryPolicy initialRedeliveryDelay="5000" maximumRedeliveries="4" redeliveryDelay="10000"/>
      </defaultEntry>
    </redeliveryPolicyMap>
  </redeliveryPolicyMap>
</redeliveryPlugin>
If a client fails to send an ACK, the message is redelivered. So far so good... However, the maximum redeliveries setting is completely ignored by the broker, and it keeps redelivering the messages "infinitely" many times. The message is also never moved to the DLQ.
I also tried using ?jms.redeliveryPolicy.maximumRedeliveries=4 on the connection URI (STOMP connector), but with no luck.
Any help is most appreciated!
For a STOMP client, I would assume that the broker will not consider the message as having been delivered unless the client either ACKs or NACKs it; otherwise it must assume that the message never made it to a client and therefore treat it as always having a delivery count of zero. The broker redelivery plugin keys off the message's delivery count, so if the message is treated as not having been delivered, which in this case it likely is, the plugin will take no action on it.
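For comparison, the jms.redeliveryPolicy options tried on the connection URI correspond to the client-side RedeliveryPolicy of the JMS (OpenWire) client, roughly as sketched below with the same values as the XML above; a STOMP consumer never goes through this client-side path, which is consistent with the behaviour described:
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.RedeliveryPolicy;

public class RedeliveryPolicyExample {

    public static ActiveMQConnectionFactory createFactory(String brokerUrl) {
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(brokerUrl);

        // Client-side redelivery settings, mirroring the broker plugin values above.
        RedeliveryPolicy policy = factory.getRedeliveryPolicy();
        policy.setInitialRedeliveryDelay(5000);
        policy.setRedeliveryDelay(10000);
        policy.setUseExponentialBackOff(false);
        policy.setMaximumRedeliveries(4);

        return factory;
    }
}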

how to retrieve RFH message headers from a message coming from MQ?

Can anyone please help me retrieve message headers from a message coming from IBM WebSphere MQ?
We are using the JMS onMessage method to browse the queue, and the type of message received from MQ is BytesMessage. We want to iterate through the RFH message headers and collect them.
We have tried using MQHeaders to iterate over the MQMessage, but could not because it was throwing an exception.
Please advise.
A good place to start reading is https://www.ibm.com/support/knowledgecenter/en/SSFKSJ_8.0.0/com.ibm.mq.dev.doc/q031990_.htm, which explains how IBM MQ implements JMS.
Next, you should print out the received JMS Message with toString(), or iterate through its properties and check what you receive. Depending on whether the header is RFH or RFH2, you will see different fields.
You can find explanations of these fields in https://www.ibm.com/support/knowledgecenter/SSFKSJ_8.0.0/com.ibm.mq.dev.doc/q032000_.htm
and https://www.ibm.com/support/knowledgecenter/SSFKSJ_8.0.0/com.ibm.mq.dev.doc/q032060_.htm
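Along those lines, a minimal sketch of iterating over the JMS properties of a received message; whether and how the RFH2 folder fields show up here depends on how the destination and the sending application are configured:
import java.util.Enumeration;
import javax.jms.JMSException;
import javax.jms.Message;

public class HeaderDumper {

    // Prints every JMS property on the received message. With IBM MQ, RFH2
    // folder fields and JMS_IBM_* values can surface here as properties.
    public static void dumpProperties(Message message) throws JMSException {
        Enumeration<?> names = message.getPropertyNames();
        while (names.hasMoreElements()) {
            String name = (String) names.nextElement();
            System.out.println(name + " = " + message.getObjectProperty(name));
        }
    }
}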

Incoming messages are treated as outgoing in Atmosphere

I have a problem with receiving messages (I use @ManagedService). I use the same connection to send and receive messages between the browser and my Java program. I can see that all the messages pass through the ManagedAtmosphereHandler.message(AtmosphereResource resource, Object o) method.
If it is an incoming message, Atmosphere iterates through all methods annotated with @Message, then tries to find a decoder and eventually invokes the correct method.
For outgoing messages, Atmosphere retrieves the invoked method. It does so by getting a local attribute named "ManagedAtmosphereHandler" (the name of the current class), which is present only for outgoing messages. The message is then encoded and sent to the browser.
The problem is that sometimes the invoked method is set for incoming messages, which results in my incoming messages being treated as outgoing. Does anybody know why this happens? My outgoing messages are scheduled, and I suppose that is the reason, but I am not sure. When are these local attributes set, and what are they for?
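For context, a minimal sketch of the kind of @ManagedService endpoint this is about (the path, class, and method names are illustrative, not taken from the original code): an incoming frame is decoded and dispatched to the @Message method, and the value it returns goes back out through the encoder path described above:
import org.atmosphere.config.service.ManagedService;
import org.atmosphere.config.service.Message;
import org.atmosphere.config.service.Ready;
import org.atmosphere.cpr.AtmosphereResource;

@ManagedService(path = "/chat")
public class ChatEndpoint {

    @Ready
    public void onReady(AtmosphereResource resource) {
        // Connection established; subsequent frames are routed to the @Message method.
    }

    @Message
    public String onMessage(String message) {
        // Incoming path: the decoded message arrives here. The returned value is
        // encoded and broadcast back out (the outgoing path discussed above).
        return message;
    }
}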
I updated Atmosphere from 2.3.0-RC6 to 2.3.0, and it works like a charm now.
