HornetQ embedded, Couldn't find any bindings for address - java

I'm using an embedded HornetQ instance from within a JUnit test case.
Somehow I can't get my test driver to deliver a message onto the bus. There is no exception or anything else indicating that the message bus isn't working or wasn't set up properly (see [1]).
Only when I run the test case in debug mode do I see traces starting with "Couldn't find any bindings for address..." (see [2]).
Is this trace message something that can be ignored? "No bindings" sounds to me like there might be no HornetQ available at all.
[1]
HQ221007: Server is now live
[FF] [ScalaTest-run] [2014-06-11 15:03:03,555 INFO] HornetQServerImpl.java:460 - HQ221001: HornetQ Server version 2.5.0.SNAPSHOT (Wild Hornet, 124) [ea2511b0-e5c6-11e3-a213-b1fcc2ec9262]
[2]
Couldn't find any bindings for address=hornetq.notifications on message=ServerMessage[messageID=5,durable=true,userID=null,priority=0, bodySize=512,expiration=0, durable=true, address=hornetq.notifications,properties=TypedProperties[{_HQ_User=null, _HQ_NotifTimestamp=1402491783941, _HQ_Distance=0, _HQ_SessionName=b9525487-f168-11e3-8314-fb544e2d7270, _HQ_NotifType=CONSUMER_CREATED, _HQ_Address=xxx.messaging.RequestMessage-integ-test, _HQ_ClusterName=d78dbd27-bfe8-47f9-8b51-06c4eeb63543-integ-testea2511b0-e5c6-11e3-a213-b1fcc2ec9262, _HQ_RoutingName=d78dbd27-bfe8-47f9-8b51-06c4eeb63543-integ-test, _HQ_ConsumerCount=1, _HQ_RemoteAddress=invm:0}]]#1086110741
[FF] [Thread-0 (HornetQ-remoting-threads-HornetQServerImpl::serverUUID=ea2511b0-e5c6-11e3-a213-b1fcc2ec9262-1032009487-1905514837)] [2014-06-11 15:03:03,942 DEBUG] PostOfficeImpl.java:685 - Message ServerMessage[messageID=5,durable=true,userID=null,priority=0, bodySize=512,expiration=0, durable=true, address=hornetq.notifications,properties=TypedProperties[{_HQ_User=null, _HQ_NotifTimestamp=1402491783941, _HQ_Distance=0, _HQ_SessionName=b9525487-f168-11e3-8314-fb544e2d7270, _HQ_NotifType=CONSUMER_CREATED, _HQ_Address=xxx.messaging.RequestMessage-integ-test, _HQ_ClusterName=d78dbd27-bfe8-47f9-8b51-06c4eeb63543-integ-testea2511b0-e5c6-11e3-a213-b1fcc2ec9262, _HQ_RoutingName=d78dbd27-bfe8-47f9-8b51-06c4eeb63543-integ-test, _HQ_ConsumerCount=1, _HQ_RemoteAddress=invm:0}]]#1086110741 is not going anywhere as it didn't have a binding on address:hornetq.notifications

This specific code is just a Log.debug.
Couldn't find any bindings for address=hornetq.notifications on...
We send notifications for things that happen on the servers, and you won't always have a listener for these notifications. In this case the notification message is simply not being routed because you have no consumers for it, which is perfectly fine. It has nothing to do with the error you're having; it's irrelevant. You should look for other clues in your test. I'm not posting this as an answer because it doesn't answer your question; I don't have enough information to do that.
I would need more information to say exactly why you're not receiving messages in your test, but this specific message you posted has no direct relation to it.
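That said, one thing worth double-checking in the test itself: the PostOfficeImpl trace in [2] is emitted for any address without a binding, so a message sent to your own test address is also silently dropped if no queue is bound to it yet. Below is a minimal sketch of an embedded setup where the address does have a binding before the producer sends. It assumes the HornetQ 2.x core API with an in-VM acceptor; the class, address and queue names are made up for illustration.
import org.hornetq.api.core.TransportConfiguration;
import org.hornetq.api.core.client.*;
import org.hornetq.core.config.impl.ConfigurationImpl;
import org.hornetq.core.remoting.impl.invm.InVMAcceptorFactory;
import org.hornetq.core.remoting.impl.invm.InVMConnectorFactory;
import org.hornetq.core.server.embedded.EmbeddedHornetQ;

public class EmbeddedHornetQSketch {
    public static void main(String[] args) throws Exception {
        // In-VM embedded server, no persistence or security, as typically used in tests.
        ConfigurationImpl config = new ConfigurationImpl();
        config.setPersistenceEnabled(false);
        config.setSecurityEnabled(false);
        config.getAcceptorConfigurations()
              .add(new TransportConfiguration(InVMAcceptorFactory.class.getName()));

        EmbeddedHornetQ server = new EmbeddedHornetQ();
        server.setConfiguration(config);
        server.start();

        ServerLocator locator = HornetQClient.createServerLocatorWithoutHA(
                new TransportConfiguration(InVMConnectorFactory.class.getName()));
        ClientSessionFactory factory = locator.createSessionFactory();
        ClientSession session = factory.createSession();

        // The queue is the binding: without it, messages sent to this address
        // hit the same "didn't have a binding" debug trace and go nowhere.
        session.createQueue("test.address", "test.queue");

        ClientProducer producer = session.createProducer("test.address");
        ClientMessage message = session.createMessage(true);
        message.getBodyBuffer().writeString("hello");
        producer.send(message);

        session.start();
        ClientConsumer consumer = session.createConsumer("test.queue");
        System.out.println(consumer.receive(1000).getBodyBuffer().readString());

        session.close();
        factory.close();
        locator.close();
        server.stop();
    }
}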

Related

Kafka Consumer Error - Could not find attribute records-lag

I am trying to connect my application to Kafka and I am able to do that successfully.
However, when I look at the logs, I see the exception below:
WARN 2021-01-18 21:45:11,438 [**] org.apache.kafka.common.metrics.JmxReporter [RequestID=] Error getting JMX attribute 'records-lag'
javax.management.AttributeNotFoundException: Could not find attribute records-lag
at org.apache.kafka.common.metrics.JmxReporter$KafkaMbean.getAttribute(JmxReporter.java:192)
at org.apache.kafka.common.metrics.JmxReporter$KafkaMbean.getAttributes(JmxReporter.java:200)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttributes(DefaultMBeanServerInterceptor.java:709)
at com.sun.jmx.mbeanserver.JmxMBeanServer.getAttributes(JmxMBeanServer.java:705)
at com.compuware.apm.agent.measures.jmx.MBeanServerProxy$MBeanGetAttributesExecutor.execute(MBeanServerProxy.java:183)
at com.compuware.apm.agent.measures.jmx.MBeanServerProxy$MBeanExecutionStrategy.execute(MBeanServerProxy.java:233)
at com.compuware.apm.agent.measures.jmx.MBeanServerProxy.executeMBeanMethod(MBeanServerProxy.java:93)
at com.compuware.apm.agent.measures.jmx.MBeanServerProxy.getAttributeList(MBeanServerProxy.java:74)
at com.compuware.apm.agent.measures.jmx.MBeanReference.getAttributes(MBeanReference.java:59)
at com.compuware.apm.agent.measures.jmx.MBeanAttributeReader.read(MBeanAttributeReader.java:86)
at com.compuware.apm.agent.measures.jmx.MBeanSubscriptionManager.performMeasurements(MBeanSubscriptionManager.java:218)
at com.compuware.apm.agent.measures.jmx.MBeanTracker.performMeasurements(MBeanTracker.java:63)
at com.compuware.apm.agent.measures.MetricsProvider.captureJMX(MetricsProvider.java:220)
at com.compuware.apm.agent.measures.MetricsProvider.capture(MetricsProvider.java:172)
This error is printed repeatedly. I have tried different configurations but failed to resolve it.
Is there a way I can fix it?
This is the result of an internal call that creates MBeans for the related JMX metrics. Refer to the linked issue for details:
When collecting bulk metrics, this warning message in the logs is unhelpful; it is impossible to determine which MBean is missing the attribute and fix the metric.
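If the warning can't be fixed at the source, one possible workaround (an assumption of mine, not part of the answer above) is to raise the log level of Kafka's JmxReporter logger so the repeated WARN lines are suppressed. A minimal sketch, assuming Logback is the SLF4J backend and run once early in application startup:
import ch.qos.logback.classic.Level;
import ch.qos.logback.classic.Logger;
import org.slf4j.LoggerFactory;

// Suppress the repeated WARN from Kafka's JmxReporter by raising that logger to ERROR.
Logger jmxReporterLogger =
        (Logger) LoggerFactory.getLogger("org.apache.kafka.common.metrics.JmxReporter");
jmxReporterLogger.setLevel(Level.ERROR);
The same effect can be achieved declaratively in the logging configuration; this only hides the warning, it does not change which MBean attributes exist.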

How to recover client from "No handler waiting for message" warning?

At medium to high load (test and production), when using the Vert.x Redis client, I get the following warning after a few hundred requests.
2019-11-22 11:30:02.320 [vert.x-eventloop-thread-1] WARN io.vertx.redis.client.impl.RedisClient - No handler waiting for message: [null, 400992, <data from redis>]
As a result, the handler supplied to the Redis call (see below) does not get called and the incoming request times out.
Handler<AsyncResult<String>> handler = res -> {
    // success handler
};

redis.get(key, res -> {
    handler.handle(res);
});
The real issue is that once the "No handler ..." warning comes up, the Redis client becomes useless: all further calls to Redis made via the client fail with the same warning, and the handler never gets called. I have an exception handler set on the client to attempt reconnection, but I do not see any reconnections being attempted.
How can one recover from this problem? Any workarounds to alleviate the severity would also be great.
I'm on vertx-core and vertx-redis-client 3.8.1.
The upcoming 4.0 release has addressed this issue, and a release should be happening soon; how soon, I can't really tell.
The problem is that we can't easily back-port the fix from the master branch to the 3.8 branch, because a major refactoring has happened on the client and the codebases are very different.
The new code uses a connection pool and has been tested for concurrent access (which is where the issue you're seeing comes from). Under load, requests are routed across all event loops, and the queue that maintains the state between in-flight requests (requests sent to Redis) and waiting handlers could get out of sync under very specific conditions.
So I'd first try to see whether you can already start moving your code to 4.0. You can try the 4.0.0-milestone3 version, but to be totally fine, run with the latest master, which has more issues solved in this area.
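For anyone attempting the migration the answer suggests, here is a rough sketch of what the call looks like against the 4.x client. The connection string and key are placeholders, and the exact API may differ between the milestone builds and the final release.
import io.vertx.core.Vertx;
import io.vertx.redis.client.Redis;
import io.vertx.redis.client.RedisAPI;
import io.vertx.redis.client.Response;

Vertx vertx = Vertx.vertx();
Redis client = Redis.createClient(vertx, "redis://localhost:6379");
RedisAPI redis = RedisAPI.api(client);

redis.get("some-key", res -> {
    if (res.succeeded()) {
        Response value = res.result();   // null when the key does not exist
        // handle the value
    } else {
        // connection-level failures surface here instead of leaving a handler waiting,
        // which is the symptom described above
    }
});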

How to configure Camel's RedeliveryPolicy retriesExhaustedLogLevel?

I have set up an errorHandler in a Camel route that will retry a message several times before sending the message to a dead letter channel (an activemq queue in this case). What I would like is to see an ERROR log when the message failed to be retried the max number of times and was then sent to the dead letter queue.
Looking at the docs for error handling and dead letter channels, it seems that there are 2 relevant options on the RedeliveryPolicy: retryAttemptedLogLevel and retriesExhaustedLogLevel. Supposedly retriesExhaustedLogLevel already defaults to LoggingLevel.ERROR, but it does not appear to actually log anything when it has exhausted all retries and routes the message to the dead letter channel.
Here is my errorHandler definition via Java DSL.
.errorHandler(this.deadLetterChannel(MY_ACTIVE_MQ_DEAD_LETTER)
        .useOriginalMessage()
        .maximumRedeliveries(3)
        .useExponentialBackOff()
        .retriesExhaustedLogLevel(LoggingLevel.ERROR)
        .retryAttemptedLogLevel(LoggingLevel.WARN))
I have explicitly set the level to ERROR now and it still does not appear to log anything (at any logging level). On the other hand, retryAttemptedLogLevel works just fine and logs at the appropriate LoggingLevel (i.e., I could set retryAttemptedLogLevel to LoggingLevel.ERROR and see the retries as ERROR logs). However, I only want a single ERROR log in the event of exhaustion, instead of an ERROR log for each retry when a subsequent retry could potentially succeed.
Maybe I am missing something, but it seems that retriesExhaustedLogLevel does not do anything, or at least does not log anything, when the ErrorHandler is configured as a DeadLetterChannel. Is there a configuration I am still missing, or does this feature of RedeliveryPolicy not apply to this specific ErrorHandlerFactory?
I could also set up a route to send my exhausted messages that simply logs and routes to my dead letter channel, but I would prefer to try and use what is already built into the ErrorHandler if possible.
I updated the ErrorHandler's DeadLetterChannel to be a direct endpoint and left the two logLevel configs the same. I got the 3 retry-attempted WARN logs, but no ERROR log telling me the retries were exhausted. I did, however, set up a small route listening on the direct dead letter endpoint that logs, and that is working (sketched below).
This is not a direct solution to my desire to have an ERROR log on exhaustion, but it is an acceptable workaround for now.
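For completeness, a minimal sketch of that workaround as I understand it, written inside a RouteBuilder's configure(): the error handler dead-letters to a direct endpoint, and a small route on that endpoint does the ERROR logging before forwarding to the real queue. The endpoint names are placeholders, not from the original post.
errorHandler(deadLetterChannel("direct:deadLetter")
        .useOriginalMessage()
        .maximumRedeliveries(3)
        .useExponentialBackOff()
        .retryAttemptedLogLevel(LoggingLevel.WARN));

// The ERROR log and the forwarding to the real dead letter queue live in one place.
from("direct:deadLetter")
        .log(LoggingLevel.ERROR, "Retries exhausted, dead-lettering message: ${body}")
        .to("activemq:queue:my.dead.letter");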
Please try with this code:
.errorHandler(deadLetterChannel("kafka:sample-dead-topic")
        .maximumRedeliveries(4)
        .redeliveryDelay(60000)
        .retryAttemptedLogLevel(LoggingLevel.WARN)
        .retriesExhaustedLogLevel(LoggingLevel.ERROR)
        .logHandled(true)
        .allowRedeliveryWhileStopping(true)
        .logRetryStackTrace(true)
        .logExhausted(true)
        .logStackTrace(true)
        .logExhaustedMessageBody(true)
)
Retry is configured with a 1 minute interval.
The Camel application logged the errors for every retry with detailed information.

Java Mail API: Callbacks

Context:
I am working on a piece of Java code where I read the recipient addresses from an array and send mail to them (which works fine). I was wondering if someone could help me with a callback so I can show a fancy message like "Your email was sent".
Questions:
How do I implement this?
Is there any way to get any Boolean type return value from javax.mail to check if the message was sent or not?
Maybe I should create a pool? If yes, how do I do that? Is there any signal to kill the pool?
Code:
// addressTo is the array.
Transport t = sesion.getTransport(this.beanMail.getProtocolo());
t.connect(this.beanMail.getUsuario(), this.beanMail.getPassword());
t.sendMessage(mensaje, addressTo);
t.close();
Quoting from the JavaMail API FAQ (in the context of tracking bounced messages):
While there is an Internet standard for reporting such errors (the multipart/report MIME type, see RFC1892), it is not widely implemented yet. RFC1211 discusses this problem in depth, including numerous examples. In Internet email, the existence of a particular mailbox or user name can only be determined by the ultimate server that would deliver the message. The message may pass through several relay servers (that are not able to detect the error) before reaching the end server. Typically, when the end server detects such an error, it will return a message indicating the reason for the failure to the sender of the original message. There are many Internet standards covering such Delivery Status Notifications but a large number of servers don't support these new standards, instead using ad hoc techniques for returning such failure messages. This makes it very difficult to correlate a "bounced" message with the original message that caused the problem. (Note that this problem is completely independent of JavaMail.)
Source
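The quoted FAQ is about bounces downstream of the sending server. For the narrower question of a callback in the sending JVM, here is a rough sketch of what I would try, building on the code above: JavaMail's TransportListener gives delivered/not-delivered callbacks, and sendMessage() throwing (e.g. SendFailedException) tells you the server rejected the message. Note these callbacks only cover acceptance by the server you connect to, not final delivery, and they are fired asynchronously on JavaMail's event thread.
import javax.mail.SendFailedException;
import javax.mail.Transport;
import javax.mail.event.TransportEvent;
import javax.mail.event.TransportListener;

Transport t = sesion.getTransport(this.beanMail.getProtocolo());

// Callback-style notification once sendMessage() has run (delivered asynchronously).
t.addTransportListener(new TransportListener() {
    @Override
    public void messageDelivered(TransportEvent e) {
        System.out.println("Your email was sent.");
    }

    @Override
    public void messageNotDelivered(TransportEvent e) {
        System.out.println("The email could not be delivered.");
    }

    @Override
    public void messagePartiallyDelivered(TransportEvent e) {
        System.out.println("Only some recipients accepted the email.");
    }
});

try {
    t.connect(this.beanMail.getUsuario(), this.beanMail.getPassword());
    t.sendMessage(mensaje, addressTo);
    // If sendMessage() returns without throwing, the server accepted the message.
} catch (SendFailedException sfe) {
    // Some or all recipients were rejected by the server.
} finally {
    t.close();
}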

Work around for MessageNotReadableException in Java

I am building a small API around the JMS API for a project of mine. Essentially, we are building code that handles the connection logic and simplifies publishing messages by providing a method like Client.send(String message).
One of the ideas being discussed right now is to provide a means for users to attach interceptors to this client. We will apply the interceptors after preparing the JMS message and before publishing it.
For example, if we wanted to timestamp a message and wrote an interceptor for that, this is how we would apply it:
// ...some code...
Message message = session.createMessage();
// ...do all the current processing on the message and set the body...
for (Interceptor interceptor : listOfInterceptors) {
    interceptor.apply(message);
}
One of the interceptors we thought of would compress the message body. But when we try to read the body of the message in the interceptor, we get a MessageNotReadableException. In the past I normally compressed the content before setting it as the body of the message, so I never had to worry about this exception.
Is there any way of getting around this exception?
It looks like your JMS client attempts to read a write-only message; your interceptor cannot work this way. Please elaborate on how you were compressing the message earlier.
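If the body is a BytesMessage, one way around the exception is to call reset() to flip the body into read-only mode, read it, then clearBody() and write the compressed bytes back. The sketch below only illustrates that idea under assumptions of mine (the Interceptor shape and GZIP as the compression); it is not the poster's code.
import java.io.ByteArrayOutputStream;
import java.util.zip.GZIPOutputStream;
import javax.jms.BytesMessage;
import javax.jms.Message;

public class GzipInterceptor {

    public void apply(Message message) throws Exception {
        if (!(message instanceof BytesMessage)) {
            return;
        }
        BytesMessage bytesMessage = (BytesMessage) message;

        // reset() switches the body from write-only to read-only mode,
        // which is what avoids MessageNotReadableException here.
        bytesMessage.reset();
        byte[] body = new byte[(int) bytesMessage.getBodyLength()];
        bytesMessage.readBytes(body);

        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (GZIPOutputStream gzip = new GZIPOutputStream(buffer)) {
            gzip.write(body);
        }

        // clearBody() puts the message back into write-only mode so the
        // compressed payload can replace the original one.
        bytesMessage.clearBody();
        bytesMessage.writeBytes(buffer.toByteArray());
    }
}
The consumer side would need a matching decompression step, or a message property flagging the body as compressed.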
