I am using Camel to set up some routing using the file and jms-queue components. The problem I am having is that I cannot disable the polling messages sent to the console.
I tried multiple ways to disable these messages: setting the logging level (runLoggingLevel = OFF) on the routes, trace = false on the context, setting a logger on the routes, and a few others, but nothing works.
A message from the file component looks like this:
2013-08-26 09:34:47,651 DEBUG [Camel (camelContextOrder) thread #0 - file://order-import/order-in] o.a.c.c.f.FileConsumer Took 0.001 seconds to poll: order-import\order-in
And a message from the JMS queue:
2013-08-26 09:34:46,281 DEBUG [ActiveMQ Journal Checkpoint Worker] o.a.a.s.k.MessageDatabase Checkpoint started.
2013-08-26 09:34:46,403 DEBUG [ActiveMQ Journal Checkpoint Worker] o.a.a.s.k.MessageDatabase Checkpoint done.
You have the DEBUG logging level configured. Change it to INFO (or higher) so Camel / ActiveMQ will not log as much.
Check your logging configuration to adjust this.
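For example, with log4j 1.x you can raise just the noisy categories, either in log4j.properties or programmatically. The logger names below are reconstructed from the abbreviated categories in your output, so adjust them to match your setup:

import org.apache.log4j.Level;
import org.apache.log4j.Logger;

// Raise the chatty poll/checkpoint categories to INFO so they stop printing on every poll.
// The same change can be made declaratively in log4j.properties or log4j.xml instead.
Logger.getLogger("org.apache.camel.component.file").setLevel(Level.INFO);
Logger.getLogger("org.apache.activemq.store.kahadb.MessageDatabase").setLevel(Level.INFO);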
Related
I want to connect two microservices, one in Java and one in C#. In my C# service I have set up RabbitMQ, and I am running my services on Docker. I believe everything is set up correctly, but when trying to send a message I get an error telling me my event bus does not exist, even though I can see that the event bus does in fact exist on the RabbitMQ localhost website.
Would anyone know what causes this and how I could fix it? Thank you for your time and help.
My RabbitMQ port mappings:
"15672:15672"
"5672:5672"
I can go to localhost:15672 and see my exchanges and queues in RabbitMQ.
Yet when I try to use the "ConvertAndSend" method in Java I get this error:
2022-08-30 12:01:47.801 INFO 90768 --- [nio-8090-exec-1] o.s.a.r.c.CachingConnectionFactory : Attempting to connect to: [localhost:5672]
2022-08-30 12:01:48.100 INFO 90768 --- [nio-8090-exec-1] o.s.a.r.c.CachingConnectionFactory : Created new connection: rabbitConnectionFactory#1a1cc163:0/SimpleConnection#39d3d67d [delegate=amqp://guest#127.0.0.1:5672/, localPort= 56701]
2022-08-30 12:01:48.249 ERROR 90768 --- [ 127.0.0.1:5672] o.s.a.r.c.CachingConnectionFactory : Channel shutdown: channel error; protocol method: #method<channel.close>(reply-code=404, reply-text=NOT_FOUND - no exchange 'StudyPlannerAndMonitor_event_bus' in vhost '/', class-id=60, method-id=40)
I have also let a friend run my code, and he does not have the issue; everything seems to work fine for him.
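A common cause of this 404 NOT_FOUND is that the Java side never declares the exchange itself and is connecting to a vhost where the exchange does not exist. A minimal Spring AMQP sketch, assuming a Spring Boot service where a RabbitAdmin is auto-configured (the exchange type here is an assumption and must match whatever the C# side declared):

import org.springframework.amqp.core.DirectExchange;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class EventBusConfig {

    // Declaring the exchange as a bean lets RabbitAdmin create it on the broker
    // when the first connection opens, so convertAndSend() no longer hits the 404.
    @Bean
    public DirectExchange eventBusExchange() {
        return new DirectExchange("StudyPlannerAndMonitor_event_bus", true, false);
    }
}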
I am running ActiveMQ 5.15.5 as a standalone broker, and my Spring application connects to it.
I want to know whether I can log the Task-ID that the broker logs in the client application logs.
Currently the application logs look like:
[INFO ] 2018-11-29 09:52:19,144 [ActiveMQ Session Task] ....
[INFO ] 2018-11-29 09:52:19,168 [ActiveMQ Session Task] ...
[INFO ] 2018-11-29 09:52:19,199 [ActiveMQ Session Task] ....
I believe that with embedded ActiveMQ the logs would look like:
[INFO ] 2018-11-29 09:52:19,144 [ActiveMQ Session Task-9] ....
[INFO ] 2018-11-29 09:52:19,168 [ActiveMQ Session Task-9] ...
Looking at the client application logs, I do not have a way to categorize transactions from multiple users, as they are all logged as "ActiveMQ Session Task".
Is there a way to log the Task-ID from the broker (I do see the Task-ID in the broker's activemq.log) in the client logs?
I tried setting the ActiveMQ loggers in the client's log4j.xml to INFO, with no luck.
Thanks
The "Task-ID", as you call it, which is logged here is actually just the name of the thread on the broker which is performing the work. The client has no idea about the thread name on the broker and there is no way to communicate that information with the client. Those threads are pooled and re-used over & over so using their names to identify a unique transaction almost certainly wouldn't work anyway.
With the Kafka Java client library, consuming logs worked for some time, but now it fails with the following errors:
2016-07-15 19:37:54.609 INFO 4342 --- [main] o.a.k.c.c.internals.AbstractCoordinator : Marking the coordinator 2147483647 dead.
2016-07-15 19:37:54.933 ERROR 4342 --- [main] o.a.k.c.c.internals.ConsumerCoordinator : Error UNKNOWN_MEMBER_ID occurred while committing offsets for group logstash
2016-07-15 19:37:54.933 WARN 4342 --- [main] o.a.k.c.c.internals.ConsumerCoordinator : Auto offset commit failed: Commit cannot be completed due to group rebalance
2016-07-15 19:37:54.941 ERROR 4342 --- [main] o.a.k.c.c.internals.ConsumerCoordinator : Error UNKNOWN_MEMBER_ID occurred while committing offsets for group logstash
2016-07-15 19:37:54.941 WARN 4342 --- [main] o.a.k.c.c.internals.ConsumerCoordinator : Auto offset commit failed:
2016-07-15 19:37:54.948 INFO 4342 --- [main] o.a.k.c.c.internals.AbstractCoordinator : Attempt to join group logstash failed due to unknown member id, resetting and retrying.
It keeps resetting.
Running another instance of the same application produces errors immediately.
I suspect Kafka or its ZooKeeper has a problem, but there is no error log.
Does anyone have an idea of what's going on here?
This is the application I'm using: https://github.com/izeye/log-redirector
I just faced the same issue. I have been investigating, and you can find the solution in this thread and in this wiki.
The issue seems to be that processing a batch takes longer than the session timeout.
Either increase the session timeout, increase the polling frequency, or limit the number of bytes received.
What worked for me was changing max.partition.fetch.bytes, but you can also modify session.timeout.ms or the timeout value you pass to consumer.poll(TIMEOUT).
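As a rough sketch of where those settings go with the Java consumer (the broker address, topic name, and values are illustrative, not recommendations):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;

Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address
props.put(ConsumerConfig.GROUP_ID_CONFIG, "logstash");
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
        "org.apache.kafka.common.serialization.StringDeserializer");
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
        "org.apache.kafka.common.serialization.StringDeserializer");
// Fetch less data per partition so each batch is processed within the session timeout...
props.put(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG, "262144");
// ...and/or allow more time before the coordinator considers this consumer dead.
props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "30000");

KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
consumer.subscribe(Collections.singletonList("logs")); // assumed topic name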
I want to start my app even if RabbitMQ is not reachable. Currently my app hangs while the AmqpInboundChannelAdapter tries to establish a connection, and I can see a pattern in how long it waits before trying again. How can I configure an app using Spring AMQP to start regardless of RabbitMQ availability, and how do I configure this "back off policy"?
18/May/2015 16:52:57,666 INFO [main] - AmqpInboundChannelAdapter - started org.springframework.integration.amqp.inbound.AmqpInboundChannelAdapter#0
18/May/2015 16:54:47,769 INFO [main] - AmqpInboundChannelAdapter - started org.springframework.integration.amqp.inbound.AmqpInboundChannelAdapter#1
18/May/2015 16:57:07,818 INFO [main] - AmqpInboundChannelAdapter - started org.springframework.integration.amqp.inbound.AmqpInboundChannelAdapter#2
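One way to approach this with Spring AMQP, sketched under the assumption that the adapter can be wired up in Java config (the host, queue name, and interval below are placeholders): the wait between reconnect attempts is the listener container's recoveryInterval, and the container can be told not to treat a missing queue as fatal at startup.

import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.amqp.inbound.AmqpInboundChannelAdapter;

@Configuration
public class AmqpConfig {

    @Bean
    public AmqpInboundChannelAdapter inboundAdapter() {
        CachingConnectionFactory connectionFactory = new CachingConnectionFactory("localhost"); // placeholder host
        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
        container.setQueueNames("my-queue");     // placeholder queue name
        container.setMissingQueuesFatal(false);  // don't abort if the queue isn't reachable yet
        container.setRecoveryInterval(30000);    // the "back off" between reconnect attempts, in ms
        AmqpInboundChannelAdapter adapter = new AmqpInboundChannelAdapter(container);
        // the adapter's output channel would be set here, as in the existing configuration
        return adapter;
    }
}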
Can someone assist me in understanding and troubleshooting this issue? I do not know what is causing Hector to fail when it tries to connect to the Cassandra cluster.
How can I find out where the issue is?
0 [main] INFO me.prettyprint.cassandra.connection.CassandraHostRetryService - Downed Host Retry service started with queue size 10 and retry delay 30s
168 [main] INFO me.prettyprint.cassandra.service.JmxMonitor - Registering JMX me.prettyprint.cassandra.service_keyspace-name:ServiceType=hector,MonitorType=hector
399 [main] INFO me.prettyprint.cassandra.model.ConfigurableConsistencyLevel - READ ConsistencyLevel set to QUORUM for ColumnFamily Files
400 [main] INFO me.prettyprint.cassandra.model.ConfigurableConsistencyLevel - WRITE ConsistencyLevel set to QUORUM for ColumnFamily Files
406 [main] INFO me.prettyprint.cassandra.model.ConfigurableConsistencyLevel - READ ConsistencyLevel set to QUORUM for ColumnFamily FileList
407 [main] INFO me.prettyprint.cassandra.model.ConfigurableConsistencyLevel - WRITE ConsistencyLevel set to QUORUM for ColumnFamily FileList
From the trace it seems you are using QUORUM as the consistency level. Try ONE and see if it works. It seems that one or more of the nodes that should satisfy your request are down. Use nodetool ring/status to check whether any node in your cluster is down.
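For reference, a sketch of how that change looks with Hector's ConfigurableConsistencyLevel (the cluster, host, and keyspace names are placeholders):

import me.prettyprint.cassandra.model.ConfigurableConsistencyLevel;
import me.prettyprint.cassandra.service.CassandraHostConfigurator;
import me.prettyprint.hector.api.Cluster;
import me.prettyprint.hector.api.HConsistencyLevel;
import me.prettyprint.hector.api.Keyspace;
import me.prettyprint.hector.api.factory.HFactory;

// Drop from QUORUM to ONE so a single live replica can satisfy each request.
// Whether ONE is acceptable depends on your consistency requirements.
ConfigurableConsistencyLevel policy = new ConfigurableConsistencyLevel();
policy.setDefaultReadConsistencyLevel(HConsistencyLevel.ONE);
policy.setDefaultWriteConsistencyLevel(HConsistencyLevel.ONE);

Cluster cluster = HFactory.getOrCreateCluster("MyCluster",
        new CassandraHostConfigurator("localhost:9160"));                   // placeholder host
Keyspace keyspace = HFactory.createKeyspace("MyKeyspace", cluster, policy); // placeholder keyspace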