The use case is as follows.
I am passing a producer or consumer reference across many object instances in Java code.
In some of them I would like to run checks against the Kafka configuration.
That means I would like to read back the effective configuration stored in the KafkaProducer/KafkaConsumer (including defaults).
I do not see anything explicit in the Javadocs:
KafkaProducer
KafkaConsumer
So, how can I get back the Kafka producer and consumer configuration?
Unfortunately it's not possible. I have to admit it could be a useful feature, at least for showing the "core" configuration properties (while avoiding exposure of the "secrets" used for authentication, for example).
The only solution I see today is to keep a link between the consumer/producer instance and the related properties bag used to configure the client. I understand it's a waste of memory, because the configuration is already held internally by the client, but you need to keep your own properties bag in order to read it back.
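For illustration, a minimal sketch of that approach: keep the original Properties next to the client and pass the pair around together. The ConfiguredProducer wrapper and its names are hypothetical, not part of the Kafka API.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;

// Hypothetical wrapper: keeps the original Properties next to the
// client so downstream code can inspect the configuration in use.
public class ConfiguredProducer<K, V> {

    private final KafkaProducer<K, V> producer;
    private final Properties config;

    public ConfiguredProducer(Properties config) {
        // Defensive copy, so later mutations don't drift out of sync.
        this.config = (Properties) config.clone();
        this.producer = new KafkaProducer<>(this.config);
    }

    public KafkaProducer<K, V> producer() {
        return producer;
    }

    // Read back a single setting, e.g. "bootstrap.servers".
    public String setting(String key) {
        return config.getProperty(key);
    }
}

Note that this only exposes the properties you set explicitly; the client defaults the question asks about still aren't visible.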
I've started to use Axon 4.3.1 (the latest version) in my project and I'm having a problem.
Where can I configure the Kafka retry policy applied after an @EventHandler throws an exception?
Note: I'm using the SubscribingEventProcessor type as the event processor in both projects. I'm using separate projects: the command model uses Mongo and publishes events on Kafka; the query model consumes events from Kafka (the event bus); so they run in separate JVMs.
@ProcessingGroup("event-processor") is configured on the class with the event handler method. I'd like a configuration that makes Kafka retry automatically after some time in error cases (in the query model project).
Can I use some default Axon component? Could I use something like spring-retry, or Kafka's internal configs themselves?
I've found something related in the documentation:
https://docs.axoniq.io/reference-guide/configuring-infrastructure-components/event-processing/event-processors#error-handling
"Based on the provided ErrorContext object, you can decide to ignore the error, schedule retries, perform dead-letter-queue delivery or rethrow the exception."
How can I configure this (for example, scheduling retries) for an @EventHandler after errors?
Could you help me?
Thanks.
The current implementation of Axon's Kafka Extension (version 4.0-M2) does not support setting a retry policy for event handling.
I'd argue your best approach right now is to set up something like that on Kafka itself, if that's even possible. Otherwise, forcing a replay of the events through Kafka would be your best option.
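If you do want to experiment on the application side, spring-retry (which the question mentions) can wrap the handler method itself. A rough sketch, assuming spring-retry is on the classpath with @EnableRetry active, and that the retry proxy sits in the path Axon uses to invoke the bean; the event type and annotation values are placeholders:

import org.axonframework.config.ProcessingGroup;
import org.axonframework.eventhandling.EventHandler;
import org.springframework.retry.annotation.Backoff;
import org.springframework.retry.annotation.Retryable;
import org.springframework.stereotype.Component;

@Component
@ProcessingGroup("event-processor")
public class ProjectionEventHandler {

    /** Hypothetical event type, standing in for your real event. */
    public static class SomeEvent { }

    // spring-retry re-invokes the method before the exception ever
    // reaches Axon's error handler; values here are examples only.
    @Retryable(maxAttempts = 3, backoff = @Backoff(delay = 2000))
    @EventHandler
    public void on(SomeEvent event) {
        // projection update that may fail transiently
    }
}

Whether this behaves well with the SubscribingEventProcessor is something you would need to verify; it is a workaround, not a supported Axon feature.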
I am using Spring Boot with spring-rabbitmq. My connection factory is configured in application.properties and seems to work fine.
My aim is: during startup, check whether a queue with a specific name exists, and create it if it is absent. I am not sure how to approach this. What beans should I create in the config class? From what I've read it should be RabbitAdmin, but I'm not sure about that. Can you help me?
Everything is described clearly in the Reference Manual:
The AMQP specification describes how the protocol can be used to configure Queues, Exchanges and Bindings on the broker. These operations which are portable from the 0.8 specification and higher are present in the AmqpAdmin interface in the org.springframework.amqp.core package.
And further:
When the CachingConnectionFactory cache mode is CHANNEL (the default), the RabbitAdmin implementation does automatic lazy declaration of Queues, Exchanges and Bindings declared in the same ApplicationContext.
So, you should declare Queue, Exchange and Binding beans in your application context, and AmqpAdmin will take care of declaring them on the target broker.
It should be noted that, according to the AMQP protocol, if an entity already exists on the broker, the declaration is silent and idempotent.
So, in your case you don't need to worry about whether the queues exist; just provide their declarations as beans in the application context.
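A minimal sketch of such a configuration class; the queue name is an example, and Spring Boot's auto-configured RabbitAdmin does the actual declaration:

import org.springframework.amqp.core.Queue;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RabbitConfig {

    // RabbitAdmin (auto-configured by Spring Boot) declares this queue
    // on the broker at startup; if the queue already exists, the
    // declaration is a silent no-op.
    @Bean
    public Queue myQueue() {
        return new Queue("my-queue-name", true); // durable
    }
}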
I am currently successfully using an MQConnectionFactory to connect and post to a Websphere MQ queue using JMS.
However I'm getting a requirement from a client that I must use mqclient.ini instead.
So my question is: for a 'standard' JMS setup, should I be using:
a straight-up MQConnectionFactory instance,
a JMS configuration file, or
an mqclient.ini file?
Why would one use one over the others? Does one take precedence over another?
The mqclient.ini and JMS config files are used for setting attributes such as which client-side exits to use, TCP-level overrides, etc. They are basically for configuring the client libraries/jars, not for application configuration such as which queue manager or queue to use. That sort of information, connection factory or destination details, is typically pulled from JNDI so that the configuration can be modified without affecting the application.
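To illustrate that last point, a typical JNDI lookup; the JNDI names are placeholders for whatever your administrator has defined:

import javax.jms.ConnectionFactory;
import javax.jms.Destination;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class JndiLookupExample {

    public static void main(String[] args) throws NamingException {
        // Inside an app server the no-arg InitialContext is preconfigured.
        InitialContext ctx = new InitialContext();

        // Administered objects: queue manager and queue details live in
        // JNDI, not in the application code.
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/MyConnectionFactory");
        Destination dest = (Destination) ctx.lookup("jms/MyQueue");

        // ... create a connection/session from cf and send to dest ...
    }
}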
I have a traditional (com.ibm.mq.jar) MQ application in Java for testing purposes. Now I need to use that application to send some messages to JMS. When I try to set any JMS property on an MQ message, for example:
message.setStringProperty("JMSDestination", "queue:///" + queueName);
I always get error 2471 (MQRC_PROPERTY_NOT_AVAILABLE). It works if I just remove "JMS" from the property name.
Is it possible to set JMS properties directly on an MQMessage? What is the correct way to do that at the MQ level?
By the way, I have the same application in .NET, where setting JMS properties this way is possible, so I'm just trying to use the same code in Java.
You are not allowed to do this manually; use the JMS API to set JMS properties.
The restrictions on MQ properties are explained here.
One thing in that document page is interesting, though:
The names of properties specified directly as MQRFH2 elements are not guaranteed to be validated by the MQPUT call.
You could perhaps work around this on a short-term basis by setting the MQRFH2 elements directly; there is no guarantee, however, that such direct settings will never be validated.
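For the sanctioned route, a sketch of sending through the MQ JMS classes instead, so that the RFH2 header (including the JMS folder) is built for you; host, channel, queue manager and queue names are placeholders:

import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import com.ibm.mq.jms.MQQueueConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;

public class JmsSendExample {

    public static void main(String[] args) throws Exception {
        MQQueueConnectionFactory cf = new MQQueueConnectionFactory();
        cf.setHostName("mqhost");         // placeholder
        cf.setPort(1414);
        cf.setChannel("DEV.APP.SVRCONN"); // placeholder
        cf.setQueueManager("QM1");        // placeholder
        cf.setTransportType(WMQConstants.WMQ_CM_CLIENT);

        Connection conn = cf.createConnection();
        try {
            Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("queue:///DEV.QUEUE.1"); // placeholder
            MessageProducer producer = session.createProducer(queue);

            TextMessage msg = session.createTextMessage("hello");
            // The JMS layer sets the JMS properties and builds the
            // MQRFH2 header itself; no manual property setting needed.
            producer.send(msg);
        } finally {
            conn.close();
        }
    }
}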
I am developing an application which is embedded within a cluster environment in WebSphere AS. I am using several nodes, and sometimes I would like to change configuration settings on the fly and propagate them to all nodes within the cluster. I don't want to hold the config in the DB, or at least I would like to cache it at the node level and trigger a config-refresh action which forces each node to refresh the config from some common ground (i.e. the DB or a network drive), to avoid constant round-trips to the config storage.
Moreover, some configuration can't be stored in the DB at all; e.g. the log level needs to be applied to the logger object in each node separately.
I was thinking about using JMS topics and a publish/subscribe approach to achieve that goal.
The idea is that each node subscribes to the topic, so that no matter which node initiates the config change, the modification is propagated to all nodes within the cluster. Roughly, the subscriber side I have in mind would look like this (the JNDI names and the reload logic are placeholders):
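import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.Topic;
import javax.naming.InitialContext;

public class ConfigRefreshSubscriber {

    public void start() throws Exception {
        InitialContext ctx = new InitialContext();
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/ConfigCF"); // placeholder
        Topic topic = (Topic) ctx.lookup("jms/ConfigRefreshTopic");            // placeholder

        Connection conn = cf.createConnection();
        Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(topic);

        // Every node runs this listener; whichever node publishes a
        // change notification, all subscribers reload from the common
        // ground (DB or network drive) and adjust local state.
        consumer.setMessageListener((Message m) -> reloadConfigFromCommonGround());
        conn.start();
    }

    private void reloadConfigFromCommonGround() {
        // e.g. re-read settings from the DB, re-apply the log level
    }
}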
Has anyone ever tried to do this in WAS, and are there any obstacles to the approach? If there are, or if you have any other suggestion on how to solve the problem, I would be very grateful for your help.
Tx in advance,
Marcin
Here are a few options to consider as alternatives to JMS -
Use Java EE environment entries. These are scoped to the application, and WAS will automatically propagate any changes to all servers on which the application is deployed. This is a good approach, since it is the standard Java EE mechanism for application configuration, if it is robust enough to meet your use case (there is a small sketch after these options).
Use a WebSphere Shared Library. This allows you to link your applications to static files external to your application (i.e. on the filesystem), such that they are available on your classpath. Although these files are located on the node file systems, there is a way that you can place these files in WebSphere's centralized configuration repository such that they are automatically propagated to all WAS nodes. For more details on this, see this answer.
Both of these options are optimized for static configuration; in other words, settings that are intended to be fixed at assembly time or deployment time, or to be changed by system administrators. They are not typically used for values that change frequently, nor are they generally changed programmatically at runtime. WAS does, however, allow your applications to pick up these settings in a rolling fashion, so that no application downtime is required.
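As a sketch of the first option, an environment entry can be read back like this; the entry name is an example, and the value itself would be defined as an <env-entry> in the deployment descriptor:

import javax.naming.InitialContext;
import javax.naming.NamingException;

public class ConfigReader {

    // Reads an <env-entry> defined in the deployment descriptor;
    // WAS administrators can override the value at deployment time,
    // and it is propagated to all servers hosting the application.
    public Integer refreshIntervalSeconds() throws NamingException {
        InitialContext ctx = new InitialContext();
        return (Integer) ctx.lookup("java:comp/env/configRefreshIntervalSeconds"); // example name
    }
}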
Currently we have solved the problem with perhaps not the prettiest approach, but the simplest one. Since we are using only 2 nodes, we can enter the web interface of each specific node and modify the settings per node there. It may not be very pretty, but for now it is the easiest way. The config is stored in the DB, and we are planning to trigger a config reload in each node, and to change the log level per node as well.