Now I can do it like this:
@RabbitListener(queues = {ENTITY_KEY + "-snapshots", ENTITY_KEY + "-updates"})
public void handleMessage(ProviderOddsOffer offer, @Header("update_type") Long updateType) {
...
}
Can I do it without declaring queues in annotation itself?
It's not clear what you mean; the listener has to be configured to consume from some queue, or queues.
If you mean that you wish to externalize the queue name(s) rather than hard-coding them in Java, you can use a property placeholder ${...} or a SpEL expression #{...} for the queue name(s); they are resolved during bean initialization.
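For example, a minimal sketch (the property names app.queue.snapshots and app.queue.updates are just placeholders, not from your code):
@RabbitListener(queues = {"${app.queue.snapshots}", "${app.queue.updates}"})
public void handleMessage(ProviderOddsOffer offer, @Header("update_type") Long updateType) {
    // queue names are resolved from the Environment when the listener container is created
}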
Related
I would like to know if this is possible.
@KafkaListener(topics = @Value("${kafka.topic}"), groupId = "group1")
public void listen(ConsumerRecord<String, CloudEvent> cloudEventRecord) {
}
My IDE is saying that this way of specifying the value is wrong.
I know that ConcurrentKafkaListenerContainerFactory is used to configure the listener properly.
I am looking for a way to map the topic name from the application properties/YAML file into the listener method.
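For reference, a minimal sketch of what should work here (assuming kafka.topic is defined in the application properties/YAML): the topics attribute resolves property placeholders on its own, so no @Value wrapper is needed.
@KafkaListener(topics = "${kafka.topic}", groupId = "group1")
public void listen(ConsumerRecord<String, CloudEvent> cloudEventRecord) {
    // the placeholder is resolved when the listener endpoint is registered
}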
I have an application with some externalized configuration in the form of properties. I would like the application to react to a change of such properties without a restart or full context refresh.
I am not clear what my options are.
Artificial example: the application implements a service that receives requests and decides whether to queue or reject them. The maximum size of the queue is a property.
final int queueMaxSize = queueProperties.getMaxSize();
if (queue.size() >= queueMaxSize) { /* reject */ }
Where QueueProperties is a class annotated with @ConfigurationProperties.
@ConfigurationProperties(prefix = "myapp.limits.queue")
@Getter
@Setter
public class QueueProperties {
    private int maxSize = 10;
}
This works as far as allowing me to control behavior via system properties, profiles, etc.
However, I would like to be able to change this value without releasing/deploying/restarting/refreshing the application.
My application already uses Archaius.
If I push a new value for this property using our internal infrastructure, I can see that the application's Spring Environment does receive the new value.
(e.g., /admin/env reflects the value and changes dynamically).
The part I'm not clear on is: how to make my service react to the change of value in the environment?
I found a few ways, but they seem hairy, and I wonder if there are better options. I expected this to be a common problem with a first-class solution in the Spring ecosystem.
Hacky solution #1:
@Bean
@Scope("prototype")
@ConfigurationProperties(prefix = "myapp.limits.queue")
QueueProperties queueProperties() {
    return new QueueProperties();
}
And inject this into the service that uses the properties as a Provider<QueueProperties>, calling queuePropertiesProvider.get().getMaxSize().
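A rough sketch of the consuming side (class and method names are made up for illustration; Provider here is javax.inject.Provider, and Spring's ObjectProvider works the same way):
@Service
public class QueueGate {

    private final Provider<QueueProperties> queuePropertiesProvider;

    public QueueGate(Provider<QueueProperties> queuePropertiesProvider) {
        this.queuePropertiesProvider = queuePropertiesProvider;
    }

    public boolean accept(int currentQueueSize) {
        // each get() triggers the prototype bean definition above, so the
        // properties are re-bound from the current Environment on every call
        return currentQueueSize < queuePropertiesProvider.get().getMaxSize();
    }
}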
This works but has a few side-effects I'm not a fan of:
- The @ConfigurationProperties annotation moves from the class to the bean definition
- A new QueueProperties object is created and bound to values for every request coming in
- Provider might throw on get()
- Invalid values are not detected until the first request comes in
Hacky solution #2:
Don't annotate my properties class with @ConfigurationProperties; instead, inject the Spring Environment at construction time and implement the getters like this:
int getMaxSize() {
    return environment.getProperty("myapp.limits.queue.maxSize", Integer.class, 10);
}
This also works OK in terms of behavior. However:
- This class is not annotated with @ConfigurationProperties (unlike the rest of the properties classes in this large project), which makes it harder to find
- This class does not show up in /admin/configprops
Hacky solution #3:
Schedule a recurring task that uses Environment to update my singleton QueueProperties bean.
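A rough sketch of what I mean by solution #3 (the property key and the interval are illustrative; this also requires @EnableScheduling):
@Component
public class QueuePropertiesRefresher {

    private final Environment environment;
    private final QueueProperties queueProperties;

    public QueuePropertiesRefresher(Environment environment, QueueProperties queueProperties) {
        this.environment = environment;
        this.queueProperties = queueProperties;
    }

    @Scheduled(fixedDelay = 30000)
    public void refresh() {
        // re-read the current value from the Environment and push it into the singleton bean
        queueProperties.setMaxSize(
                environment.getProperty("myapp.limits.queue.maxSize", Integer.class, 10));
    }
}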
Any further ideas/suggestions/pointers?
Is there a canonical/recommended way to do this that does not have the shortcomings of my solutions above?
I know I can inject an Instance of all the beans that match the interface and then choose between them programmatically:
@Inject @Any Instance<PaymentProcessor> paymentProcessorSource;
That means I have to put the selection logic into the client.
Can I, as an alternative, cache the value of the EJB using lexical scoping with a lambda expression? Will the container be able to correctly manage the lifecycle of the EJB in that case, or is this a practice to avoid?
For example, having PaymentProcessorImpl1 and PaymentProcessorImpl2 as two strategies of PaymentProcessor, something like this:
public class PaymentProcessorProducer {

    @Inject
    private PaymentProcessorImpl1 paymentProcessorImpl1;

    @Inject
    private PaymentProcessorImpl2 paymentProcessorImpl2;

    @Produces
    private Function<String, PaymentProcessor> produce() {
        return (strategyValue) -> {
            if ("strategy1".equals(strategyValue)) {
                return paymentProcessorImpl1;
            } else if ("strategy2".equals(strategyValue)) {
                return paymentProcessorImpl2;
            } else {
                throw new IllegalStateException("Unhandled type: " + strategyValue);
            }
        };
    }
}
and then in the client do something like this:
@Inject
Function<String, PaymentProcessor> paymentProcessor;
...
paymentProcessor.apply("strategy1")
Can I, as an alternative, cache the value of the ejb using lexical scoping with lambda expression?
Theoretically, you could do this. Whether it works is easy to try on your own.
Will the container be able to correctly manage the lifecycle of the ejb in that case or is this practice to avoid?
What exactly is an EJB here? The implementation of PaymentProcessor? Note that EJB beans are different from CDI beans; the CDI container does not control the lifecycle of EJB beans, it only provides a wrapper so you can use them as if they were CDI beans.
That being said, the lifecycle is still the same - in your case the producer creates a @Dependent bean, meaning every time you inject Function<String, PaymentProcessor>, the producer will be invoked.
What poses a problem is that you are assuming two or more contexts are active at any given time. The moment you decide to actually apply() the function, the scope within which your implementation(s) exist may or may not be active. If they are both @ApplicationScoped, for instance, you should be alright. If, however, they are @SessionScoped and you happen to time out/invalidate the session before applying the function, then you get into a very weird state.
This is probably why I would rather avoid this approach and go with qualifiers. Or you can introduce a new bean which holds both strategies and has a method with an argument that decides which strategy to use.
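For example, a sketch of that last alternative (the class name PaymentProcessorSelector is made up):
@ApplicationScoped
public class PaymentProcessorSelector {

    @Inject
    private PaymentProcessorImpl1 paymentProcessorImpl1;

    @Inject
    private PaymentProcessorImpl2 paymentProcessorImpl2;

    public PaymentProcessor select(String strategyValue) {
        // the selection logic lives in one place instead of in every client
        if ("strategy1".equals(strategyValue)) {
            return paymentProcessorImpl1;
        }
        if ("strategy2".equals(strategyValue)) {
            return paymentProcessorImpl2;
        }
        throw new IllegalStateException("Unhandled strategy: " + strategyValue);
    }
}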
So I need to set the timeout parameter for the @Transactional annotation. The value should come from a property file, but I'm not able to do that because I get "The value for annotation attribute Transactional.timeout must be a constant expression". Something like this:
@Value("${mytimeout}")
private int myTimeout;

@Transactional(timeout = myTimeout)
public void myMethod() {
}
The only time the timeout attribute can be set from a variable is when that variable is a compile-time constant (static final).
So I was wondering whether it is possible to set the timeout programmatically while still using the @Transactional annotation, or whether there is any other way I can set this attribute. Thanks!
If you need the same timeout for all transactions, you can configure it as the defaultTimeout on your transaction manager.
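For example, a sketch assuming Java configuration and a JPA-backed transaction manager (adjust to whatever transaction manager you actually use; the property name mytimeout is taken from the question):
@Bean
public PlatformTransactionManager transactionManager(EntityManagerFactory emf,
                                                     @Value("${mytimeout}") int defaultTimeout) {
    JpaTransactionManager transactionManager = new JpaTransactionManager(emf);
    // applies to every transaction that does not specify its own timeout
    transactionManager.setDefaultTimeout(defaultTimeout);
    return transactionManager;
}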
Otherwise, you may try to play with a custom AnnotationTransactionAttributeSource and TransactionAnnotationParser, though you'll need to replace <tx:annotation-driven> with manual definitions of the corresponding beans in order to configure a custom attribute source.
Then you can create a custom annotation and make TransactionAnnotationParser generate TransactionDefinition with custom timeout when it sees your annotation.
Otherwise, the easiest way to solve this problem is to give up using @Transactional and use TransactionTemplate instead.
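A sketch of the TransactionTemplate variant, with the timeout coming from the property (the service and method names are illustrative):
@Service
public class MyTransactionalService {

    private final TransactionTemplate transactionTemplate;

    public MyTransactionalService(PlatformTransactionManager transactionManager,
                                  @Value("${mytimeout}") int timeout) {
        this.transactionTemplate = new TransactionTemplate(transactionManager);
        this.transactionTemplate.setTimeout(timeout); // seconds, read from the property file
    }

    public void myMethod() {
        transactionTemplate.execute(status -> {
            // transactional work goes here
            return null;
        });
    }
}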
I'm using JMS with JBoss/WildFly (Java EE 6).
A JMS Queue can easily be injected using:
@Resource(name="java:jboss/exported/jms/queue/queuename")
private Queue myQueue;
Now I'd like to implement a central stateless bean which gets all the queues injected and provides a single String-parameterized method to retrieve the desired Queue in a factory-like way:
@Resource(name="java:jboss/exported/jms/queue/queuename1")
private Queue myQueue1;

@Resource(name="java:jboss/exported/jms/queue/queuename2")
private Queue myQueue2;

public Queue getQueueIWant(String identifier) {
    if ("IdentifyingString".equals(identifier)) {
        return myQueue1;
    }
    if ...
}
Inside another bean this "FactoryBean" gets injected:
@EJB
private MyQueueFactory queueFactory;
and can easily be used:
...
Queue queue = queueFactory.getQueueIWant("AIdentifier");
producer = session.createProducer(queue);
...
The retrieved Queue instance will be used to send messages to the queue (MessageProducer) and to retrieve them (MessageConsumer) in different places (beans). I have tried this implementation and it seems to work.
My question is: does anyone see any problems with this?
Could this lead to instability?
Should I use a JNDI lookup instead?
Is there a better/easier way?
Or is this nice and possible :-) ?
Thanks
Philipp
In your code you have managed to centralize the resource declaration/injection, but at the usage site you still need to address each queue individually, so not much has been gained compared with direct injection.
It seems that you are trying to have something like an array of queues together with resource injection, but resource injection does not really scale for that: each new queue requires a deployment.
In your case I suggest doing manual JNDI lookups in a loop. Then you can put the queues in an array/list for further processing. You could even pass the number of queues as a parameter into the method, so the number of queues can be changed dynamically at run-time:
Pseudo-code (not tested, just for illustration):
InitialContext ic = new InitialContext();
Queue[] qs = new Queue[count];
for (int i = 0; i < count; i++) {
    String name = "queue/queuename" + i;
    qs[i] = (Queue) ic.lookup(name);
}
If the queue name is based on a run-time argument (like an incoming JMS message), a JNDI lookup is the better fit, because static resource injection and dynamic naming conflict somewhat. A dynamic JNDI lookup also scales, since no new deployment is required to support additional queues.