We have multiple consumer applications listening to the same Kafka topic, and the producer sets a message header when sending to the topic so that a specific instance can evaluate the header and process the message, e.g.
@StreamListener(target = ITestSink.CHANNEL_NAME, condition = "headers['franchiseName'] == 'sydney'")
public void fullfillOrder(@Payload TestObj message) {
    log.info("sydney order request received message is {}", message.getName());
}
In Spring Cloud Stream 3.0.0, @StreamListener is deprecated, and I could not find an equivalent of the condition property in the functional model.
Any suggestions?
Though I was not able to find an equivalent for the functional approach either, I do have a suggestion.
The @StreamListener annotation's condition does not change the fact that the application must consume the message, read its header, and filter out non-matching records before passing them to the listener (fullfillOrder()). So it's safe to assume you're consuming every message that hits the topic regardless (via the event receiver that Spring Cloud Stream implements for us under the hood), and the listener only gets executed when the header equals sydney.
If there were a way to configure the event receiver Spring Cloud Stream uses (to discard messages before they hit the listener), I would suggest looking into that. If not, I would resort to filtering out any non-sydney messages before doing any processing. With Spring Cloud Stream's functional approach, that would look something like this:
@Bean
public Consumer<Message<TestObj>> fulfillOrder() {
    return msg -> {
        // read the header and filter out non-sydney messages
        String franchise = msg.getHeaders().get("franchiseName", String.class);
        if (!"sydney".equals(franchise)) {
            return; // discard
        }
        log.info("sydney order request received message is {}", msg.getPayload().getName());
    };
}
or
@Bean
public Consumer<ConsumerRecord<?, TestObj>> fulfillOrder() {
    return record -> {
        // read the raw Kafka header and filter out non-sydney messages
        Header franchise = record.headers().lastHeader("franchiseName");
        if (franchise == null || !"sydney".equals(new String(franchise.value()))) {
            return; // discard
        }
        // process record.value()
    };
}
Other notes:
The code above assumes you're integrating the Kafka client API with Spring Cloud Stream via spring-cloud-stream-binder-kafka. Based on the tags listed, I will note that Spring Cloud Stream has two binders for Kafka: one for the Kafka client library and one for the Kafka Streams library.
Without considering Spring Cloud / frameworks, the high-level DSL in Kafka Streams doesn't give you access to headers, but the low-level Processor API does. From the example, it seems you're leveraging the client binder and not spring-cloud-stream-binder-kafka-streams (the Kafka Streams binder). I haven't seen an implementation of Spring Cloud Stream plus the Kafka Streams binder using the low-level Processor API, so I can't tell whether that was the aim.
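For reference, if the Kafka Streams binder were in fact the aim, the low-level Processor API does expose headers. A minimal sketch, assuming Kafka Streams 2.7+ (for the org.apache.kafka.streams.processor.api package) and reusing the question's TestObj type and franchiseName header; the class name is illustrative:
import org.apache.kafka.common.header.Header;
import org.apache.kafka.streams.processor.api.Processor;
import org.apache.kafka.streams.processor.api.ProcessorContext;
import org.apache.kafka.streams.processor.api.Record;

public class FranchiseFilterProcessor implements Processor<String, TestObj, String, TestObj> {

    private ProcessorContext<String, TestObj> context;

    @Override
    public void init(ProcessorContext<String, TestObj> context) {
        this.context = context;
    }

    @Override
    public void process(Record<String, TestObj> record) {
        // only forward records whose franchiseName header matches
        Header franchise = record.headers().lastHeader("franchiseName");
        if (franchise != null && "sydney".equals(new String(franchise.value()))) {
            context.forward(record);
        }
    }
}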
I want to subscribe to a PubSub channel to get every message's payload and process it.
The thing is, I saw examples with IntegrationFlow, but if I understand correctly, an IntegrationFlow is used to get messages from one channel and send them to another.
I don't want to send to another channel; I just want to get the payload and process it.
How can I do this?
All the examples I found use XML files for the configuration, but the project was not created that way.
I saw examples with IntegrationFlow, but if I understand correctly, an IntegrationFlow is used to get messages from one channel and send them to another.
I think that is not correct: you don't have to send anywhere; you can just handle the messages.
You can do something like this in a configuration class:
@Bean
public IntegrationFlow myFlow() {
    return IntegrationFlows.from(...)
            .handle((GenericHandler<YourMessage>) (message, headers) -> {
                // process the payload here; returning null ends the flow
                return null;
            })
            .get();
}
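A more complete, self-contained sketch, assuming a hypothetical channel named "pubSubChannel" that your subscription sends into (the channel name and payload type are illustrative):
@Configuration
public class ProcessOnlyFlowConfig {

    @Bean
    public IntegrationFlow processOnlyFlow() {
        return IntegrationFlows.from("pubSubChannel") // hypothetical channel name
                .handle((GenericHandler<String>) (payload, headers) -> {
                    System.out.println("processing payload: " + payload);
                    return null; // a null reply ends the flow; nothing is sent to another channel
                })
                .get();
    }
}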
I want to use Spring Integration to expose a simple web service that pushes the incoming message into ActiveMQ and responds immediately. My go-to solution was a MarshallingWebServiceInboundGateway connected to a Jms.outboundAdapter via an IntegrationFlow; the Gateway and IntegrationFlow snippets are below. The problem is that the adapter does not produce a reply (duh), which the Gateway expects. The response I get back from the service is an empty 202, after a delay of about 1500 ms caused by the reply timeout I see in the TRACE logs:
"2020-04-14 17:17:50.101 TRACE 26524 --- [nio-8080-exec-6] o.s.integration.core.MessagingTemplate : Failed to receive message from channel 'org.springframework.messaging.core.GenericMessagingTemplate$TemporaryReplyChannel#518ffd27' within timeout: 1000"
There are no hard exceptions anywhere. The other problem is that I cannot generate the response myself: I can't add anything to the IntegrationFlow after the .handle() with the adapter.
Is there any other way I can fulfill this scenario?
How, if at all, can I generate and return a response if there is no better approach?
Most likely the proper way would be to use gateways on both ends, but that is not possible here: I cannot delay the response until the message in the queue gets consumed and processed.
@Bean
public MarshallingWebServiceInboundGateway greetingWebServiceInboundGateway() {
    MarshallingWebServiceInboundGateway inboundGateway = new MarshallingWebServiceInboundGateway(
            jaxb2Marshaller()
    );
    inboundGateway.setRequestChannelName("greetingAsync.input");
    inboundGateway.setLoggingEnabled(true);
    return inboundGateway;
}

@Bean
public IntegrationFlow greetingAsync() {
    return f -> f
            .log(LoggingHandler.Level.INFO)
            .handle(Jms.outboundAdapter(this.jmsConnectionFactory)
                    .configureJmsTemplate(c -> {
                        c.jmsMessageConverter(new MarshallingMessageConverter(jaxb2Marshaller()));
                    })
                    .destination(JmsConfig.HELLO_WORLD_QUEUE));
}
The logic and assumptions are fully correct: you can't return a reply after a one-way handle() such as that Jms.outboundAdapter().
But your problem is that you're missing one of the first-class citizens in Spring Integration: the MessageChannel. It is important to understand that even in a flow like yours there are channels between the endpoints (DSL methods) - implicit DirectChannels, as in your case, or explicit ones when you use channel() in between, where you can place any possible implementation: https://docs.spring.io/spring-integration/docs/5.3.0.M4/reference/html/dsl.html#java-dsl-channels
One of the crucial channel implementations is the PublishSubscribeChannel (a topic in JMS terms), with which you can send the same message to several subscribed endpoints: https://docs.spring.io/spring-integration/docs/5.3.0.M4/reference/html/core.html#channel-implementations-publishsubscribechannel
In your case the first subscriber should be your existing one-way Jms.outboundAdapter(), and the other one something that generates the response and sends it to the replyChannel header.
For this purpose the Java DSL provides a nice hook via sub-flow configuration: https://docs.spring.io/spring-integration/docs/5.3.0.M4/reference/html/dsl.html#java-dsl-subflows
So, a sample publish-subscribe flow could look like this:
.publishSubscribeChannel(c -> c
        .subscribe(sf -> sf
                .handle(Jms.outboundAdapter(this.jmsConnectionFactory))))
.handle([PRODUCE_RESPONSE])
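Putting this together with the flow from the question, a minimal sketch might look like the following; buildResponse() is a hypothetical placeholder for whatever reply the gateway should return:
@Bean
public IntegrationFlow greetingAsync() {
    return f -> f
            .log(LoggingHandler.Level.INFO)
            .publishSubscribeChannel(c -> c
                    // first subscriber: the existing one-way JMS adapter
                    .subscribe(sf -> sf
                            .handle(Jms.outboundAdapter(this.jmsConnectionFactory)
                                    .configureJmsTemplate(t -> t.jmsMessageConverter(
                                            new MarshallingMessageConverter(jaxb2Marshaller())))
                                    .destination(JmsConfig.HELLO_WORLD_QUEUE))))
            // the main flow continues as the last subscriber: build the gateway's reply
            .handle((payload, headers) -> buildResponse(payload)); // buildResponse() is hypothetical
}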
I'm trying to create a Spring Cloud Stream Source bean inside a Spring Boot application that simply sends the result of a method to a stream (the underlying Kafka topic is bound to the stream).
Most of the stream samples I've seen use the @InboundChannelAdapter annotation to send data to the stream using a poller. But I don't want to use a poller. I've tried setting the poller to an empty array, but the other problem is that when using @InboundChannelAdapter you are unable to have any method parameters.
The overall concept of what I am trying to do is read from an inbound stream, do some async processing, then post the result to an outbound stream. So using a processor doesn't seem to be an option either. I am using @StreamListener with a Sink channel to read the inbound stream, and that works.
Here is some code I've been trying, but it doesn't work at all. I was hoping it would be this simple, because my Sink was, but maybe it isn't. I'm looking for someone to point me to an example of a source that isn't a processor (i.e. doesn't require listening on an inbound channel) and doesn't use @InboundChannelAdapter, or to give me some design tips to accomplish what I need in a different way. Thanks!
@EnableBinding(Source.class)
public class JobForwarder {

    @ServiceActivator(outputChannel = Source.OUTPUT)
    @SendTo(Source.OUTPUT)
    public String forwardJob(String message) {
        log.info(String.format("Forwarding a job message [%s] to queue [%s]", message, Source.OUTPUT));
        return message;
    }
}
Your original requirement can be achieved through the steps below.
Create your custom bound interface (you can use the default @EnableBinding(Source.class) as well):
public interface CustomSource {

    String OUTPUT = "customoutput";

    @Output(CustomSource.OUTPUT)
    MessageChannel output();
}
Inject your bound channel
@Component
@EnableBinding(CustomSource.class)
public class CustomOutputEventSource {

    @Autowired
    private CustomSource customSource;

    public void sendMessage(String message) {
        customSource.output().send(MessageBuilder.withPayload(message).build());
    }
}
Test it
@RunWith(SpringRunner.class)
@SpringBootTest
public class CustomOutputEventSourceTest {

    @Autowired
    CustomOutputEventSource output;

    @Test
    public void sendMessage() {
        output.sendMessage("Test message from JUnit test");
    }
}
So if you don't want to use a Poller, what causes the forwardJob() method to be called?
You can't just call the method and expect the result to go to the output channel.
With your current configuration, you need an inputChannel on the service activator that receives your inbound message (and something to send a message to that channel). It doesn't have to be bound to a transport; it can be a simple MessageChannel @Bean.
Or, you could use a @Publisher to publish the result of the method invocation (as well as it being returned to the caller) - docs here.
@Publisher(channel = Source.OUTPUT)
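Expanding that hint into a minimal sketch, assuming @EnablePublisher is present on a @Configuration class so the annotation is processed; everything else mirrors the question's JobForwarder:
@Component
@EnableBinding(Source.class)
public class JobForwarder {

    // the return value is published to Source.OUTPUT and also returned to the caller
    @Publisher(channel = Source.OUTPUT)
    public String forwardJob(String message) {
        return message;
    }
}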
Thanks for the input. It took me a while to get back to this problem. I did try reading the documentation for @Publisher. It looked to be exactly what I needed, but I just couldn't get the proper beans initialized to wire it up.
To answer your question: the forwardJob() method is called after some async processing of the input.
Eventually I just implemented this using the spring-kafka library directly, which was much more explicit and easier to get going. I think we are going to stick with Kafka as the only channel binding, so we'll stick with that library.
However, we did eventually get the spring-cloud-stream library working quite simply as well. Here is the code for a single source without a poller.
@Component
@EnableBinding(Source.class)
public class JobForwarder {

    private Source source;

    @Autowired
    public JobForwarder(Source source) {
        this.source = source;
    }

    public void forwardScheduledJob(String message) {
        log.info(String.format("Forwarding a job message [%s] to queue [%s]", message, Source.OUTPUT));
        source.output().send(MessageBuilder.withPayload(message).build());
    }
}
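As a usage illustration (a hypothetical caller, not part of the original answer), the forwarder can simply be invoked once the async processing completes:
import java.util.concurrent.CompletableFuture;

@Component
public class JobProcessor {

    private final JobForwarder forwarder;

    @Autowired
    public JobProcessor(JobForwarder forwarder) {
        this.forwarder = forwarder;
    }

    public void onJobReceived(String job) {
        // process asynchronously, then forward the result to the output binding
        CompletableFuture.supplyAsync(() -> process(job))
                .thenAccept(forwarder::forwardScheduledJob);
    }

    private String process(String job) {
        return job.toUpperCase(); // placeholder for the real processing
    }
}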
Note that I'd like multiple message listeners to handle successive messages from the topic concurrently. In addition, I'd like each message listener to operate transactionally, so that a processing failure in a given listener leaves that listener's message on the topic.
Spring's DefaultMessageListenerContainer seems to support concurrency for JMS queues only.
Do I need to instantiate multiple DefaultMessageListenerContainers?
If time flows down the vertical axis:
ListenerA reads msg 1 ListenerB reads msg 2 ListenerC reads msg 3
ListenerA reads msg 4 ListenerB reads msg 5 ListenerC reads msg 6
ListenerA reads msg 7 ListenerB reads msg 8 ListenerC reads msg 9
ListenerA reads msg 10 ListenerB reads msg 11 ListenerC reads msg 12
...
UPDATE:
Thanks for your feedback @T.Rob and @skaffman.
What I ended up doing is creating multiple DefaultMessageListenerContainers with concurrency=1 and then putting logic in the message listener so that only one thread would process a given message id.
You don't want multiple DefaultMessageListenerContainer instances, no, but you do need to configure the DefaultMessageListenerContainer to be concurrent, using the concurrentConsumers property:
Specify the number of concurrent consumers to create. Default is 1. Specifying a higher value for this setting will increase the standard level of scheduled concurrent consumers at runtime: This is effectively the minimum number of concurrent consumers which will be scheduled at any given time. This is a static setting; for dynamic scaling, consider specifying the "maxConcurrentConsumers" setting instead.
Raising the number of concurrent consumers is recommendable in order to scale the consumption of messages coming in from a queue. However, note that any ordering guarantees are lost once multiple consumers are registered. In general, stick with 1 consumer for low-volume queues.
However, there's a big warning at the bottom:
Do not raise the number of concurrent consumers for a topic. This would lead to concurrent consumption of the same message, which is hardly ever desirable.
This is interesting, and makes sense when you think about it. The same would occur if you had multiple DefaultMessageListenerContainer instances.
I think perhaps you need to rethink your design, although I'm not sure what I'd suggest. Concurrent consumption of pub/sub messages seems like a perfectly reasonable thing to do, but how to avoid getting the same message delivered to all of your consumers at the same time?
At least in ActiveMQ, what you want is fully supported; it is called a Virtual Topic.
The concept is:
You create a virtual topic (simply create a topic whose name uses the prefix VirtualTopic.), e.g. VirtualTopic.Color.
Create a consumer subscribing to this virtual topic with a queue name matching the pattern Consumer.<clientName>.VirtualTopic.<topicName>, e.g. Consumer.client1.VirtualTopic.Color. ActiveMQ will create a queue with that name, and that queue will subscribe to VirtualTopic.Color; every message published to this virtual topic will then be delivered to the client1 queue. Note that it works like RabbitMQ exchanges.
You are done: now you can consume the client1 queue like any other queue, with many consumers, a DLQ, a customized redelivery policy, etc.
At this point you've probably understood that you can create client2, client3, and as many subscribers as you want; all of them will receive a copy of every message published to VirtualTopic.Color.
Here is the code:
@Component
public class ColorReceiver {

    private static final Logger LOGGER = LoggerFactory.getLogger(MailReceiver.class);

    @Autowired
    private JmsTemplate jmsTemplate;

    // simply generating data to the topic
    long id = 0;

    @Scheduled(fixedDelay = 500)
    public void postMail() throws JMSException, IOException {
        final Color colorName = new Color[]{Color.BLUE, Color.RED, Color.WHITE}[new Random().nextInt(3)];
        final Color color = new Color(++id, colorName.getName());
        final ActiveMQObjectMessage message = new ActiveMQObjectMessage();
        message.setObject(color);
        message.setProperty("color", color.getName());
        LOGGER.info("status=color-post, color={}", color);
        jmsTemplate.convertAndSend(new ActiveMQTopic("VirtualTopic.color"), message);
    }

    /**
     * Listens to all color messages
     */
    @JmsListener(
        destination = "Consumer.client1.VirtualTopic.color", containerFactory = "colorContainer",
        selector = "color <> 'RED'"
    )
    public void genericReceiveMessage(Color color) throws InterruptedException {
        LOGGER.info("status=GEN-color-receiver, color={}", color);
    }

    /**
     * Listens only to red color messages
     *
     * The destination clientId does not necessarily have to exist (it can be a fancy name); the only requirement is
     * that the containers' clientIds are different from each other
     */
    @JmsListener(
        // destination = "Consumer.redColorContainer.VirtualTopic.color",
        destination = "Consumer.client1.VirtualTopic.color",
        containerFactory = "redColorContainer", selector = "color='RED'"
    )
    public void receiveMessage(ObjectMessage message) throws InterruptedException, JMSException {
        LOGGER.info("status=RED-color-receiver, color={}", message.getObject());
    }

    /**
     * Listens to all color messages
     */
    @JmsListener(
        destination = "Consumer.client2.VirtualTopic.color", containerFactory = "colorContainer"
    )
    public void genericReceiveMessage2(Color color) throws InterruptedException {
        LOGGER.info("status=GEN-color-receiver-2, color={}", color);
    }
}
@SpringBootApplication
@EnableJms
@EnableScheduling
@Configuration
public class Config {

    /**
     * Each @JmsListener declaration needs a different containerFactory because ActiveMQ requires different
     * clientIds per consumer pool (as with the two @JmsListeners above, or two application instances)
     */
    @Bean
    public JmsListenerContainerFactory<?> colorContainer(ActiveMQConnectionFactory connectionFactory,
                                                         DefaultJmsListenerContainerFactoryConfigurer configurer) {
        final DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory);
        factory.setConcurrency("1-5");
        configurer.configure(factory, connectionFactory);
        // factory.setClientId("aId..."); let Spring generate a random ID
        return factory;
    }

    @Bean
    public JmsListenerContainerFactory<?> redColorContainer(ActiveMQConnectionFactory connectionFactory,
                                                            DefaultJmsListenerContainerFactoryConfigurer configurer) {
        // necessary when posting serializable objects (you can also set it in application.properties)
        connectionFactory.setTrustedPackages(Arrays.asList(Color.class.getPackage().getName()));
        final DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory);
        factory.setConcurrency("1-2");
        configurer.configure(factory, connectionFactory);
        return factory;
    }
}
public class Color implements Serializable {

    public static final Color WHITE = new Color("WHITE");
    public static final Color BLUE = new Color("BLUE");
    public static final Color RED = new Color("RED");

    private String name;
    private long id;

    // CONSTRUCTORS, GETTERS AND SETTERS
}
Multiple consumers are allowed on the same topic subscription in JMS 2.0, while this was not the case with JMS 1.1. Please refer to:
https://www.oracle.com/technetwork/articles/java/jms2messaging-1954190.html
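A minimal sketch of a JMS 2.0 shared subscription (the topic and subscription names are illustrative): consumers created with the same subscription name divide the topic's messages between them instead of each receiving a copy.
import javax.jms.ConnectionFactory;
import javax.jms.JMSConsumer;
import javax.jms.JMSContext;
import javax.jms.Message;
import javax.jms.Topic;

public class SharedSubscriptionExample {

    public static void consumeOne(ConnectionFactory connectionFactory) {
        try (JMSContext context = connectionFactory.createContext()) {
            Topic topic = context.createTopic("MYTOPIC"); // illustrative topic name
            // every consumer created with the subscription name "shared-sub"
            // shares the message stream with the others
            JMSConsumer consumer = context.createSharedConsumer(topic, "shared-sub");
            Message message = consumer.receive(5000); // wait up to 5s for the next message
            System.out.println("received: " + message);
        }
    }
}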
This is one of those occasions where the differences in transport providers bubble up through the abstraction of JMS. JMS wants to provide a copy of the message for each subscriber on a topic, but the behavior you want is really that of a queue. I suspect there are other requirements driving this to a pub/sub solution which were not described - for example, other things needing to subscribe to the same topic independent of your app.
If I were to do this in WebSphere MQ the solution would be to create an administrative subscription which would result in a single copy of each message on the given topic to be placed onto a queue. Then your multiple subscribers could compete for messages on that queue. This way your app could have multiple threads among which the messages are distributed, and at the same time other subscribers independent of this application could dynamically (un)subscribe to the same topic.
Unfortunately, there's no generic JMS-portable way of doing this. You are dependent on the transport provider's implementation to a great degree. The only one of these I can speak to is WebSphere MQ but I'm sure other transports support this in one way or another and to varying degrees if you are creative.
Here's a possibility:
1) Create only one DMLC configured with the bean and method to handle the incoming message. Set its concurrency to 1.
2) Configure a task executor with its number of threads equal to the concurrency you desire. Create an object pool of objects which are actually supposed to process a message. Give a reference to the task executor and the object pool to the bean you configured in step 1. The object pool is useful if the actual message-processing bean is not thread-safe.
3) For an incoming message, the bean in the DMLC creates a custom Runnable, points it at the message and the object pool, and hands it to the task executor.
4) The run method of the Runnable gets a bean from the object pool and calls its process method with the given message.
Step 4 can be managed with a proxy and the object pool to make it easier.
I haven't tried this solution yet, but it seems to fit the bill. Note that this solution is not as robust as an EJB MDB: Spring, for example, will not discard an object from the pool if it throws a RuntimeException.
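A minimal sketch of that idea, with hypothetical names throughout (MessageProcessor is a stand-in for the possibly non-thread-safe processing bean, and a BlockingQueue serves as a simple object pool):
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import javax.jms.Message;
import javax.jms.MessageListener;

public class DispatchingListener implements MessageListener {

    private static final int CONCURRENCY = 4;

    private final ExecutorService taskExecutor = Executors.newFixedThreadPool(CONCURRENCY);
    private final BlockingQueue<MessageProcessor> pool = new LinkedBlockingQueue<>();

    public DispatchingListener() {
        for (int i = 0; i < CONCURRENCY; i++) {
            pool.add(new MessageProcessor()); // hypothetical, possibly non-thread-safe worker
        }
    }

    @Override
    public void onMessage(Message message) {
        // step 3: hand the message off to the task executor
        taskExecutor.submit(() -> {
            MessageProcessor processor = null;
            try {
                processor = pool.take();    // step 4: borrow a worker from the pool
                processor.process(message); // actual processing happens here
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } finally {
                if (processor != null) {
                    pool.add(processor);    // return the worker to the pool
                }
            }
        });
    }
}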
Creating a custom task executor seemingly solved the issue for me, without duplicate processing:
@Configuration
class BeanConfig {

    @Bean(destroyMethod = "shutdown")
    public ThreadPoolTaskExecutor topicExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setAllowCoreThreadTimeOut(true);
        executor.setKeepAliveSeconds(300);
        executor.setCorePoolSize(4);
        executor.setQueueCapacity(0);
        executor.setThreadNamePrefix("TOPIC-");
        return executor;
    }

    @Bean
    JmsListenerContainerFactory<?> topicListenerFactory(ConnectionFactory connectionFactory,
            DefaultJmsListenerContainerFactoryConfigurer configurer,
            @Qualifier("topicExecutor") Executor topicExecutor) {
        DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        configurer.configure(factory, connectionFactory);
        factory.setPubSubDomain(true); // set after configure(), which would otherwise override it
        factory.setSessionTransacted(false);
        factory.setSubscriptionDurable(false);
        factory.setTaskExecutor(topicExecutor);
        return factory;
    }
}
class MyBean {

    @JmsListener(destination = "MYTOPIC", containerFactory = "topicListenerFactory", concurrency = "1")
    public void receiveTopicMessage(SomeTopicMessage message) {}
}
I've run into the same problem. I'm currently investigating RabbitMQ, which seems to offer a perfect solution in a design pattern they call "work queues." More info here: http://www.rabbitmq.com/tutorials/tutorial-two-java.html
If you're not totally tied to JMS you might look into this. There might also be a JMS to AMQP bridge, but that might start to look hacky.
I'm having some fun (read: difficulties) getting RabbitMQ installed and running on my Mac but think I'm close to having it working, I will post back if I'm able to solve this.
In the server.xml configuration, the maxSessions attribute lets you specify the number of sessions you want.
Came across this question. My configuration is:
Create a bean with id="DefaultListenerContainer", add property name="concurrentConsumers" value="10" and property name="maxConcurrentConsumers" value="50".
It works fine so far. I printed the thread ID and verified that multiple threads do get created and also reused.
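For reference, a minimal Java-config sketch of that bean definition (the connection factory, destination, and listener names are illustrative):
@Bean
public DefaultMessageListenerContainer defaultListenerContainer(ConnectionFactory connectionFactory,
                                                                MessageListener myListener) {
    DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
    container.setConnectionFactory(connectionFactory);
    container.setDestinationName("MYQUEUE");  // illustrative destination
    container.setMessageListener(myListener);
    container.setConcurrentConsumers(10);     // minimum scheduled consumers
    container.setMaxConcurrentConsumers(50);  // upper bound for dynamic scaling
    return container;
}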