We are trying to publish and subscribe to MQTT using SmallRye Reactive Messaging. We managed to publish a message to a specific topic/channel with the following simple code:
import io.smallrye.mutiny.Multi;
import org.eclipse.microprofile.reactive.messaging.Outgoing;
import javax.enterprise.context.ApplicationScoped;
import java.time.Duration;

@ApplicationScoped
public class Publish {

    @Outgoing("pao")
    public Multi<String> generate() {
        return Multi.createFrom().ticks().every(Duration.ofSeconds(1))
                .map(x -> "A Message in here");
    }
}
What we want is to call the generate() method on demand with a dynamic topic that the user defines. That was our problem, but then we found some classes in the project's GitHub repository, in the package io.smallrye.reactive.messaging.mqtt.
For example, we found a class that appears to make a publish call to an MQTT broker (a Mosquitto server is up). With this statement:
SendingMqttMessage<String> message = new SendingMqttMessage<String>("myTopic", "A message in here", 0, false);
we get a red underline under SendingMqttMessage<String> saying: 'SendingMqttMessage(java.lang.String, java.lang.String, io.netty.handler.codec.mqtt.MqttQoS, boolean)' is not public in 'io.smallrye.reactive.messaging.mqtt.SendingMqttMessage'. Cannot be accessed from outside package.
UPDATE (publish done)
We finally made a publish request to the MQTT broker (a Mosquitto server), with a dynamic topic configured by the user. As it turns out, the SendingMqttMessage class above was not supposed to be used directly at all. We also found out that we needed an Emitter to make a publish request with a dynamic topic:
@Inject
@Channel("panatha")
Emitter<String> emitter;

@POST
@Consumes(MediaType.APPLICATION_JSON)
@Produces(MediaType.APPLICATION_JSON)
public Response createUser(Device device) {
    System.out.println("New Publish request: message->" + device.getMessage() + " & topic->" + device.getTopic());
    emitter.send(MqttMessage.of(device.getTopic(), device.getMessage()));
    return Response.ok().status(Response.Status.CREATED).build();
}
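For reference, the "panatha" channel still has to be wired to the MQTT connector in configuration. A minimal sketch of what that could look like in application.properties, assuming a local Mosquitto broker on the default port (verify the keys against your connector version):

# wire the channel to the smallrye-mqtt connector (broker location assumed)
mp.messaging.outgoing.panatha.connector=smallrye-mqtt
mp.messaging.outgoing.panatha.host=localhost
mp.messaging.outgoing.panatha.port=1883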
Now we need to find out how to subscribe to a topic dynamically.
First, to get us on the same page:
Reactive Messaging does not work with topics, but with channels.
That is important to note, because a channel can only be read from or written to, not both. So if you want to do both, you need to configure two channels pointing at the same topic, one incoming and one outgoing.
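As a sketch, such a pairing could look like this in application.properties (the channel names data-in and data-out and the topic are placeholders):

# outgoing channel writing to the topic
mp.messaging.outgoing.data-out.connector=smallrye-mqtt
mp.messaging.outgoing.data-out.topic=my/topic
# incoming channel reading from the same topic
mp.messaging.incoming.data-in.connector=smallrye-mqtt
mp.messaging.incoming.data-in.topic=my/topic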
To answer your question:
You made a pretty good start with Emitters, but you still lack the dynamic nature you'd like.
In your example, you acquired that Emitter through CDI.
That is all we need to make this dynamic, since we can dynamically select beans at runtime using CDI, like this:
Sending Messages
private Emitter<byte[]> dynamicEmitter(String topic) {
    return CDI.current().select(new TypeLiteral<Emitter<byte[]>>() {}, new ChannelAnnotation(topic)).get();
}
Please also note that I am creating an Emitter of type byte[], as this is the only type currently supported by the smallrye-mqtt connector (version 3.4.0) according to its documentation.
Receiving Messages
To read messages from a reactive messaging channel, you can use the counterpart of the Emitter, which is the Publisher.
It can be used analogously:
private Publisher<byte[]> dynamicReceiver(String topic) {
    return CDI.current().select(new TypeLiteral<Publisher<byte[]>>() {}, new ChannelAnnotation(topic)).get();
}
You can then process this data in any way you like.
As a demo, I hung it on a simple REST endpoint:
@GET
@Produces(MediaType.SERVER_SENT_EVENTS)
public Multi<String> stream(@QueryParam("topic") String topic) {
    return Multi.createFrom().publisher(dynamicReceiver(topic)).onItem().transform(String::new);
}

@GET
@Path("/publish")
public boolean publish(@QueryParam("msg") String msg, @QueryParam("topic") String topic) {
    dynamicEmitter(topic).send(msg.getBytes());
    return true;
}
One more thing
When creating this solution I hit a few pitfalls you should know about:
Quarkus removes any CDI beans that are "unused". So if you want to inject them dynamically, you need to exclude them from removal, or turn off that feature (a configuration sketch follows below).
All channels injected that way must be configured; otherwise the injection will fail.
For some reason (even with removal completely disabled), I was unable to inject Emitters dynamically unless they were also injected statically somewhere else.
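As a sketch of working around the first pitfall, assuming current Quarkus configuration keys (check them against your Quarkus version):

# keep ArC from removing beans that are only injected dynamically
quarkus.arc.remove-unused-beans=false
# every dynamically injected channel still needs a static configuration entry, e.g.:
mp.messaging.outgoing.some-channel.connector=smallrye-mqtt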
Related
I'm implementing Apache Kafka in Quarkus. I've followed the guide provided by Quarkus, but I'm failing to understand how to implement any handling. Below is the code I currently have:
fun send(classificationFinishedEvent: ClassificationFinishedEvent) =
    emitter.send(Message.of(classificationFinishedEvent).withAck {
        CompletableFuture.completedFuture(null)
    }.withNack {
        CompletableFuture.completedFuture(null)
    })
It's about identical to the code from the guide. Now, as I understand it, the .withAck callback is invoked when the message is acked (acknowledged) by the Kafka broker I'm sending the message to. The same goes for the .withNack callback, but only when the message is nacked (not acknowledged).
Both of these callbacks need to return a CompletionStage. So how would it be possible to return an error/problem object to the handler that is calling this send method?
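One possible pattern (a sketch in Java, not from the original post) is to complete a separate future inside the callbacks and hand that future back to the caller:

public CompletionStage<Void> send(ClassificationFinishedEvent event) {
    CompletableFuture<Void> result = new CompletableFuture<>();
    emitter.send(Message.of(event)
            .withAck(() -> {
                // broker acknowledged: signal success to the caller
                result.complete(null);
                return CompletableFuture.completedFuture(null);
            })
            .withNack(reason -> {
                // message was nacked: surface the failure to the caller
                result.completeExceptionally(reason);
                return CompletableFuture.completedFuture(null);
            }));
    return result;
}

The caller can then attach its own error handling to the returned CompletionStage, or block on it if a synchronous result is needed.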
We have multiple consumer applications listening to the same Kafka topic, and a producer sets a message header when sending a message to the topic, so that a specific instance can evaluate the header and process the message, e.g.:
@StreamListener(target = ITestSink.CHANNEL_NAME, condition = "headers['franchiseName'] == 'sydney'")
public void fullfillOrder(@Payload TestObj message) {
    log.info("sydney order request received message is {}", message.getName());
}
In Spring Cloud Stream 3.0.0 @StreamListener is deprecated, and I could not find the equivalent of the condition property in the functional style.
Any suggestions?
Though I was not able to find the equivalent for the functional approach either, I do have a suggestion.
The @StreamListener annotation's condition does not change the fact that the application must consume the message, read its header, and filter out specific records before passing it to the listener (fullfillOrder()). So it's safe to assume you're consuming every message that hits the topic regardless (via the event receiver that Spring Cloud has implemented for us under the hood), but the listener only gets executed when header == sydney.
If there is a way to configure the event receiver that Spring Cloud uses (to discard messages before they hit the listener), I would suggest looking into that. If not, I would resort to filtering out any non-sydney messages before doing any processing. If you're familiar with Spring Cloud's functional approach, it would look something like this:
@Bean
public Consumer<Message<TestObj>> fulfillOrder() {
    return msg -> {
        // get a header: msg.getHeaders().get(key, valueType)
        // filter out non-matching messages here
    };
}
or
@Bean
public Consumer<ConsumerRecord<?, TestObj>> fulfillOrder() {
    return msg -> {
        // msg.headers().lastHeader("franchiseName").value() -> filter them out
    };
}
Other:
^ My code above assumes you're integrating the kafka-client API with Spring Cloud Stream via spring-cloud-stream-binder-kafka. Based on the tags listed, I will note that Spring Cloud Stream has two kinds of Kafka binders: one for the Kafka client library and one for the Kafka Streams library.
Setting Spring Cloud and other frameworks aside, the high-level DSL in Kafka Streams doesn't give you access to headers, but the low-level Processor API does. From the example, it seems you're using the client binder rather than spring-cloud-stream-binder-kafka-streams (the Kafka Streams binder). I haven't seen an implementation of Spring Cloud Stream plus the Kafka Streams binder using the low-level Processor API, so I can't tell whether that was the aim.
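For illustration only, a plain Kafka Streams sketch (no Spring) of reading a header via the low-level Processor API; the topic name "orders" is made up, and the classic Processor interface shown here has since been superseded, so check it against your Kafka Streams version:

StreamsBuilder builder = new StreamsBuilder();
builder.stream("orders", Consumed.with(Serdes.String(), Serdes.String()))
        .process(() -> new Processor<String, String>() {
            private ProcessorContext context;

            @Override
            public void init(ProcessorContext context) {
                // the context exposes record metadata, including headers
                this.context = context;
            }

            @Override
            public void process(String key, String value) {
                Header franchise = context.headers().lastHeader("franchiseName");
                if (franchise != null && "sydney".equals(new String(franchise.value()))) {
                    // only matching records reach the business logic
                }
            }

            @Override
            public void close() { }
        });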
I want to use Spring Integration to expose a simple web service that pushes the incoming message into ActiveMQ and responds immediately. My go-to solution was a MarshallingWebServiceInboundGateway connected to a Jms.outboundAdapter within an IntegrationFlow; the Gateway and IntegrationFlow snippets are below. The problem is that the adapter does not provide a response (duh), which the Gateway expects. The response I get back from the service is an empty 202 with a delay of about 1500 ms, caused by a reply timeout I see in the TRACE logs:
"2020-04-14 17:17:50.101 TRACE 26524 --- [nio-8080-exec-6] o.s.integration.core.MessagingTemplate : Failed to receive message from channel 'org.springframework.messaging.core.GenericMessagingTemplate$TemporaryReplyChannel#518ffd27' within timeout: 1000"
No hard exceptions anywhere. The other problem is that I cannot generate the response myself: I can't add anything to the IntegrationFlow after the .handle() with the adapter.
Is there any other way I can fulfill this scenario?
How, if at all possible, can I generate and return a response, given there is no better approach?
Most likely the proper way would be to use gateways on both ends, but that is not possible here: I cannot delay the response until the message in the queue gets consumed and processed.
@Bean
public MarshallingWebServiceInboundGateway greetingWebServiceInboundGateway() {
    MarshallingWebServiceInboundGateway inboundGateway = new MarshallingWebServiceInboundGateway(
            jaxb2Marshaller()
    );
    inboundGateway.setRequestChannelName("greetingAsync.input");
    inboundGateway.setLoggingEnabled(true);
    return inboundGateway;
}

@Bean
public IntegrationFlow greetingAsync() {
    return f -> f
            .log(LoggingHandler.Level.INFO)
            .handle(Jms.outboundAdapter(this.jmsConnectionFactory)
                    .configureJmsTemplate(c -> {
                        c.jmsMessageConverter(new MarshallingMessageConverter(jaxb2Marshaller()));
                    })
                    .destination(JmsConfig.HELLO_WORLD_QUEUE));
}
Your logic and assumptions are fully correct: you can't return a reply after a one-way handle(), and Jms.outboundAdapter() is exactly such a one-way endpoint.
But your problem is that you are completely missing one of the first-class citizens in Spring Integration - the MessageChannel. It is important to understand that even in a flow like yours there are channels between the endpoints (DSL methods) - implicit ones (DirectChannel), as in your case, or explicit ones: when you use channel() in between, you can place any possible implementation there: https://docs.spring.io/spring-integration/docs/5.3.0.M4/reference/html/dsl.html#java-dsl-channels
One crucial channel implementation is the PublishSubscribeChannel (a topic in JMS terms), with which you can send the same message to several subscribed endpoints: https://docs.spring.io/spring-integration/docs/5.3.0.M4/reference/html/core.html#channel-implementations-publishsubscribechannel
In your case the first subscriber should be your existing one-way Jms.outboundAdapter(), and the other should be whatever is going to generate the response and send it to the replyChannel header.
For this purpose the Java DSL provides a nice hook via sub-flow configuration: https://docs.spring.io/spring-integration/docs/5.3.0.M4/reference/html/dsl.html#java-dsl-subflows
So a publish-subscribe sample could look like this:
.publishSubscribeChannel(c -> c
        .subscribe(sf -> sf
                .handle(Jms.outboundAdapter(this.jmsConnectionFactory))))
.handle([PRODUCE_RESPONSE])
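Put into the context of the greetingAsync flow above, this might look roughly like the following sketch; buildResponse() is a hypothetical method standing in for [PRODUCE_RESPONSE]:

@Bean
public IntegrationFlow greetingAsync() {
    return f -> f
            .log(LoggingHandler.Level.INFO)
            .publishSubscribeChannel(c -> c
                    // first subscriber: the existing one-way send into JMS
                    .subscribe(sf -> sf
                            .handle(Jms.outboundAdapter(this.jmsConnectionFactory)
                                    .configureJmsTemplate(t -> t.jmsMessageConverter(
                                            new MarshallingMessageConverter(jaxb2Marshaller())))
                                    .destination(JmsConfig.HELLO_WORLD_QUEUE))))
            // the main flow continues here and produces the immediate reply,
            // which the inbound gateway routes via the replyChannel header
            .handle((payload, headers) -> buildResponse(payload));
}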
I'm trying to create a Spring Cloud Stream Source bean inside a Spring Boot application that simply sends the result of a method to a stream (the underlying Kafka topic is bound to the stream).
Most of the stream samples I've seen use the @InboundChannelAdapter annotation to send data to the stream using a poller. But I don't want to use a poller. I've tried setting the poller to an empty array, but the other problem is that when using @InboundChannelAdapter you are unable to have any method parameters.
The overall concept of what I am trying to do is: read from an inbound stream, do some async processing, then post the result to an outbound stream. So using a processor doesn't seem to be an option either. I am using @StreamListener with a Sink channel to read the inbound stream, and that works.
Here is some code I've been trying, but it doesn't work at all. I was hoping it would be this simple because my Sink was, but maybe it isn't. I'm looking for someone to point me to an example of a source that isn't a processor (i.e. doesn't require listening on an inbound channel) and doesn't use @InboundChannelAdapter, or to give me some design tips to accomplish what I need in a different way. Thanks!
@EnableBinding(Source.class)
public class JobForwarder {

    @ServiceActivator(outputChannel = Source.OUTPUT)
    @SendTo(Source.OUTPUT)
    public String forwardJob(String message) {
        log.info(String.format("Forwarding a job message [%s] to queue [%s]", message, Source.OUTPUT));
        return message;
    }
}
Your original requirement can be achieved through the steps below.
Create your custom bound interface (you can use the default @EnableBinding(Source.class) as well):
public interface CustomSource {

    String OUTPUT = "customoutput";

    @Output(CustomSource.OUTPUT)
    MessageChannel output();
}
Inject your bound channel
@Component
@EnableBinding(CustomSource.class)
public class CustomOutputEventSource {

    @Autowired
    private CustomSource customSource;

    public void sendMessage(String message) {
        customSource.output().send(MessageBuilder.withPayload(message).build());
    }
}
Test it
@RunWith(SpringRunner.class)
@SpringBootTest
public class CustomOutputEventSourceTest {

    @Autowired
    CustomOutputEventSource output;

    @Test
    public void sendMessage() {
        output.sendMessage("Test message from JUnit test");
    }
}
So if you don't want to use a Poller, what causes the forwardJob() method to be called?
You can't just call the method and expect the result to go to the output channel.
With your current configuration, you need an inputChannel on the service activator containing your inbound message (and something to send a message to that channel). It doesn't have to be bound to a transport; it can be a simple MessageChannel @Bean, as sketched below.
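A minimal sketch of that idea (the channel name jobInputChannel is made up for illustration):

@Bean
public MessageChannel jobInputChannel() {
    // a plain in-memory channel, not bound to any transport
    return new DirectChannel();
}

@ServiceActivator(inputChannel = "jobInputChannel", outputChannel = Source.OUTPUT)
public String forwardJob(String message) {
    // whatever is sent to jobInputChannel ends up on the output binding
    return message;
}

Something in your code then triggers the flow by sending a message to jobInputChannel.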
Or, you could use @Publisher to publish the result of the method invocation (as well as it being returned to the caller) - docs here.
@Publisher(channel = Source.OUTPUT)
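For completeness, a sketch of how that might be wired, assuming Spring Integration's @EnablePublisher is used to activate the annotation (details may vary by version):

@Configuration
@EnablePublisher // enables processing of @Publisher annotations
public class PublisherConfig {
}

public class JobForwarder {

    @Publisher(channel = Source.OUTPUT)
    public String forwardJob(String message) {
        // the return value goes back to the caller AND is published to Source.OUTPUT
        return message;
    }
}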
Thanks for the input. It took me a while to get back to this problem. I did try reading the documentation for @Publisher. It looked to be exactly what I needed, but I just couldn't get the proper beans initialized to wire it up properly.
To answer your question: the forwardJob() method is called after some async processing of the input.
Eventually I just implemented it using the spring-kafka library directly, which was much more explicit and felt easier to get going. I think we are going to stick with Kafka as the only channel binding, so we'll stick with that library.
However, we did eventually get the spring-cloud-stream library working quite simply. Here is the code for a single source without a poller:
@Component
@EnableBinding(Source.class)
public class JobForwarder {

    private final Source source;

    @Autowired
    public JobForwarder(Source source) {
        this.source = source;
    }

    public void forwardScheduledJob(String message) {
        log.info(String.format("Forwarding a job message [%s] to queue [%s]", message, Source.OUTPUT));
        source.output().send(MessageBuilder.withPayload(message).build());
    }
}
I have just started experimenting with Spring and RabbitMQ.
I would like to create a microservice infrastructure with Rabbit and Spring, and I have been following the Spring Boot tutorial.
But it is very simplistic. I am also looking at the documentation (Spring's, Rabbit's) for how to create an RPC; I understand Rabbit's approach, but I would like to leverage Spring's template to save me the boilerplate.
I just can't seem to understand where to register the receiveAndReply callback.
I tried doing this:
Sending
System.out.println("Sending message...");
Object convertSendAndReceive = rabbitTemplate.convertSendAndReceive("spring-boot", "send and receive: sent");
System.out.println("GOT " + convertSendAndReceive); // is null
Receiving
@Component
public class Receiver {

    @Autowired
    RabbitTemplate rabbitTemplate;

    public void receiveMessage(String message) {
        this.rabbitTemplate.receiveAndReply("spring-boot", (String request) -> {
            return "return this statement";
        });
    }
}
But, not surprisingly, this doesn't work: the message is received, but nothing comes back. I assume this needs to be registered somewhere at the factory/template bean-creation level, but I don't seem to understand where, and sadly the documentation is unclear.
First, please use the Spring AMQP Documentation.
You would generally use a SimpleMessageListenerContainer wired with a POJO listener for RPC.
The template receiveAndReply method is intended for "scheduled" server-side RPC - i.e. only receive (and reply) when you want to, rather than whenever a message arrives in the queue. It does not block waiting for a message.
If you want to use receiveAndReply(), there's a test case that illustrates it.
EDIT:
This code...
this.template.convertAndSend(ROUTE, "test");
sends a message to the queue.
This code...
this.template.setQueue(ROUTE);
boolean received = this.template.receiveAndReply(new ReceiveAndReplyMessageCallback() {

    @Override
    public Message handle(Message message) {
        message.getMessageProperties().setHeader("foo", "bar");
        return message;
    }
});
Receives a message from that queue; adds a header and returns the same message to the reply queue. received will be false if there was no message to receive (and reply to).
This code:
Message receive = this.template.receive();
receives the reply.
This test is a bit contrived because the reply is sent to the same queue as the request. We can't use sendAndReceive() on the client side in this test because the thread would block waiting for the reply (and we need to execute the receiveAndReply()).
Another test in that class has a more realistic example where it does the sendAndReceive()s on different threads and the receiveAndReply()s on the main thread.
Note that that test uses a listener container on the client side for replies; that is generally no longer needed, since the Rabbit broker now supports direct reply-to.
receiveAndReply() was added for symmetry - in most cases, people use a listener container and a listener adapter for server-side RPC, as sketched below.
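As an illustration of that container-based approach, a minimal sketch (the queue name "spring-boot" matches the example above; everything else is assumed):

@Bean
public SimpleMessageListenerContainer rpcContainer(ConnectionFactory connectionFactory) {
    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
    container.setQueueNames("spring-boot");
    // the adapter invokes the POJO method and routes its return value
    // to the request's replyTo address automatically
    container.setMessageListener(new MessageListenerAdapter(new Receiver(), "receiveMessage"));
    return container;
}

public class Receiver {

    public String receiveMessage(String message) {
        // this return value becomes the RPC response
        return "reply to: " + message;
    }
}

With this in place, the client's convertSendAndReceive() call from the question receives the reply instead of null.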