Spring Cloud Stream to support routing messages dynamically - java

I want to create a common project (using Spring Cloud Stream) to route messages to different (consumer) projects dynamically according to message content, with RabbitMQ as the message broker.
Does Spring Cloud Stream support this? If not, is there a proposed way to accomplish it? Thanks.

You can achieve that by setting the spring.cloud.stream.dynamicDestinations property to a list of destination names (if you know the names beforehand) or by leaving it empty. The BinderAwareChannelResolver takes care of dynamically creating/binding the outbound channel for these dynamic destinations.
There is an out-of-the-box router application available which does something similar.
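As a rough illustration of that approach, here is a minimal sketch, assuming the pre-3.x annotation-based model where BinderAwareChannelResolver is available as a bean; the "orderType" header is a hypothetical routing key, not something from the question:
import org.springframework.cloud.stream.binding.BinderAwareChannelResolver;
import org.springframework.messaging.Message;

public class DynamicDestinationRouter {

    private final BinderAwareChannelResolver resolver;

    public DynamicDestinationRouter(BinderAwareChannelResolver resolver) {
        this.resolver = resolver;
    }

    public void route(Message<?> message) {
        // Pick the destination from the message content; "orderType" is a placeholder
        String destination = (String) message.getHeaders().get("orderType");
        // resolveDestination() creates/binds the output binding on first use
        resolver.resolveDestination(destination).send(message);
    }
}
In newer Spring Cloud Stream versions this resolver is superseded by StreamBridge, as the next answer shows.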

You can use StreamBridge with the topic name, and Spring Cloud will bind it to the destination automatically at runtime.
@Autowired
private StreamBridge streamBridge;

public void sendDynamically(Message<?> message, String topicName) {
    streamBridge.send(topicName, message);
}
https://docs.spring.io/spring-cloud-stream/docs/current/reference/html/spring-cloud-stream.html#_streambridge_and_dynamic_destinations
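Since the original question is about routing on message content, here is a hedged sketch combining a consumer function with StreamBridge; the "region" header and the "orders-" destination prefix are placeholders I made up:
import org.springframework.cloud.stream.function.StreamBridge;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.Message;

import java.util.function.Consumer;

@Configuration
public class ContentBasedRouting {

    @Bean
    public Consumer<Message<String>> route(StreamBridge streamBridge) {
        return message -> {
            // Derive the destination from the message content/headers
            String region = (String) message.getHeaders().getOrDefault("region", "default");
            // StreamBridge creates the output binding on first use
            streamBridge.send("orders-" + region, message.getPayload());
        };
    }
}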

Related

How does Kafka Schema registration happen in Spring Cloud Stream?

I am trying to understand how to use Spring Cloud Streams with the Kafka Binder.
Currently, I am trying to register an AVRO schema with my Confluent Schema Registry and send messages to a topic.
I am unable to understand how the schema registration is being done by Spring Cloud Streams behind the scenes.
Let's take this example from the Spring Cloud Stream samples.
The AVRO schema is located in src/resources/avro.
When the mvn:compile goal is run, the POJO for the AVRO schema is generated and the producer can post data.
But what I am not able to understand is how Spring Cloud Stream is doing the schema registration for AVRO?
@Autowired
StreamBridge streamBridge;

@Bean
public Supplier<Sensor> supplier() {
    return () -> {
        Sensor sensor = new Sensor();
        sensor.setId(UUID.randomUUID().toString() + "-v1");
        sensor.setAcceleration(random.nextFloat() * 10);
        sensor.setVelocity(random.nextFloat() * 100);
        sensor.setTemperature(random.nextFloat() * 50);
        return sensor;
    };
}

@Bean
public Consumer<Sensor> receiveAndForward() {
    return s -> streamBridge.send("sensor-out-0", s);
}

@Bean
Consumer<Sensor> receive() {
    return s -> System.out.println("Received Sensor: " + s);
}
Is it done when the beans are created?
Or is it done when the first message is sent? If so, how does Spring Cloud Stream know where to find the .avsc file?
Basically, what is happening under the hood?
There seems to be no mention of this in the docs.
Thanks.
Your serialization strategy (in this case, AVRO) is always handled in the serializers (for producers) and deserializers (for consumers).
You can have Avro-(de)serialized keys and/or Avro-(de)serialized values. This means one should pass KafkaAvroSerializer.class / KafkaAvroDeserializer.class to the producer/consumer configs, respectively. On top of this, one must pass the schema.registry.url to the client configs as well.
So behind the scenes, Spring Cloud Stream makes your application Avro-compatible when it creates your producers/consumers (using the configs found in application.properties or elsewhere). Your clients will connect to the schema registry on startup (the logs will tell you if the connection failed), but they do not do any schema registration out of the box.
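To illustrate the client configuration described above (the broker and registry URLs are placeholders, not values from the question), the producer-side properties look roughly like this when built by hand:
import io.confluent.kafka.serializers.KafkaAvroSerializer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.HashMap;
import java.util.Map;

public class AvroProducerConfigSketch {

    public static Map<String, Object> producerProps() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        // The Avro serializer handles schema lookup/registration per message
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, KafkaAvroSerializer.class);
        // The clients must also know where the Confluent Schema Registry lives
        props.put("schema.registry.url", "http://localhost:8081");
        return props;
    }
}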
Schema registration is done on the first message that gets sent. If you haven't noticed already, the generated POJOs contain the schemas, so Spring Cloud Stream doesn't need the .avsc files at all. For example, my last generated Avro POJO contained (line 4):
@org.apache.avro.specific.AvroGenerated
public class AvroBalanceMessage extends org.apache.avro.specific.SpecificRecordBase implements org.apache.avro.specific.SpecificRecord {
    private static final long serialVersionUID = -539731109258473824L;
    public static final org.apache.avro.Schema SCHEMA$ = new org.apache.avro.Schema.Parser().parse("{\"type\":\"record\",\"name\":\"AvroBalanceMessage\",\"namespace\":\"tech.nermindedovic\",\"fields\":[{\"name\":\"accountNumber\",\"type\":\"long\",\"default\":0},{\"name\":\"routingNumber\",\"type\":\"long\",\"default\":0},{\"name\":\"balance\",\"type\":{\"type\":\"string\",\"avro.java.string\":\"String\"},\"default\":\"0.00\"},{\"name\":\"errors\",\"type\":\"boolean\",\"default\":false}]}");
    public static org.apache.avro.Schema getClassSchema() { return SCHEMA$; }
    .......
When a producer sends this POJO, it communicates with the registry about the current version of the schema. If the schema is not in the registry, the registry stores it and identifies it by ID. The producer then sends the message, together with its schema ID, to the Kafka broker. On the other side, the consumer will get the message and check whether it has seen that ID (schemas are cached so you don't always have to retrieve them from the registry); if it hasn't, it will ask the registry for the schema information for the message.
A bit outside the scope of Spring Cloud Stream, but one can also use the Schema Registry's REST API to register schemas manually.
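For completeness, a sketch of that manual registration using plain Java (java.net.http); the subject name and schema body are placeholders, and the endpoint shown is the standard POST /subjects/{subject}/versions:
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ManualSchemaRegistration {

    public static void main(String[] args) throws Exception {
        String subject = "sensor-topic-value"; // placeholder subject name
        // The Avro schema is sent as an escaped JSON string under the "schema" key
        String body = "{\"schema\": \"{\\\"type\\\":\\\"string\\\"}\"}";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8081/subjects/" + subject + "/versions"))
                .header("Content-Type", "application/vnd.schemaregistry.v1+json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // the registry responds with the schema ID
    }
}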

Spring Cloud @StreamListener condition deprecated, what is the alternative

We have multiple consumer applications listening to the same Kafka topic, and a producer sets a message header when sending a message to the topic so that a specific instance can evaluate the header and process the message, e.g.
@StreamListener(target = ITestSink.CHANNEL_NAME, condition = "headers['franchiseName'] == 'sydney'")
public void fullfillOrder(@Payload TestObj message) {
    log.info("sydney order request received message is {}", message.getName());
}
In Spring Cloud Stream 3.0.0 @StreamListener is deprecated, and I could not find an equivalent of the condition property in the functional (java.util.function.Function) model.
Any suggestions?
Though I was not able to find an equivalent for the functional approach either, I do have a suggestion.
The @StreamListener annotation's condition does not change the fact that the application must consume the message, read its header, and filter out specific records before passing it to the listener (fullfillOrder()). So it's safe to assume you're consuming every message that hits the topic regardless (via the event receiver that Spring Cloud has implemented for us under the hood), but the listener only gets executed when header == sydney.
If there were a way to configure the event receiver that Spring Cloud uses (to discard messages before they hit the listener), I would suggest looking into that. If not, I would resort to filtering out any messages (non-sydney) before doing any processing. If you're familiar with Spring Cloud's functional approach, it would look something like this:
@Bean
public Consumer<Message<TestObj>> fulfillOrder() {
    return msg -> {
        // to get a header: msg.getHeaders().get(key, valueType)
        // filter out bad messages here before any processing
    };
}
or
@Bean
public Consumer<ConsumerRecord<?, TestObj>> fulfillOrder() {
    return msg -> {
        // msg.headers().lastHeader("franchiseName").value() -> filter them out here
    };
}
Other:
My code above assumes you're integrating the kafka-client API with Spring Cloud Stream via spring-cloud-stream-binder-kafka. Based on the tags listed, I will note that Spring Cloud Stream has two Kafka binders: one for the Kafka client library and one for the Kafka Streams library.
Without considering Spring Cloud or other frameworks, the high-level DSL in Kafka Streams doesn't give you access to headers, but the low-level Processor API does. From the example, it seems like you're using the client binder and not spring-cloud-stream-binder-kafka-streams (the Kafka Streams binder). I haven't seen an implementation of Spring Cloud Stream plus the Kafka Streams binder using the low-level Processor API, so I can't tell if that was the aim.
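For reference, a minimal sketch of header access with the Kafka Streams low-level Processor API, outside of Spring Cloud Stream; TestObj stands in for the question's payload type, and the filter value mirrors the 'sydney' condition:
import org.apache.kafka.common.header.Header;
import org.apache.kafka.streams.processor.api.Processor;
import org.apache.kafka.streams.processor.api.ProcessorContext;
import org.apache.kafka.streams.processor.api.Record;

import java.nio.charset.StandardCharsets;

public class FranchiseFilterProcessor implements Processor<String, TestObj, String, TestObj> {

    private ProcessorContext<String, TestObj> context;

    @Override
    public void init(ProcessorContext<String, TestObj> context) {
        this.context = context;
    }

    @Override
    public void process(Record<String, TestObj> record) {
        Header franchise = record.headers().lastHeader("franchiseName");
        // Only forward records whose header matches this instance's franchise
        if (franchise != null
                && "sydney".equals(new String(franchise.value(), StandardCharsets.UTF_8))) {
            context.forward(record);
        }
    }
}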

Publish / Subscribe MQTT using SmallRye reactive messaging dynamically

We are trying to publish and subscribe to MQTT using SmallRye Reactive Messaging. We managed to actually publish a message to a specific topic/channel through the following simple code:
import io.smallrye.mutiny.Multi;
import org.eclipse.microprofile.reactive.messaging.Outgoing;

import javax.enterprise.context.ApplicationScoped;
import java.time.Duration;

@ApplicationScoped
public class Publish {

    @Outgoing("pao")
    public Multi<String> generate() {
        return Multi.createFrom().ticks().every(Duration.ofSeconds(1))
                .map(x -> "A Message in here");
    }
}
What we want is to call the generate() method whenever we want, with a dynamic topic that the user defines. That was our problem, but then we found these classes in that GitHub repo, under the package io.smallrye.reactive.messaging.mqtt.
For example, we found a class that appears to make a publish call to an MQTT broker (a Mosquitto server).
However, on the statement SendingMqttMessage<String> message = new SendingMqttMessage<String>("myTopic","A message in here",0,false);
we get a red underline under SendingMqttMessage<String> saying 'SendingMqttMessage(java.lang.String, java.lang.String, io.netty.handler.codec.mqtt.MqttQoS, boolean)' is not public in 'io.smallrye.reactive.messaging.mqtt.SendingMqttMessage'. Cannot be accessed from outside package.
UPDATE (publish done)
We finally made a publish request to the MQTT broker (a Mosquitto server), with a dynamic topic configured by the user. As we found out, the SendingMqttMessage class was not supposed to be used at all; we also needed an Emitter to actually make a publish request with a dynamic topic.
@Inject
@Channel("panatha")
Emitter<String> emitter;

@POST
@Consumes(MediaType.APPLICATION_JSON)
@Produces(MediaType.APPLICATION_JSON)
public Response createUser(Device device) {
    System.out.println("New Publish request: message->" + device.getMessage() + " & topic->" + device.getTopic());
    emitter.send(MqttMessage.of(device.getTopic(), device.getMessage()));
    return Response.ok().status(Response.Status.CREATED).build();
}
Now we need to find out about making a Subscription to a topic dynamically.
First, to get us on the same page:
Reactive Messaging does not work with topics, but with channels.
That is important to note, because you can only read from or only write to a given channel. So if you want to do both, you need to configure two channels pointing at the same topic, one incoming and one outgoing.
To answer your question:
You made a pretty good start with Emitters, but you still lack the dynamic nature you'd like.
In your example, you acquired that Emitter through CDI.
Now that is all we need to make this dynamic, since we can dynamically select beans at runtime using CDI, like this:
Sending Messages
private Emitter<byte[]> dynamicEmitter(String topic) {
    return CDI.current().select(new TypeLiteral<Emitter<byte[]>>() {}, new ChannelAnnotation(topic)).get();
}
Please also note that I am creating an Emitter of type byte[], as this is the only currently supported type of the smallrye-mqtt connector (version 3.4.0) according to its documentation.
Receiving Messages
To read messages from a Reactive Messaging channel, you can use the counterpart of the Emitter, which is the Publisher.
It can be used analogously:
private Publisher<byte[]> dynamicReceiver(String topic) {
    return CDI.current().select(new TypeLiteral<Publisher<byte[]>>() {}, new ChannelAnnotation(topic)).get();
}
You can then process this data in any way you like.
As a demo, I hung it on a simple REST endpoint:
@GET
@Produces(MediaType.SERVER_SENT_EVENTS)
public Multi<String> stream(@QueryParam("topic") String topic) {
    return Multi.createFrom().publisher(dynamicReceiver(topic)).onItem().transform(String::new);
}

@GET
@Path("/publish")
public boolean publish(@QueryParam("msg") String msg, @QueryParam("topic") String topic) {
    dynamicEmitter(topic).send(msg.getBytes());
    return true;
}
One more thing
When creating this solution I hit a few pitfalls you should know about:
Quarkus removes any CDI beans that are "unused". So if you want to inject them dynamically, you need to exclude those beans from removal, or turn off that feature.
All channels injected that way must be configured; otherwise the injection will fail.
For some reason, even with removal completely disabled, I was unable to inject Emitters dynamically unless they were also injected somewhere else.

Spring RabbitTemplate - How to create queues automatically upon send

I am using RabbitMQ together with Spring's RabbitTemplate.
When sending messages to queues using the template's send methods, I want the queue to be created/declared automatically if it does not already exist.
This is very important since, according to our business logic, queue names are generated at runtime and I cannot declare them in advance.
Previously we used JmsTemplate, and any call to send or receive automatically created the queue.
You can use a RabbitAdmin to automatically declare the exchange, queue, and binding. Check out this thread for more detail. This forum thread is also somewhat related to your scenario. I have not tried Spring with AMQP myself, but I believe this would do it.
/**
 * Required for executing administration functions against an AMQP broker
 */
@Bean
public AmqpAdmin amqpAdmin() {
    return new RabbitAdmin(connectionFactory());
}
Keep coding!
Yes, you can use a RabbitAdmin: admin.getQueueProperties() to see if the queue exists, and admin.declareQueue(new Queue(...)) to add a queue. You should probably keep track of which ones you've already checked/created to avoid that overhead on every send.
You can also add exchanges and bind queues to them with the admin.
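A minimal sketch of that declare-on-send pattern; the class name and queue arguments are my own choices, not from the question:
import org.springframework.amqp.core.AmqpAdmin;
import org.springframework.amqp.core.Queue;
import org.springframework.amqp.rabbit.core.RabbitTemplate;

import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class DynamicQueueSender {

    private final AmqpAdmin admin;
    private final RabbitTemplate rabbitTemplate;
    // Remember which queues we have already declared to avoid a broker round-trip on every send
    private final Set<String> knownQueues = ConcurrentHashMap.newKeySet();

    public DynamicQueueSender(AmqpAdmin admin, RabbitTemplate rabbitTemplate) {
        this.admin = admin;
        this.rabbitTemplate = rabbitTemplate;
    }

    public void send(String queueName, Object payload) {
        // getQueueProperties() returns null if the queue does not exist yet
        if (knownQueues.add(queueName) && admin.getQueueProperties(queueName) == null) {
            admin.declareQueue(new Queue(queueName, true, false, false));
        }
        // Send via the default exchange, using the queue name as the routing key
        rabbitTemplate.convertAndSend(queueName, payload);
    }
}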

graphql-java - How to use subscriptions with spring boot?

In a project I use graphql-java and Spring Boot with a PostgreSQL database. Now I would like to use the subscription feature published in version 3.0.0. Unfortunately, the available information about how to use subscriptions is not very mature.
How would one achieve real-time functionality using graphql-java with subscriptions?
As of recent graphql-java versions, subscriptions are fully supported. The DataFetcher for a subscription must return a org.reactivestreams.Publisher, and graphql-java will take care of mapping the query function over the results.
The feature is nicely documented and there's a complete example using web sockets available in the official repo.
If you have a reactive data source in place (e.g. Mongo with a reactive driver, or probably anything that R2DBC supports), you're all set. Just use @Tailable and Spring Data will already give you a Flux (which implements Publisher), and there's nothing else you need to do.
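As a sketch of that reactive-Mongo route (the Task entity, repository, and query method here are hypothetical):
import org.springframework.data.mongodb.repository.Tailable;
import org.springframework.data.repository.reactive.ReactiveCrudRepository;
import reactor.core.publisher.Flux;

// Requires the Task documents to live in a capped collection
public interface TaskRepository extends ReactiveCrudRepository<Task, String> {

    // Flux implements org.reactivestreams.Publisher, so a graphql-java
    // subscription DataFetcher can return it directly
    @Tailable
    Flux<Task> findByStatus(String status);
}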
As for a more manual Spring specific implementation, I can't imagine it being too hard to use Spring's own event mechanism (a nice tutorial here as well) to underlie the Publisher.
Every time there's an incoming subscription, create and register a new listener with the application context: context.addApplicationListener(listener) that will publish to the correct Publisher. E.g. in the DataFetcher:
// Somehow create a Publisher, probably using Spring's Reactor project, or RxJava
Publisher<ResultObject> publisher = ...;

// The listener reacts to application events and pushes new values through the publisher
ApplicationListener listener = createListener(publisher);
context.addApplicationListener(listener);

return publisher;
When the web socket disconnects or you somehow know the event stream is finished, you must make sure to remove the listener.
I haven't actually tried any of this, mind you, I'm just thinking aloud.
Another option is to use Reactor directly (with or without Spring WebFlux). There's a sample using Reactor and WebSocket (through GraphQL SPQR Spring Boot Starter) here.
You create a Publisher like this:
// This is really just a thread-safe wrapper around Map<String, Set<FluxSink<Task>>>
private final ConcurrentMultiRegistry<String, FluxSink<Task>> subscribers = new ConcurrentMultiRegistry<>();

@GraphQLSubscription
public Publisher<Task> taskStatusChanged(String taskId) {
    return Flux.create(subscriber -> subscribers.add(taskId, subscriber.onDispose(() -> subscribers.remove(taskId, subscriber))), FluxSink.OverflowStrategy.LATEST);
}
And then push new values from elsewhere (probably a related mutation or a reactive storage) like this:
subscribers.get(taskId).forEach(subscriber -> subscriber.next(task));
E.g.
@GraphQLMutation
public Task updateTask(@GraphQLNonNull String taskId, @GraphQLNonNull Status status) {
    Task task = repo.byId(taskId); // find the task
    task.setStatus(status);        // update the task
    repo.save(task);               // persist the task
    // Notify all the subscribers following this task
    subscribers.get(taskId).forEach(subscriber -> subscriber.next(task));
    return task;
}
With SPQR Spring Starter, this is all that's needed to get you an Apollo-compatible subscription implementation.
I had the same issue when I was spiking on the library to integrate with Spring Boot. I found that graphql-java only seems to support 'subscription' at the schema level; it does not provide a transport implementation for this feature, meaning you might need to implement that yourself.
Please refer to https://github.com/graphql-java/graphql-java/blob/master/docs/schema.rst#subscription-support
For the record: here is another very nice, compact example that implements GraphQL's essential features (queries, mutations, and subscriptions): https://github.com/npalm/blog-graphql-spring-service
