I know how to define a producer using the imperative programming approach, but I cannot find how to define one using the functional programming approach.
I read the Spring Cloud Stream Binder documentation about this, but only found how to define a consumer, or a consumer & producer combined (for example, read from a topic, transform the data, and send it to another topic).
So I don't know if it's OK to keep using annotations like @Input and @Output, and I'm very confused at this point: the library indicates these annotations are deprecated, but I cannot find an example or documentation showing how to define a simple producer that sends information to a specific topic.
Thanks!
The documentation link:
https://docs.spring.io/spring-cloud-stream-binder-kafka/docs/3.0.10.RELEASE/reference/html/spring-cloud-stream-binder-kafka.html#_kafka_streams_binder
You can define a Supplier<?> @Bean which will be polled on an interval to generate output (like the @InboundChannelAdapter for @Output channels).
https://docs.spring.io/spring-cloud-stream/docs/3.1.0/reference/html/spring-cloud-stream.html#spring_cloud_function
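For example, a minimal sketch of a functional-style producer; the bean name greetings (hence the binding greetings-out-0) and the destination are assumptions:

import java.time.Instant;
import java.util.function.Supplier;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
public class ProducerApplication {

    public static void main(String[] args) {
        SpringApplication.run(ProducerApplication.class, args);
    }

    // Polled by the framework (every second by default); bind it to a topic via
    // spring.cloud.stream.bindings.greetings-out-0.destination=my-topic
    @Bean
    public Supplier<String> greetings() {
        return () -> "hello at " + Instant.now();
    }
}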
Or, you can use a StreamBridge to send arbitrary messages to an output destination.
https://docs.spring.io/spring-cloud-stream/docs/3.1.0/reference/html/spring-cloud-stream.html#_sending_arbitrary_data_to_an_output_e_g_foreign_event_driven_sources
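And a sketch of the StreamBridge approach for on-demand sends; the REST controller and the binding name orders-out-0 are illustrative:

import org.springframework.cloud.stream.function.StreamBridge;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class OrderController {

    private final StreamBridge streamBridge;

    public OrderController(StreamBridge streamBridge) {
        this.streamBridge = streamBridge;
    }

    @PostMapping("/orders")
    public void publish(@RequestBody String order) {
        // "orders-out-0" is resolved (and created if necessary) at runtime;
        // map it to a topic via spring.cloud.stream.bindings.orders-out-0.destination
        streamBridge.send("orders-out-0", order);
    }
}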
I am using the io.lettuce.core library and I am having trouble subscribing to a channel via the RedisPubSubReactiveCommands interface.
I have a StatefulRedisPubSubConnection and an active Redis cluster to which I am attempting to subscribe.
connection.sync().subscribe("channel") works fine, as does connection.async().subscribe("channel"). However, when I use the reactive 'hot observable' interface provided by Lettuce like so:
connection.reactive().subscribe(channels).subscribe();
connection.reactive().observeChannels().doOnNext(this::notifyObservers).subscribe();
it will not register as a subscription on Redis. I feel like I'm following the example given in the Lettuce documentation closely.
I'm programming against an interface that accepts a hot Flux observable, and I'm getting close to wrapping the sync or async connection interfaces with my own reactive wrapper and throwing them in the pipe. What am I doing wrong here?
In case anyone else runs into this same problem: it turns out I was passing a Set<String> into a function that accepts varargs (Object...), and didn't realize it was treating the entire collection as a single element instead of expanding it into a varargs array.
I'll leave this up so others can learn from my mistake.
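A sketch of the fix, assuming a String-keyed connection and illustrative channel names: expand the collection into an array so each name binds to its own varargs slot.

import java.util.Set;
import io.lettuce.core.pubsub.StatefulRedisPubSubConnection;
import io.lettuce.core.pubsub.api.reactive.RedisPubSubReactiveCommands;

public class ChannelSubscriber {

    void subscribeAll(StatefulRedisPubSubConnection<String, String> connection,
                      Set<String> channels) {
        RedisPubSubReactiveCommands<String, String> reactive = connection.reactive();

        // Wrong: passing the Set itself binds the whole collection as a
        // single varargs element (as happened with an Object-typed codec).
        // reactive.subscribe(channels).subscribe();

        // Right: each channel name becomes its own varargs element.
        reactive.subscribe(channels.toArray(new String[0])).subscribe();

        reactive.observeChannels()
                .doOnNext(msg -> System.out.println(msg.getChannel() + ": " + msg.getMessage()))
                .subscribe();
    }
}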
From the release notes (https://spring.io/blog/2017/11/29/spring-integration-5-0-ga-available):
Reactive Streams support via FluxMessageChannel,
ReactiveStreamsConsumer and direct org.reactivestreams.Subscriber
implementation in the AbstractMessageHandler;
My understanding of the Reactor support was that you can, e.g., return a Mono/Flux from a transformer/handler, and Spring Integration will automatically transform it into Messages while respecting back pressure. Unfortunately, I cannot make it work like that, e.g.:
IntegrationFlows.from("input")
.handle((p, h) -> Flux.just(1, 2, 3))
.log("l1")
.channel("output")
.get();
still logs one Message with a FluxArray-typed payload instead of three Messages with Integer payloads.
2017-12-18 17:12:33.262 INFO 97471 --- [nio-8080-exec-1] l1 : GenericMessage [payload=FluxArray, headers={id=a9701681-9945-f953-8b72-df369c2982a3, timestamp=1513613553262}]
Also, there is nothing in the docs regarding this behaviour or the new
FluxMessageChannel,
ReactiveStreamsConsumer and direct org.reactivestreams.Subscriber
implementation in the AbstractMessageHandler
So my question is: do I understand the implemented Reactor support correctly, and where can I find any information on this topic?
Since we are here in messaging, it really doesn't matter to the message what kind of payload you return from your service; everything is just wrapped into a Message as-is. You need a special component that understands this payload. One of them is the Splitter: it determines that your payload is a Reactive Streams Publisher and iterates over it as a Flux.
Another component is WebFluxInboundEndpoint which supports this kind of payloads natively.
Your custom Service Activator might expect Flux as an argument to deal with.
But nothing happens automatically. Spring Integration supports Reactive types, but it doesn't process them without end-user configuration.
BTW, the splitter should be supplied with a FluxMessageChannel as an output to process the split Flux in a back-pressure-aware manner.
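For example, a sketch of the flow above with a splitter and the Java DSL's MessageChannels.flux() factory (channel names as in the question):

import org.springframework.context.annotation.Bean;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;
import org.springframework.integration.dsl.MessageChannels;
import reactor.core.publisher.Flux;

public class ReactiveFlowConfig {

    @Bean
    public IntegrationFlow reactiveFlow() {
        return IntegrationFlows.from("input")
                .handle((p, h) -> Flux.just(1, 2, 3))
                .split()                                  // detects the Publisher payload, emits one Message per item
                .channel(MessageChannels.flux("output"))  // FluxMessageChannel: back-pressure-aware output
                .get();
    }
}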
Feel free to raise a JIRA about documenting FluxMessageChannel; indeed, we have missed that. The ReactiveStreamsConsumer needs more love as well, and we have plans for 5.1 to improve the Reactive Streams model, making it more flexible or even an option to turn on by default. I can't promise anything as of today, though.
I'm trying to get the message with the latest offset in Kafka. Can KafkaIdempotentRepository be used for that?
If not, what is it for?
The Javadoc says the following, but it's not clear what its real use is.
Camel IdempotentRepository implementations are used by consumers to filter out duplicate messages, and KafkaIdempotentRepository is one of the many implementations Camel provides (others are MemoryIdempotentRepository, FileIdempotentRepository, HazelcastIdempotentRepository, JCacheIdempotentRepository, InfinispanIdempotentRepository, etc.).
For more detailed reading, please refer to the links below:
https://access.redhat.com/documentation/en-US/Red_Hat_JBoss_Fuse/6.2/html/Apache_Camel_Development_Guide/MsgEnd-Idempotent.html
http://people.apache.org/~dkulp/camel/idempotent-consumer.html
Coming back to your questions:
I'm trying to get the message with the latest offset in Kafka. Can KafkaIdempotentRepository be used for that? If not, what is it for?
In my personal opinion, KafkaIdempotentRepository is not meant to serve this use case.
Kafka does guarantee ordering within a partition, which means the last message served will have the latest committed offset in that partition.
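For completeness, a sketch of what the repository is actually for, de-duplicating messages in an Idempotent Consumer route; the topic, broker, and header names are placeholders:

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.processor.idempotent.kafka.KafkaIdempotentRepository;

public class DedupRoute extends RouteBuilder {

    @Override
    public void configure() {
        // Stores already-seen keys in a Kafka topic, so de-duplication survives restarts
        KafkaIdempotentRepository repository =
                new KafkaIdempotentRepository("idempotent-store", "localhost:9092");

        from("kafka:orders?brokers=localhost:9092")
                .idempotentConsumer(header("orderId"), repository) // drop repeated orderIds
                .to("log:deduplicated");
    }
}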
I am implementing my own custom component and I have found that I need to support two consumer use cases:
The first is fetching N available messages every so often (a polling consumer).
The second is a subscriber consumer that receives messages as they become available.
My main question is whether it is possible to implement both types. I have been trying to write some code, but it seems that if you develop a PollingConsumer you cannot implement another type. Also, if it is possible, is there any example, article, or guide on how to do this? I have been looking, but nothing came up.
Thanks!
There are two consumer kinds in Camel (e.g. from the EIP book):
Consumer
PollingConsumer
It's the former that is used in Camel routes. The latter is used when you use it explicitly, or when using a ConsumerTemplate with its receive methods.
A Camel component is able to adapt a Consumer to a PollingConsumer out of the box.
So it depends: if you want to build a Camel component that is used in routes, you can just create a Consumer, and have it able to both poll and subscribe. When you have the data, create an Exchange and call the processor to route it, as shown in the sketch below.
For documentation, check the Camel website and/or chapter 11 of the Camel in Action book, which covers creating custom components.
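As a sketch, a minimal event-driven Consumer along those lines; the onData callback is illustrative, and Camel adapts such a Consumer to a PollingConsumer for you (Camel 2.x package shown):

import org.apache.camel.Endpoint;
import org.apache.camel.Exchange;
import org.apache.camel.Processor;
import org.apache.camel.impl.DefaultConsumer;

public class MyConsumer extends DefaultConsumer {

    public MyConsumer(Endpoint endpoint, Processor processor) {
        super(endpoint, processor);
    }

    // Invoke this from your transport's subscription callback (push),
    // or from your own scheduled poll (pull).
    public void onData(String data) throws Exception {
        Exchange exchange = getEndpoint().createExchange();
        exchange.getIn().setBody(data);
        getProcessor().process(exchange); // hand the message to the route
    }
}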
I'm designing a system using Comet where there is a common channel to which data gets published. I need to filter the data using conditions based on each client's subscription details. Can anyone tell me how I can do this? I thought I could do it using a DataFilter:
Channel.addDataFilter(DataFilter filter);
Is this the correct way? If so, could you share any sample code to achieve this?
There is no Channel.addDataFilter(DataFilter) method, but you can achieve the same results in a different way.
First, have a look at the DataFilter implementations that are already available.
Then it's enough to add a DataFilterMessageListener to the channel you want to filter data on, and specify one or more DataFilters to the DataFilterMessageListener.
You can find an example of this in the CometD demos shipped with the CometD distribution, for example here.
The right way to add the DataFilterMessageListener is during channel initialization, as is done in the example linked above through a @Configure annotation, or equivalently via a ServerChannel.Initializer.
Finally, have a look at how messages are processed on the server from the documentation: http://docs.cometd.org/reference/#concepts_message_processing.
It is important to understand that modifications made by DataFilter are seen by all subscribers.
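A sketch of that wiring with an annotated service; the /game/* channel is an assumption, and NoMarkupFilter is one of the DataFilter implementations shipped with CometD:

import javax.inject.Inject;
import org.cometd.annotation.Configure;
import org.cometd.annotation.Service;
import org.cometd.bayeux.server.BayeuxServer;
import org.cometd.bayeux.server.ConfigurableServerChannel;
import org.cometd.server.filter.DataFilterMessageListener;
import org.cometd.server.filter.NoMarkupFilter;

@Service
public class FilteringService {

    @Inject
    private BayeuxServer bayeux;

    @Configure("/game/*")
    public void configureGameChannel(ConfigurableServerChannel channel) {
        // The filter runs server-side before the message is published to subscribers
        channel.addListener(new DataFilterMessageListener(bayeux, new NoMarkupFilter()));
    }
}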