I am using the io.lettuce.core library and I am having trouble subscribing to a channel using the RedisPubSubReactiveCommands interface.
I have a StatefulRedisPubSubConnection and an active Redis cluster to which I am attempting to subscribe.
connection.sync().subscribe("channel") works fine, as does connection.async().subscribe("channel"). However, when I use the reactive 'hot observable' interface provided by lettuce like so:
connection.reactive().subscribe(channels).subscribe();
connection.reactive().observeChannels().doOnNext(this::notifyObservers).subscribe();
it does not register a subscription on Redis. I feel like I'm following the example given in the Lettuce documentation closely.
I'm programming for an interface that accepts a hot Flux Observable and I'm getting close to wrapping the sync or async connection interfaces with my own reactive wrapper and throwing them in the pipe. What am I doing wrong here?
In case anyone else runs into this same problem, it turns out I was passing a Set<String> into a method that accepts varargs (Object...) and didn't realize the entire collection was being treated as a single element instead of being expanded into a varargs array.
I'll leave this up for others to learn from my dumb mistake.
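For anyone who wants to see it spelled out, here is a minimal sketch of the fix, assuming a String-keyed pub/sub connection (the channel names and helper method are illustrative):

import java.util.Set;

import io.lettuce.core.pubsub.StatefulRedisPubSubConnection;
import io.lettuce.core.pubsub.api.reactive.RedisPubSubReactiveCommands;

class PubSubExample {

    static void subscribeAll(StatefulRedisPubSubConnection<String, String> connection,
                             Set<String> channels) {
        RedisPubSubReactiveCommands<String, String> reactive = connection.reactive();

        // Wrong: passing the Set as the varargs parameter makes it a single element,
        // so no channel subscription is ever registered on Redis.
        // reactive.subscribe(channels).subscribe();

        // Right: expand the collection into an array so each name becomes one vararg.
        reactive.subscribe(channels.toArray(new String[0])).subscribe();

        reactive.observeChannels()
                .doOnNext(msg -> System.out.println(msg.getChannel() + ": " + msg.getMessage()))
                .subscribe();
    }
}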
I have an application based on Java and Spring (Spring Boot, Spring Reactive, Spring Kafka) which continuously consumes information from a Kafka topic and stores the data under a key in a ConcurrentHashMap (via a simple wrapper). The application also contains a REST API for fetching streaming information using a reactive Flux.
I would like to come up with a way to call the API for data from the map (using a key), where the response is a stream of the currently associated value from the map together with subsequent changes to that value (as updated from the topic), i.e. without closing the stream.
It feels like this should be possible using maybe a PropertyChangeListener combined with Flux.generate, but my reactive skills are too weak to see how I should achieve this. I've made some attempts, but I can't see how to get the generator to emit on PropertyChangeEvents.
Would this be possible?
If anyone could provide me with an example of this, or maybe point me to one online, it would be much appreciated.
BR
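One possible approach, sketched below under the assumption that Reactor 3.4+'s Sinks API is available (class and method names are illustrative): keep a replay-latest sink per key, let the Kafka consumer push updates into it, and hand the resulting Flux to the REST layer.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import reactor.core.publisher.Flux;
import reactor.core.publisher.Sinks;

// Hypothetical wrapper around the map: the Kafka listener calls update(),
// and the REST controller returns stream(key) as the response body.
public class StreamingValueMap<K, V> {

    private final Map<K, Sinks.Many<V>> sinks = new ConcurrentHashMap<>();

    // Called from the Kafka consumer whenever a record for the key arrives.
    public void update(K key, V value) {
        // A real implementation should handle concurrent emissions for the same key
        // (e.g. with Sinks.EmitFailureHandler or per-key serialization).
        sink(key).tryEmitNext(value);
    }

    // Called from the REST API: emits the current value first (replay of the latest
    // element), then every subsequent update, and never completes on its own.
    public Flux<V> stream(K key) {
        return sink(key).asFlux();
    }

    private Sinks.Many<V> sink(K key) {
        return sinks.computeIfAbsent(key, k -> Sinks.many().replay().latest());
    }
}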
From release notes (https://spring.io/blog/2017/11/29/spring-integration-5-0-ga-available):
Reactive Streams support via FluxMessageChannel,
ReactiveStreamsConsumer and direct org.reactivestreams.Subscriber
implementation in the AbstractMessageHandler;
My understanding of the Reactor support was that, for example, you can return a Mono/Flux from a transformer/handler and Spring Integration will automatically transform it into Messages while respecting back pressure. Unfortunately, I cannot make it work like that, e.g.:
IntegrationFlows.from("input")
        .handle((p, h) -> Flux.just(1, 2, 3))
        .log("l1")
        .channel("output")
        .get();
still logs one Message with a FluxArray-typed payload instead of three Messages with Integer payloads.
2017-12-18 17:12:33.262 INFO 97471 --- [nio-8080-exec-1] l1 : GenericMessage [payload=FluxArray, headers={id=a9701681-9945-f953-8b72-df369c2982a3, timestamp=1513613553262}]
Also, there is nothing in the docs regarding this behaviour and the new
FluxMessageChannel,
ReactiveStreamsConsumer and direct org.reactivestreams.Subscriber
implementation in the AbstractMessageHandler
So my question is: do I understand the implemented Reactor support correctly, and where can I find more information on this topic?
Since we are in messaging here, it really doesn't matter to the framework what kind of payload you return from your service; everything is just wrapped into a Message as is. You need a special component that understands this payload. One of them is the Splitter: it determines that your payload is a Reactive Streams Publisher and iterates over it as a Flux.
Another component is the WebFluxInboundEndpoint, which supports this kind of payload natively.
Your custom Service Activator might expect a Flux as an argument to deal with.
But nothing happens automatically. Spring Integration supports Reactive types, but doesn't process them without explicit end-user configuration.
BTW, the splitter should be supplied with a FluxMessageChannel as its output to process the split Flux in a back-pressure manner.
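A minimal sketch of that arrangement in the Java DSL, assuming Spring Integration 5.0 (channel and bean names are illustrative):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.channel.FluxMessageChannel;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;
import org.springframework.messaging.MessageChannel;
import reactor.core.publisher.Flux;

@Configuration
public class ReactiveFlowConfig {

    @Bean
    public MessageChannel output() {
        return new FluxMessageChannel(); // reactive channel, honours back-pressure
    }

    @Bean
    public IntegrationFlow reactiveFlow() {
        return IntegrationFlows.from("input")
                .handle((p, h) -> Flux.just(1, 2, 3))
                .split()            // detects the Publisher payload and emits each element as a Message
                .channel("output")  // the FluxMessageChannel above, so the split Flux is back-pressured
                .get();
    }
}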
Feel free to raise a JIRA about documenting the FluxMessageChannel. Indeed we have missed that. The ReactiveStreamsConsumer needs more love as well, and we have some plans for 5.1 to improve the Reactive Streams model; we'll try to make it more flexible, or even add an option to turn it on by default. No promises as of today, though.
I'm using Spring Integration 4. I was hoping to define a contract for various integrations whereby an integration needs to implement a generic interface like:
public interface Integration {
    Object execute(Map<String, Object> inputs);
}
Then to define an integration you define a gateway:
<int:gateway service-interface="com.whatever.Integration" ... >
I've got this working but am stuck trying to understand how to handle the execute method's return value. The first integration I built sends an email and so doesn't really have a return value, i.e. the last element of the workflow is a non-MessageProducer mail sender: <int-mail:outbound-channel-adapter ... >.
If I change the execute method's return type to void the integration runs fine, but as soon as I change it to Object, the integration runs but never returns. I assume this is because it's waiting for something on the reply channel.
For these types of non-result-producing integrations, is there a way to force a true value to be returned, or something similar? I was thinking of trying something like <int:transformer expression="true">, but I can't put this in my chain after the <int-mail:outbound-channel-adapter> because the latter doesn't produce a value and so can't precede anything in the chain.
Thus, I'm a bit confused on how to handle non-MessageProducer elements in general. Any help is much appreciated.
p.s. If anybody has feedback on the integration architecture proposed above, feel free to chime in on that in the comments too.
TBH your client should know whether a reply is expected; in that case, add a second method that returns void.
Or add reply-timeout=0 to the gateway and you'll get null returned.
The problem with the latter is that you can't tell whether a reply producer with requires-reply=false timed out, or whether the ultimate consumer would never return a reply.
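For example, the contract could separate one-way and request-reply operations; a sketch (method names are illustrative):

import java.util.Map;

public interface Integration {

    // One-way integrations (e.g. the mail sender): the gateway returns immediately
    // and no reply channel is involved.
    void executeOneWay(Map<String, Object> inputs);

    // Request-reply integrations: the calling thread blocks until a reply Message
    // arrives (or returns null if reply-timeout=0 is configured and no reply is
    // immediately available).
    Object execute(Map<String, Object> inputs);
}

Each method can then be routed to its own request channel via <int:method> child elements on the gateway definition.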
I'm designing a system using CometD where there is a common channel to which data gets published. I need to filter the data using conditions based on the client's subscription details. Can anyone tell me how I can do this? I thought I could do this using a DataFilter:
Channel.addDataFilter(DataFilter filter);
Is this the correct way? If so, could anyone share some sample code to achieve this, please?
There is no Channel.addDataFilter(DataFilter) method, but you can achieve the same results in a different way.
First, have a look at the available DataFilter implementations already available.
Then it's enough to add a DataFilterMessageListener to the channel you want to filter data on, and specify one or more DataFilters to the DataFilterMessageListener.
You can find an example of this in the CometD demos shipped with the CometD distribution, for example here.
The right way to add the DataFilterMessageListener is during channel initialization, as it is done in the example linked above through a #Configure annotation, or equivalently via ServerChannel.Initializer.
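A minimal sketch following the demo's pattern (the channel name and filter choice are illustrative):

import javax.inject.Inject;

import org.cometd.annotation.Configure;
import org.cometd.annotation.Service;
import org.cometd.bayeux.server.BayeuxServer;
import org.cometd.bayeux.server.ConfigurableServerChannel;
import org.cometd.server.filter.DataFilterMessageListener;
import org.cometd.server.filter.NoMarkupFilter;

@Service("filteringService")
public class FilteringService {

    @Inject
    private BayeuxServer bayeux;

    @Configure("/data/common")
    public void configureDataChannel(ConfigurableServerChannel channel) {
        // The listener applies the given DataFilters to every message published
        // on this channel, before it is delivered to subscribers.
        channel.addListener(new DataFilterMessageListener(bayeux, new NoMarkupFilter()));
    }
}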
Finally, have a look at how messages are processed on the server from the documentation: http://docs.cometd.org/reference/#concepts_message_processing.
It is important to understand that modifications made by DataFilter are seen by all subscribers.
I am developing a distributed system which consists of different components (services) which are loosely (asynchronously) coupled via JMS (ActiveMQ).
Since I do not want to reinvent the wheel, I am looking for a (well-)known protocol/library that facilitates remote procedure calls between these components and helps me deal with method interfaces.
So let's decompose the problem I am currently solving via dirty solutions:
1. A consumer component wants to call a service, so it constructs a request string (hand-written and dirty).
2. The request string is then compressed and put into a JMS message (dirty as well).
3. The request message is then transmitted via JMS and routing mechanisms (that part is OK).
4. The service first needs to decompress and parse the request string to identify the right method (dirty).
5. The method gets called and the reply goes like #2 - #4.
So this looks pretty much like SOAP, while I think SOAP is too heavy for my application, and furthermore I am not using HTTP at all. Given that, I was thinking one might be able to decompose the problem into different parts:
Part A: HTTP is replaced by JMS (that one is okay)
Part B: XML is replaced by something more lightweight (alright, MessagePack comes in handy here)
Part C: Mechanism to parse request/reply string to identify operation name and parameter values (that one is the real problem here)
I was looking into MessagePack, Protocol Buffers, Thrift and so forth, but what I don't like about them is that they introduce their own way of handling the actual (TCP) communication and bypass my already sophisticated JMS infrastructure (which also handles load balancing and the like).
To further elaborate on Part C above, this is how I am currently handling it. Right now, if a consumer were to call a service, I would do something like the following; let's assume the service takes a text and replies with keywords. I would have the consumer create a JMS message and transmit it (via ActiveMQ) to the service. The message would contain:
Syntax: OPERATION_NAME [PARAMETERS]
Method: GET_ALL_KEYWORDS [String text] returns [JSON String[] keywords]
Example Request: GET_ALL_KEYWORDS "Hello world, this is my input text..."
Example Reply: ["hello", "world", "text"]
Needless to say, it feels hacked together. The problem I see is that if I were to change the method interface by adding or deleting parameters, I would have to check all the request/reply string construction/deconstruction code to synchronize the changes. That is pretty error-prone. I'd rather have a library construct the right request/reply syntax by looking at a Java interface, and throw real exceptions at runtime if I mess things up, like "Protocol Exception: Mandatory parameter not set" or something...
Any projects/libs known for that?
Requirements would be:
It's small, lightweight, and fast.
It's implemented in Java.
It doesn't serve too many purposes (unlike some full-blown framework, e.g. Spring).
I think this Spring package is what you're looking for. See JmsInvokerProxyFactoryBean and related classes.
From the javadoc:
FactoryBean for JMS invoker proxies. Exposes the proxied service for
use as a bean reference, using the specified service interface.
Serializes remote invocation objects and deserializes remote
invocation result objects. Uses Java serialization just like RMI, but
with the JMS provider as communication infrastructure.
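A minimal sketch of both sides, assuming an existing ConnectionFactory bean and a shared service interface (queue, bean, and interface names are illustrative; in practice the consumer-side and service-side beans would live in separate applications):

import javax.jms.ConnectionFactory;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.listener.DefaultMessageListenerContainer;
import org.springframework.jms.remoting.JmsInvokerProxyFactoryBean;
import org.springframework.jms.remoting.JmsInvokerServiceExporter;

@Configuration
public class JmsRpcConfig {

    // Hypothetical contract shared between consumer and service.
    public interface KeywordService {
        java.util.List<String> getAllKeywords(String text);
    }

    // Consumer side: a dynamic proxy that turns KeywordService calls into
    // JMS request/reply messages on the "keywordService" queue.
    @Bean
    public JmsInvokerProxyFactoryBean keywordService(ConnectionFactory connectionFactory) {
        JmsInvokerProxyFactoryBean proxy = new JmsInvokerProxyFactoryBean();
        proxy.setConnectionFactory(connectionFactory);
        proxy.setQueueName("keywordService");
        proxy.setServiceInterface(KeywordService.class);
        return proxy;
    }

    // Service side: exposes the real implementation (impl) on the same queue.
    @Bean
    public JmsInvokerServiceExporter keywordServiceExporter(KeywordService impl) {
        JmsInvokerServiceExporter exporter = new JmsInvokerServiceExporter();
        exporter.setServiceInterface(KeywordService.class);
        exporter.setService(impl);
        return exporter;
    }

    // The exporter is a SessionAwareMessageListener, so it needs a listener container.
    @Bean
    public DefaultMessageListenerContainer keywordServiceListener(ConnectionFactory connectionFactory,
            JmsInvokerServiceExporter keywordServiceExporter) {
        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory(connectionFactory);
        container.setDestinationName("keywordService");
        container.setMessageListener(keywordServiceExporter);
        return container;
    }
}

The proxy serializes each method call (name plus arguments) into the message for you, so adding or removing a parameter only means changing the interface, with serialization failures surfacing as real exceptions instead of silently mismatched request strings.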