Create an unconnected PublishSubscribeChannel with subflows - java

Long story short: I need to have something like below.
PublishSubscribeChannel firstChannel = new PublishSubscribeSpec(executor).subscribe(subFlow -> ...).get();
Is there a way to create a pubsub channel with subflows which is not (yet) connected to any other flow?
The snippet is not working because PublishSubscribeSpec(Executor) has protected access in PublishSubscribeSpec.
I will need to register channels like this dynamically without any information about which flow(s) will be using these channels.

has protected access in PublishSubscribeSpec
That was exactly the reason to make it protected: to avoid an unusual configuration problem like yours. The subflow cannot be provided like this in a plain PublishSubscribeChannel definition. It is the Java DSL parser in the framework that detects such a configuration and registers the respective beans in the application context. With that explicit get() call you fully eliminate the hook the Java DSL parser needs to understand your configuration.
without any information about which flow(s) will be using these channels.
That's not true, judging by your .subscribe(subFlow -> ...) intention. Adding a subflow to the PublishSubscribeSpec is indeed "information about which flow will be using this channel".
Perhaps we need to look at your business requirement from another angle. There is no reason to be stuck with the subflows approach when we can simply use a PublishSubscribeChannel from any other place where a MessageChannel is needed as an input. In other words, if you create a plain PublishSubscribeChannel and then use it, for example, with the IntegrationFlows.from(MessageChannel) factory, you'll get the same runtime result as you would expect from those .subscribe(subFlow -> ...) connections.
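For example (a minimal sketch, not taken from the original answer; the bean and flow names are illustrative):
@Bean
public PublishSubscribeChannel myPubSubChannel(Executor executor) {
    // Register the channel on its own, with no subscribers yet
    return new PublishSubscribeChannel(executor);
}

@Bean
public IntegrationFlow firstSubscriberFlow(PublishSubscribeChannel myPubSubChannel) {
    // Any flow defined later can subscribe simply by starting from that channel
    return IntegrationFlows.from(myPubSubChannel)
            .handle(m -> System.out.println("first subscriber: " + m.getPayload()))
            .get();
}
Each additional IntegrationFlows.from(myPubSubChannel) flow becomes another subscriber of the same channel at runtime.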

Related

Spring AMQP @RabbitListener is not ready to receive messages on @ApplicationReadyEvent. Queues/Bindings declared too slow?

We have a large multi-service Java Spring app that declares about 100 exchanges and queues in RabbitMQ on startup. Some are declared explicitly via beans, but most of them are declared implicitly via @RabbitListener annotations.
@Component
@RabbitListener(
        bindings = @QueueBinding(key = {"example.routingkey"},
                exchange = @Exchange(value = "example.exchange", type = ExchangeTypes.TOPIC),
                value = @Queue(name = "example_queue", autoDelete = "true", exclusive = "true")))
public class ExampleListener {

    @RabbitHandler
    public void handleRequest(final ExampleRequest request) {
        System.out.println("got request!");
    }
}
There are quite a lot of these listeners in the whole application.
The services of the application sometimes talk to each other via RabbitMQ, so take as an example a publisher that publishes a message to the example exchange that the above ExampleListener is bound to.
If that publish happens too early in the application lifecycle (but AFTER all the Spring lifecycle events are through, so after ApplicationReadyEvent and ContextStartedEvent), the binding of the example queue to the example exchange has not yet happened and the very first publish-and-reply chain will fail. In other words, the above ExampleListener would not print "got request!".
We "fixed" this problem by simply waiting 3 seconds before we start sending any RabbitMq messages to give it time to declare all queues,exchanges and bindings but this seems like a very suboptimal solution.
Does anyone else have advice on how to fix this problem? It is quite hard to recreate, as I would guess that it only occurs with a large number of queues/exchanges/bindings that RabbitMQ cannot create fast enough. Forcing Spring to synchronize this creation process and wait for a confirmation from RabbitMQ would probably fix it, but as far as I can see there is no built-in way to do this.
Are you using multiple connection factories?
Or are you setting usePublisherConnection on the RabbitTemplate? (which is recommended, especially for a complex application like yours).
Normally, a single connection is used and all users of it will block until the admin has declared all the elements (it is run as a connection listener).
If the template is using a different connection factory, it will not block because a different connection is used.
If that is the case, and you are using the CachingConnectionFactory, you can call createConnection().close() on the consumer connection factory during initialization, before sending any messages. That call will block until all the declarations are done.
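For example (a sketch, assuming a CachingConnectionFactory is used for the consumers and that publishing only starts once the application is up; the bean name is illustrative):
@Bean
public ApplicationRunner waitForDeclarations(CachingConnectionFactory connectionFactory) {
    return args -> {
        // Opening (and immediately closing) a connection blocks until the RabbitAdmin
        // connection listener has declared all queues, exchanges and bindings
        connectionFactory.createConnection().close();
        // It is now safe to start publishing
    };
}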

gRPC - Use of ContextPropagatingExecutorService and currentContextExecutor

Since gRPC makes the service call on a new thread and the gRPC Context is thread-local, how can I propagate this gRPC context? I found that Context.currentContextExecutor() and ContextPropagatingExecutorService can be used, but I haven't found enough resources or examples for these two options. Can someone help me implement these?
ClientInterceptors shouldn't change the context instance seen by the application. The Context behavior shouldn't really change whether you're using blocking, async, or future stubs, and a blocking API would not be able to change the current context.
While an interceptor is free to modify a pre-existing (mutable) value in the Context, there's generally little need. It is normally easier to create a new interceptor instance each RPC and communicate with the interceptor directly, or communicate via a custom CallOption.
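For instance, a per-call value can be handed to an interceptor through a custom CallOption (a compact sketch; the key name and value here are made up):
// Define the key once
static final CallOptions.Key<String> TENANT_KEY = CallOptions.Key.create("tenant");
// Pass a value at the call site
stub.withOption(TENANT_KEY, "acme").someRpc();
// Read it back inside the interceptor's interceptCall(method, callOptions, next)
String tenant = callOptions.getOption(TENANT_KEY);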
If you have just a single call site that needs access to response headers, then MetadataUtils.newCaptureMetadataInterceptor() is a convenient (although roundabout) way to get the Metadata. It was designed for testing, but is appropriate for small-scale use outside of testing situations.
AtomicReference<Metadata> headers = new AtomicReference<>();
AtomicReference<Metadata> trailers = new AtomicReference<>();
// Using blocking for simplicity, but applies equally to futures
stub.withInterceptors(MetadataUtils.newCaptureMetadataInterceptor(headers, trailers))
.someRpc();
Metadata headersSeen = headers.get();
If you need to access the same header from multiple callsites, it is better to create a custom interceptor that does what you need.
CustomInterceptor interceptor = new CustomInterceptor();
stub.withInterceptors(interceptor)
.someRpc();
... = interceptor.getWhateverValue();
This is demonstrating a "general" use case. Specific cases can often tweak their API further to be more convenient and natural.
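A minimal sketch of what such a CustomInterceptor might look like (the header name and getter are illustrative, not from the original answer):
import io.grpc.*;

/** Captures one response header per call and exposes the last value seen. */
class CustomInterceptor implements ClientInterceptor {

    private static final Metadata.Key<String> SOME_HEADER =
            Metadata.Key.of("some-header", Metadata.ASCII_STRING_MARSHALLER);

    private volatile String value;

    String getWhateverValue() {
        return value;
    }

    @Override
    public <ReqT, RespT> ClientCall<ReqT, RespT> interceptCall(
            MethodDescriptor<ReqT, RespT> method, CallOptions callOptions, Channel next) {
        return new ForwardingClientCall.SimpleForwardingClientCall<ReqT, RespT>(
                next.newCall(method, callOptions)) {
            @Override
            public void start(Listener<RespT> responseListener, Metadata requestHeaders) {
                super.start(new ForwardingClientCallListener
                        .SimpleForwardingClientCallListener<RespT>(responseListener) {
                    @Override
                    public void onHeaders(Metadata responseHeaders) {
                        // Remember the header value, then pass the event on unchanged
                        value = responseHeaders.get(SOME_HEADER);
                        super.onHeaders(responseHeaders);
                    }
                }, requestHeaders);
            }
        };
    }
}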

Where is the Spring Actuator Controller endpoint and can I call it programmatically with jvm call?

I want to find the actual java class that serves the Spring Actuator endpoint (/actuator).
It's similar to this question in a way, but that person wanted to call it via a network HTTP call. Ideally, I can call it within the JVM to save on the cost of setting up an HTTP connection.
The reason for this is that we have two metrics frameworks in our system. We have a legacy metrics framework built on OpenCensus, and we migrated to Spring Actuator (Prometheus metrics based on Micrometer). I think the Spring one is better, but I didn't realize how much my company had built infrastructure around the old one. For example, we leverage internal libraries that use OpenCensus, and the infra team depends on OpenCensus-based metrics from our app. So the idea is to try to merge and report both sets of metrics.
I want to create my own metrics endpoint that pulls in data from Opencensus's endpoint and Actuator's endpoint. I could make an HTTP call to each, but I'd rather call them within the JVM to save on resources and reduce latency.
Or perhaps I'm thinking about it wrong. Should I simply be using MeterRegistry.forEachMeter() in my endpoint?
In any case, I thought if I found the Spring Actuator endpoint, I can see an example of how they're doing it and mimic the implementation even if I don't call it directly.
Bonus: I'll need to track down the Opencensus handler that serves its endpoint too and will probably make another post for that, but if you know the answer to that as well, please share!
I figured it out and am posting this for anyone else who is interested.
The key finding: the MeterRegistry that is @Autowired is actually a PrometheusMeterRegistry if you enable Prometheus metrics.
Once you cast it to a PrometheusMeterRegistry, you can call its .scrape() method to get exactly the same metrics printout you would when you hit the HTTP endpoint.
I also need to get the same info from OpenCensus and I found a way to do that too.
Here's the snippet of code for getting metrics from both frameworks:
// OpenCensus registers its collectors with the default Prometheus CollectorRegistry
Enumeration<MetricFamilySamples> openCensusSamples = CollectorRegistry.defaultRegistry.filteredMetricFamilySamples(ImmutableSet.of());
// Render those samples in the Prometheus text exposition format (0.0.4)
StringWriter writer = new StringWriter();
TextFormat.write004(writer, openCensusSamples);
String openCensusMetrics = writer.toString();
// The autowired MeterRegistry is a PrometheusMeterRegistry when Prometheus is enabled,
// and scrape() returns the same text the /actuator/prometheus endpoint serves
PrometheusMeterRegistry registry = (PrometheusMeterRegistry) meterRegistry;
String micrometerMetrics = registry.scrape();
return openCensusMetrics.concat(micrometerMetrics);
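One way to expose that combined output without an HTTP round trip is a custom actuator endpoint wrapping the snippet above (a sketch; the endpoint id and class name are made up):
@Component
@Endpoint(id = "combinedmetrics")
public class CombinedMetricsEndpoint {

    private final PrometheusMeterRegistry registry;

    public CombinedMetricsEndpoint(PrometheusMeterRegistry registry) {
        this.registry = registry;
    }

    @ReadOperation
    public String metrics() throws IOException {
        // OpenCensus side: render the default CollectorRegistry in Prometheus text format
        StringWriter writer = new StringWriter();
        TextFormat.write004(writer,
                CollectorRegistry.defaultRegistry.filteredMetricFamilySamples(ImmutableSet.of()));
        // Micrometer side: scrape() returns the same text as /actuator/prometheus
        return writer.toString().concat(registry.scrape());
    }
}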
I found another interesting way of doing this.
It is similar to the other answer I gave, but that approach has one issue: it contains duplicate results. When I looked into it, I realized that both OpenCensus and Micrometer were reporting the same metrics.
It turns out that the PrometheusScrapeEndpoint implementation uses the same CollectorRegistry that OpenCensus does, so both sets of metrics were being added to the same registry.
You just need to make sure to provide these beans:
@PostConstruct
public void openCensusStats() {
    PrometheusStatsCollector.createAndRegister();
}

@Bean
public CollectorRegistry collectorRegistry() {
    return CollectorRegistry.defaultRegistry;
}

Why is my handler method not triggered when defined as a lambda?

I am defining an IntegrationFlow to stream from SFTP to S3 with the DSL syntax this way:
return IntegrationFlows.from(Sftp.inboundStreamingAdapter(remoteFileTemplate)
                .remoteDirectory("remoteDirectory"),
        e -> e.poller(Pollers.fixedDelay(POLL, TimeUnit.SECONDS)))
        .transform(new StreamTransformer())
        .handle(s3UploadMessageHandler(outputFolderPath, "headers['file_remoteFile']")) // Upload on S3
        .get();

private S3MessageHandler s3UploadMessageHandler(String folderPath, String spelFileName) {
    S3MessageHandler s3MessageHandler = new S3MessageHandler(amazonS3, s3ConfigProperties.getBuckets().getCardManagementData());
    s3MessageHandler.setKeyExpression(new SpelExpressionParser().parseExpression(String.format("'%s/'.concat(%s)", folderPath, spelFileName)));
    s3MessageHandler.setCommand(S3MessageHandler.Command.UPLOAD);
    return s3MessageHandler;
}
And it works as intended: the file is uploaded to my S3 bucket. However, I would like to avoid the SpEL syntax and instead inject headers from the message into the s3UploadMessageHandler method; this way I could use a simple ValueExpression to set the keyExpression in the s3UploadMessageHandler method.
To do this, I changed
handle(s3UploadMessageHandler(outputFolderPath, "headers['file_remoteFile']")) // Upload on S3
to
handle(m -> s3UploadMessageHandler(outputFolderPath, (String) m.getHeaders().get("file_remoteFile"))) // Upload on S3
But now this handler doesn't seem to be triggered anymore. There are no errors in the logs, and I know from the logs that the SFTP polling is still working.
I tried to find the reason behind this, and I saw that when entering the handle method in IntegrationFlowDefinition.java, the messageHandler class type is different: it's an S3MessageHandler when called without a lambda, and a MyCallingClass$lambda when called with a lambda expression.
What did I miss to make my scenario work?
There are two ways to handle a message. One is via a MessageHandler implementation - this is the most efficient approach, and it is what the framework does for channel adapter implementations, like that S3MessageHandler. The other way is a POJO method invocation - this is the most user-friendly approach when you don't need to worry about any framework interfaces.
So, when you use it like this - .handle(s3UploadMessageHandler(...)) - you refer to a MessageHandler, and the framework knows that a bean for that MessageHandler has to be registered, since your s3UploadMessageHandler() is not a @Bean.
When you use it as a lambda, the framework treats it as a POJO method invocation, and a bean is registered for the MethodInvokingMessageHandler, but not for your S3MessageHandler.
Anyway, even if you change your s3UploadMessageHandler() to be a @Bean method, it is not going to work, because you don't let the framework call S3MessageHandler.handleMessage(). What you do here is just call that private method at runtime to create a new S3MessageHandler instance for every request message: the MethodInvokingMessageHandler calls your lambda in its handleMessage() and that's all - nothing is going to happen with S3.
The ValueExpression cannot help you here, because you need to evaluate the destination file against every single request message; therefore you need a runtime expression. There is indeed nothing wrong with the new SpelExpressionParser().parseExpression() approach: there should be only a single, stateless S3MessageHandler, and it should not be recreated at runtime on every request, which is what you are trying to achieve with that suspicious lambda and ValueExpression.
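If the only goal is to avoid the SpEL string itself, one alternative (a sketch, not part of the original answer) is Spring Integration's FunctionExpression, which keeps the single stateless handler while computing the key from each message at runtime:
// requires org.springframework.integration.expression.FunctionExpression
// and org.springframework.messaging.Message
private S3MessageHandler s3UploadMessageHandler(String folderPath) {
    S3MessageHandler s3MessageHandler =
            new S3MessageHandler(amazonS3, s3ConfigProperties.getBuckets().getCardManagementData());
    // The lambda is evaluated against each request message, so no SpEL text is needed
    s3MessageHandler.setKeyExpression(new FunctionExpression<Message<?>>(
            m -> folderPath + "/" + m.getHeaders().get("file_remoteFile")));
    s3MessageHandler.setCommand(S3MessageHandler.Command.UPLOAD);
    return s3MessageHandler;
}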

graphql-java - How to use subscriptions with spring boot?

In a project I use graphql-java and Spring Boot with a PostgreSQL database. Now I would like to use the subscription feature published in version 3.0.0. Unfortunately, the information about how to use the subscription feature is not very mature.
How is the approach to achieve real-time functionality using graphql-java with subscriptions?
As of recent graphql-java versions, subscriptions are fully supported. The DataFetcher for a subscription must return an org.reactivestreams.Publisher, and graphql-java will take care of mapping the query function over the results.
The feature is nicely documented and there's a complete example using web sockets available in the official repo.
If you have a reactive data source in place (e.g. Mongo with a reactive driver, or probably anything that R2DBC supports), you're all set. Just use @Tailable and Spring Data will already give you a Flux (which implements Publisher), and there's nothing else you need to do.
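For example, with reactive MongoDB (a sketch; the repository and entity names are illustrative, and tailable cursors require a capped collection):
public interface TaskRepository extends ReactiveCrudRepository<Task, String> {

    // Flux implements org.reactivestreams.Publisher, so the result can be returned
    // directly from the subscription DataFetcher
    @Tailable
    Flux<Task> findByStatus(Status status);
}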
As for a more manual Spring specific implementation, I can't imagine it being too hard to use Spring's own event mechanism (a nice tutorial here as well) to underlie the Publisher.
Every time there's an incoming subscription, create and register a new listener with the application context: context.addApplicationListener(listener) that will publish to the correct Publisher. E.g. in the DataFetcher:
// Somehow create a publisher, probably using Spring's Reactor project. Or RxJava.
Publisher<ResultObject> publisher = ...;
//The listener reacts on application events and pushes new values through the publisher
ApplicationListener listener = createListener(publisher);
context.addApplicationListener(listener);
return publisher;
When the web socket disconnects or you somehow know the event stream is finished, you must make sure to remove the listener.
I haven't actually tried any of this, mind you, I'm just thinking aloud.
Another option is to use Reactor directly (with or without Spring WebFlux). There's a sample using Reactor and WebSocket (through GraphQL SPQR Spring Boot Starter) here.
You create a Publisher like this:
//This is really just a thread-safe wrapper around Map<String, Set<FluxSink<Task>>>
private final ConcurrentMultiRegistry<String, FluxSink<Task>> subscribers = new ConcurrentMultiRegistry<>();
@GraphQLSubscription
public Publisher<Task> taskStatusChanged(String taskId) {
    return Flux.create(subscriber -> subscribers.add(taskId, subscriber.onDispose(() -> subscribers.remove(taskId, subscriber))), FluxSink.OverflowStrategy.LATEST);
}
And then push new values from elsewhere (probably a related mutation or a reactive storage) like this:
subscribers.get(taskId).forEach(subscriber -> subscriber.next(task));
E.g.
@GraphQLMutation
public Task updateTask(@GraphQLNonNull String taskId, @GraphQLNonNull Status status) {
    Task task = repo.byId(taskId); //find the task
    task.setStatus(status); //update the task
    repo.save(task); //persist the task
    //Notify all the subscribers following this task
    subscribers.get(taskId).forEach(subscriber -> subscriber.next(task));
    return task;
}
With SPQR Spring Starter, this is all that's needed to get you an Apollo-compatible subscription implementation.
I ran into the same issue while spiking on the library to integrate it with Spring Boot. I found that graphql-java only supports 'subscription' at the schema level; it does not implement the feature end to end, meaning you might need to implement the rest yourself.
Please refer to https://github.com/graphql-java/graphql-java/blob/master/docs/schema.rst#subscription-support
For the record: here is another very nice, compact example that implements GraphQL's essential features (queries, mutations and subscriptions): https://github.com/npalm/blog-graphql-spring-service
