graphql-java - How to use subscriptions with Spring Boot?

In a project I use graphql-java and Spring Boot with a PostgreSQL database. Now I would like to use the subscription feature published in version 3.0.0. Unfortunately, the available information on how to use subscriptions is not very mature.
How is the approach to achieve real-time functionality using graphql-java with subscriptions?

As of recent graphql-java versions, subscriptions are fully supported. The DataFetcher for a subscription must return an org.reactivestreams.Publisher, and graphql-java will take care of mapping the query function over the results.
The feature is nicely documented and there's a complete example using web sockets available in the official repo.
If you have a reactive data source in place (e.g. Mongo with a reactive driver, or probably anything that R2DBC supports), you're all set. Just use @Tailable and Spring Data will already give you a Flux (which implements Publisher), and there's nothing else you need to do.
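For instance, with reactive Spring Data MongoDB a minimal sketch could look like this (Task is a placeholder document class, and the repository name is made up; a tailable cursor requires a capped collection):
import org.springframework.data.mongodb.repository.ReactiveMongoRepository;
import org.springframework.data.mongodb.repository.Tailable;
import reactor.core.publisher.Flux;

// Task is an assumed document class stored in a capped collection; the tailable cursor
// keeps the Flux open and emits new documents as they are inserted.
public interface TaskRepository extends ReactiveMongoRepository<Task, String> {

    @Tailable
    Flux<Task> findByStatus(String status);
}
The subscription DataFetcher can then simply return taskRepository.findByStatus(...), since Flux already implements org.reactivestreams.Publisher.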
As for a more manual, Spring-specific implementation, I can't imagine it being too hard to use Spring's own event mechanism (there's a nice tutorial here as well) to underlie the Publisher.
Every time there's an incoming subscription, create and register a new listener with the application context: context.addApplicationListener(listener) that will publish to the correct Publisher. E.g. in the DataFetcher:
// Somehow create a publisher, probably using Spring's Reactor project. Or RxJava.
Publisher<ResultObject> publisher = ...;
//The listener reacts on application events and pushes new values through the publisher
ApplicationListener listener = createListener(publisher);
context.addApplicationListener(listener);
return publisher;
When the web socket disconnects or you somehow know the event stream is finished, you must make sure to remove the listener.
I haven't actually tried any of this, mind you, I'm just thinking aloud.
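To make that a bit more concrete, here is a rough, untested sketch combining Spring events with Reactor's Flux.create. TaskUpdatedEvent and Task are assumed, made-up classes, and it assumes the ApplicationEventMulticaster bean can be injected for registering and removing listeners:
import graphql.schema.DataFetcher;
import graphql.schema.DataFetchingEnvironment;
import org.reactivestreams.Publisher;
import org.springframework.context.ApplicationEvent;
import org.springframework.context.ApplicationListener;
import org.springframework.context.event.ApplicationEventMulticaster;
import reactor.core.publisher.Flux;
import reactor.core.publisher.FluxSink;

public class TaskSubscriptionFetcher implements DataFetcher<Publisher<Task>> {

    private final ApplicationEventMulticaster multicaster;

    public TaskSubscriptionFetcher(ApplicationEventMulticaster multicaster) {
        this.multicaster = multicaster;
    }

    @Override
    public Publisher<Task> get(DataFetchingEnvironment environment) {
        return Flux.<Task>create(sink -> {
            // TaskUpdatedEvent is an assumed ApplicationEvent carrying the updated Task.
            // The listener reacts on application events and pushes new values through the sink.
            ApplicationListener<ApplicationEvent> listener = event -> {
                if (event instanceof TaskUpdatedEvent) {
                    sink.next(((TaskUpdatedEvent) event).getTask());
                }
            };
            multicaster.addApplicationListener(listener);
            // Remove the listener once the subscription (e.g. the web socket) terminates
            sink.onDispose(() -> multicaster.removeApplicationListener(listener));
        }, FluxSink.OverflowStrategy.BUFFER);
    }
}
Publishing would then just be a matter of calling applicationEventPublisher.publishEvent(new TaskUpdatedEvent(task)) from a mutation or wherever the data changes.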
Another option is to use Reactor directly (with or without Spring WebFlux). There's a sample using Reactor and WebSocket (through GraphQL SPQR Spring Boot Starter) here.
You create a Publisher like this:
//This is really just a thread-safe wrapper around Map<String, Set<FluxSink<Task>>>
private final ConcurrentMultiRegistry<String, FluxSink<Task>> subscribers = new ConcurrentMultiRegistry<>();

@GraphQLSubscription
public Publisher<Task> taskStatusChanged(String taskId) {
    return Flux.create(subscriber -> subscribers.add(taskId, subscriber.onDispose(() -> subscribers.remove(taskId, subscriber))), FluxSink.OverflowStrategy.LATEST);
}
And then push new values from elsewhere (probably a related mutation or a reactive storage) like this:
subscribers.get(taskId).forEach(subscriber -> subscriber.next(task));
E.g.
@GraphQLMutation
public Task updateTask(@GraphQLNonNull String taskId, @GraphQLNonNull Status status) {
    Task task = repo.byId(taskId); //find the task
    task.setStatus(status); //update the task
    repo.save(task); //persist the task
    //Notify all the subscribers following this task
    subscribers.get(taskId).forEach(subscriber -> subscriber.next(task));
    return task;
}
With SPQR Spring Starter, this is all that's needed to get you an Apollo-compatible subscription implementation.
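For reference, ConcurrentMultiRegistry is not a Reactor or Spring class but a small helper from the sample; a minimal, untested sketch of something equivalent might look like this (the real class in the SPQR samples may differ):
import java.util.Collections;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Minimal, illustrative version of a thread-safe multi-map registry with the semantics used above.
public class ConcurrentMultiRegistry<K, V> {

    private final ConcurrentHashMap<K, Set<V>> map = new ConcurrentHashMap<>();

    public void add(K key, V value) {
        map.computeIfAbsent(key, k -> ConcurrentHashMap.newKeySet()).add(value);
    }

    public void remove(K key, V value) {
        map.computeIfPresent(key, (k, set) -> {
            set.remove(value);
            return set.isEmpty() ? null : set;
        });
    }

    public Set<V> get(K key) {
        return map.getOrDefault(key, Collections.emptySet());
    }
}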

I ran into the same issue while spiking on the library to integrate it with Spring Boot. I found that graphql-java only supports 'subscription' at the schema level; it does not provide the transport for this feature, meaning you might need to implement that part yourself.
Please refer to https://github.com/graphql-java/graphql-java/blob/master/docs/schema.rst#subscription-support

For the record: here is another very nice, compact example that implements GraphQL's essential features (queries, mutations and subscriptions): https://github.com/npalm/blog-graphql-spring-service

Related

Spring Cloud @StreamListener condition deprecated - what is the alternative?

We have multiple consumer applications listening to the same Kafka topic, and a producer sets a message header when sending a message to the topic so that a specific instance can evaluate the header and process the message, e.g.
@StreamListener(target = ITestSink.CHANNEL_NAME, condition = "headers['franchiseName'] == 'sydney'")
public void fullfillOrder(@Payload TestObj message) {
    log.info("sydney order request received message is {}", message.getName());
}
In Spring Cloud Stream 3.0.0 the @StreamListener annotation is deprecated, and I could not find an equivalent of the condition property in the functional model.
Any suggestion?
Though I was not able to find the equivalent for the functional approach either, I do have a suggestion.
The @StreamListener annotation's condition does not change the fact that the application must consume the message, read its header, and filter out specific records before passing it to the listener (fullfillOrder()). So it's safe to assume you're consuming every message that hits the topic regardless (via the event receiver that Spring Cloud has implemented for us under the hood), but the listener only gets executed when header == sydney.
If there were a way to configure the event receiver that Spring Cloud uses (to discard the message before it hits the listener), I would suggest looking into that. If not, I would resort to filtering out any messages (non-sydney) before doing any processing. If you're familiar with Spring Cloud's functional approach, it would look something like this:
@Bean
public Consumer<Message<TestObj>> fulfillOrder() {
    return msg -> {
        // to get a header: msg.getHeaders().get(key, valueType);
        // filter out bad messages here
    };
}
or
@Bean
public Consumer<ConsumerRecord<?, TestObj>> fulfillOrder() {
    return msg -> {
        // msg.headers().lastHeader("franchiseName").value() -> filter them out
    };
}
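For illustration, a filled-in version of the first variant might look something like this (untested; the header name and expected value are taken from the question, and log is assumed to be an available logger on the class):
@Bean
public Consumer<Message<TestObj>> fulfillOrder() {
    return msg -> {
        // read the routing header and drop anything not meant for this instance
        String franchise = msg.getHeaders().get("franchiseName", String.class);
        if (!"sydney".equals(franchise)) {
            return;
        }
        // log is assumed to be an SLF4J logger declared on this class
        log.info("sydney order request received message is {}", msg.getPayload().getName());
    };
}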
Other:
^ My code assumes you're integrating the kafka-client API with Spring Cloud Stream via spring-cloud-stream-binder-kafka. Based on the tags listed, I will note that Spring Cloud Stream has two kinds of binders for Kafka: one for the Kafka client library, and one for the Kafka Streams library.
Without considering Spring Cloud / frameworks, the high-level DSL in Kafka Streams doesn't give you access to headers, but the low-level Processor API does. From the example, it seems like you're leveraging the client binder and not spring-cloud-stream-binder-kafka-streams (the Kafka Streams binder). I haven't seen an implementation of Spring Cloud Stream plus the Kafka Streams binder using the low-level Processor API, so I can't tell if that was the aim.

Where is the Spring Actuator controller endpoint, and can I call it programmatically with a JVM call?

I want to find the actual java class that serves the Spring Actuator endpoint (/actuator).
It's similar to this question in a way, but that person wanted to call it via a network HTTP call. Ideally, I can call it within the JVM to save on the cost of setting up an HTTP connection.
The reason for this is that we have two metrics frameworks in our system. We have a legacy metrics framework built on OpenCensus, and we migrated to Spring Actuator (Prometheus metrics based on Micrometer). I think the Spring one is better, but I didn't realize how much infrastructure my company had built around the old one. For example, we leverage internal libraries that use OpenCensus, and the infra team depends on OpenCensus-based metrics from our app. So the idea is to try to merge and report both sets of metrics.
I want to create my own metrics endpoint that pulls in data from OpenCensus's endpoint and Actuator's endpoint. I could make an HTTP call to each, but I'd rather call them within the JVM to save on resources and reduce latency.
Or perhaps I'm thinking about it wrong. Should I simply be using MeterRegistry.forEachMeter() in my endpoint?
In any case, I thought if I found the Spring Actuator endpoint, I can see an example of how they're doing it and mimic the implementation even if I don't call it directly.
Bonus: I'll need to track down the OpenCensus handler that serves its endpoint too and will probably make another post for that, but if you know the answer to that as well, please share!
I figured it out and posting this for anyone else interested.
The key finding: the MeterRegistry that is @Autowired is actually a PrometheusMeterRegistry if you enable Prometheus metrics.
Once you cast it to a PrometheusMeterRegistry, you can call its .scrape() method to return the exact same metrics printout you would get when you hit the HTTP endpoint.
I also need to get the same info from OpenCensus and I found a way to do that too.
Here's the snippet of code for getting metrics from both frameworks
Enumeration<MetricFamilySamples> openCensusSamples = CollectorRegistry.defaultRegistry.filteredMetricFamilySamples(ImmutableSet.of());
StringWriter writer = new StringWriter();
TextFormat.write004(writer, openCensusSamples);
String openCensusMetrics = writer.toString();
PrometheusMeterRegistry registry = (PrometheusMeterRegistry) meterRegistry;
String micrometerMetrics = registry.scrape();
return openCensusMetrics.concat(micrometerMetrics);
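For context, that snippet could be wrapped in a small custom controller along these lines (a hypothetical class; the path and names are made up, and it assumes the Guava and Prometheus client dependencies already used above):
import java.io.IOException;
import java.io.StringWriter;
import java.util.Enumeration;

import com.google.common.collect.ImmutableSet;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.prometheus.PrometheusMeterRegistry;
import io.prometheus.client.Collector.MetricFamilySamples;
import io.prometheus.client.CollectorRegistry;
import io.prometheus.client.exporter.common.TextFormat;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class CombinedMetricsController {

    private final MeterRegistry meterRegistry;

    public CombinedMetricsController(MeterRegistry meterRegistry) {
        this.meterRegistry = meterRegistry;
    }

    // Hypothetical endpoint path; adjust as needed
    @GetMapping(value = "/combined-metrics", produces = TextFormat.CONTENT_TYPE_004)
    public String combinedMetrics() throws IOException {
        // OpenCensus metrics, rendered in the Prometheus text format
        Enumeration<MetricFamilySamples> openCensusSamples =
                CollectorRegistry.defaultRegistry.filteredMetricFamilySamples(ImmutableSet.of());
        StringWriter writer = new StringWriter();
        TextFormat.write004(writer, openCensusSamples);

        // Micrometer metrics from the same scrape() call the Prometheus actuator endpoint uses
        PrometheusMeterRegistry registry = (PrometheusMeterRegistry) meterRegistry;
        return writer.toString().concat(registry.scrape());
    }
}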
I found another interesting way of doing this.
The other answer I gave works, but it has one issue: it contains duplicate results. When I looked into it, I realized that both OpenCensus and Micrometer were reporting the same metrics.
It turns out that the PrometheusScrapeEndpoint implementation uses the same CollectorRegistry that OpenCensus does, so both sets of metrics were being added to the same registry.
You just need to make sure to provide these beans:
@PostConstruct
public void openCensusStats() {
    PrometheusStatsCollector.createAndRegister();
}

@Bean
public CollectorRegistry collectorRegistry() {
    return CollectorRegistry.defaultRegistry;
}

Create a not-yet-connected PublishSubscribeChannel with subflows

Long story short: I need to have something like below.
PublishSubscribeChannel firstChannel = new PublishSubscribeSpec(executor).subscribe(subFlow -> ...).get();
Is there a way to create a pubsub channel with subflows which is not (yet) connected to any other flow?
The snippet does not work because PublishSubscribeSpec(Executor) has protected access in PublishSubscribeSpec.
I will need to register channels like this dynamically without any information about which flow(s) will be using these channels.
has protected access in PublishSubscribeSpec
That was exactly the reason to make it protected: to avoid an unusual configuration problem like yours. The subflow cannot be provided like this in a plain PublishSubscribeChannel definition. It is the Java DSL parser in the framework that determines such a configuration and registers the respective beans in the application context. With that explicit get() call you fully eliminate the hook the Java DSL parser needs to understand your configuration.
without any information about which flow(s) will be using these channels.
That's not true according to your .subscribe(subFlow -> ...) intention. Adding a subflow to the PublishSubscribeSpec is indeed "information about which flow will be using this channel".
Perhaps we need to look at your business requirement from another angle. There is no reason to be stuck with the subflows approach when we can simply use a PublishSubscribeChannel from any other place where a MessageChannel is needed as an input. I mean, if you just create a plain PublishSubscribeChannel and then use it, for example, with the IntegrationFlows.from(MessageChannel) factory, you'll get the same runtime result as you would expect from those .subscribe(subFlow -> ...) connections.
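As a rough, untested sketch of that suggestion (bean and flow names are made up, and any java.util.concurrent.Executor would do for the channel), a plain channel bean can be declared on its own and flows can attach to it later:
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.task.TaskExecutor;
import org.springframework.integration.channel.PublishSubscribeChannel;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;

@Configuration
public class ChannelConfig {

    // A standalone pub-sub channel, registered without knowing its future subscribers
    @Bean
    public PublishSubscribeChannel firstChannel(TaskExecutor executor) {
        return new PublishSubscribeChannel(executor);
    }

    // Any flow defined later (even in another configuration class) can subscribe to it
    @Bean
    public IntegrationFlow someSubscriberFlow(PublishSubscribeChannel firstChannel) {
        return IntegrationFlows.from(firstChannel)
                .handle(message -> System.out.println("Received: " + message.getPayload()))
                .get();
    }
}
Messages sent to firstChannel at runtime are then broadcast to whatever flows have subscribed this way.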

Implement Reactive Kafka Listener in Spring Boot application

I'm trying to implement a reactive Kafka consumer in my Spring Boot application, and I'm looking at these examples:
https://github.com/reactor/reactor-kafka/blob/master/reactor-kafka-samples/src/main/java/reactor/kafka/samples/SampleScenarios.java
and it looks like there is no Spring support in reactive Kafka yet.
I understand how Kafka listeners work in the non-reactive Kafka API in Spring: the simplest solution is to configure beans for ConcurrentKafkaListenerContainerFactory and ConsumerFactory, then use the @KafkaListener annotation, and voilà.
But I'm not sure how to properly use reactive Kafka in Spring right now.
Basically, I need a listener for a topic. Should I create some kind of loop or scheduler of my own?
Or maybe I'm missing something. Can anyone share their knowledge and best practices?
I don't have a ready solution yet, but I'm trying this (Kotlin code, Spring Boot). Someone published part of this code snippet here: https://github.com/reactor/reactor-kafka/issues/100
@EventListener(ApplicationStartedEvent::class)
fun onSomeEvent() {
    kafkaReceiver
        .receive()
        .doOnNext { record ->
            val myEvent = record.value()
            processMyEvent(myEvent).thenEmpty {
                record.receiverOffset().acknowledge()
            }
        }
        .doOnError {
            /* todo */
        }
        .subscribe()
}
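In case a plain Java variant helps, the same idea could be sketched roughly like this (untested; the KafkaReceiver bean, the MyEvent type and processMyEvent are assumptions), tying the offset acknowledgement to completion of the processing step:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.context.event.ApplicationStartedEvent;
import org.springframework.context.event.EventListener;
import org.springframework.stereotype.Component;
import reactor.core.publisher.Mono;
import reactor.kafka.receiver.KafkaReceiver;

@Component
public class MyEventConsumer {

    private static final Logger log = LoggerFactory.getLogger(MyEventConsumer.class);

    // assumed to be configured elsewhere as a bean; MyEvent is a placeholder payload type
    private final KafkaReceiver<String, MyEvent> kafkaReceiver;

    public MyEventConsumer(KafkaReceiver<String, MyEvent> kafkaReceiver) {
        this.kafkaReceiver = kafkaReceiver;
    }

    @EventListener(ApplicationStartedEvent.class)
    public void startConsuming() {
        kafkaReceiver.receive()
                .concatMap(record -> processMyEvent(record.value())
                        .doOnSuccess(ignored -> record.receiverOffset().acknowledge()))
                .doOnError(e -> log.error("Kafka consumer failed", e))
                .subscribe();
    }

    private Mono<Void> processMyEvent(MyEvent event) {
        // business logic goes here; assumed to be reactive
        return Mono.empty();
    }
}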
Look into other Stack Overflow questions. There is not much there, but maybe they will give you some ideas:
Using onErrorResume to handle problematic payloads posted to Kafka using Reactor Kafka
Continue consuming subsequent records in reactor kafka after deserialization exception

Lagom Publish message with Kafka

Only one way of publishing is described here.
Is there another way?
For example, I need to publish with a dynamic topic id and a custom event, without the persistentEntityRegistry.
And how can I publish the event with an eventId?
@Override
default Descriptor descriptor() {
    return named("helloservice").withCalls(
            pathCall("/api/hello/:id", this::hello),
            pathCall("/api/event/:id", this::pushEventWithId) // id - eventId
        )
        .withTopics(
            topic(GREETINGS_TOPIC, this::greetingsTopic)
        )
        .withAutoAcl(true);
}
Processing the request:
public ServiceCall<RequestMessage, NotUsed> pushEventWithId(String eventId) {
    return message -> {
        // Here I need to push this message to Kafka with eventId. Another service should be subscribed to this eventId
    };
}
Lagom version: 1.3.10
This is not currently supported. What you can do is instantiate the Kafka client directly yourself (this is not hard to do) to publish messages imperatively like that.
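For illustration only, such an imperative publish could be sketched like this (untested; kafkaProducer is an assumed, pre-configured org.apache.kafka.clients.producer.KafkaProducer<String, String>, the topic naming is arbitrary, and toJson is a hypothetical serialization helper):
public ServiceCall<RequestMessage, NotUsed> pushEventWithId(String eventId) {
    return message -> {
        // kafkaProducer is an assumed, pre-configured plain Kafka producer; another service
        // can subscribe to the topic derived from eventId
        ProducerRecord<String, String> record =
                new ProducerRecord<>("events-" + eventId, eventId, toJson(message));
        kafkaProducer.send(record);
        return CompletableFuture.completedFuture(NotUsed.getInstance());
    };
}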
While support will be added for publishing messages imperatively in the future, one reason Lagom hasn't added support yet is that very often when people want to do this, they're actually introducing anti-patterns into their system, such as the opportunity for inconsistency. For example, if you have a service that updates some database and then publishes a message to Kafka, you've got a problem, because if the database update succeeds but the message publish fails, then nothing is going to get that update, and your system will be in an inconsistent state. Watch this presentation for a detailed look into why this is a problem, and how publishing events from an event log solves it.
