Lagom: Publish message with Kafka - Java

Only one way of publishing is described here. Is there another way? For example, I need to publish with a dynamic topic id and a custom event, without using the persistentEntityRegistry. And how can I publish an event with an eventId?
@Override
default Descriptor descriptor() {
    return named("helloservice").withCalls(
        pathCall("/api/hello/:id", this::hello),
        pathCall("/api/event/:id", this::pushEventWithId) // id - eventId
    )
    .withTopics(
        topic(GREETINGS_TOPIC, this::greetingsTopic)
    )
    .withAutoAcl(true);
}
Processing the request:
public ServiceCall<RequestMessage, NotUsed> pushEventWithId(String eventId) {
    return message -> {
        // Here I need to push this message to Kafka with eventId.
        // Another service should be subscribed to this eventId.
    };
}
Lagom version: 1.3.10

This is not currently supported. What you can do is instantiate a Kafka client directly yourself (this is not hard to do) to publish messages imperatively like that.
While support for publishing messages imperatively will be added in the future, one reason Lagom hasn't added it yet is that very often when people want to do this, they're actually introducing anti-patterns into their system, such as the opportunity for inconsistency. For example, if you have a service that updates a database and then publishes a message to Kafka, you've got a problem: if the database update succeeds but the message publish fails, nothing will ever see that update, and your system will be in an inconsistent state. Watch this presentation for a detailed look into why this is a problem, and how publishing events from an event log solves it.
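As a rough illustration of that workaround, here is a minimal sketch using the plain Kafka producer client directly (this assumes the org.apache.kafka:kafka-clients dependency; the bootstrap address, topic name, and String serialization are placeholder assumptions, not a Lagom API):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ImperativeKafkaPublisher {

    private final KafkaProducer<String, String> producer;

    public ImperativeKafkaPublisher(String bootstrapServers) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrapServers);
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        this.producer = new KafkaProducer<>(props);
    }

    // Publish a payload keyed by eventId, so a subscribing service can
    // filter or partition on that key.
    public void publish(String topic, String eventId, String payload) {
        producer.send(new ProducerRecord<>(topic, eventId, payload));
    }
}

Keep in mind the consistency caveat above: if this publish happens outside the entity's event log, a failed send can silently leave other services out of date.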

Related

Spring AMQP @RabbitListener is not ready to receive messages on @ApplicationReadyEvent. Queues/Bindings declared too slow?

We have a large multi-service Java Spring app that declares about 100 exchanges and queues in RabbitMQ on startup. Some are declared explicitly via beans, but most of them are declared implicitly via @RabbitListener annotations.
@Component
@RabbitListener(
    bindings = @QueueBinding(key = {"example.routingkey"},
        exchange = @Exchange(value = "example.exchange", type = ExchangeTypes.TOPIC),
        value = @Queue(name = "example_queue", autoDelete = "true", exclusive = "true")))
public class ExampleListener {

    @RabbitHandler
    public void handleRequest(final ExampleRequest request) {
        System.out.println("got request!");
    }
}
There are quite a lot of these listeners in the whole application.
The services of the application sometimes talk to each other via RabbitMQ, so take as an example a publisher that publishes a message to the example exchange that the above ExampleListener is bound to.
If that publish happens too early in the application lifecycle (but AFTER all the Spring lifecycle events are through, so after ApplicationReadyEvent and ContextStartedEvent), the binding of the example queue to the example exchange has not yet happened, and the very first publish-and-reply chain will fail. In other words, the above ExampleListener would not print "got request!".
We "fixed" this problem by simply waiting 3 seconds before sending any RabbitMQ messages, to give it time to declare all queues, exchanges, and bindings, but this seems like a very suboptimal solution.
Does anyone have advice on how to fix this problem? It is quite hard to recreate, as I would guess it only occurs with a large number of queues/exchanges/bindings that RabbitMQ cannot create fast enough. Forcing Spring to synchronize this creation process and wait for a confirmation from RabbitMQ would probably fix it, but as I see it, there is no built-in way to do this.
Are you using multiple connection factories?
Or are you setting usePublisherConnection on the RabbitTemplate? (which is recommended, especially for a complex application like yours).
Normally, a single connection is used and all users of it will block until the admin has declared all the elements (it is run as a connection listener).
If the template is using a different connection factory, it will not block because a different connection is used.
If that is the case, and you are using the CachingConnectionFactory, you can call createConnection().close() on the consumer connection factory during initialization, before sending any messages. That call will block until all the declarations are done.
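If you take that route, a minimal sketch could look like this (assuming a single CachingConnectionFactory bean on the consumer side; the bean wiring and listener method are illustrative, not a definitive recipe):

import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.boot.context.event.ApplicationReadyEvent;
import org.springframework.context.event.EventListener;
import org.springframework.stereotype.Component;

@Component
public class AmqpDeclarationWarmup {

    private final CachingConnectionFactory consumerConnectionFactory;

    public AmqpDeclarationWarmup(CachingConnectionFactory consumerConnectionFactory) {
        this.consumerConnectionFactory = consumerConnectionFactory;
    }

    // Opening (and immediately closing) a connection blocks until the
    // RabbitAdmin connection listener has declared all exchanges, queues,
    // and bindings, so it is safe to publish afterwards.
    @EventListener(ApplicationReadyEvent.class)
    public void forceDeclarations() {
        consumerConnectionFactory.createConnection().close();
    }
}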

Publish / Subscribe MQTT using SmallRye reactive messaging dynamically

We are trying to publish and subscribe to the MQTT protocol using SmallRye Reactive Messaging. We managed to publish a message to a specific topic/channel through the following simple code:
import io.smallrye.mutiny.Multi;
import org.eclipse.microprofile.reactive.messaging.Outgoing;
import javax.enterprise.context.ApplicationScoped;
import java.time.Duration;

@ApplicationScoped
public class Publish {

    @Outgoing("pao")
    public Multi<String> generate() {
        return Multi.createFrom().ticks().every(Duration.ofSeconds(1))
                .map(x -> "A Message in here");
    }
}
What we want is to call the generate() method whenever we want, somehow, with a dynamic topic that the user defines. That was our problem, but then we found these classes in that repo on GitHub, in the package io.smallrye.reactive.messaging.mqtt.
For example, we found a class that appears to make a publish call to an MQTT broker (a Mosquitto server).
But on the statement SendingMqttMessage<String> message = new SendingMqttMessage<String>("myTopic","A message in here",0,false);
we get a red underline under SendingMqttMessage<String> saying 'SendingMqttMessage(java.lang.String, java.lang.String, io.netty.handler.codec.mqtt.MqttQoS, boolean)' is not public in 'io.smallrye.reactive.messaging.mqtt.SendingMqttMessage'. Cannot be accessed from outside package.
UPDATE (publish done)
We finally made a publish request to the MQTT broker (a Mosquitto server), with a dynamic topic configured by the user. As we found out, the SendingMqttMessage class above was not supposed to be used at all. We also needed an Emitter to actually make a publish request with a dynamic topic:
@Inject
@Channel("panatha")
Emitter<String> emitter;

@POST
@Consumes(MediaType.APPLICATION_JSON)
@Produces(MediaType.APPLICATION_JSON)
public Response createUser(Device device) {
    System.out.println("New Publish request: message->" + device.getMessage() + " & topic->" + device.getTopic());
    emitter.send(MqttMessage.of(device.getTopic(), device.getMessage()));
    return Response.ok().status(Response.Status.CREATED).build();
}
Now we need to find out how to subscribe to a topic dynamically.
First, to get us on the same page:
Reactive Messaging does not work with topics, but with channels.
That is important to note, because a given channel can only be read from or written to, not both. So if you want both, you need to configure two channels pointing at the same topic, one incoming and one outgoing.
To answer your question:
You made a pretty good start with Emitters, but you still lack the dynamic nature you'd like.
In your example, you acquired that Emitter through CDI.
That is all we need to make this dynamic, since we can dynamically select beans at runtime using CDI, like this:
Sending Messages
private Emitter<byte[]> dynamicEmitter(String topic) {
    return CDI.current().select(new TypeLiteral<Emitter<byte[]>>() {}, new ChannelAnnotation(topic)).get();
}
Please also note that I am creating an Emitter of type byte[], as this is the only currently supported type of the smallrye-mqtt connector (version 3.4.0) according to its documentation.
Receiving Messages
To read messages from a reactive messaging channel, you can use the counterpart of the Emitter, which is the Publisher.
It can be used analogously:
private Publisher<byte[]> dynamicReceiver(String topic) {
    return CDI.current().select(new TypeLiteral<Publisher<byte[]>>() {}, new ChannelAnnotation(topic)).get();
}
You can then process this data in any way you like.
As a demo, I hung it on a simple REST endpoint:
@GET
@Produces(MediaType.SERVER_SENT_EVENTS)
public Multi<String> stream(@QueryParam("topic") String topic) {
    return Multi.createFrom().publisher(dynamicReceiver(topic)).onItem().transform(String::new);
}

@GET
@Path("/publish")
public boolean publish(@QueryParam("msg") String msg, @QueryParam("topic") String topic) {
    dynamicEmitter(topic).send(msg.getBytes());
    return true;
}
One more thing
When creating this solution I hit a few pitfalls you should know about:
Quarkus removes any CDI beans that are "unused". So if you want to inject them dynamically, you need to exclude them, or turn off that feature.
All channels injected this way must be configured; otherwise the injection will fail.
For some reason (even with removal completely disabled), I was unable to inject Emitters dynamically unless they were also injected somewhere else.
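On the second pitfall: channel configuration for the smallrye-mqtt connector lives in application.properties, roughly like this (the host and port values are placeholders; check the connector documentation for the exact attribute names):

mp.messaging.outgoing.panatha.connector=smallrye-mqtt
mp.messaging.outgoing.panatha.host=localhost
mp.messaging.outgoing.panatha.port=1883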

Corda RPC: A hot observable returned from an RPC was never subscribed to

I built a CorDapp based on the Java template. On top of that, I created a React front-end. Now I want to start a flow from my front-end. To do so, I modified the template server so that the controller starts my flow:
@GetMapping(value = "/templateendpoint", produces = "text/plain")
private String templateendpoint() {
    proxy.startTrackedFlowDynamic(issueTokens.class, 30, "O=Bob, L=Berlin, C=DE");
    return "The flow was started";
}
This operation does start the flow that issues 30 tokens to Bob. I can see that the flow was successful by querying Bob's vault. However, I get the following error on the template server:
RPCClientProxyHandler.onRemoval - A hot observable returned from an RPC was never subscribed to.
This wastes server-side resources because it was queueing observations for retrieval.
It is being closed now, but please adjust your code to call .notUsed() on the observable to close it explicitly. (Java users: subscribe to it then unsubscribe).
If you aren't sure where the leak is coming from, set -Dnet.corda.client.rpc.trackRpcCallSites=true on the JVM command line and you will get a stack trace with this warning.
After this first transaction, I cannot start another flow. The .notUsed() method only works in Kotlin, and I couldn't find a working way to subscribe to and then unsubscribe from the observable.
Could anyone give me an example of how to implement this with the Corda flow? Moreover, what is the most practical way to pass information from the front-end to the controller class, in order to use it as flow arguments?
The error appears because the Observable on the client side gets garbage-collected.
The solution has been provided in the brackets:
(Java users: subscribe to it then unsubscribe)
So in your case, you can do something like this:
Subscription subs = updates.subscribe();
subs.unsubscribe();
A probably more practical way is to keep the observable instance as a private attribute, so that it won't get garbage-collected, i.e.:
private Observable observable;
Ref: https://docs.corda.net/docs/corda-os/4.4/clientrpc.html#observables
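Applied to the controller above, a minimal untested sketch could look like this (it assumes the issueTokens flow returns a SignedTransaction; imports abbreviated to the relevant ones):

import net.corda.core.messaging.FlowProgressHandle;
import net.corda.core.transactions.SignedTransaction;
import rx.Subscription;

@GetMapping(value = "/templateendpoint", produces = "text/plain")
public String templateendpoint() {
    FlowProgressHandle<SignedTransaction> handle =
            proxy.startTrackedFlowDynamic(issueTokens.class, 30, "O=Bob, L=Berlin, C=DE");
    // Subscribe to the progress tracker's observable, then unsubscribe,
    // so the server-side queue is closed instead of leaking.
    Subscription subscription = handle.getProgress().subscribe();
    subscription.unsubscribe();
    return "The flow was started";
}

Alternatively, startFlowDynamic() (rather than startTrackedFlowDynamic()) returns no progress observable at all, which avoids the warning if you don't need progress tracking.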

graphql-java - How to use subscriptions with spring boot?

In a project I use graphql-java and Spring Boot with a PostgreSQL database. Now I would like to use the subscription feature published in version 3.0.0. Unfortunately, the information about how to use the subscription feature is not very mature.
How would one approach real-time functionality using graphql-java with subscriptions?
As of recent graphql-java versions, subscriptions are fully supported. The DataFetcher for a subscription must return an org.reactivestreams.Publisher, and graphql-java will take care of mapping the query function over the results.
The feature is nicely documented and there's a complete example using web sockets available in the official repo.
If you have a reactive data source in place (e.g. Mongo with a reactive driver, or probably anything that R2DBC supports), you're all set. Just use @Tailable and Spring Data will already give you a Flux (which implements Publisher), and there's nothing else you need to do.
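For illustration, a sketch of that reactive Mongo variant (Message is a hypothetical mapped document class, and tailable cursors only work on capped collections):

import org.springframework.data.mongodb.repository.ReactiveMongoRepository;
import org.springframework.data.mongodb.repository.Tailable;
import reactor.core.publisher.Flux;

public interface MessageRepository extends ReactiveMongoRepository<Message, String> {

    // Flux implements org.reactivestreams.Publisher, so the result can be
    // returned directly from a subscription DataFetcher.
    @Tailable
    Flux<Message> findByTopic(String topic);
}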
As for a more manual, Spring-specific implementation, I can't imagine it being too hard to use Spring's own event mechanism (a nice tutorial here as well) to underlie the Publisher.
Every time there's an incoming subscription, create and register a new listener with the application context, context.addApplicationListener(listener), which will publish to the correct Publisher. E.g. in the DataFetcher:
// Somehow create a publisher, probably using Spring's Reactor project. Or RxJava.
Publisher<ResultObject> publisher = ...;

// The listener reacts to application events and pushes new values through the publisher
ApplicationListener listener = createListener(publisher);
context.addApplicationListener(listener);
return publisher;
When the web socket disconnects or you somehow know the event stream is finished, you must make sure to remove the listener.
I haven't actually tried any of this, mind you, I'm just thinking aloud.
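To make the idea concrete anyway, here's an untested sketch using a Reactor sink as the Publisher (ResultObject is hypothetical, and it assumes events are published elsewhere via context.publishEvent(resultObject)):

import org.reactivestreams.Publisher;
import org.springframework.context.ApplicationListener;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.PayloadApplicationEvent;
import reactor.core.publisher.Sinks;

public class ResultObjectPublisherFactory {

    private final ConfigurableApplicationContext context;

    public ResultObjectPublisherFactory(ConfigurableApplicationContext context) {
        this.context = context;
    }

    // Called from the subscription's DataFetcher.
    public Publisher<ResultObject> createPublisher() {
        // Sink backing the Publisher handed to graphql-java.
        Sinks.Many<ResultObject> sink = Sinks.many().multicast().onBackpressureBuffer();
        // Bridge Spring application events into the sink; remember to remove
        // the listener when the subscription ends (see the note above).
        ApplicationListener<PayloadApplicationEvent<ResultObject>> listener =
                event -> sink.tryEmitNext(event.getPayload());
        context.addApplicationListener(listener);
        return sink.asFlux();
    }
}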
Another option is to use Reactor directly (with or without Spring WebFlux). There's a sample using Reactor and WebSocket (through GraphQL SPQR Spring Boot Starter) here.
You create a Publisher like this:
// This is really just a thread-safe wrapper around Map<String, Set<FluxSink<Task>>>
private final ConcurrentMultiRegistry<String, FluxSink<Task>> subscribers = new ConcurrentMultiRegistry<>();

@GraphQLSubscription
public Publisher<Task> taskStatusChanged(String taskId) {
    return Flux.create(subscriber -> subscribers.add(taskId, subscriber.onDispose(() -> subscribers.remove(taskId, subscriber))), FluxSink.OverflowStrategy.LATEST);
}
And then push new values from elsewhere (probably a related mutation or a reactive storage) like this:
subscribers.get(taskId).forEach(subscriber -> subscriber.next(task));
E.g.
@GraphQLMutation
public Task updateTask(@GraphQLNonNull String taskId, @GraphQLNonNull Status status) {
    Task task = repo.byId(taskId); // find the task
    task.setStatus(status); // update the task
    repo.save(task); // persist the task
    // Notify all the subscribers following this task
    subscribers.get(taskId).forEach(subscriber -> subscriber.next(task));
    return task;
}
With SPQR Spring Starter, this is all that's needed to get you an Apollo-compatible subscription implementation.
I hit the same issue when I was spiking on the lib to integrate it with Spring Boot. I found that graphql-java seems to support 'subscription' only at the schema level; it does not provide any transport-level support for this feature, meaning you might need to implement that yourself.
Please refer to https://github.com/graphql-java/graphql-java/blob/master/docs/schema.rst#subscription-support
For the record: here is another very nice, compact example that implements GraphQL's essential features, queries, mutations and subscriptions: https://github.com/npalm/blog-graphql-spring-service

Message dispatching system design in Java

I am looking for a lightweight and efficient solution for the following use case:
A gateway module receives resources to deliver to different acceptors.
The resources are queued (in order of arrival) per acceptor.
A purge process scans those queues; if resources are available for some acceptor, it bundles them under a tag (unique id) and sends a notification that a new bundle is available.
System characteristics:
The number of acceptors is dynamic.
No limitations on number of resources in one bundle.
The module will be used in Tomcat 7 under Java 7 (not clustered).
I considered the following solutions:
JMS - dynamic queue configuration for each acceptor; is it possible to consume all available messages in a queue? Thread configuration per queue (not scalable)?
Akka actors. I didn't find a suitable pattern for usage.
A naive pure-Java implementation, where the queues are scanned by one thread (round robin); see the sketch at the end of this question.
I think this is the right place to discuss the available solutions for this problem.
Please share your ideas, considering the following points:
Suitable third-party frameworks.
Scalable scanning of the resource queues.
Thanks in advance.
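For reference, option 3 from the list above could look roughly like this. It is a minimal, untested sketch (the resource type and the notification hook are illustrative placeholders; it is kept Java 7-compatible to match the constraints above):

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Queue;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

public class PurgeScanner implements Runnable {

    // One FIFO queue per acceptor; acceptors can be added dynamically.
    private final ConcurrentHashMap<String, Queue<Object>> queues =
            new ConcurrentHashMap<String, Queue<Object>>();

    public void enqueue(String acceptor, Object resource) {
        Queue<Object> q = queues.get(acceptor);
        if (q == null) {
            Queue<Object> fresh = new ConcurrentLinkedQueue<Object>();
            q = queues.putIfAbsent(acceptor, fresh);
            if (q == null) {
                q = fresh;
            }
        }
        q.add(resource);
    }

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            // Round-robin over all acceptor queues.
            for (Map.Entry<String, Queue<Object>> entry : queues.entrySet()) {
                List<Object> bundle = new ArrayList<Object>();
                Object resource;
                while ((resource = entry.getValue().poll()) != null) {
                    bundle.add(resource);
                }
                if (!bundle.isEmpty()) {
                    // Tag the bundle and notify downstream (hook left abstract).
                    notifyBundleAvailable(UUID.randomUUID().toString(), entry.getKey(), bundle);
                }
            }
            try {
                Thread.sleep(100); // avoid busy-spinning when all queues are empty
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

    private void notifyBundleAvailable(String tag, String acceptor, List<Object> bundle) {
        // hypothetical notification, e.g. a JMS message or an Akka message
    }
}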
You can use various technologies, e.g.:
JMS dynamic queues
An extended LMAX Disruptor (e.g. https://github.com/hicolour/disruptor-ext)
but for high-availability and scalability reasons you should use Akka.
Akka
The starting point for your implementation will be the consistent-hashing routing algorithm built into Akka: in simple words, this type of routing logic selects a consistent routee based on a provided key. Routees, in terms of your problem description, are the acceptors.
A router actor comes in two distinct flavors, which gives you a flexible mechanism to deploy new acceptors in your infrastructure.
Pool - The router creates routees as child actors and removes them from the router if they terminate.
Group - The routee actors are created externally to the router and the router sends messages to the specified path using actor selection, without watching for termination.
First of all, please read the Akka routing documentation to get a better understanding of the routing implementation in the Akka framework:
http://doc.akka.io/docs/akka/2.3.7/java/routing.html
You can also check this article about designing scalable and highly available systems:
http://prochera.com/blog/2014/07/15/building-a-scalable-and-highly-available-reactive-applications-with-akka-load-balancing-revisited/
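As a rough Java sketch against the Akka 2.3.x API linked above (the pool size is arbitrary, and EchoActor/Message refer to the example classes shown below), creating such a router could look like this:

import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;
import akka.routing.ConsistentHashingPool;

public class RouterSetup {
    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("gateway");
        // Pool flavor: the router creates 10 routee actors as children and
        // routes each message by its consistent-hash key (the acceptor id).
        ActorRef router = system.actorOf(
                new ConsistentHashingPool(10).props(Props.create(EchoActor.class)),
                "acceptorRouter");
        router.tell(new Message("acceptor-42"), ActorRef.noSender());
    }
}

Messages routed this way must expose their key, e.g. by implementing ConsistentHashable as in the example below.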
Q1: Is it possible for an actor to know its route (its hash key)?
An actor may know which key is currently being handled, because the key may simply be part of the message - but you shouldn't build cross-message logic/state based on this key.
Message:
import akka.routing.ConsistentHashingRouter.ConsistentHashable

class Message(key: String) extends ConsistentHashable with Serializable {
  override def consistentHashKey(): AnyRef = key
}

Actor:
import akka.actor.{Actor, ActorLogging}

class EchoActor extends Actor with ActorLogging {

  log.info("Actor created {}", self.path.name)

  def receive = {
    case message: Message =>
      log.info("Received message {} in actor {}", message.consistentHashKey(), self.path.name)
    case _ => log.error("Received unsupported message")
  }
}
Q2: Can an actor manage state other than its mailbox?
An actor's state can be changed only through the messages sent to it.
If you initialize an actor with a reference to a classic Java/Spring/... bean, it will be able to interact with the non-actor world/state, e.g. a DAO layer, but this kind of integration should be limited as much as possible and treated as an anti-pattern.
Q3: Is there a way to use configuration that is collision-resistant?
As an API consumer, you need to define a collision-resistant model on your own, but once again Akka gives you the infrastructure required to do it.
In most cases the key will be part of the domain, e.g. an auction id or a customer id.
If a key needs to be generated on demand, you can use a ClusterSingleton
with the Persistence extension.
The generator may be an actor responsible for generating the unique id; other actors may obtain a new id using the ask pattern.
A ClusterSingleton is initialized using ClusterSingletonManager and obtained using ClusterSingletonProxy:
system.actorOf(ClusterSingletonManager.props(
  singletonProps = Props(classOf[Generator]),
  singletonName = "generator",
  terminationMessage = End,
  role = Some("generator")),
  name = "singleton")

system.actorOf(ClusterSingletonProxy.props(
  singletonPath = "/user/singleton/generator",
  role = Some("generator")),
  name = "generatorProxy")
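For the ask-pattern lookup mentioned above, a small Java sketch (GenerateId is a hypothetical request message the Generator actor would understand):

import java.util.concurrent.TimeUnit;
import akka.actor.ActorRef;
import akka.pattern.Patterns;
import akka.util.Timeout;
import scala.concurrent.Future;

public class IdClient {

    // Hypothetical request message for the singleton generator.
    public static final class GenerateId {}

    // Ask the singleton (via its proxy) for a freshly generated id.
    public Future<Object> requestId(ActorRef generatorProxy) {
        Timeout timeout = new Timeout(5, TimeUnit.SECONDS);
        return Patterns.ask(generatorProxy, new GenerateId(), timeout);
    }
}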
I think for your problem JMS would be the proper solution. You can go with RabbitMQ, which has routers that route messages to different queues by key, and which provides a built-in solution for message flow and message acknowledgement.
You could use Apache Camel for this. It's lightweight and supports a lot of enterprise integration patterns. In particular, the Content Based Router is a possible solution; a sketch follows.
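A minimal sketch of a Camel content-based router for this use case (the endpoint URIs and the "acceptor" header are illustrative assumptions, not a prescribed layout):

import org.apache.camel.builder.RouteBuilder;

public class AcceptorRoutes extends RouteBuilder {

    @Override
    public void configure() {
        // Route each incoming resource to a per-acceptor queue,
        // based on a header set by the gateway.
        from("direct:gateway")
            .choice()
                .when(header("acceptor").isEqualTo("acceptorA"))
                    .to("seda:acceptorA")
                .when(header("acceptor").isEqualTo("acceptorB"))
                    .to("seda:acceptorB")
                .otherwise()
                    .to("seda:unroutedResources");
    }
}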
