I have two microservices interacting with each other via Kafka: one publishes messages while the other consumes them. Both the publisher and the consumer run on Quarkus (1.12.0.Final) and use reactive messaging and Mutiny.
Producer:
package myproducer;
import myavro.MyAvro;
import io.smallrye.mutiny.Uni;
import org.eclipse.microprofile.reactive.messaging.Channel;
import org.eclipse.microprofile.reactive.messaging.Emitter;
import org.eclipse.microprofile.reactive.messaging.Message;
import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;
import java.util.concurrent.CompletableFuture;
@ApplicationScoped
public class Publisher {

    @Channel("mytopic")
    @Inject
    public Emitter<MyAvro> myTopic;

    @Override
    public Uni<Void> publish(MyModel model) {
        MyAvro avro = MyModelMapper.INSTANCE.modelToAvro(model);
        return Uni.createFrom().emitter(e -> myTopic.send(Message.of(avro)
                .addMetadata(toOutgoingKafkaRecordMetadata(avro))
                .withAck(() -> {
                    e.complete(null);
                    return CompletableFuture.completedFuture(null);
                })));
    }
}
Consumer:
package myconsumer;
import myavro.MyAvro;
import io.smallrye.mutiny.Uni;
import io.smallrye.reactive.messaging.kafka.IncomingKafkaRecord;
import org.eclipse.microprofile.reactive.messaging.Incoming;
import javax.enterprise.context.ApplicationScoped;
@ApplicationScoped
public class Consumer {

    @Incoming("mytopic")
    public Uni<Void> consume(IncomingKafkaRecord<String, MyAvro> message) {
        MyModel model = MyModelMapper.INSTANCE.avroToModel(message.getPayload());
        return ...;
    }
}
Dependencies:
include among others the artefacts
quarkus-smallrye-reactive-messaging-kafka
quarkus-resteasy-mutiny
quarkus-smallrye-opentracing
quarkus-mutiny
opentracing-kafka-client
Quarkus configuration (application.properties):
includes among others
quarkus.jaeger.service-name=myservice
quarkus.jaeger.sampler-type=const
quarkus.jaeger.sampler-param=1
quarkus.log.console.format=%d{yyyy-MM-dd HH:mm:ss} %-5p traceId=%X{traceId}, spanId=%X{spanId}, sampled=%X{sampled} [%c{2.}] (%t) %s%e%n
mp.messaging.incoming.mytopic.topic=abc
mp.messaging.incoming.mytopic.key.deserializer=org.apache.kafka.common.serialization.StringDeserializer
mp.messaging.incoming.mytopic.value.deserializer=io.confluent.kafka.serializers.KafkaAvroDeserializer
...
mp.messaging.incoming.mytopic.interceptor.classes=io.opentracing.contrib.kafka.TracingConsumerInterceptor
With this setup no traceId or spanId is logged at all (even though they should be, according to Quarkus' "Using OpenTracing" guide). Only after adding @org.eclipse.microprofile.opentracing.Traced are a traceId and a spanId set, but the two are completely unrelated between the producer and the consumer.
I checked my OpenTracing configuration against the aforementioned Quarkus guide "Using OpenTracing" but found no hints of a misconfiguration on my side.
After reading discussions about issues in some Quarkus extensions that rely on ThreadLocals when used with Mutiny, I added the artefact quarkus-smallrye-context-propagation to my dependencies, but to no avail.
I suspect that the issue might be related to https://github.com/quarkusio/quarkus/issues/15182, though there it's about reactive routes instead of reactive messaging.
Any ideas?
This issue is not easy to solve; first, I will try to explain what happens.
OpenTracing has the concepts of transactions and spans. A span is a block of execution (a method, a database call, a send to a Kafka topic), whereas a transaction is a distributed process that spans multiple components (a group of spans).
The issue here is that each time a span is created, it doesn't find any existing OpenTracing transaction, so it creates a new one. This is why none of your spans are correlated with each other.
In OpenTracing, when you create a span, you create it based on a span context. Each OpenTracing integration creates a span context based on the underlying technology (I didn't find a better term); for example, an HTTP span context is based on HTTP headers and a Kafka span context is based on Kafka headers.
So, to correlate two spans, you need to have the span context created with some context from the underlying technology providing the right OpenTracing ID.
For example, to correlate two Kafka spans, you need to have a uber-trace-id header (this is the default name of the OpenTracing id in Jaeger) with the trace identifier (see tracespan-identity for the format of this header).
Knowing this, there is multiple things to do.
First, you need to add an uber-trace-id Kafka header inside your outgoing message in your @Traced method to correlate the span from the method with the span created inside the Kafka producer interceptor.
Tracer tracer = GlobalTracer.get(); // you can also inject it
JaegerSpanContext spanCtx = ((JaegerSpan) tracer.activeSpan()).context();
// uber-trace-id format: {trace-id}:{span-id}:{parent-span-id}:{flags}
// see https://www.jaegertracing.io/docs/1.21/client-libraries/#tracespan-identity
var uberTraceId = spanCtx.getTraceId() + ":" +
        Long.toHexString(spanCtx.getSpanId()) + ":" +
        Long.toHexString(spanCtx.getParentId()) + ":" +
        Integer.toHexString(spanCtx.getFlags());
// add the header to the outgoing record's Kafka headers
headers.add("uber-trace-id", uberTraceId.getBytes());
Then, you need to correlate your @Traced method with the span from the incoming message, if any. For this, the easiest way is to add a CDI interceptor that tries to create a span context for all methods annotated with @Traced, based on the method parameters (it searches for a Message parameter). For this to work, the interceptor needs to be executed before the OpenTracing interceptor, and it must set the span context in the interceptor context.
This is our interceptor implementation, feel free to use it or adapt it for your needs.
public class KafkaRecordOpenTracingInterceptor {

    @AroundInvoke
    public Object propagateSpanCtx(InvocationContext ctx) throws Exception {
        for (int i = 0; i < ctx.getParameters().length; i++) {
            Object parameter = ctx.getParameters()[i];
            if (parameter instanceof Message) {
                Message message = (Message) parameter;
                Headers headers = message.getMetadata(IncomingKafkaRecordMetadata.class)
                        .map(IncomingKafkaRecordMetadata::getHeaders)
                        .get();
                SpanContext spanContext = getSpanContext(headers);
                // make the extracted context available to the OpenTracing interceptor
                ctx.getContextData().put(OpenTracingInterceptor.SPAN_CONTEXT, spanContext);
            }
        }
        return ctx.proceed();
    }

    private SpanContext getSpanContext(Headers headers) {
        return TracingKafkaUtils.extractSpanContext(headers, GlobalTracer.get());
    }
}
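The original answer doesn't show how the interceptor is bound, so here is a hedged sketch of one way to declare it so that it runs before the OpenTracing interceptor; @Traced serves as the interceptor binding, and the exact @Priority value is an assumption (it only needs to sort before the OpenTracing interceptor's priority):

import javax.annotation.Priority;
import javax.interceptor.Interceptor;
import org.eclipse.microprofile.opentracing.Traced;

@Traced
@Interceptor
@Priority(Interceptor.Priority.LIBRARY_BEFORE) // assumption: must be lower than the OpenTracing interceptor's priority
public class KafkaRecordOpenTracingInterceptor {
    // @AroundInvoke method as shown above
}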
This code uses both the Quarkus OpenTracing extension and the Kafka OpenTracing contrib library.
With both the correlation of the outgoing message (thanks to the OpenTracing Kafka header created from the current span context) and the creation of a context from the incoming message's header, the correlation should happen in any case.
Related
We have multiple consumer applications listening to the same Kafka topic, and a producer sets a message header when sending a message to the topic so that a specific instance can evaluate the header and process the message, e.g.:
@StreamListener(target = ITestSink.CHANNEL_NAME, condition = "headers['franchiseName'] == 'sydney'")
public void fullfillOrder(@Payload TestObj message) {
    log.info("sydney order request received message is {}", message.getName());
}
In Spring Cloud Stream 3.0.0 @StreamListener is deprecated, and I could not find the equivalent of the condition property in the functional model.
Any suggestion?
Though I was not able to find the equivalent for the functional approach either, I do have a suggestion.
The @StreamListener annotation's condition does not change the fact that the application must consume the message, read its header, and filter out specific records before passing them to the listener (fullfillOrder()). So it's safe to assume you're consuming every message that hits the topic regardless (via the event receiver that Spring Cloud has implemented for us under the hood), but the listener only gets executed when header == sydney.
If there were a way to configure the event receiver that Spring Cloud uses (to discard messages before they hit the listener), I would suggest looking into that. If not, I would resort to filtering out any messages (non-sydney) before doing any processing. If you're familiar with Spring Cloud's functional approach, it would look something like this:
@Bean
public Consumer<Message<TestObj>> fulfillOrder() {
    return msg -> {
        // to get a header: msg.getHeaders().get(key, valueType);
        // filter out bad messages
    };
}
or
@Bean
public Consumer<ConsumerRecord<?, TestObj>> fulfillOrder() {
    return msg -> {
        // msg.headers().lastHeader("franchiseName").value() -> filter them out
    };
}
Other:
My code above assumes you're integrating the kafka-client API with Spring Cloud Stream via spring-cloud-stream-binder-kafka. Based on the tags listed, I will note that Spring Cloud Stream has two binders for Kafka: one for the Kafka client library and one for the Kafka Streams library.
Frameworks aside, the high-level DSL in Kafka Streams doesn't give you access to headers, but the low-level Processor API does, as shown in the sketch below. From the example, it seems like you're leveraging the client binder and not spring-cloud-stream-binder-kafka-streams / the Kafka Streams binder. I haven't seen an implementation of Spring Cloud Stream plus the Kafka Streams binder using the low-level Processor API, so I can't tell if that was the aim.
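For illustration, here is a minimal sketch with plain Kafka Streams (not Spring Cloud) of filtering on a header via the low-level Processor API; the topic names and the String key/value types are hypothetical, and it assumes the newer processor API (Kafka 2.7+):

import org.apache.kafka.common.header.Header;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.processor.api.Processor;
import org.apache.kafka.streams.processor.api.ProcessorContext;
import org.apache.kafka.streams.processor.api.Record;

public class HeaderFilterTopology {

    public static Topology build() {
        Topology topology = new Topology();
        topology.addSource("source", "order-requests");
        topology.addProcessor("franchise-filter", () -> new Processor<String, String, String, String>() {
            private ProcessorContext<String, String> context;

            @Override
            public void init(ProcessorContext<String, String> context) {
                this.context = context;
            }

            @Override
            public void process(Record<String, String> record) {
                // headers are available on the record in the Processor API
                Header franchise = record.headers().lastHeader("franchiseName");
                if (franchise != null && "sydney".equals(new String(franchise.value()))) {
                    context.forward(record); // only matching records continue downstream
                }
            }
        }, "source");
        topology.addSink("sink", "sydney-orders", "franchise-filter");
        return topology;
    }
}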
We are trying to publish and subscribe to the MQTT protocol using SmallRye Reactive Messaging. We managed to publish a message to a specific topic/channel through the following simple code:
import io.smallrye.mutiny.Multi;
import org.eclipse.microprofile.reactive.messaging.Outgoing;
import javax.enterprise.context.ApplicationScoped;
import java.time.Duration;

@ApplicationScoped
public class Publish {

    @Outgoing("pao")
    public Multi<String> generate() {
        return Multi.createFrom().ticks().every(Duration.ofSeconds(1))
                .map(x -> "A Message in here");
    }
}
What we want to do is to call the generate() method whenever we want, with a dynamic topic that the user defines. That was our problem, but then we found these classes from that repo on GitHub, in the package io.smallrye.reactive.messaging.mqtt.
For example, we found a class that says it makes a publish call to an MQTT broker (a Mosquitto server is up).
In the statement SendingMqttMessage<String> message = new SendingMqttMessage<String>("myTopic","A message in here",0,false);
we get a red underline under SendingMqttMessage<String> saying 'SendingMqttMessage(java.lang.String, java.lang.String, io.netty.handler.codec.mqtt.MqttQoS, boolean)' is not public in 'io.smallrye.reactive.messaging.mqtt.SendingMqttMessage'. Cannot be accessed from outside package.
UPDATE(Publish done)
We finally made a publish request to the MQTT broker (a Mosquitto server), with a dynamic topic configured by the user. As we found out, the class SendingMqttMessage was not supposed to be used at all, and we also needed an Emitter to actually make a publish request with a dynamic topic.
@Inject
@Channel("panatha")
Emitter<String> emitter;

@POST
@Consumes(MediaType.APPLICATION_JSON)
@Produces(MediaType.APPLICATION_JSON)
public Response createUser(Device device) {
    System.out.println("New Publish request: message->" + device.getMessage() + " & topic->" + device.getTopic());
    emitter.send(MqttMessage.of(device.getTopic(), device.getMessage()));
    return Response.ok().status(Response.Status.CREATED).build();
}
Now we need to find out how to subscribe to a topic dynamically.
First, to get us on the same page:
Reactive messaging does not work with topics, but with channels.
That is important to note, because a channel can only be read from or written to, not both. So if you want to do both, you need to configure two channels pointing at the same topic, one incoming and one outgoing, for example like this:
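A minimal application.properties sketch, assuming hypothetical channel names my-mqtt-in and my-mqtt-out and a hypothetical topic:

# outgoing channel writing to the topic
mp.messaging.outgoing.my-mqtt-out.connector=smallrye-mqtt
mp.messaging.outgoing.my-mqtt-out.topic=devices/data
# incoming channel reading from the same topic
mp.messaging.incoming.my-mqtt-in.connector=smallrye-mqtt
mp.messaging.incoming.my-mqtt-in.topic=devices/data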
To answer your question:
You made a pretty good start with Emitters, but you still lack the dynamic nature you'd like.
In your example, you acquired that Emitter through CDI.
That is all we need to make this dynamic, since we can dynamically select beans at runtime using CDI like this:
Sending Messages
private Emitter<byte[]> dynamicEmitter(String topic) {
    return CDI.current().select(new TypeLiteral<Emitter<byte[]>>() {}, new ChannelAnnotation(topic)).get();
}
Please also note that I am creating an Emitter of type byte[], as this is the only currently supported type of the smallrye-mqtt connector (version 3.4.0) according to its documentation.
Receiving Messages
To read messages from a reactive messaging channel, you can use the counterpart of the Emitter, which is the Publisher.
It can be used analogously:
private Publisher<byte[]> dynamicReceiver(String topic) {
    return CDI.current().select(new TypeLiteral<Publisher<byte[]>>() {}, new ChannelAnnotation(topic)).get();
}
You can then process this data in any way you like.
As a demo, I hung it on a simple REST endpoint:
@GET
@Produces(MediaType.SERVER_SENT_EVENTS)
public Multi<String> stream(@QueryParam("topic") String topic) {
    return Multi.createFrom().publisher(dynamicReceiver(topic)).onItem().transform(String::new);
}

@GET
@Path("/publish")
public boolean publish(@QueryParam("msg") String msg, @QueryParam("topic") String topic) {
    dynamicEmitter(topic).send(msg.getBytes());
    return true;
}
One more thing
When creating this solution I hit a few pitfalls you should know about:
Quarkus removes any CDI beans that are "unused". So if you want to inject them dynamically, you need to exclude those beans or turn off that feature (see the configuration sketch after this list).
All channels injected that way must be configured. Otherwise the injection will fail.
For some reason (even with removal completely disabled), I was unable to inject Emitters dynamically unless they were also injected somewhere else.
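As a hedged application.properties sketch of the first two points (the channel name panatha is taken from the example above; as an alternative to the global switch, individual beans can be annotated with @io.quarkus.arc.Unremovable):

# disable Quarkus' unused-bean removal so dynamically injected beans survive
quarkus.arc.remove-unused-beans=false
# every dynamically injected channel must still be configured
mp.messaging.outgoing.panatha.connector=smallrye-mqtt
mp.messaging.outgoing.panatha.topic=panatha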
I have recently started with Apache Camel, and we are looking into creating custom components to abstract a lot of logic and simplify routes, but some of this logic involves HTTP requests and other portions for which an existing Camel component exists that we want to utilize.
Is it possible to call other components (e.g. the http component) from within our custom component's producer?
I did see this question (Can a custom Camel component use routes and other components internally?) that mentions using the Camel context, but how do you replicate the route call outside of a RouteBuilder?
You need to import CamelContext, Exchange, ProducerTemplate and ExchangeBuilder.
import org.apache.camel.CamelContext;
import org.apache.camel.Exchange;
import org.apache.camel.ProducerTemplate;
import org.apache.camel.builder.ExchangeBuilder;
You then need instances of the ProducerTemplate and the CamelContext. I am using Spring Boot, so I can just inject the dependencies.
@Autowired
private ProducerTemplate producer;

@Autowired
private CamelContext camelContext;
In your method definition, you need to create an exchange request with the ExchangeBuilder. You can create a body and add headers to your exchange message here.
Exchange exchangeRequest = ExchangeBuilder.anExchange(camelContext)
        .withBody("Hello World!")
        .withHeader("username", "jdoe")
        .withHeader("password", "pass")
        .build();
You can then call the send method on the producer object to tap into your route and capture the response.
Exchange exchangeResponse = producer.send("direct:startRoute", exchangeRequest);
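To tie this back to the original question, here is a minimal sketch of how the same idea could live inside a custom component's producer; the class name and the target URL are hypothetical, and it assumes Camel 3's org.apache.camel.support.DefaultProducer:

import org.apache.camel.Endpoint;
import org.apache.camel.Exchange;
import org.apache.camel.ProducerTemplate;
import org.apache.camel.support.DefaultProducer;

public class MyCustomProducer extends DefaultProducer {

    private final ProducerTemplate template;

    public MyCustomProducer(Endpoint endpoint) {
        super(endpoint);
        // reuse the CamelContext the endpoint belongs to
        this.template = endpoint.getCamelContext().createProducerTemplate();
    }

    @Override
    public void process(Exchange exchange) throws Exception {
        // delegate part of the work to the http component
        Object response = template.requestBody("http://example.org/api/resource",
                exchange.getIn().getBody());
        exchange.getMessage().setBody(response);
    }
}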
I'm kind of new to Apache Camel and am testing it for use in my application (replacing an existing Spring Integration implementation).
I've been searching all over the web, Apache Camel's documentation site, and Stack Overflow for the last few days, but I can't seem to find an answer on how to configure thread names in Apache Camel via the Java DSL.
I only got to see this and this other question, but they only say how to do it via the Spring DSL. Same on the Apache Camel documentation page.
To give some context:
Right now I'm building two operation flows (first and second), and each one has its own route.
Both routes read from different ActiveMQ queues, process the messages in different ways, and then send the responses back to different queues.
I already managed to configure different concurrentConsumers and maxConcurrentConsumers for each route (via properties file).
I would like to assign thread names (or at least patterns, since I have many consumers on each route) so that I could have something like "FirstOp-X" and "SecondOp-X" (where X is the thread number).
Here's the code snippet:
public class SampleCamelRouter extends RouteBuilder {

    /**
     * The first operation name
     */
    public static final String FIRST_NAME = "first";

    /**
     * The second operation name
     */
    public static final String SECOND_NAME = "second";

    /**
     * The ActiveMQ outbound queue format
     */
    public static final String OUTBOUND_QUEUE_FORMAT = "activemq:queue:aq.%1$s.response";

    /**
     * The ActiveMQ inbound queue format
     */
    public static final String INBOUND_QUEUE_FORMAT = "activemq:queue:aq.%1$s.request"
            + "?concurrentConsumers={{queue.%1$s.concurrentConsumers}}"
            + "&maxConcurrentConsumers={{queue.%1$s.maxConcurrentConsumers}}";

    /*
     * (non-Javadoc)
     * @see org.apache.camel.builder.RouteBuilder#configure()
     */
    @Override
    public void configure() throws Exception {
        from(String.format(INBOUND_QUEUE_FORMAT, FIRST_NAME))
                .unmarshal().json(JsonLibrary.Jackson, FirstRequestMessage.class)
                .bean(TestBean.class, "doFirst")
                .marshal().json(JsonLibrary.Jackson)
                .to(String.format(OUTBOUND_QUEUE_FORMAT, FIRST_NAME));

        from(String.format(INBOUND_QUEUE_FORMAT, SECOND_NAME))
                .unmarshal().json(JsonLibrary.Jackson, SecondRequestMessage.class)
                .bean(TestBean.class, "doSecond")
                .marshal().json(JsonLibrary.Jackson)
                .to(String.format(OUTBOUND_QUEUE_FORMAT, SECOND_NAME));
    }
}
I used to do it in Spring Integration with something like this (per flow):
@Bean
public static IntegrationFlow setUpFirstFlow() {
    final DefaultMessageListenerContainer messageListenerContainer = new DefaultMessageListenerContainer();
    messageListenerContainer.setBeanName("FirstOp");
    messageListenerContainer.setDestinationName("aq.first.request");
    messageListenerContainer.setConcurrentConsumers(concurrentConsumers);
    messageListenerContainer.setMaxConcurrentConsumers(maxConcurrentConsumers);
    return IntegrationFlows.from(Jms.messageDrivenChannelAdapter(messageListenerContainer))
            .transform(new JsonToObjectTransformer(FirstRequestMessage.class))
            .handle(TestBean.class, "doFirst")
            .transform(new ObjectToJsonTransformer())
            .handle(Jms.outboundAdapter(.......)).get();
}
So basically: I created different message listener containers, and that way, I have different thread names per flow.
At any time, if a thread is stopped, blocked, or showing up in a thread dump (or even something as simple as printing a log), I could easily see which flow that thread belongs to.
I see Apache Camel has some kind of workaround (not per route, but per CamelContext), but it is only implemented using the Spring DSL, not Java.
I wouldn't mind changing my configuration to XML files if this per-route configuration is possible only with Spring.
Please help me; it's kind of a tie-breaker for me right now. For the application I'm building, it's very important to be able to identify each thread in isolation, and I don't really like the default thread names (Camel (camel-1) thread #27 - JmsConsumer[aa.first.request]) :'-(.
You can set an ExecutorServiceManager per Camel context via org.apache.camel.impl.DefaultCamelContext#setExecutorServiceManager - if you use a DefaultExecutorServiceManager you can set a threadNamePattern.
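For the first option, a minimal sketch (the placeholders #camelId# and #counter# come from Camel's thread name pattern support):

import org.apache.camel.CamelContext;
import org.apache.camel.impl.DefaultCamelContext;

CamelContext context = new DefaultCamelContext();
// applies to all thread pools created by this context
context.getExecutorServiceManager().setThreadNamePattern("#camelId#-MyPool-#counter#");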
Alternatively, you can use the Threads DSL and assign a thread pool to a route by using org.apache.camel.model.ProcessorDefinition#threads(int, int, java.lang.String), e.g.
from(String.format(INBOUND_QUEUE_FORMAT, FIRST_NAME))
.threads(1, 2, "first")
...
from(String.format(INBOUND_QUEUE_FORMAT, SECOND_NAME))
.threads(1, 2, "second")
...
Note that using the threads() method would effectively mean that you would be using an asynchronous processing model in Camel.
According to the Spring documentation, it will publish metrics on a REST endpoint and on a message channel.
The REST endpoint works fine, as I get the expected result. However, I would like to handle each change in the metrics. The docs say it will by default publish messages to a channel called "metricsChannel".
I tried to create the following class which would listen to this channel, but it does not seem to fire. Everything else has been kept default for the Spring Boot application.
package services.core;
import org.springframework.stereotype.Service;
import org.springframework.integration.annotation.ServiceActivator;
@Service
public class MetricService {

    @ServiceActivator(inputChannel = "metricsChannel")
    public void handleMessage(org.springframework.messaging.Message<?> message) {
        System.out.println("Message [" + message.toString() + "] is received");
    }
}
I've just tested that and it works well:
@Bean
@ServiceActivator(inputChannel = "metricsChannel")
public MessageHandler metricsHandler() {
    return System.out::println;
}
I've done that in our web-sockets sample on the server part.
Added this:
compile 'org.springframework.boot:spring-boot-starter-actuator'
to that project's Gradle config.
And I see this in the console when I start the client app:
GenericMessage [payload=Metric [name=gauge.response.time.star-star, value=26.0, timestamp=Tue Apr 14 16:03:53 EEST 2015], headers={metricName=gauge.response.time.star-star, id=08697a97-83c1-5000-f031-65f6797c0cd8, timestamp=1429016633672}]
GenericMessage [payload=Metric [name=counter.status.101.time.star-star, value=1, timestamp=Tue Apr 14 16:03:53 EEST 2015], headers={metricName=counter.status.101.time.star-star, id=8d070cb4-88e8-f5a7-6b83-6b27edf75bfc, timestamp=1429016633674}]
But, yes: your code is good as well.
To clarify: my code did actually work, but for me it felt like a gotcha.
Quote from Spring docs:
If the ‘Spring Messaging’ jar is on your classpath a MessageChannel
called metricsChannel is automatically created (unless one already
exists). All metric update events are additionally published as
‘messages’ on that channel. Additional analysis or actions can be
taken by clients subscribing to that channel.
So by "all metric update events", I thought that the system metrics (memory usage, cpu load etc.) would fall under these events. Actually they are not, they are just being published whenever your custom counters change or for example the number of requests for a certain endpoint.
Initially I was waiting for a message each second or so after bootup, to no avail. Ultimately started to call the metrics endpoint and suddenly the messages starting popping in the console/channel each time I called it.
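To illustrate with a hedged sketch (Spring Boot 1.x, where CounterService existed; the service and metric names are made up): incrementing a custom counter is exactly the kind of update that produces a message on metricsChannel.

import org.springframework.boot.actuate.metrics.CounterService;
import org.springframework.stereotype.Service;

@Service
public class OrderService {

    private final CounterService counterService;

    public OrderService(CounterService counterService) {
        this.counterService = counterService;
    }

    public void placeOrder() {
        // each increment publishes a metric update message on metricsChannel
        counterService.increment("counter.orders.placed");
    }
}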