Micrometer throws exception when creating a second kafka-consumer - java

The following exception occurred after upgrading to spring-boot 2.3.0:
java.lang.IllegalArgumentException: Prometheus requires that all meters with the same name have the same set of tag keys. There is already an existing meter named 'kafka_consumer_fetch_manager_records_consumed_total' containing tag keys [client_id, kafka_version, product, spring_id, topic]. The meter you are attempting to register has keys [client_id, kafka_version, product, spring_id].
at io.micrometer.prometheus.PrometheusMeterRegistry.lambda$applyToCollector$17(PrometheusMeterRegistry.java:429)
at java.base/java.util.concurrent.ConcurrentHashMap.compute(ConcurrentHashMap.java:1932)
at io.micrometer.prometheus.PrometheusMeterRegistry.applyToCollector(PrometheusMeterRegistry.java:413)
at io.micrometer.prometheus.PrometheusMeterRegistry.newFunctionCounter(PrometheusMeterRegistry.java:247)
at io.micrometer.core.instrument.MeterRegistry$More.lambda$counter$1(MeterRegistry.java:884)
at io.micrometer.core.instrument.MeterRegistry.lambda$registerMeterIfNecessary$5(MeterRegistry.java:559)
at io.micrometer.core.instrument.MeterRegistry.getOrCreateMeter(MeterRegistry.java:612)
at io.micrometer.core.instrument.MeterRegistry.registerMeterIfNecessary(MeterRegistry.java:566)
at io.micrometer.core.instrument.MeterRegistry.registerMeterIfNecessary(MeterRegistry.java:559)
at io.micrometer.core.instrument.MeterRegistry.access$600(MeterRegistry.java:76)
at io.micrometer.core.instrument.MeterRegistry$More.counter(MeterRegistry.java:884)
at io.micrometer.core.instrument.FunctionCounter$Builder.register(FunctionCounter.java:122)
at io.micrometer.core.instrument.binder.kafka.KafkaMetrics.registerCounter(KafkaMetrics.java:189)
at io.micrometer.core.instrument.binder.kafka.KafkaMetrics.bindMeter(KafkaMetrics.java:174)
at io.micrometer.core.instrument.binder.kafka.KafkaMetrics.lambda$checkAndBindMetrics$1(KafkaMetrics.java:161)
at java.base/java.util.concurrent.ConcurrentHashMap.forEach(ConcurrentHashMap.java:1603)
at java.base/java.util.Collections$UnmodifiableMap.forEach(Collections.java:1505)
at io.micrometer.core.instrument.binder.kafka.KafkaMetrics.checkAndBindMetrics(KafkaMetrics.java:137)
at io.micrometer.core.instrument.binder.kafka.KafkaMetrics.bindTo(KafkaMetrics.java:93)
at io.micrometer.core.instrument.binder.kafka.KafkaClientMetrics.bindTo(KafkaClientMetrics.java:39)
at org.springframework.kafka.core.MicrometerConsumerListener.consumerAdded(MicrometerConsumerListener.java:74)
at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createKafkaConsumer(DefaultKafkaConsumerFactory.java:301)
at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createKafkaConsumer(DefaultKafkaConsumerFactory.java:242)
at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createConsumer(DefaultKafkaConsumerFactory.java:212)
at org.springframework.kafka.core.ConsumerFactory.createConsumer(ConsumerFactory.java:67)
at org.springframework.kafka.core.ConsumerFactory.createConsumer(ConsumerFactory.java:54)
at org.springframework.kafka.core.ConsumerFactory.createConsumer(ConsumerFactory.java:43)
This exception occurs when I attempt to create a consumer through ConsumerFactory.createConsumer.
There is another consumer in the app that is created by spring-kafka through annotating a method with @KafkaListener(topics = TOPICS, groupId = GROUP_ID).
In io.micrometer.core.instrument.binder.kafka.KafkaMetrics, lines 146-147, I read:
//Kafka has metrics with lower number of tags (e.g. with/without topic or partition tag)
//Remove meters with lower number of tags
This means the new meter will be discarded because it lacks the topic tag.
Is there a reason why the different ways of creating consumers cause a deviation in tags? If so, is it possible to append the topic tag to the meter created through ConsumerFactory.createConsumer?

After some debugging, we found this:
I'll look around some more, but it seems that when a consumer is started (@KafkaListener) it also adds some metrics tagged with the topic it's assigned to?
Just a hypothesis so far.
Another example with a shorter stack - the topic tag seems to get registered when the scheduled task started in KafkaMetrics.bindTo runs -> scheduler.scheduleAtFixedRate(() -> checkAndBindMetrics(registry), getRefreshIntervalInMillis(), getRefreshIntervalInMillis(), TimeUnit.MILLISECONDS);
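One workaround we sketched (not from the Micrometer code or the original thread; the metric-name prefix check and the "unknown" placeholder are assumptions): register a MeterFilter that pads the missing topic tag, so every meter with that name ends up with the same tag-key set and Prometheus no longer rejects the second registration.
import io.micrometer.core.instrument.Meter;
import io.micrometer.core.instrument.Tag;
import io.micrometer.core.instrument.config.MeterFilter;

// Workaround sketch: give kafka.consumer.fetch.manager.* meters that arrive without a
// "topic" tag a placeholder value, so all meters of the same name share one tag-key set.
public class KafkaTopicTagPaddingFilter implements MeterFilter {

    @Override
    public Meter.Id map(Meter.Id id) {
        if (id.getName().startsWith("kafka.consumer.fetch.manager") && id.getTag("topic") == null) {
            return id.withTag(Tag.of("topic", "unknown"));
        }
        return id;
    }
}
In a Spring Boot app this could be exposed as a MeterFilter bean (Boot applies such beans to the auto-configured registry); note that meter filters only affect meters registered after the filter has been added.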

Related

Spring Micrometer: How to access tag related information

Sorry for this stupid question, but I cannot understand how to extract tag-specific information from Micrometer.
We are using Spring Boot 2.7.6. When using multiple counters with tags like this:
private Map<String, Counter> errorCounters = new HashMap<>();
...
tenantList.forEach(tenant ->
{
    final Counter counter = Counter.builder("company.publishing_errors")
            .description("Total number of failed tries to publish an object.")
            .tag("tenant", tenant)
            .register(registry);
    errorCounters.put(tenant, counter);
});
...
errorCounters.get(tenant).increment();
which results in one counter per tenant. In the debugger I can clearly see that they are counted independently.
Under http://127.0.0.1:8080/actuator/metrics/company.publishing_errors I see the following JSON:
{
  "name": "company.publishing_errors",
  "description": "Total number of failed tries to publish an object.",
  "baseUnit": null,
  "measurements": [{
    "statistic": "COUNT",
    "value": 6.0
  }],
  "availableTags": [{
    "tag": "tenant",
    "values": [
      "tenant1",
      "tenant2"
    ]
  }]
}
I tried multiple variations to extract the tag-specific data, but failed. All tutorials and guides I found either stop at this point or just import the data into e.g. Grafana, which extracts the data itself. Do I have to pull in a specific backend, e.g. Micrometer-Prometheus? We are only using the Spring Boot standard, which includes the micrometer-core library. Do I need to set commonTags?
Thanks to input from a colleague, I found the answer.
According to the Spring documentation, you can view tag specific data with tag=KEY:VALUE query parameters. So in my case the URL would be
http://127.0.0.1:8080/actuator/metrics/company.publishing_errors?tag=tenant:tenant1
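If you need the per-tenant value programmatically rather than through the actuator endpoint, Micrometer's meter search can do the same lookup. A minimal sketch, assuming registry is the same MeterRegistry the counters were registered with:
// Sketch only: look up the counter registered with tag tenant=tenant1 and read its count.
double tenant1Errors = registry.get("company.publishing_errors")
        .tag("tenant", "tenant1")
        .counter()
        .count();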

Kafka Stream - How to use the Suppress function?

While implementing a business rule for my project, I need to reduce the number of events produced by the streams application, to save resources and make the processor as fast as possible.
I found out that Kafka Streams offers the ability to suppress intermediate events based on either their RecordTime or WindowEndTime. My code using suppress:
KTable<Long, ProductWithMatchRecord> productWithCompetitorMatchKTable = competitorProductMatchWithLinkInfo.groupBy(
        (linkMatchProductRecordId, linkMatchWithProduct) -> KeyValue.pair(linkMatchWithProduct.linkMatch().tikiProductId(), linkMatchWithProduct),
        Grouped.with(longPayloadJsonSerde, linkMatchWithProductJSONSerde).withName("group-match-record-by-product-id")
).aggregate(
        ProductWithMatchRecord::new,
        (tikiProductId, linkMatchWithProduct, aggregate) -> aggregate.addLinkMatch(linkMatchWithProduct),
        (tikiProductId, linkMatchWithProduct, aggregate) -> aggregate.removeLinkMatch(linkMatchWithProduct),
        Named.as("aggregate-match-record-by-product-id"),
        Materialized
                .<Long, ProductWithMatchRecord, KeyValueStore<Bytes, byte[]>>as("match-record-by-product-id")
                .withKeySerde(longPayloadJsonSerde)
                .withValueSerde(productWithMatchRecordJSONSerde)
)
.suppress(Suppressed.untilTimeLimit(Duration.ofSeconds(10), null));
Basically, it is just a KTable that takes input from other KTables (aggregations, joins, ...)
and then suppresses it.
The problem is that I expect, for an event with a given key, that if there is no further event for this key in the next 10 seconds, the corresponding data in productWithCompetitorMatchKTable will be emitted.
However, after 10 seconds (or more), no event for the given key is emitted until I produce another event for this key.
Please help me fix the problem or point me to documentation where I can learn more about the suppress feature of Kafka Streams.
I have tried to debug the code and change many configurations of the Suppressed.untilTimeLimit function, but it was not working as I expected.
You need new events to trigger the "time check".
Have a look at "punctuate".
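A minimal sketch (not from the original answer) of what the "punctuate" suggestion could look like with the Processor API: buffer the latest value per key and forward it on a wall-clock schedule, so no new input event is needed to trigger emission. The ProductWithMatchRecord type is taken from the question; everything else, including the in-memory map instead of a state store, is an assumption.
import java.time.Duration;
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.streams.processor.PunctuationType;
import org.apache.kafka.streams.processor.api.Processor;
import org.apache.kafka.streams.processor.api.ProcessorContext;
import org.apache.kafka.streams.processor.api.Record;

// Sketch: emit the latest buffered value per key every 10 seconds of wall-clock time.
class FlushEveryTenSeconds implements Processor<Long, ProductWithMatchRecord, Long, ProductWithMatchRecord> {

    private final Map<Long, ProductWithMatchRecord> pending = new HashMap<>();

    @Override
    public void init(ProcessorContext<Long, ProductWithMatchRecord> context) {
        // WALL_CLOCK_TIME punctuation fires even when no new records arrive.
        context.schedule(Duration.ofSeconds(10), PunctuationType.WALL_CLOCK_TIME, timestamp -> {
            pending.forEach((key, value) -> context.forward(new Record<>(key, value, timestamp)));
            pending.clear();
        });
    }

    @Override
    public void process(Record<Long, ProductWithMatchRecord> record) {
        pending.put(record.key(), record.value());
    }
}
On recent Kafka Streams versions this could replace the suppress() call via productWithCompetitorMatchKTable.toStream().process(FlushEveryTenSeconds::new); for fault tolerance the pending map would need to be backed by a state store.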

Spring Integration: start JPA polling only when all the results of last polling has been processed

I have a following flow that I would like to implement using Spring Integration Java DSL:
Poll a table in a database every 2 hours which returns id of documents that need to be processed
For each id, process a document through an HTTP gateway
Store a response in a database
I have working Java code that does exactly these steps. An additional requirement that I'm struggling with is that polling for the next round of documents shouldn't happen until all the documents from the last polling have been processed and stored in the database.
Is there any pattern in Spring Integration that I could use for this additional requirement?
Here is simplified code - it will get more complex and I'll split processing of the documents (HTTP outbound and persisting) into separate classes / flows:
return IntegrationFlows.from(Jpa.inboundAdapter(this.targetEntityManagerFactory)
                .entityClass(ProcessingMetadata.class)
                .jpaQuery("select max(p.modifiedDate) from ProcessingMetadata p " +
                        "where p.status = com.test.ProcessingStatus.PROCESSED")
                .maxResults(1)
                .expectSingleResult(true),
            e -> e.poller(Pollers.fixedDelay(Duration.ofSeconds(10))))
        .handle(Jpa.retrievingGateway(this.sourceEntityManagerFactory)
                .entityClass(DocumentHeader.class)
                .jpaQuery("from DocumentHeader d where d.modified > :modified")
                .parameterExpression("modified", "payload"))
        .handle(Http.outboundGateway(uri)
                .httpMethod(HttpMethod.POST)
                .expectedResponseType(String.class))
        .handle(Jpa.outboundAdapter(this.targetEntityManagerFactory)
                .entityClass(ProcessingMetadata.class)
                .persistMode(PersistMode.PERSIST),
            e -> e.transactional(true))
        .get();
UPDATE
Following Artem's suggestion, I'm trying to implement it using a SimpleActiveIdleMessageSourceAdvice
class WaitUntilCompleted extends SimpleActiveIdleMessageSourceAdvice {

    public WaitUntilCompleted(DynamicPeriodicTrigger trigger) {
        super(trigger);
    }

    @Override
    public boolean beforeReceive(MessageSource<?> source) {
        return false;
    }
}
If I understand it correctly, the above code would stop polling. Now I have no idea how to attach this advice to the Jpa.inboundAdapter... It doesn't seem to have a suitable method (neither for an Advice nor in the handler spec). Am I missing something obvious here? I've tried attaching the advice to the Jpa.retrievingGateway but it doesn't change the flow at all.
UPDATE2
Check this question for a complete solution: Spring Integration: how to unit test an advice
I answered a similar question today: How to poll from a queue 1 message at a time after downstream flow is completed in Spring Integration.
You may also use a trick at the database level: don't let new records become visible in the table while others are locked. Or you can have some UPDATE at the end of the flow, so that your SELECT won't see the appropriate records until they are updated accordingly.
But anyway, any of the approaches I suggest for that question can be applied here as well.
Also, you can indeed consider relying on the SimpleActiveIdleMessageSourceAdvice, since your solution is already based on a MessageSource implementation.
UPDATE
For your use case it would probably be better to extend that SimpleActiveIdleMessageSourceAdvice and override its beforeReceive() to check some state indicating whether you are able to read more data or not. The idlePollPeriod and activePollPeriod could be the same value: it doesn't look like it makes sense to change them in between, since you go to the idle state just after reading the next set of data.
For the state to check it really might be a simple AtomicBoolean bean which you flip after you have processed the current set of documents. That might be something after an aggregator or anything else you can use in your solution.
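A sketch of that suggestion, assuming an AtomicBoolean flag; the field and parameter names are assumptions, not part of the framework:
import java.util.concurrent.atomic.AtomicBoolean;
import org.springframework.integration.aop.SimpleActiveIdleMessageSourceAdvice;
import org.springframework.integration.core.MessageSource;
import org.springframework.integration.util.DynamicPeriodicTrigger;

// Sketch: only allow a poll when the previous batch has been fully processed.
// How the flag gets flipped back is up to the flow, e.g. after an aggregator as mentioned above.
class WaitUntilCompleted extends SimpleActiveIdleMessageSourceAdvice {

    private final AtomicBoolean processingDone;

    WaitUntilCompleted(DynamicPeriodicTrigger trigger, AtomicBoolean processingDone) {
        super(trigger);
        this.processingDone = processingDone;
    }

    @Override
    public boolean beforeReceive(MessageSource<?> source) {
        // Returning false skips this poll; reset the flag so subsequent polls are skipped
        // until the flow reports completion again.
        return this.processingDone.getAndSet(false);
    }
}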
UPDATE 2
To use a WaitUntilCompleted for your Jpa.inboundAdapter you should have a configuration like this:
IntegrationFlows.from(Jpa.inboundAdapter(this.targetEntityManagerFactory)
            .entityClass(ProcessingMetadata.class)
            .jpaQuery("select max(p.modifiedDate) from ProcessingMetadata p " +
                    "where p.status = com.test.ProcessingStatus.PROCESSED")
            .maxResults(1)
            .expectSingleResult(true),
        e -> e.poller(Pollers.fixedDelay(Duration.ofSeconds(10)).advice(waitUntilCompleted())))
Pay attention to the .advice(waitUntilCompleted()), which is part of the poller configuration and points to your advice bean.
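For completeness, hypothetical wiring for the AtomicBoolean-based variant sketched above; bean names and the 10-second trigger period are assumptions, not part of the original answer:
import java.time.Duration;
import java.util.concurrent.atomic.AtomicBoolean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.util.DynamicPeriodicTrigger;

@Configuration
class PollingGateConfig {

    @Bean
    AtomicBoolean processingDone() {
        // Start at true so the very first poll is allowed.
        return new AtomicBoolean(true);
    }

    @Bean
    WaitUntilCompleted waitUntilCompleted(AtomicBoolean processingDone) {
        return new WaitUntilCompleted(new DynamicPeriodicTrigger(Duration.ofSeconds(10)), processingDone);
    }
}
The last step of the flow (for example after the Jpa.outboundAdapter, or after an aggregator as suggested above) would then set processingDone back to true.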

Getting MultiLangDaemon Exception

I implemented a Kinesis stream consumer client using the Node wrapper and I am getting the MultiLangDaemon execution error shown below.
Starting MultiLangDaemon ...
java.lang.IllegalArgumentException: No enum constant software.amazon.kinesis.common.InitialPositionInStream.TRIM_HORIZON
at java.lang.Enum.valueOf(Enum.java:238)
at software.amazon.kinesis.common.InitialPositionInStream.valueOf(InitialPositionInStream.java:21)
at software.amazon.kinesis.multilang.config.MultiLangDaemonConfiguration$2.convert(MultiLangDaemonConfiguration.java:208)
at org.apache.commons.beanutils.ConvertUtilsBean.convert(ConvertUtilsBean.java:491)
at org.apache.commons.beanutils.BeanUtilsBean.setProperty(BeanUtilsBean.java:1007)
at software.amazon.kinesis.multilang.config.KinesisClientLibConfigurator.lambda$getConfiguration$0(KinesisClientLibConfigurator.java:65)
at java.lang.Iterable.forEach(Iterable.java:75)
at java.util.Collections$SynchronizedCollection.forEach(Collections.java:2064)
at software.amazon.kinesis.multilang.config.KinesisClientLibConfigurator.getConfiguration(KinesisClientLibConfigurator.java:63)
at software.amazon.kinesis.multilang.MultiLangDaemonConfig.<init>(MultiLangDaemonConfig.java:101)
at software.amazon.kinesis.multilang.MultiLangDaemonConfig.<init>(MultiLangDaemonConfig.java:74)
at software.amazon.kinesis.multilang.MultiLangDaemonConfig.<init>(MultiLangDaemonConfig.java:58)
at software.amazon.kinesis.multilang.MultiLangDaemon.buildMultiLangDaemonConfig(MultiLangDaemon.java:171)
at software.amazon.kinesis.multilang.MultiLangDaemon.main(MultiLangDaemon.java:220)
No enum constant software.amazon.kinesis.common.InitialPositionInStream.TRIM_HORIZON
I already cross-checked the properties file; the details listed there include initialPositionInStream, processingLanguage, streamName, executableName, etc.
TRIM_HORIZON is set as the value for initialPositionInStream, so I'm not sure why software.amazon.kinesis.common.InitialPositionInStream is missing this value.
I am using the Node consumer client as described here:
https://docs.aws.amazon.com/streams/latest/dev/kinesis-record-processor-implementation-app-nodejs.html
Just want to help the community by sharing the fix for this specific error: it was a .properties file update for the respective parameter key name. I am still getting a couple of other java.lang.RuntimeExceptions that I will try to resolve and will ask about in separate threads.
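For reference, a hypothetical .properties snippet along the lines of the AWS Node.js sample linked above; the key names are the ones mentioned in the question and the values are placeholders, not the exact fix:
# Hypothetical MultiLangDaemon properties; as the answer suggests, the key for the
# initial position must be spelled exactly as the daemon expects (initialPositionInStream).
executableName = node sample_kcl_app.js
streamName = my-kinesis-stream
applicationName = my-kcl-application
processingLanguage = nodejs
initialPositionInStream = TRIM_HORIZON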

Error creating queue with WebSphere MQ API

I am trying to create queues using PCF commands in the WebSphere MQ API, as detailed in $MQM_HOME/samp/pcf/samples/PCF_CreateQeue.java. The creation fails when I add a description:
command.addParameter(PCFConstants.MQCA_Q_DESC, "Created using MQMonitor");
I get the error: com.ibm.mq.pcf.PCFException: MQJE001: Completion Code 2, Reason 3015 : MQRCCF_CFST_PARM_ID_ERROR
Is there another way of setting the description? I'm using version 6 of the API.
The Commands page in the PCF manual states that:
The required parameters and the optional parameters are listed. On platforms other than z/OS®, the parameters must occur in the order:
All required parameters, in the order stated, followed by
Optional parameters as required, in any order, unless specifically noted in the PCF definition.
The section Change, Copy and Create Queue lists the required parameters in the following order:
MQCA_Q_NAME
MQIA_Q_TYPE
Optional parameters, including QDesc
The same manual provides the required parameters and their order for all PCF commands, so there is no need to play hide-and-seek trying out parameters and orders in the future.
It turns out the addParameter calls on the PCFMessage must be in a certain sequence (I stumbled on it). If I change the order of the addParameter calls, it works. This is not just for creating queues, but for channels as well.
command.addParameter(PCFConstants.MQCA_Q_NAME, qname);
command.addParameter(PCFConstants.MQIA_Q_TYPE, PCFConstants.MQQT_LOCAL);
command.addParameter(PCFConstants.MQCA_Q_DESC, qdesc);
command.addParameter(PCFConstants.MQIA_DEF_PERSISTENCE, PCFConstants.MQPER_PERSISTENT);
the above will execute without error.
command.addParameter(PCFConstants.MQCA_Q_NAME, qname);
command.addParameter(PCFConstants.MQCA_Q_DESC, qdesc);
command.addParameter(PCFConstants.MQIA_Q_TYPE, PCFConstants.MQQT_LOCAL);
command.addParameter(PCFConstants.MQIA_DEF_PERSISTENCE, PCFConstants.MQPER_PERSISTENT);
the above will fail after moving around the description.
I haven't seen it documented in the Java docs, and if that's the case I look forward to some hide-and-seek.
