Akka Persistence - issue with Recovery - how to diagnose? - java

I am using the Java version of akka-persistence 2.3.4, where I have several actors that extend PersistentActor and handle both RecoveryCompleted and RecoveryFailure. I am using the default journal and snapshot plugins.
I'm running into an issue with recovery where some of my actors receive neither a RecoveryCompleted nor a RecoveryFailure message, and the actor gets stuck.
What kind of diagnostics can I use to figure out why this may be happening?
I have tried turning on debug logging, but no Akka logging shows up there.
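For what it's worth, here is a minimal sketch of creating the ActorSystem with Akka's own debug output forced on, in case only the logging backend (and not Akka itself) is set to DEBUG; the system name and the exact debug flags are illustrative, not a confirmed fix:

import akka.actor.ActorSystem;
import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;

public class DebugSystem {
    public static void main(String[] args) {
        // Assumption: raising akka.loglevel is what is missing. These keys are
        // standard Akka settings; the "example" system name is illustrative.
        Config debug = ConfigFactory.parseString(
                "akka.loglevel = DEBUG\n"
                + "akka.actor.debug.receive = on\n"      // log messages handled via LoggingReceive
                + "akka.actor.debug.autoreceive = on\n"  // log auto-received messages (e.g. PoisonPill)
                + "akka.actor.debug.lifecycle = on\n");  // log actor lifecycle events
        ActorSystem system = ActorSystem.create("example", debug.withFallback(ConfigFactory.load()));
        // ... create the persistent actors here and watch recovery in the log ...
    }
}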

Related

Configuring open telemetry for tracing service to service calls ONLY

I am experimenting with different instrumentation libraries, primarily spring-cloud-sleuth and OpenTelemetry (OT), which are the ones I liked the most. Spring-cloud-sleuth is simple, but it will not work for a non-Spring (JAX-RS) project, so I turned my attention to OpenTelemetry.
I am able to export the metrics using OT, but there is just too much data that I do not need. Spring Sleuth gave the perfect solution, wherein it just traces the call across microservices and links all the spans with one traceId.
My question is: how do I configure OT to get output similar to spring-sleuth? I tried various configurations and a few worked, but the amount of information is still huge.
My configuration
-Dotel.traces.exporter=zipkin -Dotel.instrumentation.[jdbc].enabled=false -Dotel.instrumentation.[methods].enabled=false -Dotel.instrumentation.[jdbc-datasource].enabled=false
However, this still gives me method calls and other data. Also, one big pain is that I am not able to shut down the metrics data; I get an error like the one below:
ERROR io.opentelemetry.exporter.internal.grpc.OkHttpGrpcExporter - Failed to export metrics. The request could not be executed. Full error message: Failed to connect to localhost/0:0:0:0:0:0:0:1:4317
Any help will be appreciated.
There are two ways to configure the OpenTelemetry agent (otel):
Environment variable
Java system property
You can either set
export OTEL_METRICS_EXPORTER=none
or
java -Dotel.metrics.exporter=none -jar app.jar
Reference
https://github.com/open-telemetry/opentelemetry-java/blob/main/sdk-extensions/autoconfigure/README.md
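Putting it together for a Sleuth-style, traces-only setup, something along these lines may be close to what you want; otel.metrics.exporter, otel.logs.exporter and otel.instrumentation.common.default-enabled are standard agent autoconfiguration properties, but treat the exact instrumentation names as illustrative and check the agent's suppression list for your stack:

java -javaagent:opentelemetry-javaagent.jar \
     -Dotel.traces.exporter=zipkin \
     -Dotel.metrics.exporter=none \
     -Dotel.logs.exporter=none \
     -Dotel.instrumentation.common.default-enabled=false \
     -Dotel.instrumentation.jaxrs.enabled=true \
     -jar app.jar

With default-enabled=false, the agent should only emit spans from the instrumentation you explicitly re-enable, which is much closer to what Sleuth gives you.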

Axon Config - Kafka retry policies after @EventHandler throws an exception

I've started to use Axon 4.3.1 (latest version) in my project and I'm having a problem.
Where can I configure the Kafka retry policies after an @EventHandler throws an exception?
Note: I'm using the SubscribingEventProcessor type as the event processor (in both projects). I'm using separate projects! The command model uses Mongo and publishes events on Kafka; the query model consumes events from Kafka (event bus). So they run in separate JVMs.
@ProcessingGroup("event-processor") is configured on the class with the event-handler method. I'd like to have a config for Kafka to automatically retry after some time in error cases (in the query-model project).
Can I use some default Axon component? Or could I use something like spring-retry or Kafka's own internal configs?
I've found something like this in the documentation:
https://docs.axoniq.io/reference-guide/configuring-infrastructure-components/event-processing/event-processors#error-handling
"Based on the provided ErrorContext object, you can decide to ignore the error, schedule retries, perform dead-letter-queue delivery or rethrow the exception."
How can I configure scheduled retries (for example) on an @EventHandler after errors?
Could you help me?
Thanks.
The current implementation of Axon's Kafka Extension (version 4.0-M2) does not support setting a retry policy when it comes to event handling.
I'd argue your best approach right now is to set up something like that on Kafka itself, if that's even possible. Otherwise, forcing a replay of the events through Kafka would be your best bet.
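If you want retries on the query side in the meantime, the error-handling hook the Reference Guide points at can be used for it: register a custom ListenerInvocationErrorHandler for your processing group. A rough sketch (the class name, retry count and back-off are illustrative, not an official Axon retry mechanism):

import org.axonframework.eventhandling.EventMessage;
import org.axonframework.eventhandling.EventMessageHandler;
import org.axonframework.eventhandling.ListenerInvocationErrorHandler;

public class RetryingErrorHandler implements ListenerInvocationErrorHandler {
    private static final int MAX_RETRIES = 3;              // illustrative value

    @Override
    public void onError(Exception exception, EventMessage<?> event,
                        EventMessageHandler eventHandler) throws Exception {
        for (int attempt = 1; attempt <= MAX_RETRIES; attempt++) {
            try {
                Thread.sleep(1000L * attempt);              // naive back-off
                eventHandler.handle(event);                 // re-invoke the @EventHandler
                return;                                     // success, stop retrying
            } catch (Exception retryFailure) {
                // fall through and retry until attempts are exhausted
            }
        }
        throw exception;                                    // give up, propagate the original error
    }
}

It would then be registered for the processing group from your question (where configurer is your Axon Configurer instance):

configurer.eventProcessing()
          .registerListenerInvocationErrorHandler("event-processor",
                  conf -> new RetryingErrorHandler());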

How does logging component work

I use HSQLDB on an OSGi framework, where it is a common solution to use pax-logging, which supports many logging frameworks (java.util.logging, SLF4J, JBoss Logging, etc.).
I don't have problems with pax-logging itself; however, I have problems with HSQLDB's logging messages. The HSQLDB logging component is very tricky: some messages go to the pax-logging system, some go to the console.
Could anyone explain which messages must go where, and why?
There are separate logging components in HSQLDB.
The Server uses separate writers for log and error messages. The logs default to stdout and stderr but you can set each one to use a custom PrintWriter.
The optional SQL log is always a file. It can be turned on and off live for checking the SQL statements being executed.
The optional event log is a file or an external logging framework. The latter is used when the database is in-process in an application. In both configurations, it reports general persistence events at different levels of detail selected by the user.
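As a rough Java sketch of where those knobs live (database name, path and levels are illustrative): the Server writers can be redirected away from stdout/stderr, and the SQL and event logs can be enabled through database properties.

import java.io.PrintWriter;
import org.hsqldb.server.Server;

public class HsqldbLoggingExample {
    public static void main(String[] args) {
        Server server = new Server();
        server.setDatabaseName(0, "testdb");
        // hsqldb.sqllog enables the SQL statement log, hsqldb.applog the event log
        // (the levels here are illustrative).
        server.setDatabasePath(0, "file:data/testdb;hsqldb.sqllog=3;hsqldb.applog=2");
        // Route the Server's own log and error messages to custom writers instead of
        // stdout/stderr, e.g. to hand them to pax-logging.
        server.setLogWriter(new PrintWriter(System.out, true));
        server.setErrWriter(new PrintWriter(System.err, true));
        server.start();
    }
}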

Is there a way to turn off messages from zookeeper?

I am using ZooKeeper successfully. It keeps printing status updates and warnings to the shell, which is actually making it harder to debug my program (which is not working as well as ZooKeeper). Is there an easy way to turn that off in ZooKeeper without going into the source? Or is there a way to run a Java program so that only the executing program gets to print to the shell?
Isn't the 'Logging' chapter of the ZooKeeper Administrator's Guide what you actually want?
ZooKeeper uses log4j, so it is a pretty standard logging approach with a lot of configuration flexibility available.
By default, ZooKeeper emits messages at INFO or higher severity and uses log4j for logging. So set the logging level to a higher severity in your log4j.properties (assuming you provided the path to the .properties file or it's in the working directory).
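If editing log4j.properties is inconvenient, the level can also be raised programmatically before the ZooKeeper client starts, using the log4j 1.x API that ZooKeeper depends on (the logger name and level shown are the usual ones; adjust as needed):

import org.apache.log4j.Level;
import org.apache.log4j.Logger;

public class QuietZooKeeper {
    public static void main(String[] args) {
        // Only WARN and above from ZooKeeper will reach the console.
        Logger.getLogger("org.apache.zookeeper").setLevel(Level.WARN);
        // ... start the ZooKeeper client / the rest of the program here ...
    }
}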
There is a similar post on avoiding ZooKeeper log messages, with a suggestion like this:
zoo_set_log_stream(fopen("/dev/null", "w"));
This will turn off all output from ZooKeeper.

Spring Integration Channels on Content

I have used Spring Integration successfully in my current project for some of my needs. Awesome.
There is some weird behavior observed under heavy load, wherein the same message seems to be processed more than once. I can confirm this because there are multiple rows in the database, which is typically the last command in the chain that is configured on the channel.
Digging further into the manual, it seems that load balancing is done automatically by Spring. The manual says that the message is balanced between multiple message handlers.
Question is:
How many handlers are present on a channel by default? The Spring XML that gets loaded does not seem to have that configuration. All I do is this (per the recommendation in the manual):
<int:channel id="SwPath.Channel"/>
<int:chain id="SwPath.chain" input-channel="SwPath.Channel">
</int:chain>
I can disable the fail-over, but I am curious to know how many handlers are present by default.
It's been a while since I worked on those load balancers, but I remember that the default number of threads in the thread pool was somewhere between 2 and 10.
It is possible that you have found a concurrency bug.
If you turn on TRACE logging, the load balancer will give you a lot of information, but that could easily hide the problem.
If you create a JIRA issue with a JUnit test case, I'm sure it would be much easier to figure out what happens exactly.
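For reference, while you investigate you can also make the dispatcher behaviour explicit on the channel from your question; a sketch in the same XML style (attribute values are illustrative):

<int:channel id="SwPath.Channel">
    <int:dispatcher failover="false" load-balancer="none"/>
</int:channel>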
