I can see in the logs that Spring Retry is sending 2 requests to the remote server, and both requests return successful responses.
I cannot figure out why.
Code:
class StatusClient {

    // client, request and stopWatch are initialized elsewhere in the class
    @CircuitBreaker(maxAttemptsExpression = "#{${remote.broadridge.circuitBreaker.maxAttempts}}",
            openTimeoutExpression = "#{${remote.broadridge.circuitBreaker.openTimeout}}",
            resetTimeoutExpression = "#{${remote.broadridge.circuitBreaker.resetTimeout}}")
    public Optional<JobStatusResponseDTO> getStatus(String account, String jobNumber) {
        JobStatusResponseDTO block = client.post()
                .uri(PATH)
                .body(BodyInserters.fromValue(request))
                .exchangeToMono(response -> {
                    if (response.statusCode() == HttpStatus.NO_CONTENT) {
                        return Mono.empty();
                    } else if (isClientOrServerError(response)) {
                        return Mono.error(new RemoteClientException(
                                String.format("status is not received: %s", response.statusCode())));
                    }
                    stopWatch.stop();
                    log.info("time taken by the getStatus=[{}] for {}", stopWatch.getTotalTimeMillis(), request);
                    return response.bodyToMono(JobStatusResponseDTO.class);
                })
                .block();
        return Optional.ofNullable(block);
    }
}
class Status {

    @Retryable(maxAttemptsExpression = "#{${remote.retry.maxAttempts}}",
            backoff = @Backoff(delayExpression = "#{${remote.retry.delay}}"))
    public Optional<JobStatusResponseDTO> getStatus(String jobNumber, String accountNumber) {
        return statusClient.getStatus(accountNumber, jobNumber);
    }
}
Config in application.yml
circuitBreaker:
  maxAttempts: 3      # default 3
  openTimeout: 5000   # default 5000
  resetTimeout: 20000 # default 20000
retry:
  maxAttempts: 3      # default 3
  delay: 1000         # default 1000
Logs:
792 <14>1 2021-10-26T16:26:32.978917+00:00 - 2021-10-26 16:26:32.978 INFO [batch,ec40b8fe1f6a4cfb,06052e092b3f8e66] : time taken by the getStatus=[582] for JobStatusRequestDTO(account=12456, jobNumber=S123456)
792 <14>1 2021-10-26T16:26:18.263121+00:00 2021-10-26 16:26:18.262 INFO [batch,ec40b8fe1f6a4cfb,21202725a0002bde] : time taken by the getStatus=[592] for JobStatusRequestDTO(account=12456, jobNumber=S123456)
The two requests are a few seconds apart.
Edit 1:
I changed the circuit breaker's maxAttempts to 1. Now it retries 3 times, but there is still an issue: it seems to call the remote server only once and never again afterwards.
The remote call is wrapped in a circuit breaker.
1st Attempt log:
status is not received: 503 SERVICE_UNAVAILABLE
2nd Attempt log:
org.springframework.retry.ExhaustedRetryException: Retry exhausted after last attempt with no recovery path;
3rd Attempt log:
org.springframework.retry.ExhaustedRetryException: Retry exhausted after last attempt with no recovery path;
circuitBreaker:
  maxAttempts: 1
  openTimeout: 5000   # default 5000
  resetTimeout: 20000 # default 20000
retry:
  maxAttempts: 3      # default 3
  delay: 1000         # default 1000
This is because retry.maxAttempts is at its default of 3 with a delay of 1000 ms, so Spring Retry re-invokes the call, waiting the configured delay between attempts. Change retry.maxAttempts to 2 and it won't send multiple requests.
You can simply add the lines below to application.properties:
retry.maxAttempts=2
retry.maxDelay=100
Also, I suggest you go through this.
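As a side note, the ExhaustedRetryException above ("no recovery path") is typically thrown when a stateful retry such as @CircuitBreaker gives up, or the breaker is open, and there is no matching @Recover method to fall back to. A minimal sketch of adding one next to the circuit-breaker method; the empty-Optional fallback is illustrative and not part of the original code:
import java.util.Optional;

import org.springframework.retry.annotation.CircuitBreaker;
import org.springframework.retry.annotation.Recover;

class StatusClient {

    @CircuitBreaker(maxAttemptsExpression = "#{${remote.broadridge.circuitBreaker.maxAttempts}}",
            openTimeoutExpression = "#{${remote.broadridge.circuitBreaker.openTimeout}}",
            resetTimeoutExpression = "#{${remote.broadridge.circuitBreaker.resetTimeout}}")
    public Optional<JobStatusResponseDTO> getStatus(String account, String jobNumber) {
        // ... the WebClient call from the original getStatus goes here ...
        return Optional.empty();
    }

    // Invoked instead of throwing ExhaustedRetryException when the attempts are
    // exhausted or the breaker is open; returning an empty Optional is only an example.
    @Recover
    public Optional<JobStatusResponseDTO> recover(RemoteClientException ex, String account, String jobNumber) {
        return Optional.empty();
    }
}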
Related
Set<String> graphNames = JanusGraphFactory.getGraphNames();
for (String name : graphNames) {
    System.out.println(name);
}
The above snippet produces the following exception
java.lang.IllegalStateException: Gremlin Server must be configured to use the JanusGraphManager.
at com.google.common.base.Preconditions.checkState(Preconditions.java:173)
at org.janusgraph.core.JanusGraphFactory.getGraphNames(JanusGraphFactory.java:175)
at com.JanusTest.controllers.JanusController.getPersonDetail(JanusController.java:66)
my.properties
gremlin.graph=org.janusgraph.core.JanusGraphFactory
storage.backend=cql
storage.hostname=127.0.0.1
cache.db-cache = true
cache.db-cache-clean-wait = 20
cache.db-cache-time = 180000
cache.db-cache-size = 0.5
index.search.backend=elasticsearch
index.search.hostname=127.0.0.1
gremlin-server.yaml
host: 0.0.0.0
port: 8182
scriptEvaluationTimeout: 30000
channelizer: org.apache.tinkerpop.gremlin.server.channel.WebSocketChannelizer
graphManager: org.janusgraph.graphdb.management.JanusGraphManager
graphs: {
ConfigurationManagementGraph: conf/my.properties,
}
plugins:
- janusgraph.imports
scriptEngines: {
gremlin-groovy: {
imports: [java.lang.Math],
staticImports: [java.lang.Math.PI],
scripts: [scripts/empty-sample.groovy]}}
serializers:
- { className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV1d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] }}
- { className: org.apache.tinkerpop.gremlin.driver.ser.GryoLiteMessageSerializerV1d0, config: {ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] }}
- { className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV1d0, config: { serializeResultToString: true }}
- { className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerGremlinV1d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistryV1d0] }}
- { className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerGremlinV2d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] }}
- { className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV1d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistryV1d0] }}
processors:
- { className: org.apache.tinkerpop.gremlin.server.op.session.SessionOpProcessor, config: { sessionTimeout: 28800000 }}
- { className: org.apache.tinkerpop.gremlin.server.op.traversal.TraversalOpProcessor, config: { cacheExpirationTime: 600000, cacheMaxSize: 1000 }}
metrics: {
consoleReporter: {enabled: true, interval: 180000},
csvReporter: {enabled: true, interval: 180000, fileName: /tmp/gremlin-server-metrics.csv},
jmxReporter: {enabled: true},
slf4jReporter: {enabled: true, interval: 180000},
gangliaReporter: {enabled: false, interval: 180000, addressingMode: MULTICAST},
graphiteReporter: {enabled: false, interval: 180000}}
maxInitialLineLength: 4096
maxHeaderSize: 8192
maxChunkSize: 8192
maxContentLength: 65536
maxAccumulationBufferComponents: 1024
resultIterationBatchSize: 64
writeBufferLowWaterMark: 32768
writeBufferHighWaterMark: 65536
The answer to this is similar to the answer to this other question.
The call to JanusGraphFactory.getGraphNames() needs to be sent to the remote server. If you're working in the Gremlin Console, first establish a remote sessioned connection, then set remote console mode.
gremlin> :remote connect tinkerpop.server conf/remote.yaml session
==>Configured localhost/127.0.0.1:8182
gremlin> :remote console
==>All scripts will now be sent to Gremlin Server - [localhost:8182]-[5206cdde-b231-41fa-9e6c-69feac0fe2b2] - type ':remote console' to return to local mode
Then as described in the JanusGraph docs for "Listing the Graphs":
ConfiguredGraphFactory.getGraphNames() will return a set of graph names for which you have created configurations using the ConfigurationManagementGraph APIs.
JanusGraphFactory.getGraphNames(), on the other hand, returns a set of graph names that you have instantiated and whose references are stored inside the JanusGraphManager.
If you are not using the Gremlin Console, then you should be using a remote client, such as the TinkerPop gremlin-driver (Java), to send your requests to the Gremlin Server.
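For example, a minimal sketch using the TinkerPop gremlin-driver (the host, port and default serializer settings are assumptions; adjust them to match your gremlin-server.yaml):
import org.apache.tinkerpop.gremlin.driver.Client;
import org.apache.tinkerpop.gremlin.driver.Cluster;
import org.apache.tinkerpop.gremlin.driver.Result;

public class ListGraphNames {
    public static void main(String[] args) throws Exception {
        // Connect to the remote Gremlin Server (address and port are assumptions).
        Cluster cluster = Cluster.build("127.0.0.1").port(8182).create();
        Client client = cluster.connect();
        try {
            // The script is evaluated on the server, where the JanusGraphManager is configured.
            for (Result result : client.submit("JanusGraphFactory.getGraphNames()").all().get()) {
                System.out.println(result.getString());
            }
        } finally {
            client.close();
            cluster.close();
        }
    }
}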
I have a system that handles HTTP POST requests and runs on Spring 5 (standalone Tomcat). In short it looks like this:
client (Apache AB) ----> micro service (java or golang) --> RabbitMQ --> Core(spring + tomcat).
The thing is, when I use my Java (Spring) service, it works fine. AB shows this output:
ab -n 1000 -k -s 2 -c 10 -s 60 -p test2.sh -A 113:113 -T 'application/json' https://127.0.0.1:8449/SecureChat/chat/v1/rest-message/send
This is ApacheBench, Version 2.3 <$Revision: 1807734 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 127.0.0.1 (be patient)
Completed 100 requests
...
Completed 1000 requests
Finished 1000 requests
Server Software:
Server Hostname: 127.0.0.1
Server Port: 8449
SSL/TLS Protocol: TLSv1.2,ECDHE-RSA-AES256-GCM-SHA384,2048,256
Document Path: /rest-message/send
Document Length: 39 bytes
Concurrency Level: 10
Time taken for tests: 434.853 seconds
Complete requests: 1000
Failed requests: 0
Keep-Alive requests: 0
Total transferred: 498000 bytes
Total body sent: 393000
HTML transferred: 39000 bytes
Requests per second: 2.30 [#/sec] (mean)
Time per request: 4348.528 [ms] (mean)
Time per request: 434.853 [ms] (mean, across all concurrent requests)
Transfer rate: 1.12 [Kbytes/sec] received
0.88 kb/s sent
2.00 kb/s total
Connection Times (ms)
min mean[+/-sd] median max
Connect: 4 14 7.6 17 53
Processing: 1110 4317 437.2 4285 8383
Waiting: 1107 4314 437.2 4282 8377
Total: 1126 4332 436.8 4300 8403
That is through TLS.
But when I try to use my Golang service I get a timeout:
Benchmarking 127.0.0.1 (be patient)...apr_pollset_poll: The timeout specified has expired (70007)
Total of 92 requests completed
And this output:
ab -n 100 -k -s 2 -c 10 -s 60 -p test2.sh -T 'application/json' http://127.0.0.1:8089/
This is ApacheBench, Version 2.3 <$Revision: 1807734 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 127.0.0.1 (be patient)...^C
Server Software:
Server Hostname: 127.0.0.1
Server Port: 8089
Document Path: /
Document Length: 39 bytes
Concurrency Level: 10
Time taken for tests: 145.734 seconds
Complete requests: 92
Failed requests: 1
(Connect: 0, Receive: 0, Length: 1, Exceptions: 0)
Keep-Alive requests: 91
Total transferred: 16380 bytes
Total body sent: 32200
HTML transferred: 3549 bytes
Requests per second: 0.63 [#/sec] (mean)
Time per request: 15840.663 [ms] (mean)
Time per request: 1584.066 [ms] (mean, across all concurrent requests)
Transfer rate: 0.11 [Kbytes/sec] received
0.22 kb/s sent
0.33 kb/s total
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.0 0 0
Processing: 1229 1494 1955.9 1262 20000
Waiting: 1229 1291 143.8 1262 2212
Total: 1229 1494 1955.9 1262 20000
That is through plain TCP.
I guess I have some mistakes in my code. I wrote it all in one file:
func initAmqp(rabbitUrl string) {
    var err error
    conn, err = amqp.Dial(rabbitUrl)
    failOnError(err, "Failed to connect to RabbitMQ")
}
func main() {
    err := gcfg.ReadFileInto(&cfg, "config.gcfg")
    if err != nil {
        log.Fatal(err)
    }
    PrintConfig(cfg)
    if cfg.Section_rabbit.RabbitUrl != "" {
        initAmqp(cfg.Section_rabbit.RabbitUrl)
    }
    mux := http.NewServeMux()
    mux.Handle("/", NewLimitHandler(1000, newTestHandler()))
    server := http.Server{
        Addr:         cfg.Section_basic.Port,
        Handler:      mux,
        ReadTimeout:  20 * time.Second,
        WriteTimeout: 20 * time.Second,
    }
    defer conn.Close()
    log.Println(server.ListenAndServe())
}
func NewLimitHandler(maxConns int, handler http.Handler) http.Handler {
    h := &limitHandler{
        connc:   make(chan struct{}, maxConns),
        handler: handler,
    }
    for i := 0; i < maxConns; i++ {
        h.connc <- struct{}{}
    }
    return h
}
func newTestHandler() http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        handler(w, r)
    })
}
func handler(w http.ResponseWriter, r *http.Request) {
    if b, err := ioutil.ReadAll(r.Body); err == nil {
        fmt.Println("message is ", string(b))
        res := publishMessages(string(b))
        w.Write([]byte(res))
        w.WriteHeader(http.StatusOK)
        counter++
    } else {
        w.WriteHeader(http.StatusInternalServerError)
        w.Write([]byte("500 - Something bad happened!"))
    }
}
func publishMessages(payload string) string {
    ch, err := conn.Channel()
    failOnError(err, "Failed to open a channel")

    q, err = ch.QueueDeclare(
        "",    // name
        false, // durable
        false, // delete when unused
        true,  // exclusive
        false, // noWait
        nil,   // arguments
    )
    failOnError(err, "Failed to declare a queue")

    msgs, err := ch.Consume(
        q.Name, // queue
        "",     // consumer
        true,   // auto-ack
        false,  // exclusive
        false,  // no-local
        false,  // no-wait
        nil,    // args
    )
    failOnError(err, "Failed to register a consumer")

    corrId := randomString(32)
    log.Println("corrId ", corrId)

    err = ch.Publish(
        "",                            // exchange
        cfg.Section_rabbit.RabbitQeue, // routing key
        false,                         // mandatory
        false,                         // immediate
        amqp.Publishing{
            DeliveryMode:  amqp.Transient,
            ContentType:   "application/json",
            CorrelationId: corrId,
            Body:          []byte(payload),
            Timestamp:     time.Now(),
            ReplyTo:       q.Name,
        })
    failOnError(err, "Failed to Publish on RabbitMQ")
    defer ch.Close()

    result := ""
    for d := range msgs {
        if corrId == d.CorrelationId {
            failOnError(err, "Failed to convert body to integer")
            log.Println("result = ", string(d.Body))
            return string(d.Body)
        } else {
            log.Println("waiting for result = ")
        }
    }
    return result
}
Can someone help?
EDIT
Here are my variables:
type limitHandler struct {
    connc   chan struct{}
    handler http.Handler
}

var conn *amqp.Connection
var q amqp.Queue
EDIT 2
func (h *limitHandler) ServeHTTP(w http.ResponseWriter, req *http.Request) {
    select {
    case <-h.connc:
        fmt.Println("ServeHTTP")
        h.handler.ServeHTTP(w, req)
        h.connc <- struct{}{}
    default:
        http.Error(w, "503 too busy", http.StatusServiceUnavailable)
    }
}
EDIT 3
func failOnError(err error, msg string) {
    if err != nil {
        log.Fatalf("%s: %s", msg, err)
        panic(fmt.Sprintf("%s: %s", msg, err))
    }
}
I faced a strange issue with my Kafka producer. I use the kafka-0.11 server/client version.
I have one ZooKeeper node and one Kafka broker node. Also, I created an 'events' topic with 3 partitions:
Topic:events PartitionCount:3 ReplicationFactor:1 Configs:
Topic: events Partition: 0 Leader: 0 Replicas: 0 Isr: 0
Topic: events Partition: 1 Leader: 0 Replicas: 0 Isr: 0
Topic: events Partition: 2 Leader: 0 Replicas: 0 Isr: 0
In my Java code I create the producer with the following properties:
Properties props = new Properties();
props.put(BOOTSTRAP_SERVERS_CONFIG, brokerUrl);
props.put(MAX_BLOCK_MS_CONFIG, 30000);
props.put(KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
props.put(VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
props.put(PARTITIONER_CLASS_CONFIG, KafkaCustomPartitioner.class);
this.producer = new KafkaProducer<>(props);
Also, I have added a callback to the Producer#send() method that adds failed messages to a queue, which is iterated by a separate "re-sending" thread in a loop:
this.producer.send(producerRecord, new ProducerCallback(producerRecord.value(), topic));

private class ProducerCallback implements Callback {
    private final String message;
    private final String topic;

    public ProducerCallback(String message, String topic) {
        this.message = message;
        this.topic = topic;
    }

    @Override
    public void onCompletion(RecordMetadata metadata, Exception ex) {
        if (ex != null) {
            logger.error("Kafka producer error. Topic: " + topic +
                    ".Message will be added into failed messages queue.", ex);
            failedMessagesQueue.enqueue(SerializationUtils.serialize(new FailedMessage(topic, message)));
        }
    }
}
private class ResenderThread extends Thread {
    private volatile boolean running = true;

    public void stopGracefully() {
        running = false;
    }

    @Override
    public void run() {
        while (running) {
            try {
                byte[] val = failedMessagesQueue.peek();
                if (val != null) {
                    FailedMessage failedMessage = SerializationUtils.deserialize(val);
                    ProducerRecord<String, String> record;
                    if (topic.equals(failedMessage.getTopic())) {
                        String messageKey = generateMessageKey(failedMessage.getMessage());
                        record = createProducerRecordWithKey(failedMessage.getMessage(), messageKey, failedMessage.getTopic());
                    } else {
                        record = new ProducerRecord<>(failedMessage.getTopic(), failedMessage.getMessage());
                    }
                    try {
                        this.producer.send(record).get();
                        failedMessagesQueue.dequeue();
                    } catch (Exception e) {
                        logger.debug("Kafka message resending attempt was failed. Topic " + failedMessage.getTopic() +
                                " Partition. " + record.partition() + ". " + e.getMessage());
                    }
                }
                Thread.sleep(200);
            } catch (Exception e) {
                logger.error("Error resending an event", e);
                break;
            }
        }
    }
}
Everything worked fine until I decided to test a Kafka broker kill/restart scenario:
I killed my Kafka broker node and sent 5 messages using my Kafka producer. The following messages were logged by my producer app:
....the application works fine....
// kafka broker was killed
2017-11-10 09:20:44,594 WARN [org.apache.kafka.clients.NetworkClient] - <Connection to node 0 could not be established. Broker may not be available.>
2017-11-10 09:20:44,646 WARN [org.apache.kafka.clients.NetworkClient] - <Connection to node 0 could not be established. Broker may not be available.>
2017-11-10 09:20:44,700 WARN [org.apache.kafka.clients.NetworkClient] - <Connection to node 0 could not be established. Broker may not be available.>
2017-11-10 09:20:44,759 WARN [org.apache.kafka.clients.NetworkClient] - <Connection to node 0 could not be established. Broker may not be available.>
2017-11-10 09:20:44,802 WARN [org.apache.kafka.clients.NetworkClient] - <Connection to node 0 could not be established. Broker may not be available.>
// sent 5 messages using the producer; messages were put into the failedMessagesQueue and the "re-sender" thread started resending
2017-11-10 09:20:44,905 ERROR [com.inq.kafka.KafkaETLService] - <Kafka producer error. Topic: events.Message will be added into failed messages queue.>
....
2017-11-10 09:20:45,070 WARN [org.apache.kafka.clients.NetworkClient] - <Connection to node 0 could not be established. Broker may not be available.>
2017-11-10 09:20:45,129 WARN [org.apache.kafka.clients.NetworkClient] - <Connection to node 0 could not be established. Broker may not be available.>
2017-11-10 09:20:45,170 WARN [org.apache.kafka.clients.NetworkClient] - <Connection to node 0 could not be established. Broker may not be available.>
2017-11-10 09:20:45,217 WARN [org.apache.kafka.clients.NetworkClient] - <Connection to node 0 could not be established. Broker may not be available.>
// kafka broker was restarted, some strange errors were logged
2017-11-10 09:20:51,103 WARN [org.apache.kafka.clients.NetworkClient] - <Error while fetching metadata with correlation id 29 : {events=INVALID_REPLICATION_FACTOR}>
2017-11-10 09:20:51,205 WARN [org.apache.kafka.clients.NetworkClient] - <Error while fetching metadata with correlation id 31 : {events=INVALID_REPLICATION_FACTOR}>
2017-11-10 09:20:51,308 WARN [org.apache.kafka.clients.NetworkClient] - <Error while fetching metadata with correlation id 32 : {events=INVALID_REPLICATION_FACTOR}>
2017-11-10 09:20:51,114 WARN [org.apache.kafka.clients.producer.internals.Sender] - <Received unknown topic or partition error in produce request on partition events-0. The topic/partition may not exist or the user may not have Describe access to it>
2017-11-10 09:20:51,114 ERROR [com.inq.kafka.KafkaETLService] - <Kafka message resending attempt was failed. Topic events. org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.>
2017-11-10 09:20:52,485 WARN [org.apache.kafka.clients.NetworkClient] - <Error while fetching metadata with correlation id 33 : {events=INVALID_REPLICATION_FACTOR}>
// messages were successfully re-sent and received by the consumer
How can I get rid of these logs (which are written every 100 ms while the Kafka broker is down):
[org.apache.kafka.clients.NetworkClient] - <Connection to node 0 could not be established. Broker may not be available.>
Why do I receive the following errors after the Kafka broker starts up (I didn't change any server props and didn't alter the topic)? It seems to me that these errors are the result of some synchronization process between ZooKeeper and Kafka during broker startup, because after some time the producer successfully resent my messages. Am I wrong?:
[org.apache.kafka.clients.NetworkClient] - <Error while fetching metadata with correlation id 29 : {events=INVALID_REPLICATION_FACTOR}>
Received unknown topic or partition error in produce request on partition events-0. The topic/partition may not exist or the user may not have Describe access to it.
bin/kafka-console-consumer.sh --bootstrap-server tt01.my.tech:9092,tt02.my.tech:9092,tt03.my.tech:9092 --topic wallet-test-topic1 --from-beginning
new message from topic1
hello
hello world
123
hello again
123
what do i publish ?
[2020-02-09 16:57:21,142] WARN [Consumer clientId=consumer-1, groupId=console-consumer-93672] Connection to node 2 (tt02.my.tech/192.168.35.118:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2020-02-09 16:57:25,999] WARN [Consumer clientId=consumer-1, groupId=console-consumer-93672] Connection to node 2 (tt02.my.tech/192.168.35.118:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2020-02-09 16:57:58,902] WARN [Consumer clientId=consumer-1, groupId=console-consumer-93672] Connection to node 2 (tt02.my.tech/192.168.35.118:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2020-02-09 16:57:59,024] WARN [Consumer clientId=consumer-1, groupId=console-consumer-93672] Connection to node 3 (tt03.my.tech/192.168.35.126:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
^CProcessed a total of 7 messages
On the consumer side, if no message is read after the poll, this warning is thrown.
Basically, a call to .poll() ends up calling
handleTimedOutRequests(responses, updatedNow);
If no messages are read in this poll and a request has timed out, then processDisconnection will log the warning.
private void handleTimedOutRequests(List<ClientResponse> responses, long now) {
    List<String> nodeIds = this.inFlightRequests.nodesWithTimedOutRequests(now);
    for (String nodeId : nodeIds) {
        // close connection to the node
        this.selector.close(nodeId);
        log.debug("Disconnecting from node {} due to request timeout.", nodeId);
        processDisconnection(responses, nodeId, now, ChannelState.LOCAL_CLOSE);
    }

    // we disconnected, so we should probably refresh our metadata
    if (!nodeIds.isEmpty())
        metadataUpdater.requestUpdate();
}
This exact case in processDisconnection logs the warning:
case NOT_CONNECTED:
    log.warn("Connection to node {} ({}) could not be established. Broker may not be available.", nodeId, disconnectState.remoteAddress());
In short, everything will work fine from the producer/consumer perspective, and you should treat the message like any other WARN.
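If the repetition is too noisy, one option is to raise the threshold for that logger in your client's logging configuration. A minimal log4j sketch (the logger name is taken from the log lines above; which file to edit depends on how your client or tool is packaged, e.g. the console tools read config/tools-log4j.properties):
# Suppress the repeated "Broker may not be available" WARNs from the Kafka client
log4j.logger.org.apache.kafka.clients.NetworkClient=ERROR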
I'm trying to find out why my JHipster app is so slow when querying the database.
This is one of my services, using Spring Data's PagingAndSortingRepository:
@Transactional(readOnly = true)
public Page<Center> findAll(Pageable pageable) {
    log.debug("Request to get all Centers");
    return centerRepository.findAll(pageable);
}
I've used JHipster's LoggingAspect and added a timer to log the performance of each method.
@Around("loggingPointcut()")
public Object logAround(ProceedingJoinPoint joinPoint) throws Throwable {
    Stopwatch stopwatch = null;
    if (log.isDebugEnabled()) {
        log.debug("Enter: {}.{}() with argument[s] = {}", joinPoint.getSignature().getDeclaringTypeName(),
                joinPoint.getSignature().getName(), Arrays.toString(joinPoint.getArgs()));
        stopwatch = Stopwatch.createStarted();
    }
    try {
        Object result = joinPoint.proceed();
        if (log.isDebugEnabled()) {
            log.debug("Exit: {}.{}() [took {} ms] with result = {}", joinPoint.getSignature().getDeclaringTypeName(),
                    joinPoint.getSignature().getName(), stopwatch.elapsed(MILLISECONDS), result);
        }
        return result;
    } catch (IllegalArgumentException e) {
        log.error("Illegal argument: {} in {}.{}()", Arrays.toString(joinPoint.getArgs()),
                joinPoint.getSignature().getDeclaringTypeName(), joinPoint.getSignature().getName());
        throw e;
    }
}
I configured Hibernate to generate the statistics:
spring.jpa.properties.hibernate.generate_statistics=true
If I change the log-levels of org.hibernate.stat.internal.ConcurrentStatisticsImpl and org.hibernate.engine.internal.StatisticalLoggingSessionEventListener I see the following logs:
2016-10-13 11:00:00,640 DEBUG [http-nio-8080-exec-8] LoggingAspect: Enter: com.fluidda.broncholab.service.CenterService.findAll() with argument[s] = [Page request [number: 0, size 20, sort: id: ASC]]
2016-10-13 11:00:00,643 DEBUG [http-nio-8080-exec-8] CenterService: Request to get all Centers
2016-10-13 11:00:02,238 DEBUG [http-nio-8080-exec-8] ConcurrentStatisticsImpl: HHH000117: HQL: select count(generatedAlias0) from Center as generatedAlias0, time: 1ms, rows: 1
2016-10-13 11:00:02,241 DEBUG [http-nio-8080-exec-8] ConcurrentStatisticsImpl: HHH000117: HQL: select generatedAlias0 from Center as generatedAlias0 order by generatedAlias0.id asc, time: 2ms, rows: 3
2016-10-13 11:00:02,242 DEBUG [http-nio-8080-exec-8] LoggingAspect: Exit: com.fluidda.broncholab.service.CenterService.findAll() [took 1601 ms] with result = Page 1 of 1 containing com.fluidda.broncholab.domain.Center instances
2016-10-13 11:00:02,243 INFO [http-nio-8080-exec-8] StatisticalLoggingSessionEventListener: Session Metrics {
568512 nanoseconds spent acquiring 1 JDBC connections;
0 nanoseconds spent releasing 0 JDBC connections;
92324 nanoseconds spent preparing 2 JDBC statements;
992105 nanoseconds spent executing 2 JDBC statements;
0 nanoseconds spent executing 0 JDBC batches;
34717 nanoseconds spent performing 3 L2C puts;
0 nanoseconds spent performing 0 L2C hits;
0 nanoseconds spent performing 0 L2C misses;
0 nanoseconds spent executing 0 flushes (flushing a total of 0 entities and 0 collections);
2943 nanoseconds spent executing 2 partial-flushes (flushing a total of 0 entities and 0 collections)
If you take a look at the timings (not the timestamp when the item was logged), you'll see a big difference between:
ConcurrentStatisticsImpl: time: 2ms, rows: 3
LoggingAspect: [took 1601 ms]
StatisticalLoggingSessionEventListener: 992105 nanoseconds spent executing 2 JDBC statements;
The strange thing is that these performance issues do not occur all the time! These are the Dropwizard statistics:
Service name Count Mean Min p50 p75 p95 p99 Max
....web.rest.CenterResource.getAllCenters 5 13 10 16 16 16 16 1,612
Does anyone know what may cause these performance drops?
Does anyone know how I can investigate any further?
After adding extra logging and not using an async logger, I found out the server was losing time parsing the HQL.
I've created another question about this: Hibernate QueryTranslatorImpl HQL AST parsing performance
I'm trying to execute a program in Eclipse, and when I click Run I see this in the console output:
[] is an unknown syslog facility. Defaulting to [USER].
/
"Failed"
Any ideas?
It looks like that error is coming from org.apache.log4j.net.SyslogAppender, and that you've tried to set a bad facility name. Go take a look at your appenders and how you are setting them up.
public void setFacility(String facilityName) {
    if (facilityName == null)
        return;

    syslogFacility = getFacility(facilityName);
    if (syslogFacility == -1) {
        System.err.println("[" + facilityName +
            "] is an unknown syslog facility. Defaulting to [USER].");
        syslogFacility = LOG_USER;
    }

    this.initSyslogFacilityStr();

    // If there is already a sqw, make it use the new facility.
    if (sqw != null) {
        sqw.setSyslogFacility(this.syslogFacility);
    }
}
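For reference, with a log4j.properties setup the facility comes from the appender's Facility property, and an empty or misspelled value produces exactly this message. A minimal sketch of a correctly configured SyslogAppender (the appender name, host and pattern are illustrative):
log4j.rootLogger=INFO, SYSLOG
log4j.appender.SYSLOG=org.apache.log4j.net.SyslogAppender
log4j.appender.SYSLOG.SyslogHost=localhost
log4j.appender.SYSLOG.Facility=USER
log4j.appender.SYSLOG.layout=org.apache.log4j.PatternLayout
log4j.appender.SYSLOG.layout.ConversionPattern=%d %-5p %c - %m%n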