I'm using the Infinispan Hot Rod client in a Java application running on the IBM Liberty app server with JDK 8.
The Hot Rod client (library version 12.1.11.Final-redhat-00001) is set up via org.infinispan.jcache.remote.JCachingProvider:
@PostConstruct
private void setUp() {
    LOGGER.info("START [setUp] CACHE");
    File conf = new File(System.getenv("CLIENT_HOTROD_FILE_PATH"));
    URI uri = conf.toURI();
    // Retrieve the system-wide cache manager via org.infinispan.jcache.remote.JCachingProvider
    javax.cache.CacheManager cacheManager = Caching.getCachingProvider("org.infinispan.jcache.remote.JCachingProvider")
            .getCacheManager(uri, this.getClass().getClassLoader(), null);
    this.cache = cacheManager.getCache(DATAGRIDKEY);
    LOGGER.info("END [setUp] cache " + this.cache.getName());
}
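For context, the cache is then read through the standard JCache API; the remote GetOperation in the stack traces below corresponds to a plain cache.get(). A trivial usage sketch (the key is a placeholder):
// Somewhere in request handling; this triggers the remote Hot Rod GetOperation
Object value = this.cache.get("some-key");
if (value == null) {
    LOGGER.info("cache miss for some-key");
}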
HOT-ROD client config file:
infinispan.client.hotrod.server_list=server1.x.xx.xxx:11222;server2.x.xx.xxx:11222;server3.x.xx.xxx:11222;server4.x.xx.xxx:11222
infinispan.client.hotrod.auth_username=user-app
infinispan.client.hotrod.auth_password=password
infinispan.client.hotrod.auth_realm=default
infinispan.client.hotrod.sasl_mechanism=SCRAM-SHA-512
infinispan.client.hotrod.client_intelligence=HASH_DISTRIBUTION_AWARE
The full list of available Hot Rod client config properties is in the Infinispan documentation.
On the server side, the Red Hat Data Grid (version 8.2.3) cache configuration is the following:
{
  "distributed-cache": {
    "mode": "ASYNC",
    "remote-timeout": 17500,
    "state-transfer": {
      "timeout": 60000
    },
    "encoding": {
      "key": {
        "media-type": "text/plain"
      },
      "value": {
        "media-type": "application/x-protostream"
      }
    },
    "locking": {
      "concurrency-level": 1000,
      "acquire-timeout": 15000,
      "striping": false
    },
    "statistics": true
  }
}
Sometimes, while the application is serving requests, a java.net.SocketTimeoutException on GetOperation appears in the application log, with the following stack trace:
[2/15/22 17:24:39:445 CET] 00000573 HOTROD W org.infinispan.client.hotrod.impl.operations.RetryOnFailureOperation handleException ISPN004098: Closing connection [id: 0x9a1a8fb2, L:/10.0.20.160:55295 ! R:10.0.18.97/10.0.18.97:11222] due to transport error java.net.SocketTimeoutException: GetOperation{Cache-Name-Test, key=[B0x4A6C636F6D2E6475636174692E77612E..[110], flags=1, connection=10.0.18.97/10.0.18.97:11222} timed out after 60000 ms
at org.infinispan.client.hotrod.impl.operations.HotRodOperation.run(HotRodOperation.java:185)
at io.netty.util.concurrent.PromiseTask.runTask(PromiseTask.java:98)
at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:170)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:384)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1160)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.lang.Thread.run(Thread.java:822)
The issue occurs more frequently when the app receives a massive number of GET-key requests, but it sometimes occurs under light load as well.
Do you know how to resolve this issue, or have any suggestions?
Thanks
I edited the Hot Rod client config to understand the client's behavior; the new configuration is summarized below:
infinispan.client.hotrod.server_list=server1:11222;server2:11222;server3:11222;server4:11222
infinispan.client.hotrod.auth_username=user-app
infinispan.client.hotrod.auth_password=password
infinispan.client.hotrod.auth_realm=default
infinispan.client.hotrod.sasl_mechanism=SCRAM-SHA-512
infinispan.client.hotrod.socket_timeout=15000
infinispan.client.hotrod.connect_timeout=60000
infinispan.client.hotrod.max_retries=5
# Connection pool
infinispan.client.hotrod.connection_pool.max_active=50
infinispan.client.hotrod.connection_pool.exhausted_action=WAIT
infinispan.client.hotrod.connection_pool.max_wait=20000
infinispan.client.hotrod.connection_pool.min_idle=10
#default is 1800000
infinispan.client.hotrod.connection_pool.min_evictable_idle_time=600000
infinispan.client.hotrod.connection_pool.max_pending_requests=846
After changing infinispan.client.hotrod.socket_timeout to 15000 milliseconds and watching the app log file, I saw the same error, but the SocketTimeoutException now fires after 15000 ms instead of 60000 ms. With the new setting, the request thread completes successfully even though the exception is logged.
My understanding is that the pool sometimes contains dead connections, and the client searches for another live connection in the pool and retries the GetOperation, as described in the org.infinispan.client.hotrod.impl.operations.RetryOnFailureOperation class in the Hot Rod library:
/**
 * Base class for all the operations that need retry logic: if the operation fails due to connection problems, try with
 * another available connection.
 *
 * @author Mircea.Markus@jboss.com
 * @since 4.1
 */
In a nutshell, I need to keep the connections in the pool alive, but I don't know which Hot Rod client property controls that.
As a next step I would like to set infinispan.client.hotrod.tcp_keep_alive=true to test whether the behavior changes; I will update the post afterwards.
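For reference, the same tuning can be applied programmatically instead of via the properties file. Below is a minimal sketch, assuming the org.infinispan.client.hotrod.configuration.ConfigurationBuilder API of the Hot Rod client; the server names and credentials are the placeholders from the question:
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

public class HotRodClientFactory {
    public static RemoteCacheManager create() {
        ConfigurationBuilder builder = new ConfigurationBuilder();
        // Same servers and credentials as in the properties file (placeholders)
        builder.addServers("server1:11222;server2:11222;server3:11222;server4:11222");
        builder.security().authentication()
                .username("user-app")
                .password("password")
                .realm("default")
                .saslMechanism("SCRAM-SHA-512");
        // TCP keep-alive plus the timeout/pool tuning discussed above
        builder.tcpKeepAlive(true)
                .socketTimeout(15000)
                .connectionTimeout(60000)
                .maxRetries(5)
                .connectionPool()
                    .maxActive(50)
                    .minIdle(10);
        return new RemoteCacheManager(builder.build());
    }
}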
I tried changing some configuration properties, aiming for a reasonable trade-off between retry operations and timeouts.
HOT-ROD client config:
infinispan.client.hotrod.server_list=server1:11222;server2:11222;server3:11222;server4:11222
infinispan.client.hotrod.auth_username=app-user
infinispan.client.hotrod.auth_password=password
infinispan.client.hotrod.auth_realm=default
infinispan.client.hotrod.sasl_mechanism=SCRAM-SHA-512
infinispan.client.hotrod.socket_timeout=5000
infinispan.client.hotrod.connect_timeout=20000
infinispan.client.hotrod.max_retries=5
# Connection pool
infinispan.client.hotrod.connection_pool.max_active=100
infinispan.client.hotrod.connection_pool.exhausted_action=WAIT
infinispan.client.hotrod.connection_pool.max_wait=20000
infinispan.client.hotrod.connection_pool.min_idle=50
#default is 1800000
infinispan.client.hotrod.connection_pool.min_evictable_idle_time=600000
infinispan.client.hotrod.connection_pool.max_pending_requests=846
infinispan.client.hotrod.client_intelligence=HASH_DISTRIBUTION_AWARE
infinispan.client.hotrod.force_return_values=false
#default is false
infinispan.client.hotrod.tcp_keep_alive=true
The changes made are:
infinispan.client.hotrod.socket_timeout reduced to 5000ms
infinispan.client.hotrod.connection_pool.min_idle incremented to 50
infinispan.client.hotrod.connect_timeout reduced to 20000ms
infinispan.client.hotrod.max_retries reduced to 5
infinispan.client.hotrod.connection_pool.max_active incremented to 100
Tested in the QA environment: from the application log I can say that, rarely, after 4 RetryOnFailureOperation attempts each ending in
java.net.SocketTimeoutException: GetOperation timed out after 5000 ms
an updateTopologyInfo action occurs on the Hot Rod client side.
This Hot Rod client behavior seems to be just logging generated by the async thread that runs in the background to check whether the sockets are still alive.
[2/23/22 9:06:12:923 CET] 000001ab HOTROD W org.infinispan.client.hotrod.impl.operations.RetryOnFailureOperation handleException ISPN004098: Closing connection [id: 0xa5345361, L:/10.0.20.160:44021 - R:10.0.18.102/10.0.18.102:11222] due to transport error
java.net.SocketTimeoutException: GetOperation{MY-CACHE, key=[B0x4A4B636F6D2E6475636174692E77612E..[77], flags=1, connection=10.0.18.102/10.0.18.102:11222} timed out after 5000 ms
at org.infinispan.client.hotrod.impl.operations.HotRodOperation.run(HotRodOperation.java:185)
at io.netty.util.concurrent.PromiseTask.runTask(PromiseTask.java:98)
at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:170)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:384)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1160)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.lang.Thread.run(Thread.java:822)
[2/23/22 9:06:17:924 CET] 000001a5 HOTROD W org.infinispan.client.hotrod.impl.operations.RetryOnFailureOperation handleException ISPN004098: Closing connection [id: 0x54dacb9b, L:/10.0.20.160:42115 - R:10.0.18.97/10.0.18.97:11222] due to transport error
java.net.SocketTimeoutException: GetOperation{MY-CACHE, key=[B0x4A4B636F6D2E6475636174692E77612E..[77], flags=1, connection=10.0.18.97/10.0.18.97:11222} timed out after 5000 ms
at org.infinispan.client.hotrod.impl.operations.HotRodOperation.run(HotRodOperation.java:185)
at io.netty.util.concurrent.PromiseTask.runTask(PromiseTask.java:98)
at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:170)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:384)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1160)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.lang.Thread.run(Thread.java:822)
[2/23/22 9:06:22:925 CET] 000001a5 HOTROD W org.infinispan.client.hotrod.impl.operations.RetryOnFailureOperation handleException ISPN004098: Closing connection [id: 0xe3e6fd98, L:/10.0.20.160:43629 - R:10.0.18.101/10.0.18.101:11222] due to transport error
java.net.SocketTimeoutException: GetOperation{MY-CACHE, key=[B0x4A4B636F6D2E6475636174692E77612E..[77], flags=1, connection=10.0.18.101/10.0.18.101:11222} timed out after 5000 ms
at org.infinispan.client.hotrod.impl.operations.HotRodOperation.run(HotRodOperation.java:185)
at io.netty.util.concurrent.PromiseTask.runTask(PromiseTask.java:98)
at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:170)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:384)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1160)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.lang.Thread.run(Thread.java:822)
[2/23/22 9:06:27:926 CET] 000001ab HOTROD W org.infinispan.client.hotrod.impl.operations.RetryOnFailureOperation handleException ISPN004098: Closing connection [id: 0x52a4bd4a, L:/10.0.20.160:32912 - R:10.0.18.96/10.0.18.96:11222] due to transport error
java.net.SocketTimeoutException: GetOperation{MY-CACHE, key=[B0x4A4B636F6D2E6475636174692E77612E..[77], flags=1, connection=10.0.18.96/10.0.18.96:11222} timed out after 5000 ms
at org.infinispan.client.hotrod.impl.operations.HotRodOperation.run(HotRodOperation.java:185)
at io.netty.util.concurrent.PromiseTask.runTask(PromiseTask.java:98)
at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:170)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:384)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1160)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.lang.Thread.run(Thread.java:822)
[2/23/22 9:06:27:927 CET] 000001ab HOTROD I org.infinispan.client.hotrod.impl.transport.netty.ChannelFactory updateTopologyInfo ISPN004014: New server added(server4:11222), adding to the pool.
[2/23/22 9:06:27:927 CET] 000001ab HOTROD I org.infinispan.client.hotrod.impl.transport.netty.ChannelFactory updateTopologyInfo ISPN004014: New server added(server2:11222), adding to the pool.
[2/23/22 9:06:27:928 CET] 000001ab HOTROD I org.infinispan.client.hotrod.impl.transport.netty.ChannelFactory updateTopologyInfo ISPN004014: New server added(server3:11222), adding to the pool.
[2/23/22 9:06:27:928 CET] 000001ab HOTROD I org.infinispan.client.hotrod.impl.transport.netty.ChannelFactory updateTopologyInfo ISPN004014: New server added(server1:11222), adding to the pool.
[2/23/22 9:06:27:928 CET] 000001ab HOTROD I org.infinispan.client.hotrod.impl.transport.netty.ChannelFactory closeChannelPools ISPN004016: Server not in cluster anymore(10.0.18.101:11222), removing from the pool.
[2/23/22 9:06:27:928 CET] 000001ab HOTROD I org.infinispan.client.hotrod.impl.transport.netty.ChannelFactory closeChannelPools ISPN004016: Server not in cluster anymore(10.0.18.102:11222), removing from the pool.
[2/23/22 9:06:27:928 CET] 000001ab HOTROD I org.infinispan.client.hotrod.impl.transport.netty.ChannelFactory closeChannelPools ISPN004016: Server not in cluster anymore(10.0.18.96:11222), removing from the pool.
[2/23/22 9:06:27:929 CET] 000001ab HOTROD I org.infinispan.client.hotrod.impl.transport.netty.ChannelFactory closeChannelPools ISPN004016: Server not in cluster anymore(10.0.18.97:11222), removing from the pool.
[2/23/22 9:06:27:929 CET] 000001ab HOTROD I org.infinispan.client.hotrod.impl.transport.netty.ChannelFactory closeChannelPools ISPN004016: Server not in cluster anymore(10.0.18.101:11222), removing from the pool.
[2/23/22 9:06:27:929 CET] 000001ab HOTROD I org.infinispan.client.hotrod.impl.transport.netty.ChannelFactory closeChannelPools ISPN004016: Server not in cluster anymore(10.0.18.102:11222), removing from the pool.
[2/23/22 9:06:27:929 CET] 000001ab HOTROD I org.infinispan.client.hotrod.impl.transport.netty.ChannelFactory closeChannelPools ISPN004016: Server not in cluster anymore(10.0.18.96:11222), removing from the pool.
[2/23/22 9:06:27:929 CET] 000001ab HOTROD I org.infinispan.client.hotrod.impl.transport.netty.ChannelFactory closeChannelPools ISPN004016: Server not in cluster anymore(10.0.18.97:11222), removing from the pool.
[2/23/22 9:06:28:059 CET] 000001ab HOTROD I org.infinispan.client.hotrod.impl.protocol.Codec20 readNewTopologyAndHash ISPN004006: Server sent new topology view (id=108, age=0) containing 4 addresses: [10.0.18.101:11222, 10.0.18.102:11222, 10.0.18.96:11222, 10.0.18.97:11222]
[2/23/22 9:06:28:060 CET] 000001ab HOTROD I org.infinispan.client.hotrod.impl.transport.netty.ChannelFactory updateTopologyInfo ISPN004014: New server added(10.0.18.101:11222), adding to the pool.
[2/23/22 9:06:28:062 CET] 000001ab HOTROD I org.infinispan.client.hotrod.impl.transport.netty.ChannelFactory updateTopologyInfo ISPN004014: New server added(10.0.18.102:11222), adding to the pool.
[2/23/22 9:06:28:063 CET] 000001ab HOTROD I org.infinispan.client.hotrod.impl.transport.netty.ChannelFactory updateTopologyInfo ISPN004014: New server added(10.0.18.96:11222), adding to the pool.
[2/23/22 9:06:28:065 CET] 000001ab HOTROD I org.infinispan.client.hotrod.impl.transport.netty.ChannelFactory updateTopologyInfo ISPN004014: New server added(10.0.18.97:11222), adding to the pool.
[2/23/22 9:06:28:082 CET] 000001ab HOTROD I org.infinispan.client.hotrod.impl.transport.netty.ChannelFactory closeChannelPools ISPN004016: Server not in cluster anymore(server4:11222), removing from the pool.
[2/23/22 9:06:28:083 CET] 000001ab HOTROD I org.infinispan.client.hotrod.impl.transport.netty.ChannelFactory closeChannelPools ISPN004016: Server not in cluster anymore(server2:11222), removing from the pool.
[2/23/22 9:06:28:083 CET] 000001ab HOTROD I org.infinispan.client.hotrod.impl.transport.netty.ChannelFactory closeChannelPools ISPN004016: Server not in cluster anymore(server3:11222), removing from the pool.
[2/23/22 9:06:28:083 CET] 000001ab HOTROD I org.infinispan.client.hotrod.impl.transport.netty.ChannelFactory closeChannelPools ISPN004016: Server not in cluster anymore(server1:11222), removing from the pool.
[2/23/22 9:06:33:056 CET] 000001ab HOTROD W org.infinispan.client.hotrod.impl.operations.RetryOnFailureOperation handleException ISPN004098: Closing connection [id: 0x4299851e, L:/10.0.20.160:45567 ! R:server4/10.0.18.102:11222] due to transport error
java.net.SocketTimeoutException: GetOperation{MY-CACHE, key=[B0x4A4B636F6D2E6475636174692E77612E..[77], flags=1, connection=server4/10.0.18.102:11222} timed out after 5000 ms
at org.infinispan.client.hotrod.impl.operations.HotRodOperation.run(HotRodOperation.java:185)
at io.netty.util.concurrent.PromiseTask.runTask(PromiseTask.java:98)
at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:170)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:384)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1160)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.lang.Thread.run(Thread.java:822)
NOTE
The character after the word HOTROD indicates the log level.
Example:
[2/23/22 9:06:28:083 CET] 000001ab HOTROD I org.infinispan.client.hotrod.impl.transport.netty.ChannelFactory
I = INFO
W = WARNING
D = DEBUG
E = ERROR
Related
I have been using STOMP and sockjs for some time now, but the error below happens at times.
12:57:30.976 [48] [tcp-client-loop-nio-4] INFO org.springframework.messaging.simp.stomp.StompBrokerRelayMessageHandler - TCP connection failure in session w6s21o5d: failed to forward DISCONNECT session=w6s21o5d
reactor.netty.channel.AbortedException: io.netty.channel.StacklessClosedChannelException
at reactor.netty.FutureMono$FutureSubscription.wrapError(FutureMono.java:186) ~[reactor-netty-core-1.0.7.jar!/:1.0.7]
at reactor.netty.FutureMono$FutureSubscription.operationComplete(FutureMono.java:177) [reactor-netty-core-1.0.7.jar!/:1.0.7]
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:578) [netty-common-4.1.65.Final.jar!/:4.1.65.Final]
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:552) [netty-common-4.1.65.Final.jar!/:4.1.65.Final]
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:491) [netty-common-4.1.65.Final.jar!/:4.1.65.Final]
at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:616) [netty-common-4.1.65.Final.jar!/:4.1.65.Final]
at io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:609) [netty-common-4.1.65.Final.jar!/:4.1.65.Final]
at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:117) [netty-common-4.1.65.Final.jar!/:4.1.65.Final]
at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetFailure(AbstractChannel.java:1021) [netty-transport-4.1.65.Final.jar!/:4.1.65.Final]
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:882) [netty-transport-4.1.65.Final.jar!/:4.1.65.Final]
at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1367) [netty-transport-4.1.65.Final.jar!/:4.1.65.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:717) [netty-transport-4.1.65.Final.jar!/:4.1.65.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:709) [netty-transport-4.1.65.Final.jar!/:4.1.65.Final]
at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:792) [netty-transport-4.1.65.Final.jar!/:4.1.65.Final]
at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:702) [netty-transport-4.1.65.Final.jar!/:4.1.65.Final]
at io.netty.handler.timeout.IdleStateHandler.write(IdleStateHandler.java:304) [netty-handler-4.1.65.Final.jar!/:4.1.65.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:717) [netty-transport-4.1.65.Final.jar!/:4.1.65.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:764) [netty-transport-4.1.65.Final.jar!/:4.1.65.Final]
at io.netty.channel.AbstractChannelHandlerContext$WriteTask.run(AbstractChannelHandlerContext.java:1071) [netty-transport-4.1.65.Final.jar!/:4.1.65.Final]
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164) [netty-common-4.1.65.Final.jar!/:4.1.65.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472) [netty-common-4.1.65.Final.jar!/:4.1.65.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:497) [netty-transport-4.1.65.Final.jar!/:4.1.65.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [netty-common-4.1.65.Final.jar!/:4.1.65.Final]
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.65.Final.jar!/:4.1.65.Final]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-common-4.1.65.Final.jar!/:4.1.65.Final]
at java.lang.Thread.run(Thread.java:829) [?:?]
Caused by: io.netty.channel.StacklessClosedChannelException
The disconnect message is sent from the frontend before page reload, as shown below.
window.addEventListener("beforeunload", function () {
    client.disconnect(function () {
        log("disconnect websocket..");
    });
});
No logic is written in the backend to handle this; spring-stomp handles the rest.
I have gone through the STOMP resources and there is still no explanation of why or in exactly what scenarios this happens. What could be the possible reasons for this error?
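Not an answer, but one way to at least observe these disconnects on the backend is a Spring application event listener. A minimal sketch, assuming a standard Spring WebSocket/STOMP setup (the listener class name is made up):
import org.springframework.context.event.EventListener;
import org.springframework.stereotype.Component;
import org.springframework.web.socket.messaging.SessionDisconnectEvent;

@Component
public class StompDisconnectLogger {

    // Fired when a STOMP session ends, whether via a clean DISCONNECT frame
    // or because the underlying WebSocket/TCP connection dropped.
    @EventListener
    public void onDisconnect(SessionDisconnectEvent event) {
        System.out.println("STOMP session " + event.getSessionId()
                + " closed with status " + event.getCloseStatus());
    }
}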
Spring boot version: 2.3.7.RELEASE
Mongo driver: 4.0.5
Amazon DocumentDB 4.0
The Netty dependency version comes along with the Spring Boot version (4.1.55.Final).
I am getting the error below when trying to connect to Amazon DocumentDB from Spring Boot with the CA-bundle cert.
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'noneService' defined in file [/app/classes/ai/teemo/teemoservice/service/NoneService.class]: Invocation of init method failed; nested exception is org.springframework.dao.DataAccessResourceFailureException: Timed out after 30000 ms while waiting for a server that matches com.mongodb.client.internal.MongoClientDelegate$1@48af5f38. Client view of cluster state is {type=REPLICA_SET, servers=[{address=twinci-docdb-prod.cluster-cxiyqoolhj3e.ap-southeast-1.docdb.amazonaws.com:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketWriteException: Exception sending message}, caused by {io.netty.handler.ssl.SslClosedEngineException: SSLEngine closed already}}]; nested exception is com.mongodb.MongoTimeoutException: Timed out after 30000 ms while waiting for a server that matches com.mongodb.client.internal.MongoClientDelegate$1@48af5f38. Client view of cluster state is {type=REPLICA_SET, servers=[{address=twinci-docdb-prod.cluster-cxiyqoolhj3e.ap-southeast-1.docdb.amazonaws.com:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketWriteException: Exception sending message}, caused by {io.netty.handler.ssl.SslClosedEngineException: SSLEngine closed already}}]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1794)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:594)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:516)
at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:324)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:234)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:322)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:202)
at org.springframework.beans.factory.config.DependencyDescriptor.resolveCandidate(DependencyDescriptor.java:276)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1307)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1227)
at org.springframework.beans.factory.support.ConstructorResolver.resolveAutowiredArgument(ConstructorResolver.java:886)
at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:790)
... 34 common frames omitted
Caused by: org.springframework.dao.DataAccessResourceFailureException: Timed out after 30000 ms while waiting for a server that matches com.mongodb.client.internal.MongoClientDelegate$1@48af5f38. Client view of cluster state is {type=REPLICA_SET, servers=[{address=twinci-docdb-prod.cluster-cxiyqoolhj3e.ap-southeast-1.docdb.amazonaws.com:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketWriteException: Exception sending message}, caused by {io.netty.handler.ssl.SslClosedEngineException: SSLEngine closed already}}]; nested exception is com.mongodb.MongoTimeoutException: Timed out after 30000 ms while waiting for a server that matches com.mongodb.client.internal.MongoClientDelegate$1@48af5f38. Client view of cluster state is {type=REPLICA_SET, servers=[{address=twinci-docdb-prod.cluster-cxiyqoolhj3e.ap-southeast-1.docdb.amazonaws.com:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketWriteException: Exception sending message}, caused by {io.netty.handler.ssl.SslClosedEngineException: SSLEngine closed already}}]
at org.springframework.data.mongodb.core.MongoExceptionTranslator.translateExceptionIfPossible(MongoExceptionTranslator.java:88)
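The SslClosedEngineException suggests the TLS handshake with DocumentDB is failing. A common setup is to import the CA bundle into a Java trust store and point the JVM and driver at it; below is a minimal sketch, assuming MongoDB driver 4.x and a trust store already built from rds-combined-ca-bundle.pem (the paths, credentials, and cluster endpoint are placeholders):
import com.mongodb.ConnectionString;
import com.mongodb.MongoClientSettings;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;

public class DocDbClientFactory {
    public static MongoClient create() {
        // Trust store containing the DocumentDB CA bundle (placeholder path)
        System.setProperty("javax.net.ssl.trustStore", "/app/certs/rds-truststore.jks");
        System.setProperty("javax.net.ssl.trustStorePassword", "changeit");

        // DocumentDB requires tls=true and does not support retryable writes
        ConnectionString uri = new ConnectionString(
                "mongodb://user:password@my-docdb-cluster:27017/?tls=true&replicaSet=rs0&retryWrites=false");
        return MongoClients.create(MongoClientSettings.builder()
                .applyConnectionString(uri)
                .build());
    }
}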
I'm trying to upload 250 files of approximately 5 MB each using the Azure Storage client, version 12.4.0, in my Spring Boot app. Right after performing this upload with JMeter I receive the error below; then, after waiting a few minutes, most of the files are uploaded, but some fail with errors. I've added all logs and code below.
<dependency>
<groupId>com.azure</groupId>
<artifactId>azure-storage-blob</artifactId>
<version>12.4.0</version>
</dependency>
Client Code
public class AzureStorageClient implements StorageClient {

    private final Logger LOGGER = LoggerFactory.getLogger(AzureStorageClient.class);
    private final BlobContainerClient containerClient;

    public AzureStorageClient(final AzureStorageServiceProvider provider, String storageContainer) {
        var serviceClient = provider.createServiceClient();
        this.containerClient = serviceClient.getBlobContainerClient(storageContainer);
        if (!this.containerClient.exists()) {
            throw new StorageClientInitException("No container exists with name " + storageContainer);
        }
    }

    @Override
    public void store(final String blobPath, final String originalFileName, final byte[] bytes) {
        final BlobClient blobClient = containerClient.getBlobClient(blobPath);
        final BlockBlobClient blockBlobClient = blobClient.getBlockBlobClient();
        try (ByteArrayInputStream inputStream = new ByteArrayInputStream(bytes)) {
            LOGGER.info("Started: upload file {} with original name {}", blobPath, originalFileName);
            blockBlobClient.upload(inputStream, bytes.length, true);
            LOGGER.info("Finished: file {} with original name {}", blobPath, originalFileName);
        } catch (BlobStorageException | IOException exc) {
            throw new StorageException(exc);
        }
    }
}
Errors Right After Upload:
2020-04-06 21:55:00.374 WARN 1700 --- [ctor-http-nio-3] r.netty.http.client.HttpClientConnect : [id: 0x441e3068, L:/192.168.0.11:55049 - R:somestorageaccount.blob.core.windows.net/11.11.111.11:443] The connection observed an error
javax.net.ssl.SSLException: handshake timed out
at io.netty.handler.ssl.SslHandler$5.run(SslHandler.java:2001) ~[netty-handler-4.1.45.Final.jar:4.1.45.Final]
at io.netty.util.concurrent.PromiseTask.runTask(PromiseTask.java:98) ~[netty-common-4.1.45.Final.jar:4.1.45.Final]
at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:170) ~[netty-common-4.1.45.Final.jar:4.1.45.Final]
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164) ~[netty-common-4.1.45.Final.jar:4.1.45.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472) ~[netty-common-4.1.45.Final.jar:4.1.45.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500) ~[netty-transport-4.1.45.Final.jar:4.1.45.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) ~[netty-common-4.1.45.Final.jar:4.1.45.Final]
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[netty-common-4.1.45.Final.jar:4.1.45.Final]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[netty-common-4.1.45.Final.jar:4.1.45.Final]
at java.base/java.lang.Thread.run(Thread.java:834) ~[na:na]
2020-04-06 21:55:01.717 WARN 1700 --- [ctor-http-nio-3] r.netty.http.client.HttpClientConnect : [id: 0x16c7448d, L:/192.168.0.11:55085 - R:somestorageaccount.blob.core.windows.net/11.11.111.11:443] The connection observed an error
javax.net.ssl.SSLException: handshake timed out
at io.netty.handler.ssl.SslHandler$5.run(SslHandler.java:2001) ~[netty-handler-4.1.45.Final.jar:4.1.45.Final]
at io.netty.util.concurrent.PromiseTask.runTask(PromiseTask.java:98) ~[netty-common-4.1.45.Final.jar:4.1.45.Final]
at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:170) ~[netty-common-4.1.45.Final.jar:4.1.45.Final]
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164) ~[netty-common-4.1.45.Final.jar:4.1.45.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472) ~[netty-common-4.1.45.Final.jar:4.1.45.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500) ~[netty-transport-4.1.45.Final.jar:4.1.45.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) ~[netty-common-4.1.45.Final.jar:4.1.45.Final]
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[netty-common-4.1.45.Final.jar:4.1.45.Final]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[netty-common-4.1.45.Final.jar:4.1.45.Final]
at java.base/java.lang.Thread.run(Thread.java:834) ~[na:na]
2020-04-06 21:55:03.028 WARN 1700 --- [ctor-http-nio-6] r.netty.http.client.HttpClientConnect : [id: 0x13bee15d, L:/192.168.0.11:55064 - R:somestorageaccount.blob.core.windows.net/11.11.111.11:443] The connection observed an error
javax.net.ssl.SSLException: handshake timed out
at io.netty.handler.ssl.SslHandler$5.run(SslHandler.java:2001) ~[netty-handler-4.1.45.Final.jar:4.1.45.Final]
at io.netty.util.concurrent.PromiseTask.runTask(PromiseTask.java:98) ~[netty-common-4.1.45.Final.jar:4.1.45.Final]
at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:170) ~[netty-common-4.1.45.Final.jar:4.1.45.Final]
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164) ~[netty-common-4.1.45.Final.jar:4.1.45.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472) ~[netty-common-4.1.45.Final.jar:4.1.45.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500) ~[netty-transport-4.1.45.Final.jar:4.1.45.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) ~[netty-common-4.1.45.Final.jar:4.1.45.Final]
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[netty-common-4.1.45.Final.jar:4.1.45.Final]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[netty-common-4.1.45.Final.jar:4.1.45.Final]
at java.base/java.lang.Thread.run(Thread.java:834) ~[na:na]
2020-04-06 21:55:03.957 WARN 1700 --- [tor-http-nio-12] r.netty.http.client.HttpClientConnect : [id: 0x196d50c9, L:/192.168.0.11:55118 - R:somestorageaccount.blob.core.windows.net/11.11.111.11:443] The connection observed an error
javax.net.ssl.SSLException: handshake timed out
at io.netty.handler.ssl.SslHandler$5.run(SslHandler.java:2001) ~[netty-handler-4.1.45.Final.jar:4.1.45.Final]
at io.netty.util.concurrent.PromiseTask.runTask(PromiseTask.java:98) ~[netty-common-4.1.45.Final.jar:4.1.45.Final]
at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:170) ~[netty-common-4.1.45.Final.jar:4.1.45.Final]
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164) ~[netty-common-4.1.45.Final.jar:4.1.45.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472) ~[netty-common-4.1.45.Final.jar:4.1.45.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500) ~[netty-transport-4.1.45.Final.jar:4.1.45.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) ~[netty-common-4.1.45.Final.jar:4.1.45.Final]
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[netty-common-4.1.45.Final.jar:4.1.45.Final]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[netty-common-4.1.45.Final.jar:4.1.45.Final]
at java.base/java.lang.Thread.run(Thread.java:834) ~[na:na]
2020-04-06 21:55:13.369 WARN 1700 --- [ctor-http-nio-8] r.netty.http.client.HttpClientConnect : [id: 0x5a704036, L:0.0.0.0/0.0.0.0:55054] The connection observed an error
reactor.netty.http.client.PrematureCloseException: Connection has been closed BEFORE response, while sending request body
After some time files are uploaded, but not all:
2020-04-06 21:58:38.989 INFO 1700 --- [io-8080-exec-34] c.c.f.cloud.AzureStorageClient : Finished: file 40fcbd8fc41b447f91fc9e01c11a79fb with original name oswietlenie.pdf
2020-04-06 21:58:51.835 WARN 1700 --- [ctor-http-nio-8] r.netty.http.client.HttpClientConnect : [id: 0xfc719110, L:0.0.0.0/0.0.0.0:55102] The connection observed an error
reactor.netty.http.client.PrematureCloseException: Connection has been closed BEFORE response, while sending request body
2020-04-06 21:58:56.033 WARN 1700 --- [tor-http-nio-12] r.netty.http.client.HttpClientConnect : [id: 0x870e83db, L:0.0.0.0/0.0.0.0:55082] The connection observed an error
reactor.netty.http.client.PrematureCloseException: Connection has been closed BEFORE response, while sending request body
2020-04-06 21:59:00.580 INFO 1700 --- [io-8080-exec-62] c.c.f.cloud.AzureStorageClient : Finished: file b007415bfd114a15b9fcbd3591f0b35a with original name oswietlenie.pdf
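Since the handshake timeouts appear only when all 250 uploads start at once, one mitigation to try is throttling the number of concurrent store calls so the client does not open hundreds of TLS connections at the same time. A minimal sketch with a plain java.util.concurrent.Semaphore wrapping the client from above (the limit of 20 is an arbitrary starting point to tune):
import java.util.concurrent.Semaphore;

public class ThrottledStorageClient {

    private final AzureStorageClient delegate;
    // Caps concurrent uploads; 20 is a guess to be tuned under load
    private final Semaphore permits = new Semaphore(20);

    public ThrottledStorageClient(AzureStorageClient delegate) {
        this.delegate = delegate;
    }

    public void store(String blobPath, String originalFileName, byte[] bytes) throws InterruptedException {
        permits.acquire();
        try {
            delegate.store(blobPath, originalFileName, bytes);
        } finally {
            permits.release();
        }
    }
}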
I would like to test on my system, with Spring Boot, whether the database is down. I have written a REST call for testing. When the database is up, it returns "Test", which is the correct result, but when the database is down, it returns nothing.
The goal is to make it return "Test" even when the database is down. Below is the error:
2020-04-27 17:27:00.003 ERROR 9536 [] --- [pool-1-thread-1] o.s.s.s.TaskUtils$LoggingErrorHandler : Unexpected error occurred in scheduled task.
org.springframework.transaction.CannotCreateTransactionException: Could not open JPA EntityManager for transaction; nested exception is org.hibernate.exception.JDBCConnectionException: Unable to acquire JDBC Connection
at org.springframework.orm.jpa.JpaTransactionManager.doBegin(JpaTransactionManager.java:450)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.getTransaction(AbstractPlatformTransactionManager.java:378)
at org.springframework.transaction.support.TransactionTemplate.execute(TransactionTemplate.java:137)
at org.springframework.session.jdbc.JdbcOperationsSessionRepository.cleanUpExpiredSessions(JdbcOperationsSessionRepository.java:589)
at org.springframework.session.jdbc.config.annotation.web.http.JdbcHttpSessionConfiguration.lambda$configureTasks$0(JdbcHttpSessionConfiguration.java:194)
at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54)
at org.springframework.scheduling.concurrent.ReschedulingRunnable.run(ReschedulingRunnable.java:93)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.hibernate.exception.JDBCConnectionException: Unable to acquire JDBC Connection
at org.hibernate.exception.internal.SQLExceptionTypeDelegate.convert(SQLExceptionTypeDelegate.java:48)
at org.hibernate.exception.internal.StandardSQLExceptionConverter.convert(StandardSQLExceptionConverter.java:42)
at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:111)
at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:97)
at org.hibernate.resource.jdbc.internal.LogicalConnectionManagedImpl.acquireConnectionIfNeeded(LogicalConnectionManagedImpl.java:109)
at org.hibernate.resource.jdbc.internal.LogicalConnectionManagedImpl.getPhysicalConnection(LogicalConnectionManagedImpl.java:136)
at org.hibernate.resource.jdbc.internal.LogicalConnectionManagedImpl.getConnectionForTransactionManagement(LogicalConnectionManagedImpl.java:254)
at org.hibernate.resource.jdbc.internal.LogicalConnectionManagedImpl.begin(LogicalConnectionManagedImpl.java:262)
at org.hibernate.resource.transaction.backend.jdbc.internal.JdbcResourceLocalTransactionCoordinatorImpl$TransactionDriverControlImpl.begin(JdbcResourceLocalTransactionCoordinatorImpl.java:214)
at org.hibernate.engine.transaction.internal.TransactionImpl.begin(TransactionImpl.java:56)
at org.springframework.orm.jpa.vendor.HibernateJpaDialect.beginTransaction(HibernateJpaDialect.java:162)
at org.springframework.orm.jpa.JpaTransactionManager.doBegin(JpaTransactionManager.java:400)
... 13 common frames omitted
Caused by: java.sql.SQLTransientConnectionException: HikariCP - Connection is not available, request timed out after 60000ms.
at com.zaxxer.hikari.pool.HikariPool.createTimeoutException(HikariPool.java:669)
at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:183)
at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:148)
at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:128)
at org.hibernate.engine.jdbc.connections.internal.DatasourceConnectionProviderImpl.getConnection(DatasourceConnectionProviderImpl.java:122)
at org.hibernate.internal.NonContextualJdbcConnectionAccess.obtainConnection(NonContextualJdbcConnectionAccess.java:35)
at org.hibernate.resource.jdbc.internal.LogicalConnectionManagedImpl.acquireConnectionIfNeeded(LogicalConnectionManagedImpl.java:106)
... 20 common frames omitted
Caused by: org.postgresql.util.PSQLException: Connection to X:X:X:X:xxxx refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:280)
at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:49)
at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:195)
at org.postgresql.Driver.makeConnection(Driver.java:454)
at org.postgresql.Driver.connect(Driver.java:256)
at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:117)
at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:123)
at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:365)
at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:194)
at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:460)
at com.zaxxer.hikari.pool.HikariPool.access$100(HikariPool.java:71)
at com.zaxxer.hikari.pool.HikariPool$PoolEntryCreator.call(HikariPool.java:699)
at com.zaxxer.hikari.pool.HikariPool$PoolEntryCreator.call(HikariPool.java:685)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
... 3 common frames omitted
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at org.postgresql.core.PGStream.<init>(PGStream.java:70)
at org.postgresql.core.v3.ConnectionFactoryImpl.tryConnect(ConnectionFactoryImpl.java:91)
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:192)
... 16 common frames omitted
And the test controller is below:
@RestController
public class TestController {

    @GetMapping(value = "/test")
    public String test() {
        System.out.println("test");
        return "Test";
    }
}
The application.properties is as below:
spring.datasource.hikari.maximumPoolSize=10
spring.datasource.hikari.idleTimeout=50000
spring.datasource.hikari.maxLifetime=2000000
spring.datasource.hikari.connectionTimeout=60000
spring.datasource.continue-on-error=true
spring.datasource.initialization-mode=never
spring.datasource.hikari.pool-name=KlmsHikariCP
spring.datasource.hikari.connection-test-query=SELECT 1
spring.datasource.hikari.initialization-fail-timeout= -1
spring.datasource.hikari.isolate-internal-queries=true
spring.datasource.hikari.allow-pool-suspension=true
spring.datasource.test-while-idle=true
spring.datasource.test-on-borrow=true
spring.datasource.test-on-return=false
spring.datasource.validation-interval=40000
spring.datasource.validation-query=SELECT 1 FROM DUAL
spring.datasource.time-between-eviction-runs-millis=5000
spring.datasource.min-evictable-idle-time-millis=5000
spring.datasource.remove-abandoned=true
spring.datasource.remove-abandoned-timeout=3
spring.datasource.log-abandoned=true
spring.datasource.log-validation-errors=true
################### Hibernate Configuration ##########################
spring.jpa.hibernate.ddl-auto=none
spring.jpa.show-sql=false
spring.jpa.properties.hibernate.temp.use_jdbc_metadata_defaults=true
spring.jpa.database-platform=org.hibernate.dialect.PostgreSQLDialect
Do you mean during the startup of Spring or while the application is running?
Your RestController currently does not access the DB, so I fail to see how the DB connection could result in errors; even if the DB is down, the controller should return "Test".
If you mean during startup you can try:
spring.datasource.continue-on-error=true
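If, instead, you want an endpoint that explicitly reports database availability while still always answering, a sketch like the following could work, assuming a JDBC DataSource is injectable (the endpoint path and messages are made up):
import java.sql.Connection;
import javax.sql.DataSource;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class DbStatusController {

    private final DataSource dataSource;

    public DbStatusController(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    @GetMapping("/db-status")
    public String dbStatus() {
        // Try to borrow a connection and answer either way instead of failing
        try (Connection connection = dataSource.getConnection()) {
            return connection.isValid(1) ? "DB up" : "DB down";
        } catch (Exception e) {
            return "DB down: " + e.getMessage();
        }
    }
}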
The cause of the error is:
Caused by: org.postgresql.util.PSQLException: Connection to X:X:X:X:xxxx refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
Please make sure the database host and port in your application.properties file are correct, and that the database instance is up and running on the mentioned host.
I have had my code on Jedis 2.7.x for several months, but I found I have the same issue as https://github.com/xetorthio/jedis/issues/1625 (JedisPool.getResource blocks when the Redis server restarts), so I switched from 2.7.x to 2.9.x. I am using the same code to create the JedisCluster, but it fails to connect and get data from Jedis.
I created a singleton JedisCluster instance, and every time I just use JedisCluster.get() to get data. I don't close connections for the cluster.
More info: my Redis is 3.2.8. I am using one single host as the JedisCluster and there is no slave. The reason I want to use JedisCluster mode instead of Jedis mode is that I have another system that needs to call a JedisCluster with multiple hosts; I want to test the code on the current single-host cluster first and then deploy it to that system.
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:1
cluster_size:1
cluster_current_epoch:0
cluster_my_epoch:0
cluster_stats_messages_sent:0
cluster_stats_messages_received:0
Same code worked in 2.7.x but not 2.9.x.
@Singleton
JedisCluster jedisCluster() {
    Set<HostAndPort> jedisClusterNodes = new HashSet<HostAndPort>();
    jedisClusterNodes.add(new HostAndPort("10.xx.xx.xx", 6379));
    GenericObjectPoolConfig jedisPoolConfig = new GenericObjectPoolConfig();
    jedisPoolConfig.setMaxTotal(16);
    jedisPoolConfig.setMaxIdle(16);
    return new JedisCluster(jedisClusterNodes, 2000, 1000, 5, jedisPoolConfig);
}
//When using cluster
jedisCluster.set("foo", "bar");
String value = jedisCluster.get("foo");
The code below worked in 2.9.x:
Jedis jedis = new Jedis("xx.xx.xxx.xx",6379);
System.out.println(jedis.get("foo"));
I SSHed to my Redis host and ran the CLI:
127.0.0.1:6379> get foo
"bar"
The exception that I am getting:
redis.clients.jedis.exceptions.JedisConnectionException: Could not get a resource from the pool
at redis.clients.util.Pool.getResource(Pool.java:53) ~[jedis-2.9.x.jar:?]
at redis.clients.jedis.JedisPool.getResource(JedisPool.java:226) ~[jedis-2.9.x.jar:?]
at redis.clients.jedis.JedisSlotBasedConnectionHandler.getConnectionFromSlot(JedisSlotBasedConnectionHandler.java:66) ~[jedis-2.9.x.jar:?]
at redis.clients.jedis.JedisClusterCommand.runWithRetries(JedisClusterCommand.java:116) ~[jedis-2.9.x.jar:?]
at redis.clients.jedis.JedisClusterCommand.runWithRetries(JedisClusterCommand.java:141) ~[jedis-2.9.x.jar:?]
at redis.clients.jedis.JedisClusterCommand.runWithRetries(JedisClusterCommand.java:141) ~[jedis-2.9.x.jar:?]
at redis.clients.jedis.JedisClusterCommand.runWithRetries(JedisClusterCommand.java:141) ~[jedis-2.9.x.jar:?]
at redis.clients.jedis.JedisClusterCommand.runWithRetries(JedisClusterCommand.java:141) ~[jedis-2.9.x.jar:?]
at redis.clients.jedis.JedisClusterCommand.run(JedisClusterCommand.java:31) ~[jedis-2.9.x.jar:?]
at redis.clients.jedis.JedisCluster.get(JedisCluster.java:124) ~[jedis-2.9.x.jar:?]
.........
Caused by: redis.clients.jedis.exceptions.JedisConnectionException: java.net.ConnectException: Connection refused (Connection refused)
at redis.clients.jedis.Connection.connect(Connection.java:207) ~[jedis-2.9.x.jar:?]
at redis.clients.jedis.BinaryClient.connect(BinaryClient.java:93) ~[jedis-2.9.x.jar:?]
at redis.clients.jedis.BinaryJedis.connect(BinaryJedis.java:1767) ~[jedis-2.9.x.jar:?]
at redis.clients.jedis.JedisFactory.makeObject(JedisFactory.java:106) ~[jedis-2.9.x.jar:?]
at org.apache.commons.pool2.impl.GenericObjectPool.create(GenericObjectPool.java:836) ~[commons-pool2-2.2.jar:2.2]
at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:434) ~[commons-pool2-2.2.jar:2.2]
at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:361) ~[commons-pool2-2.2.jar:2.2]
at redis.clients.util.Pool.getResource(Pool.java:49) ~[jedis-2.9.x.jar:?]
... 45 more
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method) ~[?:1.8.0_144]
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350) ~[?:1.8.0_144]
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206) ~[?:1.8.0_144]
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188) ~[?:1.8.0_144]
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) ~[?:1.8.0_144]
at java.net.Socket.connect(Socket.java:589) ~[?:1.8.0_144]
at redis.clients.jedis.Connection.connect(Connection.java:184) ~[jedis-2.9.x.jar:?]
at redis.clients.jedis.BinaryClient.connect(BinaryClient.java:93) ~[jedis-2.9.x.jar:?]
at redis.clients.jedis.BinaryJedis.connect(BinaryJedis.java:1767) ~[jedis-2.9.x.jar:?]
at redis.clients.jedis.JedisFactory.makeObject(JedisFactory.java:106) ~[jedis-2.9.x.jar:?]
at org.apache.commons.pool2.impl.GenericObjectPool.create(GenericObjectPool.java:836) ~[commons-pool2-2.2.jar:2.2]
at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:434) ~[commons-pool2-2.2.jar:2.2]
at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:361) ~[commons-pool2-2.2.jar:2.2]
at redis.clients.util.Pool.getResource(Pool.java:49) ~[jedis-2.9.x.jar:?]
... 45 more
Log on the Redis side:
3088:M 21 Mar 21:42:24.187 - Accepted xx.xx.xxx.xxx:36064
3088:M 21 Mar 21:42:24.197 - Accepted xx.xx.xxx.xxx:36065
3088:M 21 Mar 21:42:24.212 - Reading from client: Connection reset by peer
3088:M 21 Mar 21:42:28.481 - DB 0: 12407076 keys (0 volatile) in 16777216 slots HT.
3088:M 21 Mar 21:42:28.482 - 1 clients connected (0 slaves), 1852164712 bytes in use
My question was answered by Marcos Nils on the Jedis GitHub (https://github.com/xetorthio/jedis/issues/1792), which pointed me to https://github.com/xetorthio/jedis/pull/1211.
Previously I hadn't created a cluster on my single host, yet I could use it directly as a cluster in 2.7.x. Running the commands below solved my problem:
for i in {0..16383}; do bin/redis-cli -h <ipOfHost1> CLUSTER ADDSLOTS $i; done
cluster meet <ip> 6379
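After assigning the slots, the cluster state can also be verified from Java; a small sketch using the plain Jedis client from the question (the host is a placeholder):
import redis.clients.jedis.Jedis;

public class ClusterCheck {
    public static void main(String[] args) {
        Jedis jedis = new Jedis("10.xx.xx.xx", 6379);
        // Expect cluster_state:ok and cluster_slots_assigned:16384 after ADDSLOTS
        System.out.println(jedis.clusterInfo());
        jedis.close();
    }
}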