Azure storage stress test fails - Java

I'm trying to upload 250 files of approximately 5 MB each using my Azure Storage client (version 12.4.0) in my Spring Boot app. Right after starting the upload with JMeter I receive the errors below; after waiting a few minutes most of the files are uploaded, but some still fail with errors. I've added all logs and code below.
<dependency>
    <groupId>com.azure</groupId>
    <artifactId>azure-storage-blob</artifactId>
    <version>12.4.0</version>
</dependency>
Client Code
public class AzureStorageClient implements StorageClient {

    private static final Logger LOGGER = LoggerFactory.getLogger(AzureStorageClient.class);

    private final BlobContainerClient containerClient;

    public AzureStorageClient(final AzureStorageServiceProvider provider, String storageContainer) {
        var serviceClient = provider.createServiceClient();
        this.containerClient = serviceClient.getBlobContainerClient(storageContainer);
        if (!this.containerClient.exists()) {
            throw new StorageClientInitException("No container exists with name " + storageContainer);
        }
    }

    @Override
    public void store(final String blobPath, final String originalFileName, final byte[] bytes) {
        final BlobClient blobClient = containerClient.getBlobClient(blobPath);
        final BlockBlobClient blockBlobClient = blobClient.getBlockBlobClient();
        try (ByteArrayInputStream inputStream = new ByteArrayInputStream(bytes)) {
            LOGGER.info("Started: upload file {} with original name {}", blobPath, originalFileName);
            blockBlobClient.upload(inputStream, bytes.length, true);
            LOGGER.info("Finished: file {} with original name {}", blobPath, originalFileName);
        } catch (BlobStorageException | IOException exc) {
            throw new StorageException(exc);
        }
    }
}
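The handshake timeouts suggest that firing all 250 uploads at once opens more concurrent TLS connections than can complete their handshakes in time. One common mitigation, sketched here with only the JDK (a hypothetical helper, not part of the Azure SDK; the concurrency limit is illustrative), is to bound how many uploads run at once with a `Semaphore`:

```java
import java.util.concurrent.Semaphore;
import java.util.function.Consumer;

// Hypothetical helper: bounds how many uploads run concurrently so a JMeter
// burst of 250 requests cannot open hundreds of TLS connections at once.
public class ThrottledUploader {

    private final Semaphore permits;

    public ThrottledUploader(int maxConcurrentUploads) {
        this.permits = new Semaphore(maxConcurrentUploads);
    }

    // doUpload would wrap the store(...) call from the client above.
    public void upload(byte[] bytes, Consumer<byte[]> doUpload) throws InterruptedException {
        permits.acquire();          // blocks callers once the limit is reached
        try {
            doUpload.accept(bytes);
        } finally {
            permits.release();      // free the slot for the next upload
        }
    }
}
```

With, say, `new ThrottledUploader(8)`, at most 8 uploads (and TLS handshakes) are in flight at any moment; the rest wait instead of timing out.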
Errors Right After Upload:
2020-04-06 21:55:00.374 WARN 1700 --- [ctor-http-nio-3] r.netty.http.client.HttpClientConnect : [id: 0x441e3068, L:/192.168.0.11:55049 - R:somestorageaccount.blob.core.windows.net/11.11.111.11:443] The connection observed an error
javax.net.ssl.SSLException: handshake timed out
at io.netty.handler.ssl.SslHandler$5.run(SslHandler.java:2001) ~[netty-handler-4.1.45.Final.jar:4.1.45.Final]
at io.netty.util.concurrent.PromiseTask.runTask(PromiseTask.java:98) ~[netty-common-4.1.45.Final.jar:4.1.45.Final]
at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:170) ~[netty-common-4.1.45.Final.jar:4.1.45.Final]
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164) ~[netty-common-4.1.45.Final.jar:4.1.45.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472) ~[netty-common-4.1.45.Final.jar:4.1.45.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500) ~[netty-transport-4.1.45.Final.jar:4.1.45.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) ~[netty-common-4.1.45.Final.jar:4.1.45.Final]
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[netty-common-4.1.45.Final.jar:4.1.45.Final]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[netty-common-4.1.45.Final.jar:4.1.45.Final]
at java.base/java.lang.Thread.run(Thread.java:834) ~[na:na]
2020-04-06 21:55:01.717 WARN 1700 --- [ctor-http-nio-3] r.netty.http.client.HttpClientConnect : [id: 0x16c7448d, L:/192.168.0.11:55085 - R:somestorageaccount.blob.core.windows.net/11.11.111.11:443] The connection observed an error
javax.net.ssl.SSLException: handshake timed out
at io.netty.handler.ssl.SslHandler$5.run(SslHandler.java:2001) ~[netty-handler-4.1.45.Final.jar:4.1.45.Final]
at io.netty.util.concurrent.PromiseTask.runTask(PromiseTask.java:98) ~[netty-common-4.1.45.Final.jar:4.1.45.Final]
at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:170) ~[netty-common-4.1.45.Final.jar:4.1.45.Final]
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164) ~[netty-common-4.1.45.Final.jar:4.1.45.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472) ~[netty-common-4.1.45.Final.jar:4.1.45.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500) ~[netty-transport-4.1.45.Final.jar:4.1.45.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) ~[netty-common-4.1.45.Final.jar:4.1.45.Final]
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[netty-common-4.1.45.Final.jar:4.1.45.Final]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[netty-common-4.1.45.Final.jar:4.1.45.Final]
at java.base/java.lang.Thread.run(Thread.java:834) ~[na:na]
2020-04-06 21:55:03.028 WARN 1700 --- [ctor-http-nio-6] r.netty.http.client.HttpClientConnect : [id: 0x13bee15d, L:/192.168.0.11:55064 - R:somestorageaccount.blob.core.windows.net/11.11.111.11:443] The connection observed an error
javax.net.ssl.SSLException: handshake timed out
at io.netty.handler.ssl.SslHandler$5.run(SslHandler.java:2001) ~[netty-handler-4.1.45.Final.jar:4.1.45.Final]
at io.netty.util.concurrent.PromiseTask.runTask(PromiseTask.java:98) ~[netty-common-4.1.45.Final.jar:4.1.45.Final]
at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:170) ~[netty-common-4.1.45.Final.jar:4.1.45.Final]
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164) ~[netty-common-4.1.45.Final.jar:4.1.45.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472) ~[netty-common-4.1.45.Final.jar:4.1.45.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500) ~[netty-transport-4.1.45.Final.jar:4.1.45.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) ~[netty-common-4.1.45.Final.jar:4.1.45.Final]
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[netty-common-4.1.45.Final.jar:4.1.45.Final]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[netty-common-4.1.45.Final.jar:4.1.45.Final]
at java.base/java.lang.Thread.run(Thread.java:834) ~[na:na]
2020-04-06 21:55:03.957 WARN 1700 --- [tor-http-nio-12] r.netty.http.client.HttpClientConnect : [id: 0x196d50c9, L:/192.168.0.11:55118 - R:somestorageaccount.blob.core.windows.net/11.11.111.11:443] The connection observed an error
javax.net.ssl.SSLException: handshake timed out
at io.netty.handler.ssl.SslHandler$5.run(SslHandler.java:2001) ~[netty-handler-4.1.45.Final.jar:4.1.45.Final]
at io.netty.util.concurrent.PromiseTask.runTask(PromiseTask.java:98) ~[netty-common-4.1.45.Final.jar:4.1.45.Final]
at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:170) ~[netty-common-4.1.45.Final.jar:4.1.45.Final]
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164) ~[netty-common-4.1.45.Final.jar:4.1.45.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472) ~[netty-common-4.1.45.Final.jar:4.1.45.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500) ~[netty-transport-4.1.45.Final.jar:4.1.45.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) ~[netty-common-4.1.45.Final.jar:4.1.45.Final]
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[netty-common-4.1.45.Final.jar:4.1.45.Final]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[netty-common-4.1.45.Final.jar:4.1.45.Final]
at java.base/java.lang.Thread.run(Thread.java:834) ~[na:na]
2020-04-06 21:55:13.369 WARN 1700 --- [ctor-http-nio-8] r.netty.http.client.HttpClientConnect : [id: 0x5a704036, L:0.0.0.0/0.0.0.0:55054] The connection observed an error
reactor.netty.http.client.PrematureCloseException: Connection has been closed BEFORE response, while sending request body
After some time files are uploaded, but not all:
2020-04-06 21:58:38.989 INFO 1700 --- [io-8080-exec-34] c.c.f.cloud.AzureStorageClient : Finished: file 40fcbd8fc41b447f91fc9e01c11a79fb with original name oswietlenie.pdf
2020-04-06 21:58:51.835 WARN 1700 --- [ctor-http-nio-8] r.netty.http.client.HttpClientConnect : [id: 0xfc719110, L:0.0.0.0/0.0.0.0:55102] The connection observed an error
reactor.netty.http.client.PrematureCloseException: Connection has been closed BEFORE response, while sending request body
2020-04-06 21:58:56.033 WARN 1700 --- [tor-http-nio-12] r.netty.http.client.HttpClientConnect : [id: 0x870e83db, L:0.0.0.0/0.0.0.0:55082] The connection observed an error
reactor.netty.http.client.PrematureCloseException: Connection has been closed BEFORE response, while sending request body
2020-04-06 21:59:00.580 INFO 1700 --- [io-8080-exec-62] c.c.f.cloud.AzureStorageClient : Finished: file b007415bfd114a15b9fcbd3591f0b35a with original name oswietlenie.pdf

Related

Stomp gives TCP connection failure and disconnection error

I have been using STOMP and sockjs for some time now, but the error below happens at times.
12:57:30.976 [48] [tcp-client-loop-nio-4] INFO org.springframework.messaging.simp.stomp.StompBrokerRelayMessageHandler - TCP connection failure in session w6s21o5d: failed to forward DISCONNECT session=w6s21o5d
reactor.netty.channel.AbortedException: io.netty.channel.StacklessClosedChannelException
at reactor.netty.FutureMono$FutureSubscription.wrapError(FutureMono.java:186) ~[reactor-netty-core-1.0.7.jar!/:1.0.7]
at reactor.netty.FutureMono$FutureSubscription.operationComplete(FutureMono.java:177) [reactor-netty-core-1.0.7.jar!/:1.0.7]
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:578) [netty-common-4.1.65.Final.jar!/:4.1.65.Final]
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:552) [netty-common-4.1.65.Final.jar!/:4.1.65.Final]
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:491) [netty-common-4.1.65.Final.jar!/:4.1.65.Final]
at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:616) [netty-common-4.1.65.Final.jar!/:4.1.65.Final]
at io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:609) [netty-common-4.1.65.Final.jar!/:4.1.65.Final]
at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:117) [netty-common-4.1.65.Final.jar!/:4.1.65.Final]
at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetFailure(AbstractChannel.java:1021) [netty-transport-4.1.65.Final.jar!/:4.1.65.Final]
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:882) [netty-transport-4.1.65.Final.jar!/:4.1.65.Final]
at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1367) [netty-transport-4.1.65.Final.jar!/:4.1.65.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:717) [netty-transport-4.1.65.Final.jar!/:4.1.65.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:709) [netty-transport-4.1.65.Final.jar!/:4.1.65.Final]
at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:792) [netty-transport-4.1.65.Final.jar!/:4.1.65.Final]
at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:702) [netty-transport-4.1.65.Final.jar!/:4.1.65.Final]
at io.netty.handler.timeout.IdleStateHandler.write(IdleStateHandler.java:304) [netty-handler-4.1.65.Final.jar!/:4.1.65.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:717) [netty-transport-4.1.65.Final.jar!/:4.1.65.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:764) [netty-transport-4.1.65.Final.jar!/:4.1.65.Final]
at io.netty.channel.AbstractChannelHandlerContext$WriteTask.run(AbstractChannelHandlerContext.java:1071) [netty-transport-4.1.65.Final.jar!/:4.1.65.Final]
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164) [netty-common-4.1.65.Final.jar!/:4.1.65.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472) [netty-common-4.1.65.Final.jar!/:4.1.65.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:497) [netty-transport-4.1.65.Final.jar!/:4.1.65.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [netty-common-4.1.65.Final.jar!/:4.1.65.Final]
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.65.Final.jar!/:4.1.65.Final]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-common-4.1.65.Final.jar!/:4.1.65.Final]
at java.lang.Thread.run(Thread.java:829) [?:?]
Caused by: io.netty.channel.StacklessClosedChannelException
The disconnect message is sent from the frontend before the page reload, as shown below.
window.addEventListener("beforeunload", function () {
    client.disconnect(function () {
        log("disconnect websocket..");
    });
});
No backend logic is written to handle this; Spring STOMP handles the rest.
I have gone through the STOMP resources and there is still no explanation of why, or in exactly what scenarios, this happens. What could be the possible reasons for this error?
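For context, the `AbortedException` wrapping `StacklessClosedChannelException` seems to indicate that the broker relay tried to write the forwarded DISCONNECT frame to a TCP connection the other side had already closed. A minimal plain-socket sketch of that race (illustration only, no STOMP involved):

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

// Sketch of the race: the peer closes the connection first, and a later
// write to that channel fails, much like forwarding DISCONNECT to a
// broker connection that is already gone.
public class ClosedChannelDemo {

    // Returns true when writing to the peer-closed socket raised an IOException.
    public static boolean writeAfterPeerClose() throws Exception {
        try (ServerSocket server = new ServerSocket(0);
             Socket client = new Socket("localhost", server.getLocalPort())) {
            Socket peer = server.accept();
            peer.close();                       // the peer goes away first
            Thread.sleep(100);                  // let the FIN/RST arrive
            OutputStream out = client.getOutputStream();
            try {
                // early writes may still land in local buffers, so keep
                // writing until the closed connection surfaces as an error
                for (int i = 0; i < 100; i++) {
                    out.write(new byte[1024]);
                    out.flush();
                }
                return false;                   // never saw the failure
            } catch (IOException expected) {
                return true;                    // "broken pipe" / "connection reset"
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("write after peer close failed: " + writeAfterPeerClose());
    }
}
```

In the STOMP case the write happens inside Netty's event loop, so the failure surfaces as the logged exception rather than a thrown `IOException`.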

Infinispan HOTROD client throwing intermittent java.net.SocketTimeoutException: GetOperation

I'm using the Infinispan Hot Rod client in a Java application running on an IBM Liberty app server with JDK 8.
The Hot Rod client (library version 12.1.11.Final-redhat-00001) is set up via org.infinispan.jcache.remote.JCachingProvider:
@PostConstruct
private void setUp() {
    LOGGER.info("START [setUp] CACHE");
    File conf = new File(System.getenv("CLIENT_HOTROD_FILE_PATH"));
    URI uri = conf.toURI();
    // Retrieve the system-wide cache manager via org.infinispan.jcache.remote.JCachingProvider
    javax.cache.CacheManager cacheManager = Caching.getCachingProvider("org.infinispan.jcache.remote.JCachingProvider")
            .getCacheManager(uri, this.getClass().getClassLoader(), null);
    this.cache = cacheManager.getCache(DATAGRIDKEY);
    LOGGER.info("END [setUp] cache " + this.cache.getName());
}
HOT-ROD client config file:
infinispan.client.hotrod.server_list=server1.x.xx.xxx:11222;server2.x.xx.xxx:11222;server3.x.xx.xxx:11222;server4.x.xx.xxx:11222
infinispan.client.hotrod.auth_username=user-app
infinispan.client.hotrod.auth_password=password
infinispan.client.hotrod.auth_realm=default
infinispan.client.hotrod.sasl_mechanism=SCRAM-SHA-512
infinispan.client.hotrod.client_intelligence=HASH_DISTRIBUTION_AWARE
The full list of Hot Rod client config properties is available in the Infinispan documentation.
On the server side, the Red Hat Data Grid (version 8.2.3) cache is configured as follows:
{
  "distributed-cache": {
    "mode": "ASYNC",
    "remote-timeout": 17500,
    "state-transfer": {
      "timeout": 60000
    },
    "encoding": {
      "key": {
        "media-type": "text/plain"
      },
      "value": {
        "media-type": "application/x-protostream"
      }
    },
    "locking": {
      "concurrency-level": 1000,
      "acquire-timeout": 15000,
      "striping": false
    },
    "statistics": true
  }
}
Sometimes, while the application is handling requests, a java.net.SocketTimeoutException: GetOperation occurs on the application side, with the following stack trace:
[2/15/22 17:24:39:445 CET] 00000573 HOTROD W org.infinispan.client.hotrod.impl.operations.RetryOnFailureOperation handleException ISPN004098: Closing connection [id: 0x9a1a8fb2, L:/10.0.20.160:55295 ! R:10.0.18.97/10.0.18.97:11222] due to transport error java.net.SocketTimeoutException: GetOperation{Cache-Name-Test, key=[B0x4A6C636F6D2E6475636174692E77612E..[110], flags=1, connection=10.0.18.97/10.0.18.97:11222} timed out after 60000 ms
at org.infinispan.client.hotrod.impl.operations.HotRodOperation.run(HotRodOperation.java:185)
at io.netty.util.concurrent.PromiseTask.runTask(PromiseTask.java:98)
at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:170)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:384)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1160)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.lang.Thread.run(Thread.java:822)
The issue occurs more frequently when the app receives a massive number of GET-key requests, but it sometimes also occurs under light load.
Do you know how to resolve this issue, or have any suggestions?
Thanks
To understand the client's behavior, I edited the Hot Rod client config; the new configuration changes are summarized below:
infinispan.client.hotrod.server_list=server1:11222;server2:11222;server3:11222;server4:11222
infinispan.client.hotrod.auth_username=user-app
infinispan.client.hotrod.auth_password=password
infinispan.client.hotrod.auth_realm=default
infinispan.client.hotrod.sasl_mechanism=SCRAM-SHA-512
infinispan.client.hotrod.socket_timeout=15000
infinispan.client.hotrod.connect_timeout=60000
infinispan.client.hotrod.max_retries=5
# Connection pool
infinispan.client.hotrod.connection_pool.max_active=50
infinispan.client.hotrod.connection_pool.exhausted_action=WAIT
infinispan.client.hotrod.connection_pool.max_wait=20000
infinispan.client.hotrod.connection_pool.min_idle=10
#default is 1800000
infinispan.client.hotrod.connection_pool.min_evictable_idle_time=600000
infinispan.client.hotrod.connection_pool.max_pending_requests=846
I noticed the effect of changing infinispan.client.hotrod.socket_timeout to 15000 milliseconds.
Watching the app log file I saw the same error, but now the SocketTimeoutException is raised after 15000 ms instead of 60000 ms. With the new setting the request completes successfully even though the exception is logged.
My understanding is that sometimes there are dead connections in the pool, and the client searches for another live connection in the pool and retries the GetOperation on it.
As written in the org.infinispan.client.hotrod.impl.operations.RetryOnFailureOperation class in the Hot Rod library:
/**
 * Base class for all the operations that need retry logic: if the operation fails due to connection problems, try with
 * another available connection.
 *
 * @author Mircea.Markus@jboss.com
 * @since 4.1
 */
In a nutshell, I need to keep the connections in the pool alive, but I don't know which Hot Rod client property controls that.
As a next step I would like to set infinispan.client.hotrod.tcp_keep_alive=true to test whether the behavior changes; after that I will update the post.
I tried changing some configuration properties, aiming for a reasonable trade-off between retry operations and timeouts.
HOT-ROD client config:
infinispan.client.hotrod.server_list=server1:11222;server2:11222;server3:11222;server4:11222
infinispan.client.hotrod.auth_username=app-user
infinispan.client.hotrod.auth_password=password
infinispan.client.hotrod.auth_realm=default
infinispan.client.hotrod.sasl_mechanism=SCRAM-SHA-512
infinispan.client.hotrod.socket_timeout=5000
infinispan.client.hotrod.connect_timeout=20000
infinispan.client.hotrod.max_retries=5
# Connection pool
infinispan.client.hotrod.connection_pool.max_active=100
infinispan.client.hotrod.connection_pool.exhausted_action=WAIT
infinispan.client.hotrod.connection_pool.max_wait=20000
infinispan.client.hotrod.connection_pool.min_idle=50
#default is 1800000
infinispan.client.hotrod.connection_pool.min_evictable_idle_time=600000
infinispan.client.hotrod.connection_pool.max_pending_requests=846
infinispan.client.hotrod.client_intelligence=HASH_DISTRIBUTION_AWARE
infinispan.client.hotrod.force_return_values=false
#default is false
infinispan.client.hotrod.tcp_keep_alive=true
The changes made are:
infinispan.client.hotrod.socket_timeout reduced to 5000ms
infinispan.client.hotrod.connection_pool.min_idle incremented to 50
infinispan.client.hotrod.connect_timeout reduced to 20000ms
infinispan.client.hotrod.max_retries reduced to 5
infinispan.client.hotrod.connection_pool.max_active incremented to 100
Tested in the QA environment: from the application log I can say that, rarely, after 4 RetryOnFailureOperation retries each ending in
java.net.SocketTimeoutException: GetOperation timed out after 5000 ms
the Hot Rod client performs the updateTopologyInfo action.
This client behavior seems to be only logging generated by the async thread that runs in the background to check that the sockets are still alive.
[2/23/22 9:06:12:923 CET] 000001ab HOTROD W org.infinispan.client.hotrod.impl.operations.RetryOnFailureOperation handleException ISPN004098: Closing connection [id: 0xa5345361, L:/10.0.20.160:44021 - R:10.0.18.102/10.0.18.102:11222] due to transport error
java.net.SocketTimeoutException: GetOperation{MY-CACHE, key=[B0x4A4B636F6D2E6475636174692E77612E..[77], flags=1, connection=10.0.18.102/10.0.18.102:11222} timed out after 5000 ms
at org.infinispan.client.hotrod.impl.operations.HotRodOperation.run(HotRodOperation.java:185)
at io.netty.util.concurrent.PromiseTask.runTask(PromiseTask.java:98)
at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:170)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:384)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1160)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.lang.Thread.run(Thread.java:822)
[2/23/22 9:06:17:924 CET] 000001a5 HOTROD W org.infinispan.client.hotrod.impl.operations.RetryOnFailureOperation handleException ISPN004098: Closing connection [id: 0x54dacb9b, L:/10.0.20.160:42115 - R:10.0.18.97/10.0.18.97:11222] due to transport error
java.net.SocketTimeoutException: GetOperation{MY-CACHE, key=[B0x4A4B636F6D2E6475636174692E77612E..[77], flags=1, connection=10.0.18.97/10.0.18.97:11222} timed out after 5000 ms
at org.infinispan.client.hotrod.impl.operations.HotRodOperation.run(HotRodOperation.java:185)
at io.netty.util.concurrent.PromiseTask.runTask(PromiseTask.java:98)
at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:170)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:384)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1160)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.lang.Thread.run(Thread.java:822)
[2/23/22 9:06:22:925 CET] 000001a5 HOTROD W org.infinispan.client.hotrod.impl.operations.RetryOnFailureOperation handleException ISPN004098: Closing connection [id: 0xe3e6fd98, L:/10.0.20.160:43629 - R:10.0.18.101/10.0.18.101:11222] due to transport error
java.net.SocketTimeoutException: GetOperation{MY-CACHE, key=[B0x4A4B636F6D2E6475636174692E77612E..[77], flags=1, connection=10.0.18.101/10.0.18.101:11222} timed out after 5000 ms
at org.infinispan.client.hotrod.impl.operations.HotRodOperation.run(HotRodOperation.java:185)
at io.netty.util.concurrent.PromiseTask.runTask(PromiseTask.java:98)
at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:170)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:384)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1160)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.lang.Thread.run(Thread.java:822)
[2/23/22 9:06:27:926 CET] 000001ab HOTROD W org.infinispan.client.hotrod.impl.operations.RetryOnFailureOperation handleException ISPN004098: Closing connection [id: 0x52a4bd4a, L:/10.0.20.160:32912 - R:10.0.18.96/10.0.18.96:11222] due to transport error
java.net.SocketTimeoutException: GetOperation{MY-CACHE, key=[B0x4A4B636F6D2E6475636174692E77612E..[77], flags=1, connection=10.0.18.96/10.0.18.96:11222} timed out after 5000 ms
at org.infinispan.client.hotrod.impl.operations.HotRodOperation.run(HotRodOperation.java:185)
at io.netty.util.concurrent.PromiseTask.runTask(PromiseTask.java:98)
at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:170)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:384)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1160)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.lang.Thread.run(Thread.java:822)
[2/23/22 9:06:27:927 CET] 000001ab HOTROD I org.infinispan.client.hotrod.impl.transport.netty.ChannelFactory updateTopologyInfo ISPN004014: New server added(server4:11222), adding to the pool.
[2/23/22 9:06:27:927 CET] 000001ab HOTROD I org.infinispan.client.hotrod.impl.transport.netty.ChannelFactory updateTopologyInfo ISPN004014: New server added(server2:11222), adding to the pool.
[2/23/22 9:06:27:928 CET] 000001ab HOTROD I org.infinispan.client.hotrod.impl.transport.netty.ChannelFactory updateTopologyInfo ISPN004014: New server added(server3:11222), adding to the pool.
[2/23/22 9:06:27:928 CET] 000001ab HOTROD I org.infinispan.client.hotrod.impl.transport.netty.ChannelFactory updateTopologyInfo ISPN004014: New server added(server1:11222), adding to the pool.
[2/23/22 9:06:27:928 CET] 000001ab HOTROD I org.infinispan.client.hotrod.impl.transport.netty.ChannelFactory closeChannelPools ISPN004016: Server not in cluster anymore(10.0.18.101:11222), removing from the pool.
[2/23/22 9:06:27:928 CET] 000001ab HOTROD I org.infinispan.client.hotrod.impl.transport.netty.ChannelFactory closeChannelPools ISPN004016: Server not in cluster anymore(10.0.18.102:11222), removing from the pool.
[2/23/22 9:06:27:928 CET] 000001ab HOTROD I org.infinispan.client.hotrod.impl.transport.netty.ChannelFactory closeChannelPools ISPN004016: Server not in cluster anymore(10.0.18.96:11222), removing from the pool.
[2/23/22 9:06:27:929 CET] 000001ab HOTROD I org.infinispan.client.hotrod.impl.transport.netty.ChannelFactory closeChannelPools ISPN004016: Server not in cluster anymore(10.0.18.97:11222), removing from the pool.
[2/23/22 9:06:27:929 CET] 000001ab HOTROD I org.infinispan.client.hotrod.impl.transport.netty.ChannelFactory closeChannelPools ISPN004016: Server not in cluster anymore(10.0.18.101:11222), removing from the pool.
[2/23/22 9:06:27:929 CET] 000001ab HOTROD I org.infinispan.client.hotrod.impl.transport.netty.ChannelFactory closeChannelPools ISPN004016: Server not in cluster anymore(10.0.18.102:11222), removing from the pool.
[2/23/22 9:06:27:929 CET] 000001ab HOTROD I org.infinispan.client.hotrod.impl.transport.netty.ChannelFactory closeChannelPools ISPN004016: Server not in cluster anymore(10.0.18.96:11222), removing from the pool.
[2/23/22 9:06:27:929 CET] 000001ab HOTROD I org.infinispan.client.hotrod.impl.transport.netty.ChannelFactory closeChannelPools ISPN004016: Server not in cluster anymore(10.0.18.97:11222), removing from the pool.
[2/23/22 9:06:28:059 CET] 000001ab HOTROD I org.infinispan.client.hotrod.impl.protocol.Codec20 readNewTopologyAndHash ISPN004006: Server sent new topology view (id=108, age=0) containing 4 addresses: [10.0.18.101:11222, 10.0.18.102:11222, 10.0.18.96:11222, 10.0.18.97:11222]
[2/23/22 9:06:28:060 CET] 000001ab HOTROD I org.infinispan.client.hotrod.impl.transport.netty.ChannelFactory updateTopologyInfo ISPN004014: New server added(10.0.18.101:11222), adding to the pool.
[2/23/22 9:06:28:062 CET] 000001ab HOTROD I org.infinispan.client.hotrod.impl.transport.netty.ChannelFactory updateTopologyInfo ISPN004014: New server added(10.0.18.102:11222), adding to the pool.
[2/23/22 9:06:28:063 CET] 000001ab HOTROD I org.infinispan.client.hotrod.impl.transport.netty.ChannelFactory updateTopologyInfo ISPN004014: New server added(10.0.18.96:11222), adding to the pool.
[2/23/22 9:06:28:065 CET] 000001ab HOTROD I org.infinispan.client.hotrod.impl.transport.netty.ChannelFactory updateTopologyInfo ISPN004014: New server added(10.0.18.97:11222), adding to the pool.
[2/23/22 9:06:28:082 CET] 000001ab HOTROD I org.infinispan.client.hotrod.impl.transport.netty.ChannelFactory closeChannelPools ISPN004016: Server not in cluster anymore(server4:11222), removing from the pool.
[2/23/22 9:06:28:083 CET] 000001ab HOTROD I org.infinispan.client.hotrod.impl.transport.netty.ChannelFactory closeChannelPools ISPN004016: Server not in cluster anymore(server2:11222), removing from the pool.
[2/23/22 9:06:28:083 CET] 000001ab HOTROD I org.infinispan.client.hotrod.impl.transport.netty.ChannelFactory closeChannelPools ISPN004016: Server not in cluster anymore(server3:11222), removing from the pool.
[2/23/22 9:06:28:083 CET] 000001ab HOTROD I org.infinispan.client.hotrod.impl.transport.netty.ChannelFactory closeChannelPools ISPN004016: Server not in cluster anymore(server1:11222), removing from the pool.
[2/23/22 9:06:33:056 CET] 000001ab HOTROD W org.infinispan.client.hotrod.impl.operations.RetryOnFailureOperation handleException ISPN004098: Closing connection [id: 0x4299851e, L:/10.0.20.160:45567 ! R:server4/10.0.18.102:11222] due to transport error
java.net.SocketTimeoutException: GetOperation{MY-CACHE, key=[B0x4A4B636F6D2E6475636174692E77612E..[77], flags=1, connection=server4/10.0.18.102:11222} timed out after 5000 ms
at org.infinispan.client.hotrod.impl.operations.HotRodOperation.run(HotRodOperation.java:185)
at io.netty.util.concurrent.PromiseTask.runTask(PromiseTask.java:98)
at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:170)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:384)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1160)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.lang.Thread.run(Thread.java:822)
NOTE
The character after the word HOTROD indicates the log level.
Example:
[2/23/22 9:06:28:083 CET] 000001ab HOTROD I org.infinispan.client.hotrod.impl.transport.netty.ChannelFactory
I = INFO
W = WARNING
D = DEBUG
E = ERROR
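The `GetOperation ... timed out after 5000 ms` lines match the Hot Rod client's default socket timeout. One avenue worth checking is raising that timeout (and the retry count) in `hotrod-client.properties`; the property names below are from the Infinispan client documentation, while the server list and the concrete values are placeholders for this setup:

```
# hotrod-client.properties — sketch; server names and values are assumptions
infinispan.client.hotrod.server_list = server1:11222;server2:11222;server3:11222
# Default socket timeout is 60s in recent clients; the 5000 ms seen in the logs
# suggests it was lowered somewhere, so raise it explicitly (milliseconds):
infinispan.client.hotrod.socket_timeout = 20000
infinispan.client.hotrod.connect_timeout = 10000
infinispan.client.hotrod.max_retries = 3
```

Note that a timeout of this kind can also indicate a genuinely overloaded or unreachable server (the ISPN004016 "Server not in cluster anymore" lines above point in that direction), in which case raising the client timeout only delays the failure.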

org.redisson.client.handler.PingConnectionHandler$1.run(PingConnectionHandler.java:79)

I am using the Redisson API ('org.redisson:redisson:3.13.6') to consume a Redis (Redis 6.2.5) stream in my Java project. After running for months, it shows this error:
[02:45:28:123] [ERROR] - org.redisson.client.handler.PingConnectionHandler$1.run(PingConnectionHandler.java:79) - Unable to send PING command over channel: [id: 0x5aa1ca8d, L:0.0.0.0/0.0.0.0:56714]
org.redisson.client.RedisTimeoutException: Command execution timeout for command: (PING), params: [], Redis client: [addr=redis://cruise-redis-master.reddwarf-cache.svc.cluster.local:6379]
at org.redisson.client.RedisConnection.lambda$async$1(RedisConnection.java:207) ~[redisson-3.13.6.jar!/:3.13.6]
at org.redisson.client.RedisConnection$$Lambda$952/0x000000003818de40.run(Unknown Source) ~[?:?]
at io.netty.util.HashedWheelTimer$HashedWheelTimeout.run(HashedWheelTimer.java:715) [netty-common-4.1.70.Final.jar!/:4.1.70.Final]
at io.netty.util.concurrent.ImmediateExecutor.execute(ImmediateExecutor.java:34) [netty-common-4.1.70.Final.jar!/:4.1.70.Final]
at io.netty.util.HashedWheelTimer$HashedWheelTimeout.expire(HashedWheelTimer.java:703) [netty-common-4.1.70.Final.jar!/:4.1.70.Final]
at io.netty.util.HashedWheelTimer$HashedWheelBucket.expireTimeouts(HashedWheelTimer.java:790) [netty-common-4.1.70.Final.jar!/:4.1.70.Final]
at io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:503) [netty-common-4.1.70.Final.jar!/:4.1.70.Final]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-common-4.1.70.Final.jar!/:4.1.70.Final]
at java.lang.Thread.run(Thread.java:836) [?:?]
[02:45:28:126] [ERROR] - org.redisson.client.handler.PingConnectionHandler$1.run(PingConnectionHandler.java:79) - Unable to send PING command over channel: [id: 0x306c5661, L:0.0.0.0/0.0.0.0:56750]
org.redisson.client.RedisTimeoutException: Command execution timeout for command: (PING), params: [], Redis client: [addr=redis://cruise-redis-master.reddwarf-cache.svc.cluster.local:6379]
at org.redisson.client.RedisConnection.lambda$async$1(RedisConnection.java:207) ~[redisson-3.13.6.jar!/:3.13.6]
at org.redisson.client.RedisConnection$$Lambda$952/0x000000003818de40.run(Unknown Source) ~[?:?]
at io.netty.util.HashedWheelTimer$HashedWheelTimeout.run(HashedWheelTimer.java:715) [netty-common-4.1.70.Final.jar!/:4.1.70.Final]
at io.netty.util.concurrent.ImmediateExecutor.execute(ImmediateExecutor.java:34) [netty-common-4.1.70.Final.jar!/:4.1.70.Final]
at io.netty.util.HashedWheelTimer$HashedWheelTimeout.expire(HashedWheelTimer.java:703) [netty-common-4.1.70.Final.jar!/:4.1.70.Final]
at io.netty.util.HashedWheelTimer$HashedWheelBucket.expireTimeouts(HashedWheelTimer.java:790) [netty-common-4.1.70.Final.jar!/:4.1.70.Final]
at io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:503) [netty-common-4.1.70.Final.jar!/:4.1.70.Final]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-common-4.1.70.Final.jar!/:4.1.70.Final]
at java.lang.Thread.run(Thread.java:836) [?:?]
I can connect to Redis successfully; some channels work fine, while others fail. Is anyone facing the same problem? Why would this happen, and what should I do to fix it?
My bad, I copied the wrong URL. The correct one is: https://github.com/redisson/redisson/wiki/16.-FAQ#q-i-saw-a-redistimeoutexception-what-does-it-mean-what-shall-i-do-can-redisson-team-fix-it
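The FAQ linked above suggests that `RedisTimeoutException` on PING is usually a symptom of command latency or idle connections being dropped, and that raising the command timeout and the ping interval can help. A minimal sketch of a Redisson YAML configuration along those lines, assuming a single-server setup; the address is taken from the logs above, and the concrete timeout values are assumptions to tune for your environment:

```
# redisson.yaml — sketch; timeout values are assumptions
singleServerConfig:
  address: "redis://cruise-redis-master.reddwarf-cache.svc.cluster.local:6379"
  # Command execution timeout in ms (default 3000); the PING above hit this limit
  timeout: 10000
  # Interval in ms between keep-alive PINGs, which also detects dead connections
  pingConnectionInterval: 30000
  # Time in ms to wait for a connection from the pool
  connectTimeout: 10000
```

The same options are available programmatically via `Config.useSingleServer().setTimeout(...)` and `setPingConnectionInterval(...)` if you build the `RedissonClient` in code.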

Spring cloud gateway AbstractErrorWebExceptionHandler

I've created a global route for all incoming requests and am running into errors from AbstractErrorWebExceptionHandler.
My application.yml:
spring:
  cloud:
    gateway:
      routes:
        - id: global_route
          uri: http://localhost:8082
          predicates:
            - Path=/**
          filters:
            - RewritePath=/service(?<segment>/?.*), $\{segment}
        - id: authorization_route
          uri: http://localhost:8082
          predicates:
            - Path=/key/login
          filters:
            - JWTFilter=RSA512,HS512
Logs when I run the application:
2020-10-28 09:31:58.606 ERROR 9184 --- [ctor-http-nio-1] a.w.r.e.AbstractErrorWebExceptionHandler : [6d5d3da0] 500 Server Error for HTTP GET "/keyfd"
io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: no further information: localhost/127.0.0.1:8082
Suppressed: reactor.core.publisher.FluxOnAssembly$OnAssemblyException:
Error has been observed at the following site(s):
|_ checkpoint ⇢ org.springframework.cloud.gateway.filter.WeightCalculatorWebFilter [DefaultWebFilterChain]
|_ checkpoint ⇢ HTTP GET "/keyfd" [ExceptionHandlingWebHandler]
Stack trace:
Caused by: java.net.ConnectException: Connection refused: no further information
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.8.0_221]
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) ~[na:1.8.0_221]
at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:330) ~[netty-transport-4.1.52.Final.jar:4.1.52.Final]
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:334) ~[netty-transport-4.1.52.Final.jar:4.1.52.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:702) ~[netty-transport-4.1.52.Final.jar:4.1.52.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650) ~[netty-transport-4.1.52.Final.jar:4.1.52.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576) ~[netty-transport-4.1.52.Final.jar:4.1.52.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) ~[netty-transport-4.1.52.Final.jar:4.1.52.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) ~[netty-common-4.1.52.Final.jar:4.1.52.Final]
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[netty-common-4.1.52.Final.jar:4.1.52.Final]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[netty-common-4.1.52.Final.jar:4.1.52.Final]
at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_221]
Thank you for your help.
The cause of the error is that I specified the route uri as the same address the gateway application itself was running on: the application started on localhost:8082, and the uri for the route was also localhost:8082, so the gateway was forwarding requests back to itself.
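The fix is to point the route uri at the downstream service rather than at the gateway itself. A sketch of the corrected application.yml, assuming the downstream service listens on port 8081 (that port is an assumption; substitute the real address of your backend):

```
spring:
  cloud:
    gateway:
      routes:
        - id: global_route
          # Must be the backend service's address, NOT the gateway's own port
          uri: http://localhost:8081
          predicates:
            - Path=/**
          filters:
            - RewritePath=/service(?<segment>/?.*), $\{segment}
```

With the gateway on 8082 and the backend on 8081, a request to localhost:8082/keyfd is now proxied to localhost:8081 instead of looping back to the gateway.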

javax.net.ssl.SSLHandshakeException from wildfly 10

I'm facing an issue with a TLS connection to a secure site. My application is deployed on WildFly 10. When calling a web service over HTTPS, it raises this exception:
Caused by: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:994)
at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1367)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1395)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1379)
at sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559)
at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
at sun.net.www.protocol.http.HttpURLConnection.getOutputStream0(HttpURLConnection.java:1334)
at sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1309)
at sun.net.www.protocol.https.HttpsURLConnectionImpl.getOutputStream(HttpsURLConnectionImpl.java:259)
at org.glassfish.jersey.client.internal.HttpUrlConnector$4.getOutputStream(HttpUrlConnector.java:385)
at org.glassfish.jersey.message.internal.CommittingOutputStream.commitStream(CommittingOutputStream.java:200)
at org.glassfish.jersey.message.internal.CommittingOutputStream.commitStream(CommittingOutputStream.java:194)
at org.glassfish.jersey.message.internal.CommittingOutputStream.write(CommittingOutputStream.java:228)
at org.glassfish.jersey.message.internal.WriterInterceptorExecutor$UnCloseableOutputStream.write(WriterInterceptorExecutor.java:299)
at com.fasterxml.jackson.core.json.UTF8JsonGenerator._flushBuffer(UTF8JsonGenerator.java:1982)
at com.fasterxml.jackson.core.json.UTF8JsonGenerator.flush(UTF8JsonGenerator.java:995)
at com.fasterxml.jackson.databind.ObjectWriter.writeValue(ObjectWriter.java:932)
at org.glassfish.jersey.jackson.internal.jackson.jaxrs.base.ProviderBase.writeTo(ProviderBase.java:648)
at org.glassfish.jersey.message.internal.WriterInterceptorExecutor$TerminalWriterInterceptor.invokeWriteTo(WriterInterceptorExecutor.java:265)
at org.glassfish.jersey.message.internal.WriterInterceptorExecutor$TerminalWriterInterceptor.aroundWriteTo(WriterInterceptorExecutor.java:250)
at org.glassfish.jersey.message.internal.WriterInterceptorExecutor.proceed(WriterInterceptorExecutor.java:162)
at org.glassfish.jersey.message.internal.MessageBodyFactory.writeTo(MessageBodyFactory.java:1130)
at org.glassfish.jersey.client.ClientRequest.doWriteEntity(ClientRequest.java:517)
at org.glassfish.jersey.client.ClientRequest.writeEntity(ClientRequest.java:499)
at org.glassfish.jersey.client.internal.HttpUrlConnector._apply(HttpUrlConnector.java:388)
at org.glassfish.jersey.client.internal.HttpUrlConnector.apply(HttpUrlConnector.java:285)
... 284 more
Caused by: java.io.EOFException: SSL peer shut down incorrectly
at sun.security.ssl.InputRecord.read(InputRecord.java:505)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:975)
... 309 more
I've enabled SSL debug logging with the option -Djavax.net.debug=all and noticed the following error:
[stdout] (default I/O-2) default I/O-2, fatal error: 80: Inbound closed before receiving peer's close_notify: possible truncation attack?
[stdout] (default I/O-2) javax.net.ssl.SSLException: Inbound closed before receiving peer's close_notify: possible truncation attack?
[stdout] (default I/O-2) %% Invalidated: [Session-2, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384]
[stdout] (default I/O-2) default I/O-2, SEND TLSv1.2 ALERT: fatal, description = internal_error
[stdout] (default I/O-2) default I/O-2, Exception sending alert: java.io.IOException: writer side was already closed.
[stdout] (default I/O-2) default I/O-2, called closeOutbound()
[stdout] (default I/O-2) default I/O-2, closeOutboundInternal()
Any idea what the issue might be? Many thanks.
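A common cause of "Remote host closed connection during handshake" on older JVMs is a protocol or cipher-suite mismatch: the server requires, say, TLSv1.2, but the client's default `SSLContext` doesn't offer it. Before digging into WildFly specifics, it can help to print what this JVM actually offers by default. This is a standalone JDK-only check, not part of the application above:

```java
import javax.net.ssl.SSLContext;
import java.util.Arrays;

public class TlsDefaults {
    public static void main(String[] args) throws Exception {
        // The default SSLContext reflects the protocols and cipher suites
        // the JVM will offer in an outbound handshake unless overridden.
        SSLContext ctx = SSLContext.getDefault();
        System.out.println("Protocols: "
                + Arrays.toString(ctx.getDefaultSSLParameters().getProtocols()));
        System.out.println("Cipher suites offered: "
                + ctx.getDefaultSSLParameters().getCipherSuites().length);
    }
}
```

If TLSv1.2 is missing from the output, forcing it via `-Dhttps.protocols=TLSv1.2` (for `HttpsURLConnection`-based clients, which the Jersey stack trace above uses) is a common workaround; a missing intermediate CA certificate in the truststore produces a similar handshake failure and is worth ruling out as well.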
