I was using ZooKeeper for Hazelcast discovery, but I have now changed it to Hazelcast-Kubernetes. It seems to be working fine, but it sometimes prints warning messages like:
2020-10-16 13:45:27.434 WARN 1 --- [.IO.thread-in-1] com.hazelcast.nio.tcp.TcpIpConnection : [10.131.6.73]:5701 [dev] [3.12.7] Connection[id=6, /10.131.6.73:5701->/10.131.6.1:58546, qualifier=null, endpoint=null, alive=false, type=NONE] closed. Reason: Exception in Connection[id=6, /10.131.6.73:5701->/10.131.6.1:58546, qualifier=null, endpoint=null, alive=true, type=NONE], thread=hz.switch-data-analytics.IO.thread-in-1
java.lang.IllegalStateException: Unknown protocol: OPT
at com.hazelcast.nio.tcp.UnifiedProtocolDecoder.onRead(UnifiedProtocolDecoder.java:107)
at com.hazelcast.internal.networking.nio.NioInboundPipeline.process(NioInboundPipeline.java:135)
at com.hazelcast.internal.networking.nio.NioThread.processSelectionKey(NioThread.java:369)
at com.hazelcast.internal.networking.nio.NioThread.processSelectionKeys(NioThread.java:354)
at com.hazelcast.internal.networking.nio.NioThread.selectLoop(NioThread.java:280)
at com.hazelcast.internal.networking.nio.NioThread.run(NioThread.java:235)
and
2020-10-16 13:45:27.438 WARN 1 --- [.IO.thread-in-2] com.hazelcast.nio.tcp.TcpIpConnection : [10.131.6.73]:5701 [dev] [3.12.7] Connection[id=7, /10.131.6.73:5701->/10.131.6.1:58548, qualifier=null, endpoint=null, alive=false, type=NONE] closed. Reason: Exception in Connection[id=7, /10.131.6.73:5701->/10.131.6.1:58548, qualifier=null, endpoint=null, alive=true, type=NONE], thread=hz.switch-data-analytics.IO.thread-in-2
java.lang.IllegalStateException: TLS handshake header detected, but plain protocol header was expected.
at com.hazelcast.nio.tcp.UnifiedProtocolDecoder.loadProtocol(UnifiedProtocolDecoder.java:125)
at com.hazelcast.nio.tcp.UnifiedProtocolDecoder.onRead(UnifiedProtocolDecoder.java:87)
at com.hazelcast.internal.networking.nio.NioInboundPipeline.process(NioInboundPipeline.java:135)
at com.hazelcast.internal.networking.nio.NioThread.processSelectionKey(NioThread.java:369)
at com.hazelcast.internal.networking.nio.NioThread.processSelectionKeys(NioThread.java:354)
at com.hazelcast.internal.networking.nio.NioThread.selectLoop(NioThread.java:280)
at com.hazelcast.internal.networking.nio.NioThread.run(NioThread.java:235)
I could not find any usage of a protocol like OPT anywhere.
What should I do to clear this warning?
Note:
I am using OpenShift, and my Spring Boot application uses Hazelcast as a distributed cache across 3 pods in one namespace.
Here is my setting:
config.getNetworkConfig().getJoin().getMulticastConfig().setEnabled(false);
config.getNetworkConfig().getJoin().getAwsConfig().setEnabled(false);
config.getNetworkConfig().getJoin().getTcpIpConfig().setEnabled(false);
config.getNetworkConfig().getJoin().getKubernetesConfig().setEnabled(true)
.setProperty("service-name", applicationProperties.getHazelcast().getServiceName())
.setProperty("namespace", applicationProperties.getPod().getNamespace());
There is a similar issue reported in this GH Issue. As described there, adding the following part to the container spec might solve the issue:
- containerPort: 5701
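For reference, a minimal sketch of where that entry goes in the pod spec of the deployment (the container name and image are placeholders, not taken from the question):
spec:
  containers:
    - name: hazelcast-app          # placeholder
      image: hazelcast-app:latest  # placeholder
      ports:
        - containerPort: 5701      # Hazelcast member port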
In any case, it might be a bug in hazelcast-kubernetes. Feel free to add the "steps to reproduce" to the following Hazelcast Kubernetes GH Issue.
I'm trying to make a secure OrientDB cluster, meaning that the replication between 2 nodes will be over TLS.
I used this guide, but I'm really not sure; maybe I missed something, and this config may relate only to the Server / Console:
https://orientdb.com/docs/2.2.x/Using-SSL-with-OrientDB.html
While starting just one node, I get a lot of errors:
[10.40.1.52]:2441 [orientdb] [3.8.4] Established socket connection between /10.40.1.52:42064 and /10.40.1.52:2440 [TcpIpConnectionManager][10.40.1.52]:2441 [orientdb] [3.8.4] Connection[id=9, /10.40.1.52:48867->/10.40.1.54:2440, endpoint=[10.40.1.54]:2440, alive=false, type=NONE] closed. Reason: Exception in NonBlockingSocketReader
java.lang.ClassCastException: class com.hazelcast.nio.tcp.MemberWriteHandler cannot be cast to class com.hazelcast.nio.ascii.TextWriteHandler (com.hazelcast.nio.tcp.MemberWriteHandler and com.hazelcast.nio.ascii.TextWriteHandler are in unnamed module of loader 'app')
at com.hazelcast.nio.ascii.TextReadHandler.<init>(TextReadHandler.java:111)
at com.hazelcast.nio.tcp.SocketReaderInitializerImpl.init(SocketReaderInitializerImpl.java:89)
at com.hazelcast.nio.tcp.SocketReaderInitializerImpl.init(SocketReaderInitializerImpl.java:41)
at com.hazelcast.internal.networking.nonblocking.NonBlockingSocketReader.handle(NonBlockingSocketReader.java:143)
at com.hazelcast.internal.networking.nonblocking.NonBlockingIOThread.handleSelectionKey(NonBlockingIOThread.java:349)
at com.hazelcast.internal.networking.nonblocking.NonBlockingIOThread.handleSelectionKeys(NonBlockingIOThread.java:334)
at com.hazelcast.internal.networking.nonblocking.NonBlockingIOThread.selectLoop(NonBlockingIOThread.java:252)
at com.hazelcast.internal.networking.nonblocking.NonBlockingIOThread.run(NonBlockingIOThread.java:205)
Error on client connection
javax.net.ssl.SSLException: Unsupported or unrecognized SSL message
at java.base/sun.security.ssl.SSLSocketInputRecord.handleUnknownRecord(SSLSocketInputRecord.java:451)
at java.base/sun.security.ssl.SSLSocketInputRecord.decode(SSLSocketInputRecord.java:175)
at java.base/sun.security.ssl.SSLTransport.decode(SSLTransport.java:110)
at java.base/sun.security.ssl.SSLSocketImpl.decode(SSLSocketImpl.java:1408)
at java.base/sun.security.ssl.SSLSocketImpl.readHandshakeRecord(SSLSocketImpl.java:1314)
at java.base/sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:440)
at java.base/sun.security.ssl.SSLSocketImpl.ensureNegotiated(SSLSocketImpl.java:819)
at java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1189)
at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:81)
at java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:142)
at java.base/java.io.DataOutputStream.flush(DataOutputStream.java:123)
at com.orientechnologies.orient.enterprise.channel.binary.OChannelBinary.flush(OChannelBinary.java:327)
at com.orientechnologies.orient.server.network.protocol.binary.ONetworkProtocolBinary.config(ONetworkProtocolBinary.java:165)
at com.orientechnologies.orient.server.network.OServerNetworkListener.run(OServerNetworkListener.java:219)
[10.40.1.52]:2441 [orientdb] [3.8.4] Connection[id=10, /10.40.1.52:42064->/10.40.1.52:2440, endpoint=[10.40.1.52]:2440, alive=false, type=NONE] closed. Reason: Exception in NonBlockingSocketReader
java.lang.ClassCastException: class com.hazelcast.nio.tcp.MemberWriteHandler cannot be cast to class com.hazelcast.nio.ascii.TextWriteHandler (com.hazelcast.nio.tcp.MemberWriteHandler and com.hazelcast.nio.ascii.TextWriteHandler are in unnamed module of loader 'app')
at com.hazelcast.nio.ascii.TextReadHandler.<init>(TextReadHandler.java:111)
at com.hazelcast.nio.tcp.SocketReaderInitializerImpl.init(SocketReaderInitializerImpl.java:89)
at com.hazelcast.nio.tcp.SocketReaderInitializerImpl.init(SocketReaderInitializerImpl.java:41)
at com.hazelcast.internal.networking.nonblocking.NonBlockingSocketReader.handle(NonBlockingSocketReader.java:143)
at com.hazelcast.internal.networking.nonblocking.NonBlockingIOThread.handleSelectionKey(NonBlockingIOThread.java:349)
at com.hazelcast.internal.networking.nonblocking.NonBlockingIOThread.handleSelectionKeys(NonBlockingIOThread.java:334)
at com.hazelcast.internal.networking.nonblocking.NonBlockingIOThread.selectLoop(NonBlockingIOThread.java:252)
at com.hazelcast.internal.networking.nonblocking.NonBlockingIOThread.run(NonBlockingIOThread.java:205)
Error on client connection
javax.net.ssl.SSLException: Unsupported or unrecognized SSL message
at java.base/sun.security.ssl.SSLSocketInputRecord.handleUnknownRecord(SSLSocketInputRecord.java:451)
at java.base/sun.security.ssl.SSLSocketInputRecord.decode(SSLSocketInputRecord.java:175)
at java.base/sun.security.ssl.SSLTransport.decode(SSLTransport.java:110)
at java.base/sun.security.ssl.SSLSocketImpl.decode(SSLSocketImpl.java:1408)
at java.base/sun.security.ssl.SSLSocketImpl.readHandshakeRecord(SSLSocketImpl.java:1314)
at java.base/sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:440)
at java.base/sun.security.ssl.SSLSocketImpl.ensureNegotiated(SSLSocketImpl.java:819)
at java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1189)
at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:81)
at java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:142)
at java.base/java.io.DataOutputStream.flush(DataOutputStream.java:123)
at com.orientechnologies.orient.enterprise.channel.binary.OChannelBinary.flush(OChannelBinary.java:327)
at com.orientechnologies.orient.server.network.protocol.binary.ONetworkProtocolBinary.config(ONetworkProtocolBinary.java:165)
at com.orientechnologies.orient.server.network.OServerNetworkListener.run(OServerNetworkListener.java:219)
2022-02-28 10:54:52:999 INFO [10.40.1.52]:2441 [orientdb] [3.8.4] Cluster version set to 3.8 [system]
2022-02-28 10:54:53:000 INFO [10.40.1.52]:2441 [orientdb] [3.8.4]
Please advise.
In my AWS VPC I have an Elasticsearch cluster with 2 nodes, and a load balancer on top of those nodes. In the same VPC I have a microservice that accesses Elasticsearch via RestHighLevelClient version 7.5.2.
I create the client in the following manner:
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;
import lombok.Getter;
import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestClientBuilder;
import org.elasticsearch.client.RestHighLevelClient;

public class ESClientWrapper {

    @Getter
    private RestHighLevelClient client;

    public ESClientWrapper() throws IOException {
        FileInputStream propertiesFile = new FileInputStream("/var/elastic.properties");
        Properties properties = new Properties();
        properties.load(propertiesFile);

        RestClientBuilder builder = RestClient.builder(new HttpHost(
                properties.getProperty("host"),
                Integer.parseInt(properties.getProperty("port"))
        ));
        this.client = new RestHighLevelClient(builder);
    }
}
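For context, the failing call in the logs below is an ordinary index request issued through this wrapper, roughly like this (the wrapper variable name is made up; the index name and field are taken from the logs):
// requires java.util.Map, org.elasticsearch.action.index.IndexRequest / IndexResponse and org.elasticsearch.client.RequestOptions
IndexRequest request = new IndexRequest("app-index").source(Map.of("role", "a2"));
IndexResponse response = esClientWrapper.getClient().index(request, RequestOptions.DEFAULT);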
When my microservice doesn't get requests for a long time (12h or so), there are occurrences where the first request that is sent (or a few after it) fails with the following error:
2020-09-09 07:03:13.106 INFO 1 --- [nio-8080-exec-1] c.a.a.services.CustomersMetadataService : Trying to add the following role : {role=a2}
2020-09-09 07:03:13.106 INFO 1 --- [nio-8080-exec-1] c.a.a.e.repositories.ESRepository : Trying to insert the following document to app-index : {role=a2}
2020-09-09 07:03:13.109 ERROR 1 --- [nio-8080-exec-1] c.a.a.e.dal.ESRepository : Failed to add customer : {role=a2}
java.io.IOException: Connection reset by peer
at org.elasticsearch.client.RestClient.extractAndWrapCause(RestClient.java:828) ~[elasticsearch-rest-client-7.5.2.jar!/:7.5.2]
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:248) ~[elasticsearch-rest-client-7.5.2.jar!/:7.5.2]
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:235) ~[elasticsearch-rest-client-7.5.2.jar!/:7.5.2]
at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1514) ~[elasticsearch-rest-high-level-client-7.5.2.jar!/:7.5.2]
at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:1484) ~[elasticsearch-rest-high-level-client-7.5.2.jar!/:7.5.2]
at org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:1454) ~[elasticsearch-rest-high-level-client-7.5.2.jar!/:7.5.2]
at org.elasticsearch.client.RestHighLevelClient.index(RestHighLevelClient.java:871) ~[elasticsearch-rest-high-level-client-7.5.2.jar!/:7.5.2]
....
....
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) ~[tomcat-embed-core-9.0.35.jar!/:9.0.35]
at java.base/java.lang.Thread.run(Thread.java:836) ~[na:na]
Caused by: java.io.IOException: Connection reset by peer
at java.base/sun.nio.ch.FileDispatcherImpl.read0(Native Method) ~[na:na]
at java.base/sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) ~[na:na]
at java.base/sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:276) ~[na:na]
at java.base/sun.nio.ch.IOUtil.read(IOUtil.java:245) ~[na:na]
at java.base/sun.nio.ch.IOUtil.read(IOUtil.java:223) ~[na:na]
at java.base/sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:358) ~[na:na]
at org.apache.http.impl.nio.reactor.SessionInputBufferImpl.fill(SessionInputBufferImpl.java:231) ~[httpcore-nio-4.4.13.jar!/:4.4.13]
at org.apache.http.impl.nio.codecs.AbstractMessageParser.fillBuffer(AbstractMessageParser.java:136) ~[httpcore-nio-4.4.13.jar!/:4.4.13]
at org.apache.http.impl.nio.DefaultNHttpClientConnection.consumeInput(DefaultNHttpClientConnection.java:241) ~[httpcore-nio-4.4.13.jar!/:4.4.13]
at org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:81) ~[httpasyncclient-4.1.4.jar!/:4.1.4]
at org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:39) ~[httpasyncclient-4.1.4.jar!/:4.1.4]
at org.apache.http.impl.nio.reactor.AbstractIODispatch.inputReady(AbstractIODispatch.java:114) ~[httpcore-nio-4.4.13.jar!/:4.4.13]
at org.apache.http.impl.nio.reactor.BaseIOReactor.readable(BaseIOReactor.java:162) ~[httpcore-nio-4.4.13.jar!/:4.4.13]
at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvent(AbstractIOReactor.java:337) ~[httpcore-nio-4.4.13.jar!/:4.4.13]
at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvents(AbstractIOReactor.java:315) ~[httpcore-nio-4.4.13.jar!/:4.4.13]
at org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:276) ~[httpcore-nio-4.4.13.jar!/:4.4.13]
at org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:104) ~[httpcore-nio-4.4.13.jar!/:4.4.13]
at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:591) ~[httpcore-nio-4.4.13.jar!/:4.4.13]
... 1 common frames omitted
2020-09-09 07:06:55.109 INFO 1 --- [nio-8080-exec-2] c.a.a.services.MyService : Trying to add the following role : {role=a2}
2020-09-09 07:06:55.109 INFO 1 --- [nio-8080-exec-2] c.a.a.e.repositories.ESRepository : Trying to insert the following document to index app-index: {role=a2}
2020-09-09 07:06:55.211 INFO 1 --- [nio-8080-exec-2] c.a.a.e.dal.ESRepository : IndexResponse[index=app-index,type=_doc,id=x532323272533321870287,version=1,result=created,seqNo=70,primaryTerm=1,shards={"total":2,"successful":2,"failed":0}]
As you can see, 3 minutes after the failed request the next request was handled successfully by Elasticsearch. What could be killing the connection? I checked the Elasticsearch logs and didn't see any indication of a connection being killed. The microservice is in the same VPC as Elasticsearch, so the traffic isn't passing through any firewall that might kill it.
I found the following issue on GitHub that suggested increasing the default connection timeout, but I'm wondering whether the issue here is really a timeout problem and whether increasing the default timeout is really the best solution.
Also, I found this bug opened in their repo regarding the same problem, but without any answers.
UPDATE
I noticed that this also happens when my service has been up for only 10 minutes. My service started and sent a query to Elasticsearch and everything worked well. After 10 minutes I sent an insert request and it failed with connection reset by peer.
In the end I didn't find a problem in my configuration/implementation. It seems like a bug in the implementation of Elasticsearch's RestHighLevelClient.
I implemented a retry mechanism that wraps the RestHighLevelClient and retries the query if I get the same error. I used the Spring @Retry annotation for this solution.
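For illustration, a minimal sketch of such a wrapper using Spring Retry's @Retryable (the class and method names are made up, and it assumes spring-retry is on the classpath with @EnableRetry configured):
import java.io.IOException;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.action.index.IndexResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.springframework.retry.annotation.Backoff;
import org.springframework.retry.annotation.Retryable;
import org.springframework.stereotype.Repository;

@Repository
public class RetryingESRepository {

    private final RestHighLevelClient client;

    public RetryingESRepository(RestHighLevelClient client) {
        this.client = client;
    }

    // Retry the index call when the pooled connection has been silently dropped
    // and the request fails with "Connection reset by peer".
    @Retryable(value = IOException.class, maxAttempts = 3, backoff = @Backoff(delay = 500))
    public IndexResponse insert(IndexRequest request) throws IOException {
        return client.index(request, RequestOptions.DEFAULT);
    }
}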
I was facing the same issue. Everything worked fine, but after some time a single request got refused.
The solution (in my case) was to enable TCP keepalive on the connections with:
final RestClientBuilder restClientBuilder = RestClient.builder(...);
restClientBuilder.setHttpClientConfigCallback(httpClientBuilder -> httpClientBuilder
        .setDefaultIOReactorConfig(IOReactorConfig.custom()
                .setSoKeepAlive(true) // enable TCP keepalive on the client sockets
                .build()));
Found here:
https://github.com/elastic/elasticsearch/issues/65213
Background
I am trying to create an Apache Flink standalone cluster.
Environment : AWS
Job Manager : 1
Task Manager : 2
Config :
FLINK_PLUGINS_DIR : /usr/local/flink-1.9.1/plugins
io.tmp.dirs : /tmp/flink
jobmanager.execution.failover-strategy : region
jobmanager.heap.size : 1024m
jobmanager.rpc.address : job manager ip
jobmanager.rpc.port : 6123
jobstore.cache-size : 52428800
jobstore.expiration-time : 3600
parallelism.default : 4
slot.idle.timeout : 50000
slot.request.timeout : 300000
task.cancellation.interval : 30000
task.cancellation.timeout : 180000
task.cancellation.timers.timeout : 7500
taskmanager.exit-on-fatal-akka-error : false
taskmanager.heap.size : 1024m
taskmanager.network.bind-policy : "ip"
taskmanager.numberOfTaskSlots : 2
taskmanager.registration.initial-backoff: 500ms
taskmanager.registration.timeout : 5min
taskmanager.rpc.port : 50100-50200
web.tmpdir : /tmp/flink-web-74cce811-17c0-411e-9d11-6d91edd2e9b0
Instance Type : t2.medium (2 CPUs, 4 GB memory)
Security Group ports opened : 6123, 8081, 50100 - 50200
OS : CentOS Linux release 7.6.1810 (Core)
Java :
openjdk version "1.8.0_191"
OpenJDK Runtime Environment (build 1.8.0_191-b12)
OpenJDK 64-Bit Server VM (build 25.191-b12, mixed mode)
The cluster is up and running properly:
org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint - http://ip:8081 was granted leadership with leaderSessionID=00000000-0000-0000-0000-000000000000
org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint - Web frontend listening at http:/ip:8081.
org.apache.flink.runtime.rpc.akka.AkkaRpcService - Starting RPC endpoint for org.apache.flink.runtime.resourcemanager.StandaloneResourceManager at akka://flink/user/resourcemanager .
org.apache.flink.runtime.rpc.akka.AkkaRpcService - Starting RPC endpoint for org.apache.flink.runtime.dispatcher.StandaloneDispatcher at akka://flink/user/dispatcher .
org.apache.flink.runtime.resourcemanager.StandaloneResourceManager - ResourceManager akka.tcp://flink@ip:6123/user/resourcemanager was granted leadership with fencing token 00000000000000000000000000000000
org.apache.flink.runtime.resourcemanager.slotmanager.SlotManagerImpl - Starting the SlotManager.
org.apache.flink.runtime.dispatcher.StandaloneDispatcher - Dispatcher akka.tcp://flink@ip:6123/user/dispatcher was granted leadership with fencing token 00000000-0000-0000-0000-000000000000
org.apache.flink.runtime.dispatcher.StandaloneDispatcher - Recovering all persisted jobs.
org.apache.flink.runtime.resourcemanager.StandaloneResourceManager - Registering TaskManager with ResourceID f2c7f664378b40ce44463713ae98e1c4 (akka.tcp://flink@TaskManager1Ip:38566/user/taskmanager_0) at ResourceManager
org.apache.flink.runtime.resourcemanager.StandaloneResourceManager - Registering TaskManager with ResourceID 354a785f637751fb3b034618a47480ed (akka.tcp://flink@TaskManager2Ip:34400/user/taskmanager_0) at ResourceManager
The UI shows all the cluster details.
Problem
Task submission does not work:
java.util.concurrent.CompletionException: akka.pattern.AskTimeoutException: Ask timed out on [Actor[akka://flink/user/resourcemanager#-1545644127]] after [10000 ms]. Message of type [org.apache.flink.runtime.rpc.messages.LocalFencedMessage]. A typical reason for `AskTimeoutException` is that the recipient actor didn't send a reply.
at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)
at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308)
at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:593)
at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
at org.apache.flink.runtime.concurrent.FutureUtils$1.onComplete(FutureUtils.java:871)
at akka.dispatch.OnComplete.internal(Future.scala:263)
at akka.dispatch.OnComplete.internal(Future.scala:261)
at akka.dispatch.japi$CallbackBridge.apply(Future.scala:191)
at akka.dispatch.japi$CallbackBridge.apply(Future.scala:188)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
at org.apache.flink.runtime.concurrent.Executors$DirectExecutionContext.execute(Executors.java:74)
at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
at akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:644)
at akka.actor.Scheduler$$anon$4.run(Scheduler.scala:205)
at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601)
at scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:109)
at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599)
at akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(LightArrayRevolverScheduler.scala:328)
at akka.actor.LightArrayRevolverScheduler$$anon$4.executeBucket$1(LightArrayRevolverScheduler.scala:279)
at akka.actor.LightArrayRevolverScheduler$$anon$4.nextTick(LightArrayRevolverScheduler.scala:283)
at akka.actor.LightArrayRevolverScheduler$$anon$4.run(LightArrayRevolverScheduler.scala:235)
at java.lang.Thread.run(Thread.java:748)
Caused by: akka.pattern.AskTimeoutException: Ask timed out on [Actor[akka://flink/user/resourcemanager#-1545644127]] after [10000 ms]. Message of type [org.apache.flink.runtime.rpc.messages.LocalFencedMessage]. A typical reason for `AskTimeoutException` is that the recipient actor didn't send a reply.
at akka.pattern.PromiseActorRef$$anonfun$2.apply(AskSupport.scala:635)
at akka.pattern.PromiseActorRef$$anonfun$2.apply(AskSupport.scala:635)
at akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:648)
... 9 more
2020-02-04 23:25:16,125 ERROR org.apache.flink.runtime.rest.handler.taskmanager.TaskManagerLogFileHandler - Unhandled exception.
akka.pattern.AskTimeoutException: Ask timed out on [Actor[akka://flink/user/resourcemanager#-1545644127]] after [10000 ms]. Message of type [org.apache.flink.runtime.rpc.messages.LocalFencedMessage]. A typical reason for `AskTimeoutException` is that the recipient actor didn't send a reply.
at akka.pattern.PromiseActorRef$$anonfun$2.apply(AskSupport.scala:635)
at akka.pattern.PromiseActorRef$$anonfun$2.apply(AskSupport.scala:635)
at akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:648)
at akka.actor.Scheduler$$anon$4.run(Scheduler.scala:205)
at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601)
at scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:109)
at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599)
at akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(LightArrayRevolverScheduler.scala:328)
at akka.actor.LightArrayRevolverScheduler$$anon$4.executeBucket$1(LightArrayRevolverScheduler.scala:279)
at akka.actor.LightArrayRevolverScheduler$$anon$4.nextTick(LightArrayRevolverScheduler.scala:283)
at akka.actor.LightArrayRevolverScheduler$$anon$4.run(LightArrayRevolverScheduler.scala:235)
at java.lang.Thread.run(Thread.java:748)
Can somebody throw some light on this? Is this a problem related to ports / firewall, or is some setting messed up?
The issue was with security group port permissions. When the entire range from 0 - 65535 was opened, everything started working. Still, this is not good enough for a production system, so eventually, for the job workers, the key taskmanager.data.port in the flink-conf.yaml config file was assigned a particular port, and that did the trick. This way the task managers could be configured to listen on a particular port within the opened range.
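For illustration, the corresponding flink-conf.yaml entry on each task manager could look like this (the specific port is only an example; choose one inside the range that is open in the security group):
taskmanager.data.port: 50101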
A few years back we developed a client to consume usi-ws v2; this web service uses STS service v2. It was working fine.
But now usi-ws v2 has been updated to usi-ws v3, which in turn uses STS service v3.
The key differences are:
1) usi-ws v3 uses <sp:Basic256Sha256Rsa15/> as its AlgorithmSuite policy, which matches STS service v3's AlgorithmSuite policy.
2) usi-ws v3 uses STS service v3 instead of STS service v2.
I can integrate the change with two different approaches.
First Approach
I use apache-cxf wsdl2java on usi-ws v3 to generate the client code. Below is sample endpoint code:
private static void SetupRequestContext(IUSIService endpoint, X509Certificate certificate, PrivateKey privateKey) {
    Map<String, Object> requestContext = ((BindingProvider) endpoint).getRequestContext();
    requestContext.put(XWSSConstants.CERTIFICATE_PROPERTY, certificate);
    requestContext.put(XWSSConstants.PRIVATEKEY_PROPERTY, privateKey);
    requestContext.put(STSIssuedTokenConfiguration.STS_ENDPOINT, "https://thirdparty.authentication.business.gov.au/R3.0/vanguard/S007v1.3/Service.svc");
    requestContext.put(STSIssuedTokenConfiguration.STS_NAMESPACE, "http://schemas.microsoft.com/ws/2008/06/identity/securitytokenservice");
    requestContext.put(STSIssuedTokenConfiguration.STS_WSDL_LOCATION, "https://thirdparty.authentication.business.gov.au/R3.0/vanguard/S007v1.3/Service.svc");
    requestContext.put(STSIssuedTokenConfiguration.STS_SERVICE_NAME, "SecurityTokenService");
    requestContext.put(STSIssuedTokenConfiguration.LIFE_TIME, 30);
    requestContext.put(STSIssuedTokenConfiguration.STS_PORT_NAME, "S007SecurityTokenServiceEndpoint");
    requestContext.put(BindingProviderProperties.REQUEST_TIMEOUT, REQUEST_TIMEOUT);
    requestContext.put(BindingProviderProperties.CONNECT_TIMEOUT, CONNECT_TIMEOUT);
}
After configuring the endpoint's request context, I call createUSI:
endpoint.createUSI(createUsiRequest);
It throws the error below (logs):
... LOGS before are removed
[main] WARN au.gov.abr.akm.credential.store.ABRRequester$ABRHttpPost - XML request is => <ns:requests xmlns:ns="http://auth.sbr.gov.au/AutoRenew"><request id="ABRD:TESTDeviceID" credentialType="D"><cmsB64>MIAGCSqGSIb3DQEHAqCAMIACAQExDzANBglghkgBZQMEAgEFADCABgkqhkiG9w0BBwGggCSABIIBqDCCAaQwggENAgEAMDoxFTATBgNVBAMMDFRlc3REZXZpY2UwMzEUMBIGA1UECgwLMTIzMDAwMDAwNTkxCzAJBgNVBAYTAkFVMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCjFrK/biRUDUpRBEmJtV1XAM+sNP0NMwydRdv4NntG0x/JHZaOoJJpFrTXvB0gAIIHhHlhfkpLkSLQmz8iMYKnPgaqG+g/quSM6VYKQcCMr0UwS9b37NdzOhpf8n6JRWkTIFWznUz8WxiASCLuj5VmRiacHlrtJul/Gj89zbDJtwIDAQABoCowKAYJKoZIhvcNAQkOMRswGTAXBgYqJAGCTQEEDRYLMTIzMDAwMDAwNTkwDQYJKoZIhvcNAQEFBQADgYEAmUIkEDpCtZJbCZ04DfVxMgsjZfIEsF3yh+VWlCO/6jJcdcJKKjY0xbJDxzdh8xhbq2RzBKnP5th4p/yzBGN8Wafvr/2mQVNC9LG/3IGsawZLGMqUjeL0aIwDEmYBJWt0wm1ntKUF5DiuZJgcIgjFIfHWBq0WB2bU8SroO5O07coAAAAAAACggDCCBB0wggMFoAMCAQICAwQHvDANBgkqhkiG9w0BAQsFADCBhTELMAkGA1UEBhMCQVUxJTAjBgNVBAoTHEF1c3RyYWxpYW4gQnVzaW5lc3MgUmVnaXN0ZXIxIDAeBgNVBAsTF0NlcnRpZmljYXRpb24gQXV0aG9yaXR5MS0wKwYDVQQDEyRUZXN0IEF1c3RyYWxpYW4gQnVzaW5lc3MgUmVnaXN0ZXIgQ0EwHhcNMTgxMTI4MDQyMDMwWhcNMjAwMzI5MDQyMDMwWjA6MQswCQYDVQQGEwJBVTEUMBIGA1UEChMLMTIzMDAwMDAwNTkxFTATBgNVBAMTDFRlc3REZXZpY2UwMzCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEA866tnV+RQ5v09wEPTTeA8C679zsZhgWhc4RjqtwvB73ZN7+g9NjJ1KujZUxXB5RbBdLQQ9GFPBx1DYifjDIN2Z1vMObqhT/QqUwz0sy8y6xQh6ukJjlyr4r9CHCQOBuHCe8rB3DzirMZsAv+qjT8lCOfdK9lq++IlsglpYkSAO8CAwEAAaOCAWIwggFeMAwGA1UdEwEB/wQCMAAwgeQGA1UdIASB3DCB2TCB1gYJKiQBlzllAQgBMIHIMIGmBggrBgEFBQcCAjCBmRqBllVzZSB0aGlzIGNlcnRpZmljYXRlIG9ubHkgZm9yIHRoZSBwdXJwb3NlIHBlcm1pdHRlZCBpbiB0aGUgYXBwbGljYWJsZSBDZXJ0aWZpY2F0ZSBQb2xpY3kuIExpbWl0ZWQgbGlhYmlsaXR5IGFwcGxpZXMgLSByZWZlciB0byB0aGUgQ2VydGlmaWNhdGUgUG9saWN5LjAdBggrBgEFBQcCARYRd3d3LnRlc3RhYnJjYS5jb20wFwYGKiQBgk0BBA0WCzEyMzAwMDAwMDU5MA4GA1UdDwEB/wQEAwIE8DAfBgNVHSMEGDAWgBSJfa5qeCJphOwHaVTGwPRjl+HPTjAdBgNVHQ4EFgQU17H18nWNxfR8MnD6gVtz8f91bu4wDQYJKoZIhvcNAQELBQADggEBADFiv5BD06bmEwkvr8cKF0MDET9+kUCPz2Kka5YuEfy8gIITz6ET2upJRLlt9BKOFpyrevCfEdoSd1Tbsz9czm6Vn/fDhQZ25HfKZgDLxQU8zqrMkc2rNyxXrJIWT1LNaVtNmUN5KMcHRjHXQcN6Qou5GkjsmPk/wuzcp0K7F2DI1pvjbr7r2TE1xiaO1l4sD+6JpPugqidPT+/41ADdmcbKwWH1p0HjPR1/XoIiR/qcQWL0TWBozZsiJq7Ad4xI2mm/8AS6wjGMkwckDH2wpROfiZkcfKavDOf2/wJaWG+RBCL2B2LNYAltG30LNwno4R/J7LfGauoOSPmkd3Tdc00wggXdMIIDxaADAgECAgECMA0GCSqGSIb3DQEBCwUAMIGKMQswCQYDVQQGEwJBVTElMCMGA1UEChMcQXVzdHJhbGlhbiBCdXNpbmVzcyBSZWdpc3RlcjEgMB4GA1UECxMXQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkxMjAwBgNVBAMTKVRlc3QgQXVzdHJhbGlhbiBCdXNpbmVzcyBSZWdpc3RlciBSb290IENBMB4XDTEwMDMyMDAwMDAwMFoXDTIwMDMyMDAwMDAwMFowgYUxCzAJBgNVBAYTAkFVMSUwIwYDVQQKExxBdXN0cmFsaWFuIEJ1c2luZXfghfghfFDN0cmFsaWFuIEJ1c2luZXNzIFJlZ2lzdGVyIENBMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAxGJ5qZfrXMMxOq24M8K13oLHegF1C0fN2j1q2RotpIIkGCGszJtpV8n/XAOM6pVm9jp5Pc4+v3No1mtdj/dVP1nMP9xxGuDrd/gJUddZnhRGQVeXto9pB03bioWLmsszoG8e2OvTf4AnBum0ukHRqTFuNJ1qQu4HXUjahxg66ArdMVVRDbS4fHO4hwoPAob5gyHFP5NoiJjBZTWcmmF3gC6AYIkx64NLZMxNFImGqJvc1G1zBxKU4a79fiz4kM779N/pzdAjafxu7vpaC/N5xjx6uI+sV8bAucLgiCuGCfQIPeoTwoSlQQn65WxFYAx3m3KfiTN+PzQQniViWRI5OQIDAQABo4IBTzCCAUswEgYDVR0TAQH/BAgwBgEB/wIBADCB5AYDVR0gBIHcMIHZMIHWBgkqJAGXOWUBAQEwgcgwgaYGCCsGAQUFBwICMIGZGoGWVXNlIHRoaXMgY2VydGlmaWNhdGUgb25seSBmb3IgdGhlIHB1cnBvc2UgcGVybWl0dGVkIGluIHRoZSBhcHBsaWNhYmxlIENlcnRpZmljYXRlIFBvbGljeS4gTGltaXRlZCBsaWFiaWxpdHkgYXBwbGllcyAtIHJlZmVyIHRvIHRoZSBDZXJ0aWZpY2F0ZSBQb2xpY3kuMB0GCCsGAQUFBwIBFhF3d3cudGVzdGFicmNhLmNvbTAOBgNVHQ8BAf8EBAMCAcYwHwYDVR0jBBgwFoAUaoz51J3tdoVnf3kQz50VsOUivyIwHQYDVR0OBBYEFIl9rmp4ImmE7AdpVMbA9GOX4c9OMA0GCSqGSIb3DQEBCwUAA4ICAQCjCpFDZXLAuhgMZPMCl9goYzAPrReIal90oKEh1
WrQn7iZrampLL00fL5EUlb9kiaVKo6MfYEot6T2Zu3GsIMMHnfKDBAAMYEUH7XDutwJChmm9eVX5p7sRSxON+Ldah7MDlF4kjPzDIa/QsUFuXyJZicGrMlQEu5qiMdXo+z4Dtq/R+O8pEuyzLv1tIcbufDk0V/ofz0VUuUEntwigsyputtes9OouikEvERzLLif+y4nOducyAaIXSVMFEqREafT6eC05k/A2K2RrTowMb1NKKybUjW89Wvbj2z/O5h1WP8s3U+A5sPtOEJYBU+zM1+lxz3NinRceIAKkyBjOPsX5Zh+ao4fAN3Vhyl3tIGlc3o+bWvOl7AUMGWv+gOAIexaAHYIaK7nX9qhZqkNqGOqBVtG9Hxr0WUXMLKMjMSpCUvYWZAb/ReCt5mISw6ZOxLPUxY/jBRE8MzoLNqAEH+dHiSVuyLy0y3dFiCUkKZ1yUJWy+mytmvS8FxLQ6Dl3CRxoQhms6dRNg5WIk7rtdHHNPwoWd2Ew3JdhDO4YiwtkVxcwzajhlNWlum5sUUJqSlajxlBdtE7mkuOvBcvqv7fyzuHStGwXDy0F0S9ZeSQB5Q45K23L3Z1v9CygiBocAUGdtBHxZGmikbymewqiX6gdQhqc9I0a2Y6bd6xUPMyuzCCBuMwggTLoAMCAQICAQEwDQYJKoZIhvcNAQELBQAwgYoxCzAJBgNVBAYTAkFVMSUwIwYDVQQKExxBdXN0cmFsaWFuIEJ1c2ludferJlZ2lzdGVyMSAwHgYDVQQLExdDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTEyMDAGA1UEAxMpVGVzdCBBdXN0cmFsaWFuIEJ1c2luZXNzIFJlZ2lzdGVyIFJvb3QgQ0EwHhcNMTAwMzIwMDAwMDAwWhcNMzAwMzIwMDAwMDAwWjCBijELMAkGA1UEBhMCQVUxJTAjBgNVBAoTHEF1c3RyYWxpYW4gQnVzaW5lc3MgUmVnaXN0ZXIxIDAeBgNVBAsTF0NlcnRpZmljYXRpb24gQXV0aG9yaXR5MTIwMAYDVQQDEylUZXN0IEF1c3RyYWxpYW4gQnVzaW5lc3MgUmVnaXN0ZXIgUm9vdCBDQTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAKViU48+OChS9P2MsQlroR+xTHQlut/q6R5r1yEzsXjBBSiy9HYOnuO31cCqoZWw87QGfS2A4ZfaoVMcj0Mu89+HKeuqBeYuGLr5oc1xU+fZDft/BbvN1BHLJsD7srmPivC4MDoKDvCHXX4ayHEDuCCKJ8ywguKT/kaFzzDTLcl0rL7ayB2XHdL7eWiBAwbB/YCe/3vUe+g1kDsEeY7OatU1l5VZkStomOr/vD1O+MlYMkn7LtmJ89NhL2ZwaHp1twYN7g7FpapOQTT9Uw1JlxkA1d2h6XU8VsXxqmriSV7kyJNLeUpKngZO8XmjbW4FIYLu6tHs1Pv0viUsfP9GLlI3IkbXOptyfToKPMH3bJXGvgYGzQWK2P3MsspRXfWpMajoFi/WN4EuApf/j0iRKC1tGk4UXfHfVHMSJlTbQUQt4UAyDHgLqGVgA7rpWJJHux1SUE0lYpxufMuDD7CQdELI2VTFjxjaDLzLuNgLqM9DP4fc3/4QxTiYQacUA0DwZYk6tLRgbGPUB6VTO699THa8OeoBlmR/zk5LWDDf3CLVRJxbm4ylBcor/PQ8DmqpFlrGubHkmZaEo/nm9GhCvhn97uEcZ+uHGal8xYfUe5/k4e7nDLYBK2lF7hQA5KLkWhDG+z8b0+RBHF7KvVN3LjapAHEF2V1a/Q1AgMBAAGjggFQMIIBTDASBgNVHRMBAf8ECDAGAQH/AgECMIHlBgNVHSAEgd0wgdowgdcGCCokAZc5ZQEBMIHKMIGmBggrBgEFBQcCAjCBmRqBllVzZSB0aGlzIGNlcnRpZmljYXRlIG9ubHkgZm9yIHRoZSBwdXJwb3NlIHBlcm1pdHRlZCBpbiB0bGljYWJsZSBDZXJ0aWZpY2F0ZSBQb2xpY3kuIExpbWl0ZWQgbGlhYmlsaXR5IGFwcGxpZXMgLSByZWZlciB0byB0aGUgQ2VydGlmaWNhdGUgUG9saWN5LjAfBggrBgEFBQcCARYTd3d3LnRlc3RzYnJyb290LmNvbTAOBgNVHQ8BAf8EBAMCAcYwHQYDVR0OBBYEFGqM+dSd7XaFZ395EM+dFbDlIr8iMB8GA1UdIwQYMBaAFGqM+dSd7XaFZ395EM+dFbDlIr8iMA0GCSqGSIb3DQEBCwUAA4ICAQAo6w3fIgwhYJXhA7RxyXTJvmtglTwIY9xUabR7GxvivITy07VSiCSti/pMaNFx5sl0C93kB1UrJzehuzG3usIQPVQBcOin8lP7DPlI0trIHF2KZyA7J2uU116fRV4imXYOyt1odj4nLJPnB7GEYmfA4LpTFoP1/kqAYpnbGvNqu6S+4QKhIhaMR88b/s5PEYMNYSVVxBFQGLs4RT6+xnMCxsohuaLB/YuPGrtr1gwptz+nObJPL4e/8TyzTXMbgeWfgl70c6OlSEO+VhHyJf5HONSAN4ioVZ+aHZMcwWf3PGMu6jmLi2e3SuXZImWzXNyHBwtdhGdA8jZj8RLqlkNm8qZioooVw9fmI+uB+04E5SVeMDvcPq8Afxrdkt9/nYiI9ijLmmW11k8zxhQdS6oU/6gEQpFfjaIcY5PeaOyO4K57ihO74T0CC9al1ZBx5Wvz/Mo731TrXJuLYuOPBaDFmc5puu33ZBV9uirQqH15Xy2J1gf0wZK0wa3FdibH8mEO9mkmJsw74SoHepBBLjD/ymSDhDJSpkmFsub9pX3RvVl0M9r8EsO6YSCSc9wD99eg24ESiM9iXeLhyAvJ/al99FOspGFUBFgxsIg24RCp/49e2M4w7mzHePCzcvhtR8xUefqm702HaSJm1Cl0X010Qo6AAAMYIBozCCAZ8CAQEwgY0wgYUxCzAJBgNVBAYTAkFVMSUwIwYDVQQKExxBdXN0cmFsaWFuIEJ1c2luZXNzIFJlZ2lzdGVyMSAwHgYDVQQLExdDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTEtMCsGA1UEAxMkVGVzdCBBdXN0cmFsaWFuIEJ1c2luZXNzIFJlZ2lzdGVyIENBAgMEB7wwDQYJYIZIAWUDBAIBBQCgaTAYBgkqhkiG9w0BCQMxCwYJKoZIhvcNAQcBMBwGCSqGSIb3DQEJBTEPFw0yMDAyMDQwNTU5MzBaMC8GCSqGSIb3DQEJBDEiBCDLdB9zB8ImqwV9/2iNqAKmkxGzFclxM97JbZQaKQubsTANBgkqhkiG9w0BAQEFAASBgKEUqCKv1btBuUVC3PcJMBDFkulVKvZP1GBR9ZIRku8s9LVnOItemvz3PdnV0dCxhDzwYR+QAXdpnYAhq45Khx/T0NlDHxICgdyFF4oXVgpz9tHJehXH8VoYZtEy5GxmgGZHQeHc9BZfzCywdnGLDHXdwIP+JEa4WwmCrzaf0e9sAAAAAAAA</cmsB64></reques
t></ns:requests>
[main] INFO au.gov.abr.akm.credential.store.DaemonThreadFactory - Creating a new Thread in ThreadGroup: main
[pool-1-thread-1] WARN au.gov.abr.akm.credential.store.ABRRequester$ABRHttpPost - Constructing the response reader
[pool-1-thread-1] WARN au.gov.abr.akm.credential.store.ABRRequester$ABRHttpPost - java.net connection timeout = 0
[pool-1-thread-1] WARN au.gov.abr.akm.credential.store.ABRRequester$ABRHttpPost - java.net read timeout = 0
[main] INFO au.gov.abr.akm.credential.store.ABRKeyStoreImpl - correct password given, resetting bad password count to zero
[main] INFO au.gov.abr.akm.credential.store.ABRKeyStoreFactory - Will attempt to load the keystore, if the keystore doesn't exist then an exception will be thrown
[main] INFO au.gov.abr.akm.credential.store.ABRKeyStoreSerializerTransporterFactory - No custom Transporter specified, using the default File Transporter.
[main] INFO au.gov.abr.akm.credential.store.ABRKeyStoreSerializerTransporterFile - A keystore file has been passed through, keystore location is that of the provided file
[pool-1-thread-1] WARN au.gov.abr.akm.credential.store.ABRRequester - ABRRequester timeout = 60000ms
[pool-1-thread-1] INFO au.gov.abr.akm.cryptoOps.CredentialRequestResponse - XML Response length -> 358
[pool-1-thread-1] INFO au.gov.abr.akm.cryptoOps.CredentialRequestResponse - Auto-renew => ***BEGIN XML RESPONSE***
<?xml version="1.0" encoding="utf-8"?><responses xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns="http://auth.sbr.gov.au/AutoRenew"><response id="ABRD:TESTDeviceID" xmlns=""><error><errorId>2106</errorId><errorMessage>Unrecognised error - 1137</errorMessage></error></response></responses>
******END XML******
[pool-1-thread-1] WARN au.gov.abr.akm.cryptoOps.CredentialRequestResponse - CredentialRequestResponse.processResponse (2106) Unrecognised error - 1137
javax.xml.ws.soap.SOAPFaultException: An error occurred when verifying security for the message.
at com.sun.xml.ws.fault.SOAP12Fault.getProtocolException(SOAP12Fault.java:225)
at com.sun.xml.ws.fault.SOAPFaultBuilder.createException(SOAPFaultBuilder.java:122)
at com.sun.xml.ws.client.dispatch.DispatchImpl.doInvoke(DispatchImpl.java:195)
at com.sun.xml.ws.client.dispatch.DispatchImpl.invoke(DispatchImpl.java:214)
at com.sun.xml.ws.security.trust.impl.TrustPluginImpl.invokeRST(TrustPluginImpl.java:624)
at com.sun.xml.ws.security.trust.impl.TrustPluginImpl.process(TrustPluginImpl.java:170)
at com.sun.xml.ws.security.trust.impl.client.STSIssuedTokenProviderImpl.getIssuedTokenContext(STSIssuedTokenProviderImpl.java:136)
at com.sun.xml.ws.security.trust.impl.client.STSIssuedTokenProviderImpl.issue(STSIssuedTokenProviderImpl.java:74)
at com.sun.xml.ws.api.security.trust.client.IssuedTokenManager.getIssuedToken(IssuedTokenManager.java:79)
at com.sun.xml.wss.jaxws.impl.SecurityClientTube.invokeTrustPlugin(SecurityClientTube.java:655)
at com.sun.xml.wss.jaxws.impl.SecurityClientTube.processClientRequestPacket(SecurityClientTube.java:264)
at com.sun.xml.wss.jaxws.impl.SecurityClientTube.processRequest(SecurityClientTube.java:233)
at com.sun.xml.ws.api.pipe.Fiber.__doRun(Fiber.java:629)
at com.sun.xml.ws.api.pipe.Fiber._doRun(Fiber.java:588)
at com.sun.xml.ws.api.pipe.Fiber.doRun(Fiber.java:573)
at com.sun.xml.ws.api.pipe.Fiber.runSync(Fiber.java:470)
at com.sun.xml.ws.client.Stub.process(Stub.java:319)
at com.sun.xml.ws.client.sei.SEIStub.doProcess(SEIStub.java:157)
at com.sun.xml.ws.client.sei.SyncMethodHandler.invoke(SyncMethodHandler.java:109)
at com.sun.xml.ws.client.sei.SyncMethodHandler.invoke(SyncMethodHandler.java:89)
at com.sun.xml.ws.client.sei.SEIStub.invoke(SEIStub.java:140)
at com.sun.proxy.$Proxy44.createUSI(Unknown Source)
at usi.gov.au.USITest.main(USITest.java:83)
Second Approach
The second approach is to call STS service v3 directly, using a wsdl2java-generated client class. This approach has already been answered here, but I couldn't understand the answer, nor was I able to achieve the result by adding signatureAlgorithm="SHA256withRSA" to sp:AlgorithmSuite:
<sp:AlgorithmSuite signatureAlgorithm="SHA256withRSA">
<wsp:Policy>
<sp:Basic256 />
</wsp:Policy>
</sp:AlgorithmSuite>
OR
<sp:AlgorithmSuite signatureAlgorithm="SHA256withRSA">
<wsp:Policy>
<sp:Basic256Sha256Rsa15 />
</wsp:Policy>
</sp:AlgorithmSuite>
Every time I get:
com.microsoft.schemas.ws._2008._06.identity.securitytokenservice.IWSTrust13SyncTrust13IssueSTSFaultFaultMessage: Could not validate the ActAs token
I can't work out which approach is right, or how to fix the WSDL or my code to update the SecurityPolicy, i.e. switch from SHA-1 to SHA-256 with RSA.
You need the updated STS 1.3 WSDL, which does not require an ActAs token as the old one did.
Here is the relevant part of my application properties:
spring.cloud.stream.rabbit.bindings.studentInput.consumer.exchange-type=direct
spring.cloud.stream.rabbit.bindings.studentInput.consumer.delayed-exchange=true
But it appears that, on the RabbitMQ admin page, x-delayed-type: direct does not show up in the Args / Features of my queue. I am referencing this Spring Cloud Stream documentation: https://docs.spring.io/spring-cloud-stream/docs/Elmhurst.RELEASE/reference/htmlsingle/
What am I doing wrong? Thanks in advance :D
I just tested it and it worked fine.
Did you enable the plugin? If not, you should see this in the log...
2018-07-09 08:52:04.173 ERROR 156 --- [ 127.0.0.1:5672] o.s.a.r.c.CachingConnectionFactory : Channel shutdown: connection error; protocol method: #method(reply-code=503, reply-text=COMMAND_INVALID - unknown exchange type 'x-delayed-message', class-id=40, method-id=10)
See the plugin documentation.
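If it is not enabled yet, it can typically be turned on with the following command on the broker (assuming the plugin archive is already installed there):
rabbitmq-plugins enable rabbitmq_delayed_message_exchange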
Another possibility is that the exchange already existed. Exchange configuration is immutable; you will see a message like this...
2018-07-09 09:04:43.202 ERROR 3309 --- [ 127.0.0.1:5672] o.s.a.r.c.CachingConnectionFactory : Channel shutdown: channel error; protocol method: #method(reply-code=406, reply-text=PRECONDITION_FAILED - inequivalent arg 'type' for exchange 'so51244078' in vhost '/': received ''x-delayed-message'' but current is 'direct', class-id=40, method-id=10)
In this case you have to delete the exchange first.
By the way, you will need a routing key too; by default the queue will be bound with the topic exchange wildcard #.
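For example, something along these lines in the application properties (the routing key value and the studentOutput producer binding name are assumptions, not taken from the question):
spring.cloud.stream.rabbit.bindings.studentInput.consumer.binding-routing-key=student
spring.cloud.stream.rabbit.bindings.studentOutput.producer.routing-key-expression='student'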