The specified queue does not exist - AWS.SimpleQueueService.NonExistentQueue with Spring-Boot - java

I'm unable to attach an @SqsListener in a Spring Boot application. It throws an AWS.SimpleQueueService.NonExistentQueue exception.
I've gone through the question: Specified queue does not exist
and as far as I know, all the configurations are correct.
@Component
public class SQSListenerImpl {

    @SqsListener(value = Constants.SQS_REQUEST_QUEUE, deletionPolicy = SqsMessageDeletionPolicy.NEVER)
    public void listen(String taskJson, Acknowledgment acknowledgment, @Headers Map<String, String> headers) throws ExecutionException, InterruptedException {
        // stuff
    }

    @PostConstruct
    private void init() {
        final AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();
        LOGGER.info("Listing all queues in your account.\n");
        for (final String queueUrl : sqs.listQueues().getQueueUrls()) {
            LOGGER.info(" QueueUrl: " + queueUrl);
        }
    }
}
application.properties
cloud.aws.stack.auto=false
cloud.aws.region.static=ap-southeast-1
logging.level.root=INFO
Logs from above code:
[requestId: MainThread] [INFO] [SQSListenerImpl] [main] QueueUrl: https://sqs.ap-southeast-1.amazonaws.com/xxxxx/hello-world
[requestId: MainThread] [INFO] [SQSListenerImpl] [main] QueueUrl: https://sqs.ap-southeast-1.amazonaws.com/xxxxx/some-name2
[requestId: MainThread] [INFO] [SQSListenerImpl] [main] QueueUrl: https://sqs.ap-southeast-1.amazonaws.com/xxxxx/some-name3
[requestId: MainThread] [WARN] [SimpleMessageListenerContainer] [main] Ignoring queue with name 'hello-world': The queue does not exist.; nested exception is com.amazonaws.services.sqs.model.QueueDoesNotExistException: The specified queue does not exist for this wsdl version. (Service: AmazonSQS; Status Code: 400; Error Code: AWS.SimpleQueueService.NonExistentQueue; Request ID: 3c0108aa-7611-528f-ac69-5eb01fasb9f3)
[requestId: MainThread] [INFO] [Http11NioProtocol] [main] Starting ProtocolHandler ["http-nio-8080"]
[requestId: MainThread] [INFO] [TomcatWebServer] [main] Tomcat started on port(s): 8080 (http) with context path ''
[requestId: MainThread] [INFO] [Startup] [main] Started Startup in 11.391 seconds (JVM running for 12.426)
The AWS credentials used are under the ~/.aws/ directory.
Now my question is: if sqs.listQueues() can see the queue, then why can't @SqsListener? Am I missing something or doing something wrong?

I tried Spring Boot with Spring Cloud AWS like you did and got the same error.
Then I used the full HTTPS URL as the queue name and got an access-denied error:
@SqsListener(value = "https://sqs.ap-southeast-1.amazonaws.com/xxxxx/hello-world")
So in the end I ended up using the AWS SDK directly to receive messages from SQS (sketch below).
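For reference, here is a minimal sketch of that kind of direct polling with the AWS SDK for Java v1; the queue URL, wait time, and batch size are placeholders rather than values from the original answer:

import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.Message;
import com.amazonaws.services.sqs.model.ReceiveMessageRequest;

// Poll the queue directly instead of relying on @SqsListener.
final AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();
final String queueUrl = "https://sqs.ap-southeast-1.amazonaws.com/xxxxx/hello-world"; // placeholder

final ReceiveMessageRequest request = new ReceiveMessageRequest(queueUrl)
        .withMaxNumberOfMessages(10)
        .withWaitTimeSeconds(20); // long polling

for (final Message message : sqs.receiveMessage(request).getMessages()) {
    // process message.getBody() here, then delete the message
    sqs.deleteMessage(queueUrl, message.getReceiptHandle());
}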

Here's what I'm doing with Spring Cloud.
Using SpEL, I attach a value from my application.properties to the @SqsListener annotation like this (a sketch of the backing bean follows below):
@SqsListener(value = "#{queueConfiguration.getQueue()}", deletionPolicy = SqsMessageDeletionPolicy.ON_SUCCESS)
One thing to note: make sure you use the full HTTPS path for the queue.
For local development I'm using localstack as a local implementation of SQS, but the same code applies when it gets deployed to ECS. The other piece to note is that the role or instance needs permission via IAM to receive messages from the queue.
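A minimal sketch of what such a queueConfiguration bean could look like; the property key and getter are assumptions for illustration, not code from the original answer:

import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

// Hypothetical bean backing the SpEL expression "#{queueConfiguration.getQueue()}".
@Component("queueConfiguration")
public class QueueConfiguration {

    // Assumed property key; it should hold the full HTTPS queue URL.
    @Value("${app.sqs.queue}")
    private String queue;

    public String getQueue() {
        return queue;
    }
}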

Using the full URL worked for me.
@SqsListener("https://sqs.ap-south-1.amazonaws.com/xxxxxxxxx/queue-name")

The code below worked for me:
@SqsListener(value = "https://sqs.us-west-1.amazonaws.com/xxxxx/queue-name")

Avro GenericRecord deserialization not working via SpringKafka

I'm trying to simplify my consumer as much as possible. The problem is that when I look at the records coming into my Kafka listener:
List<GenericRecord> incomingRecords, the values are just string values. I've tried setting specific.avro.reader to both true and false, and I've set the value deserializer as well. Am I missing something? This worked fine when I used a Java configuration class, but I want to keep everything consolidated in this application.properties file.
application.properties
spring.kafka.properties.security.protocol=SASL_SSL
spring.kafka.properties.sasl.mechanism=SCRAM-SHA-256
spring.kafka.properties.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="${SASL_ACCESS_KEY}" password="${SASL_SECRET}";
spring.kafka.consumer.auto-offset-reset=earliest
#### Consumer Properties Configuration
spring.kafka.properties.key.deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.properties.value.deserializer=io.confluent.kafka.serializers.KafkaAvroDeserializer
spring.kafka.properties.value.subject.name.strategy=io.confluent.kafka.serializers.subject.TopicRecordNameStrategy
spring.kafka.bootstrap-servers=
spring.kafka.properties.schema.registry.url=
spring.kafka.properties.specific.avro.reader=true
spring.kafka.consumer.properties.spring.json.trusted.packages=*
logging.level.org.apache.kafka=TRACE
logging.level.io.confluent.kafka.schemaregistry=TRACE
consumer
@KafkaListener(topics = "${topic}", groupId = "${group}")
public void processMessageBatch(List<GenericRecord> incomingRecords,
                                @Header(KafkaHeaders.RECEIVED_PARTITION_ID) List<Integer> partitions,
                                @Header(KafkaHeaders.RECEIVED_TOPIC) List<String> topics,
                                @Header(KafkaHeaders.OFFSET) List<Long> offsets) {
    currentMicroBatch = Stream.of(currentMicroBatch, incomingRecords).flatMap(List::stream).collect(Collectors.toList());
    if (currentMicroBatch.size() >= maxRecords || validatedElapsedDuration(durationMonitor)) {
        System.out.println("ETL processing logic will be done here");
    }
    clearBatch();
}
I notice when I use:
spring.kafka.consumer.value-deserializer=io.confluent.kafka.serializers.KafkaAvroDeserializer
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
I get the following error:
2020-12-02 17:04:42.745 DEBUG 51910 --- [ntainer#0-0-C-1] i.c.k.s.client.rest.RestService : Sending GET with input null to https://myschemaregistry.com
2020-12-02 17:04:42.852 ERROR 51910 --- [ntainer#0-0-C-1] o.s.kafka.listener.LoggingErrorHandler : Error while processing: null
org.apache.kafka.common.errors.SerializationException: Error deserializing key/value for partition my-topic-avro-32 at offset 7836. If needed, please seek past the record to continue consumption.
java.lang.IllegalArgumentException: argument "src" is null
at com.fasterxml.jackson.databind.ObjectMapper._assertNotNull(ObjectMapper.java:4735)
at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3502)
at io.confluent.kafka.schemaregistry.client.rest.RestService.sendHttpRequest(RestService.java:270)
at io.confluent.kafka.schemaregistry.client.rest.RestService.httpRequest(RestService.java:334)
at io.confluent.kafka.schemaregistry.client.rest.RestService.getId(RestService.java:573)
at io.confluent.kafka.schemaregistry.client.rest.RestService.getId(RestService.java:557)
at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.getSchemaByIdFromRegistry(CachedSchemaRegistryClient.java:149)
at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.getBySubjectAndId(CachedSchemaRegistryClient.java:230)
at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.getById(CachedSchemaRegistryClient.java:209)
at io.confluent.kafka.serializers.AbstractKafkaAvroDeserializer$DeserializationContext.schemaFromRegistry(AbstractKafkaAvroDeserializer.java:241)
at io.confluent.kafka.serializers.AbstractKafkaAvroDeserializer.deserialize(AbstractKafkaAvroDeserializer.java:102)
at io.confluent.kafka.serializers.AbstractKafkaAvroDeserializer.deserialize(AbstractKafkaAvroDeserializer.java:81)
at io.confluent.kafka.serializers.KafkaAvroDeserializer.deserialize(KafkaAvroDeserializer.java:55)
at org.apache.kafka.common.serialization.Deserializer.deserialize(Deserializer.java:60)
at org.apache.kafka.clients.consumer.internals.Fetcher.parseRecord(Fetcher.java:1268)
at org.apache.kafka.clients.consumer.internals.Fetcher.access$3600(Fetcher.java:124)
at org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.fetchRecords(Fetcher.java:1492)
at org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.access$1600(Fetcher.java:1332)
at org.apache.kafka.clients.consumer.internals.Fetcher.fetchRecords(Fetcher.java:645)
at org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:606)
at org.apache.kafka.clients.consumer.KafkaConsumer.pollForFetches(KafkaConsumer.java:1263)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1225)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1201)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doPoll(KafkaMessageListenerContainer.java:1062)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.pollAndInvoke(KafkaMessageListenerContainer.java:1018)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:949)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.lang.Thread.run(Thread.java:834)
I found the issue. Debugging deep into the Confluent REST client, I was hit with a 401 (terrible logs, by the way).
I needed to add this:
spring.kafka.properties.basic.auth.credentials.source=SASL_INHERIT
since I'm using SASL auth and needed the Schema Registry client to inherit the SASL config I added above. Fun stuff.
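Pulled together, the schema-registry-related settings from the question plus this fix look roughly like the sketch below; the URL is the placeholder from the logs above, not a real endpoint:

# Sketch: Avro deserialization against a SASL-protected Schema Registry.
spring.kafka.properties.schema.registry.url=https://myschemaregistry.com
spring.kafka.properties.value.deserializer=io.confluent.kafka.serializers.KafkaAvroDeserializer
spring.kafka.properties.specific.avro.reader=true
# Let the Schema Registry client reuse the Kafka SASL credentials:
spring.kafka.properties.basic.auth.credentials.source=SASL_INHERIT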
I had the same issue.
For me, I just needed to change the protocol of the schema-registry URL from http:// to https:// and it worked.
@Ryan's answer gave me a clue.

An error occurred when verifying security for the message - SAML Token

A few years back we developed a client to consume usi-ws v2; this web service uses STS service v2. It was working fine.
But now the web service has been updated to usi-ws v3, which in turn uses STS service v3.
The key differences are:
1) usi-ws v3 uses <sp:Basic256Sha256Rsa15/> as its AlgorithmSuite policy, which matches STS service v3's AlgorithmSuite policy.
2) usi-ws v3 uses STS service v3 instead of STS service v2.
I can integrate the change using two different approaches.
First Approach
I used Apache CXF's wsdl2java on usi-ws v3 to generate the client code. Below is sample endpoint setup code:
private static void SetupRequestContext(IUSIService endpoint, X509Certificate certificate, PrivateKey privateKey) {
    Map<String, Object> requestContext = ((BindingProvider) endpoint).getRequestContext();
    requestContext.put(XWSSConstants.CERTIFICATE_PROPERTY, certificate);
    requestContext.put(XWSSConstants.PRIVATEKEY_PROPERTY, privateKey);
    requestContext.put(STSIssuedTokenConfiguration.STS_ENDPOINT, "https://thirdparty.authentication.business.gov.au/R3.0/vanguard/S007v1.3/Service.svc");
    requestContext.put(STSIssuedTokenConfiguration.STS_NAMESPACE, "http://schemas.microsoft.com/ws/2008/06/identity/securitytokenservice");
    requestContext.put(STSIssuedTokenConfiguration.STS_WSDL_LOCATION, "https://thirdparty.authentication.business.gov.au/R3.0/vanguard/S007v1.3/Service.svc");
    requestContext.put(STSIssuedTokenConfiguration.STS_SERVICE_NAME, "SecurityTokenService");
    requestContext.put(STSIssuedTokenConfiguration.LIFE_TIME, 30);
    requestContext.put(STSIssuedTokenConfiguration.STS_PORT_NAME, "S007SecurityTokenServiceEndpoint");
    requestContext.put(BindingProviderProperties.REQUEST_TIMEOUT, REQUEST_TIMEOUT);
    requestContext.put(BindingProviderProperties.CONNECT_TIMEOUT, CONNECT_TIMEOUT);
}
After configuring the request context, I try to create a USI:
endpoint.createUSI(createUsiRequest);
It throws the error below (logs):
... (preceding log lines removed)
[main] WARN au.gov.abr.akm.credential.store.ABRRequester$ABRHttpPost - XML request is => <ns:requests xmlns:ns="http://auth.sbr.gov.au/AutoRenew"><request id="ABRD:TESTDeviceID" credentialType="D"><cmsB64>MIAGCSqGSIb3DQEHAqCAMIACAQExDzANBglghkgBZQMEAgEFADCABgkqhkiG9w0BBwGggCSABIIBqDCCAaQwggENAgEAMDoxFTATBgNVBAMMDFRlc3REZXZpY2UwMzEUMBIGA1UECgwLMTIzMDAwMDAwNTkxCzAJBgNVBAYTAkFVMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCjFrK/biRUDUpRBEmJtV1XAM+sNP0NMwydRdv4NntG0x/JHZaOoJJpFrTXvB0gAIIHhHlhfkpLkSLQmz8iMYKnPgaqG+g/quSM6VYKQcCMr0UwS9b37NdzOhpf8n6JRWkTIFWznUz8WxiASCLuj5VmRiacHlrtJul/Gj89zbDJtwIDAQABoCowKAYJKoZIhvcNAQkOMRswGTAXBgYqJAGCTQEEDRYLMTIzMDAwMDAwNTkwDQYJKoZIhvcNAQEFBQADgYEAmUIkEDpCtZJbCZ04DfVxMgsjZfIEsF3yh+VWlCO/6jJcdcJKKjY0xbJDxzdh8xhbq2RzBKnP5th4p/yzBGN8Wafvr/2mQVNC9LG/3IGsawZLGMqUjeL0aIwDEmYBJWt0wm1ntKUF5DiuZJgcIgjFIfHWBq0WB2bU8SroO5O07coAAAAAAACggDCCBB0wggMFoAMCAQICAwQHvDANBgkqhkiG9w0BAQsFADCBhTELMAkGA1UEBhMCQVUxJTAjBgNVBAoTHEF1c3RyYWxpYW4gQnVzaW5lc3MgUmVnaXN0ZXIxIDAeBgNVBAsTF0NlcnRpZmljYXRpb24gQXV0aG9yaXR5MS0wKwYDVQQDEyRUZXN0IEF1c3RyYWxpYW4gQnVzaW5lc3MgUmVnaXN0ZXIgQ0EwHhcNMTgxMTI4MDQyMDMwWhcNMjAwMzI5MDQyMDMwWjA6MQswCQYDVQQGEwJBVTEUMBIGA1UEChMLMTIzMDAwMDAwNTkxFTATBgNVBAMTDFRlc3REZXZpY2UwMzCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEA866tnV+RQ5v09wEPTTeA8C679zsZhgWhc4RjqtwvB73ZN7+g9NjJ1KujZUxXB5RbBdLQQ9GFPBx1DYifjDIN2Z1vMObqhT/QqUwz0sy8y6xQh6ukJjlyr4r9CHCQOBuHCe8rB3DzirMZsAv+qjT8lCOfdK9lq++IlsglpYkSAO8CAwEAAaOCAWIwggFeMAwGA1UdEwEB/wQCMAAwgeQGA1UdIASB3DCB2TCB1gYJKiQBlzllAQgBMIHIMIGmBggrBgEFBQcCAjCBmRqBllVzZSB0aGlzIGNlcnRpZmljYXRlIG9ubHkgZm9yIHRoZSBwdXJwb3NlIHBlcm1pdHRlZCBpbiB0aGUgYXBwbGljYWJsZSBDZXJ0aWZpY2F0ZSBQb2xpY3kuIExpbWl0ZWQgbGlhYmlsaXR5IGFwcGxpZXMgLSByZWZlciB0byB0aGUgQ2VydGlmaWNhdGUgUG9saWN5LjAdBggrBgEFBQcCARYRd3d3LnRlc3RhYnJjYS5jb20wFwYGKiQBgk0BBA0WCzEyMzAwMDAwMDU5MA4GA1UdDwEB/wQEAwIE8DAfBgNVHSMEGDAWgBSJfa5qeCJphOwHaVTGwPRjl+HPTjAdBgNVHQ4EFgQU17H18nWNxfR8MnD6gVtz8f91bu4wDQYJKoZIhvcNAQELBQADggEBADFiv5BD06bmEwkvr8cKF0MDET9+kUCPz2Kka5YuEfy8gIITz6ET2upJRLlt9BKOFpyrevCfEdoSd1Tbsz9czm6Vn/fDhQZ25HfKZgDLxQU8zqrMkc2rNyxXrJIWT1LNaVtNmUN5KMcHRjHXQcN6Qou5GkjsmPk/wuzcp0K7F2DI1pvjbr7r2TE1xiaO1l4sD+6JpPugqidPT+/41ADdmcbKwWH1p0HjPR1/XoIiR/qcQWL0TWBozZsiJq7Ad4xI2mm/8AS6wjGMkwckDH2wpROfiZkcfKavDOf2/wJaWG+RBCL2B2LNYAltG30LNwno4R/J7LfGauoOSPmkd3Tdc00wggXdMIIDxaADAgECAgECMA0GCSqGSIb3DQEBCwUAMIGKMQswCQYDVQQGEwJBVTElMCMGA1UEChMcQXVzdHJhbGlhbiBCdXNpbmVzcyBSZWdpc3RlcjEgMB4GA1UECxMXQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkxMjAwBgNVBAMTKVRlc3QgQXVzdHJhbGlhbiBCdXNpbmVzcyBSZWdpc3RlciBSb290IENBMB4XDTEwMDMyMDAwMDAwMFoXDTIwMDMyMDAwMDAwMFowgYUxCzAJBgNVBAYTAkFVMSUwIwYDVQQKExxBdXN0cmFsaWFuIEJ1c2luZXfghfghfFDN0cmFsaWFuIEJ1c2luZXNzIFJlZ2lzdGVyIENBMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAxGJ5qZfrXMMxOq24M8K13oLHegF1C0fN2j1q2RotpIIkGCGszJtpV8n/XAOM6pVm9jp5Pc4+v3No1mtdj/dVP1nMP9xxGuDrd/gJUddZnhRGQVeXto9pB03bioWLmsszoG8e2OvTf4AnBum0ukHRqTFuNJ1qQu4HXUjahxg66ArdMVVRDbS4fHO4hwoPAob5gyHFP5NoiJjBZTWcmmF3gC6AYIkx64NLZMxNFImGqJvc1G1zBxKU4a79fiz4kM779N/pzdAjafxu7vpaC/N5xjx6uI+sV8bAucLgiCuGCfQIPeoTwoSlQQn65WxFYAx3m3KfiTN+PzQQniViWRI5OQIDAQABo4IBTzCCAUswEgYDVR0TAQH/BAgwBgEB/wIBADCB5AYDVR0gBIHcMIHZMIHWBgkqJAGXOWUBAQEwgcgwgaYGCCsGAQUFBwICMIGZGoGWVXNlIHRoaXMgY2VydGlmaWNhdGUgb25seSBmb3IgdGhlIHB1cnBvc2UgcGVybWl0dGVkIGluIHRoZSBhcHBsaWNhYmxlIENlcnRpZmljYXRlIFBvbGljeS4gTGltaXRlZCBsaWFiaWxpdHkgYXBwbGllcyAtIHJlZmVyIHRvIHRoZSBDZXJ0aWZpY2F0ZSBQb2xpY3kuMB0GCCsGAQUFBwIBFhF3d3cudGVzdGFicmNhLmNvbTAOBgNVHQ8BAf8EBAMCAcYwHwYDVR0jBBgwFoAUaoz51J3tdoVnf3kQz50VsOUivyIwHQYDVR0OBBYEFIl9rmp4ImmE7AdpVMbA9GOX4c9OMA0GCSqGSIb3DQEBCwUAA4ICAQCjCpFDZXLAuhgMZPMCl9goYzAPrReIal90oKEh1
WrQn7iZrampLL00fL5EUlb9kiaVKo6MfYEot6T2Zu3GsIMMHnfKDBAAMYEUH7XDutwJChmm9eVX5p7sRSxON+Ldah7MDlF4kjPzDIa/QsUFuXyJZicGrMlQEu5qiMdXo+z4Dtq/R+O8pEuyzLv1tIcbufDk0V/ofz0VUuUEntwigsyputtes9OouikEvERzLLif+y4nOducyAaIXSVMFEqREafT6eC05k/A2K2RrTowMb1NKKybUjW89Wvbj2z/O5h1WP8s3U+A5sPtOEJYBU+zM1+lxz3NinRceIAKkyBjOPsX5Zh+ao4fAN3Vhyl3tIGlc3o+bWvOl7AUMGWv+gOAIexaAHYIaK7nX9qhZqkNqGOqBVtG9Hxr0WUXMLKMjMSpCUvYWZAb/ReCt5mISw6ZOxLPUxY/jBRE8MzoLNqAEH+dHiSVuyLy0y3dFiCUkKZ1yUJWy+mytmvS8FxLQ6Dl3CRxoQhms6dRNg5WIk7rtdHHNPwoWd2Ew3JdhDO4YiwtkVxcwzajhlNWlum5sUUJqSlajxlBdtE7mkuOvBcvqv7fyzuHStGwXDy0F0S9ZeSQB5Q45K23L3Z1v9CygiBocAUGdtBHxZGmikbymewqiX6gdQhqc9I0a2Y6bd6xUPMyuzCCBuMwggTLoAMCAQICAQEwDQYJKoZIhvcNAQELBQAwgYoxCzAJBgNVBAYTAkFVMSUwIwYDVQQKExxBdXN0cmFsaWFuIEJ1c2ludferJlZ2lzdGVyMSAwHgYDVQQLExdDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTEyMDAGA1UEAxMpVGVzdCBBdXN0cmFsaWFuIEJ1c2luZXNzIFJlZ2lzdGVyIFJvb3QgQ0EwHhcNMTAwMzIwMDAwMDAwWhcNMzAwMzIwMDAwMDAwWjCBijELMAkGA1UEBhMCQVUxJTAjBgNVBAoTHEF1c3RyYWxpYW4gQnVzaW5lc3MgUmVnaXN0ZXIxIDAeBgNVBAsTF0NlcnRpZmljYXRpb24gQXV0aG9yaXR5MTIwMAYDVQQDEylUZXN0IEF1c3RyYWxpYW4gQnVzaW5lc3MgUmVnaXN0ZXIgUm9vdCBDQTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAKViU48+OChS9P2MsQlroR+xTHQlut/q6R5r1yEzsXjBBSiy9HYOnuO31cCqoZWw87QGfS2A4ZfaoVMcj0Mu89+HKeuqBeYuGLr5oc1xU+fZDft/BbvN1BHLJsD7srmPivC4MDoKDvCHXX4ayHEDuCCKJ8ywguKT/kaFzzDTLcl0rL7ayB2XHdL7eWiBAwbB/YCe/3vUe+g1kDsEeY7OatU1l5VZkStomOr/vD1O+MlYMkn7LtmJ89NhL2ZwaHp1twYN7g7FpapOQTT9Uw1JlxkA1d2h6XU8VsXxqmriSV7kyJNLeUpKngZO8XmjbW4FIYLu6tHs1Pv0viUsfP9GLlI3IkbXOptyfToKPMH3bJXGvgYGzQWK2P3MsspRXfWpMajoFi/WN4EuApf/j0iRKC1tGk4UXfHfVHMSJlTbQUQt4UAyDHgLqGVgA7rpWJJHux1SUE0lYpxufMuDD7CQdELI2VTFjxjaDLzLuNgLqM9DP4fc3/4QxTiYQacUA0DwZYk6tLRgbGPUB6VTO699THa8OeoBlmR/zk5LWDDf3CLVRJxbm4ylBcor/PQ8DmqpFlrGubHkmZaEo/nm9GhCvhn97uEcZ+uHGal8xYfUe5/k4e7nDLYBK2lF7hQA5KLkWhDG+z8b0+RBHF7KvVN3LjapAHEF2V1a/Q1AgMBAAGjggFQMIIBTDASBgNVHRMBAf8ECDAGAQH/AgECMIHlBgNVHSAEgd0wgdowgdcGCCokAZc5ZQEBMIHKMIGmBggrBgEFBQcCAjCBmRqBllVzZSB0aGlzIGNlcnRpZmljYXRlIG9ubHkgZm9yIHRoZSBwdXJwb3NlIHBlcm1pdHRlZCBpbiB0bGljYWJsZSBDZXJ0aWZpY2F0ZSBQb2xpY3kuIExpbWl0ZWQgbGlhYmlsaXR5IGFwcGxpZXMgLSByZWZlciB0byB0aGUgQ2VydGlmaWNhdGUgUG9saWN5LjAfBggrBgEFBQcCARYTd3d3LnRlc3RzYnJyb290LmNvbTAOBgNVHQ8BAf8EBAMCAcYwHQYDVR0OBBYEFGqM+dSd7XaFZ395EM+dFbDlIr8iMB8GA1UdIwQYMBaAFGqM+dSd7XaFZ395EM+dFbDlIr8iMA0GCSqGSIb3DQEBCwUAA4ICAQAo6w3fIgwhYJXhA7RxyXTJvmtglTwIY9xUabR7GxvivITy07VSiCSti/pMaNFx5sl0C93kB1UrJzehuzG3usIQPVQBcOin8lP7DPlI0trIHF2KZyA7J2uU116fRV4imXYOyt1odj4nLJPnB7GEYmfA4LpTFoP1/kqAYpnbGvNqu6S+4QKhIhaMR88b/s5PEYMNYSVVxBFQGLs4RT6+xnMCxsohuaLB/YuPGrtr1gwptz+nObJPL4e/8TyzTXMbgeWfgl70c6OlSEO+VhHyJf5HONSAN4ioVZ+aHZMcwWf3PGMu6jmLi2e3SuXZImWzXNyHBwtdhGdA8jZj8RLqlkNm8qZioooVw9fmI+uB+04E5SVeMDvcPq8Afxrdkt9/nYiI9ijLmmW11k8zxhQdS6oU/6gEQpFfjaIcY5PeaOyO4K57ihO74T0CC9al1ZBx5Wvz/Mo731TrXJuLYuOPBaDFmc5puu33ZBV9uirQqH15Xy2J1gf0wZK0wa3FdibH8mEO9mkmJsw74SoHepBBLjD/ymSDhDJSpkmFsub9pX3RvVl0M9r8EsO6YSCSc9wD99eg24ESiM9iXeLhyAvJ/al99FOspGFUBFgxsIg24RCp/49e2M4w7mzHePCzcvhtR8xUefqm702HaSJm1Cl0X010Qo6AAAMYIBozCCAZ8CAQEwgY0wgYUxCzAJBgNVBAYTAkFVMSUwIwYDVQQKExxBdXN0cmFsaWFuIEJ1c2luZXNzIFJlZ2lzdGVyMSAwHgYDVQQLExdDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTEtMCsGA1UEAxMkVGVzdCBBdXN0cmFsaWFuIEJ1c2luZXNzIFJlZ2lzdGVyIENBAgMEB7wwDQYJYIZIAWUDBAIBBQCgaTAYBgkqhkiG9w0BCQMxCwYJKoZIhvcNAQcBMBwGCSqGSIb3DQEJBTEPFw0yMDAyMDQwNTU5MzBaMC8GCSqGSIb3DQEJBDEiBCDLdB9zB8ImqwV9/2iNqAKmkxGzFclxM97JbZQaKQubsTANBgkqhkiG9w0BAQEFAASBgKEUqCKv1btBuUVC3PcJMBDFkulVKvZP1GBR9ZIRku8s9LVnOItemvz3PdnV0dCxhDzwYR+QAXdpnYAhq45Khx/T0NlDHxICgdyFF4oXVgpz9tHJehXH8VoYZtEy5GxmgGZHQeHc9BZfzCywdnGLDHXdwIP+JEa4WwmCrzaf0e9sAAAAAAAA</cmsB64></reques
t></ns:requests>
[main] INFO au.gov.abr.akm.credential.store.DaemonThreadFactory - Creating a new Thread in ThreadGroup: main
[pool-1-thread-1] WARN au.gov.abr.akm.credential.store.ABRRequester$ABRHttpPost - Constructing the response reader
[pool-1-thread-1] WARN au.gov.abr.akm.credential.store.ABRRequester$ABRHttpPost - java.net connection timeout = 0
[pool-1-thread-1] WARN au.gov.abr.akm.credential.store.ABRRequester$ABRHttpPost - java.net read timeout = 0
[main] INFO au.gov.abr.akm.credential.store.ABRKeyStoreImpl - correct password given, resetting bad password count to zero
[main] INFO au.gov.abr.akm.credential.store.ABRKeyStoreFactory - Will attempt to load the keystore, if the keystore doesn't exist then an exception will be thrown
[main] INFO au.gov.abr.akm.credential.store.ABRKeyStoreSerializerTransporterFactory - No custom Transporter specified, using the default File Transporter.
[main] INFO au.gov.abr.akm.credential.store.ABRKeyStoreSerializerTransporterFile - A keystore file has been passed through, keystore location is that of the provided file
[pool-1-thread-1] WARN au.gov.abr.akm.credential.store.ABRRequester - ABRRequester timeout = 60000ms
[pool-1-thread-1] INFO au.gov.abr.akm.cryptoOps.CredentialRequestResponse - XML Response length -> 358
[pool-1-thread-1] INFO au.gov.abr.akm.cryptoOps.CredentialRequestResponse - Auto-renew => ***BEGIN XML RESPONSE***
<?xml version="1.0" encoding="utf-8"?><responses xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns="http://auth.sbr.gov.au/AutoRenew"><response id="ABRD:TESTDeviceID" xmlns=""><error><errorId>2106</errorId><errorMessage>Unrecognised error - 1137</errorMessage></error></response></responses>
******END XML******
[pool-1-thread-1] WARN au.gov.abr.akm.cryptoOps.CredentialRequestResponse - CredentialRequestResponse.processResponse (2106) Unrecognised error - 1137
javax.xml.ws.soap.SOAPFaultException: An error occurred when verifying security for the message.
at com.sun.xml.ws.fault.SOAP12Fault.getProtocolException(SOAP12Fault.java:225)
at com.sun.xml.ws.fault.SOAPFaultBuilder.createException(SOAPFaultBuilder.java:122)
at com.sun.xml.ws.client.dispatch.DispatchImpl.doInvoke(DispatchImpl.java:195)
at com.sun.xml.ws.client.dispatch.DispatchImpl.invoke(DispatchImpl.java:214)
at com.sun.xml.ws.security.trust.impl.TrustPluginImpl.invokeRST(TrustPluginImpl.java:624)
at com.sun.xml.ws.security.trust.impl.TrustPluginImpl.process(TrustPluginImpl.java:170)
at com.sun.xml.ws.security.trust.impl.client.STSIssuedTokenProviderImpl.getIssuedTokenContext(STSIssuedTokenProviderImpl.java:136)
at com.sun.xml.ws.security.trust.impl.client.STSIssuedTokenProviderImpl.issue(STSIssuedTokenProviderImpl.java:74)
at com.sun.xml.ws.api.security.trust.client.IssuedTokenManager.getIssuedToken(IssuedTokenManager.java:79)
at com.sun.xml.wss.jaxws.impl.SecurityClientTube.invokeTrustPlugin(SecurityClientTube.java:655)
at com.sun.xml.wss.jaxws.impl.SecurityClientTube.processClientRequestPacket(SecurityClientTube.java:264)
at com.sun.xml.wss.jaxws.impl.SecurityClientTube.processRequest(SecurityClientTube.java:233)
at com.sun.xml.ws.api.pipe.Fiber.__doRun(Fiber.java:629)
at com.sun.xml.ws.api.pipe.Fiber._doRun(Fiber.java:588)
at com.sun.xml.ws.api.pipe.Fiber.doRun(Fiber.java:573)
at com.sun.xml.ws.api.pipe.Fiber.runSync(Fiber.java:470)
at com.sun.xml.ws.client.Stub.process(Stub.java:319)
at com.sun.xml.ws.client.sei.SEIStub.doProcess(SEIStub.java:157)
at com.sun.xml.ws.client.sei.SyncMethodHandler.invoke(SyncMethodHandler.java:109)
at com.sun.xml.ws.client.sei.SyncMethodHandler.invoke(SyncMethodHandler.java:89)
at com.sun.xml.ws.client.sei.SEIStub.invoke(SEIStub.java:140)
at com.sun.proxy.$Proxy44.createUSI(Unknown Source)
at usi.gov.au.USITest.main(USITest.java:83)
Second Approach
The second approach is to call STS service v3 directly via a wsdl2java-generated client class. This approach has already been answered here, but I couldn't understand the answer, nor was I able to achieve the result by adding signatureAlgorithm="SHA256withRSA" to sp:AlgorithmSuite:
<sp:AlgorithmSuite signatureAlgorithm="SHA256withRSA">
<wsp:Policy>
<sp:Basic256 />
</wsp:Policy>
</sp:AlgorithmSuite>
OR
<sp:AlgorithmSuite signatureAlgorithm="SHA256withRSA">
<wsp:Policy>
<sp:Basic256Sha256Rsa15 />
</wsp:Policy>
</sp:AlgorithmSuite>
Every time I get:
com.microsoft.schemas.ws._2008._06.identity.securitytokenservice.IWSTrust13SyncTrust13IssueSTSFaultFaultMessage: Could not validate the ActAs token
I can't work out which approach is right, or how to fix the WSDL or my code to update the SecurityPolicy, i.e. switch from SHA-1 to SHA-256 with RSA.
You need the updated STS 1.3 WSDL, which does not require an ActAs token as the old one did.

How to add feature "x-delayed-type: direct" in a queue in RabbitMQ using Spring Cloud Stream?

Here is the relevant part of my application properties:
spring.cloud.stream.rabbit.bindings.studentInput.consumer.exchange-type=direct
spring.cloud.stream.rabbit.bindings.studentInput.consumer.delayed-exchange=true
But it appears that on the RabbitMQ admin page, x-delayed-type: direct is not shown under the arguments/features of my queue. I am referring to this Spring Cloud Stream documentation: https://docs.spring.io/spring-cloud-stream/docs/Elmhurst.RELEASE/reference/htmlsingle/
What am I doing wrong? Thanks in advance :D
I just tested it and it worked fine.
Did you enable the plugin? If not, you should see this in the log...
2018-07-09 08:52:04.173 ERROR 156 --- [ 127.0.0.1:5672] o.s.a.r.c.CachingConnectionFactory : Channel shutdown: connection error; protocol method: #method(reply-code=503, reply-text=COMMAND_INVALID - unknown exchange type 'x-delayed-message', class-id=40, method-id=10)
See the plugin documentation.
Another possibility is that the exchange already existed. Exchange configuration is immutable; you will see a message like this...
2018-07-09 09:04:43.202 ERROR 3309 --- [ 127.0.0.1:5672] o.s.a.r.c.CachingConnectionFactory : Channel shutdown: channel error; protocol method: #method(reply-code=406, reply-text=PRECONDITION_FAILED - inequivalent arg 'type' for exchange 'so51244078' in vhost '/': received ''x-delayed-message'' but current is 'direct', class-id=40, method-id=10)
In this case you have to delete the exchange first.
By the way, you will need a routing key too; by default the queue will be bound with the topic exchange wildcard #.
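As a sketch of what the binder configuration might look like with an explicit routing key added (the binding name studentInput comes from the question; the routing key value is a placeholder):

spring.cloud.stream.rabbit.bindings.studentInput.consumer.exchange-type=direct
spring.cloud.stream.rabbit.bindings.studentInput.consumer.delayed-exchange=true
# Bind with an explicit routing key instead of the default '#':
spring.cloud.stream.rabbit.bindings.studentInput.consumer.binding-routing-key=student.created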

Shutting down Apache ActiveMQ

I'm running a Spring Boot (1.3.5) console application with an embedded ActiveMQ server (5.10.0), which works just fine for receiving messages. However, I'm having trouble shutting down the application without exceptions.
This exception is thrown once for each queue, after hitting Ctrl-C:
2016-09-21 15:46:36.561 ERROR 18275 --- [update]] o.apache.activemq.broker.BrokerService : Failed to start Apache ActiveMQ ([my-mq-server, null], {})
java.lang.IllegalStateException: Shutdown in progress
at java.lang.ApplicationShutdownHooks.add(ApplicationShutdownHooks.java:66)
at java.lang.Runtime.addShutdownHook(Runtime.java:211)
at org.apache.activemq.broker.BrokerService.addShutdownHook(BrokerService.java:2446)
at org.apache.activemq.broker.BrokerService.doStartBroker(BrokerService.java:693)
at org.apache.activemq.broker.BrokerService.startBroker(BrokerService.java:684)
at org.apache.activemq.broker.BrokerService.start(BrokerService.java:605)
at org.apache.activemq.transport.vm.VMTransportFactory.doCompositeConnect(VMTransportFactory.java:127)
at org.apache.activemq.transport.vm.VMTransportFactory.doConnect(VMTransportFactory.java:56)
at org.apache.activemq.transport.TransportFactory.connect(TransportFactory.java:65)
at org.apache.activemq.ActiveMQConnectionFactory.createTransport(ActiveMQConnectionFactory.java:314)
at org.apache.activemq.ActiveMQConnectionFactory.createActiveMQConnection(ActiveMQConnectionFactory.java:329)
at org.apache.activemq.ActiveMQConnectionFactory.createActiveMQConnection(ActiveMQConnectionFactory.java:302)
at org.apache.activemq.ActiveMQConnectionFactory.createConnection(ActiveMQConnectionFactory.java:242)
at org.apache.activemq.jms.pool.PooledConnectionFactory.createConnection(PooledConnectionFactory.java:283)
at org.apache.activemq.jms.pool.PooledConnectionFactory$1.makeObject(PooledConnectionFactory.java:96)
at org.apache.activemq.jms.pool.PooledConnectionFactory$1.makeObject(PooledConnectionFactory.java:93)
at org.apache.commons.pool2.impl.GenericKeyedObjectPool.create(GenericKeyedObjectPool.java:1041)
at org.apache.commons.pool2.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:357)
at org.apache.commons.pool2.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:279)
at org.apache.activemq.jms.pool.PooledConnectionFactory.createConnection(PooledConnectionFactory.java:243)
at org.apache.activemq.jms.pool.PooledConnectionFactory.createConnection(PooledConnectionFactory.java:212)
at org.springframework.jms.support.JmsAccessor.createConnection(JmsAccessor.java:180)
at org.springframework.jms.listener.AbstractJmsListeningContainer.createSharedConnection(AbstractJmsListeningContainer.java:413)
at org.springframework.jms.listener.AbstractJmsListeningContainer.refreshSharedConnection(AbstractJmsListeningContainer.java:398)
at org.springframework.jms.listener.DefaultMessageListenerContainer.refreshConnectionUntilSuccessful(DefaultMessageListenerContainer.java:925)
at org.springframework.jms.listener.DefaultMessageListenerContainer.recoverAfterListenerSetupFailure(DefaultMessageListenerContainer.java:899)
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.run(DefaultMessageListenerContainer.java:1075)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
2016-09-21 15:46:36.564 INFO 18275 --- [update]] o.apache.activemq.broker.BrokerService : Apache ActiveMQ 5.12.3 (my-mq-server, null) is shutting down
It seems as if the DefaultMessageListenerContainer tries to start an ActiveMQ server, which doesn't make sense to me. I've set the phase of the BrokerService to Integer.MAX_VALUE - 1 and the phase of the DefaultJmsListenerContainerFactory to Integer.MAX_VALUE so that the listener containers are stopped before the ActiveMQ server.
I have this in my main():
public static void main(String[] args) {
final ConfigurableApplicationContext context = SpringApplication.run(SiteServer.class, args);
context.registerShutdownHook();
}
I've tried setting daemon to true as suggested here: Properly Shutting Down ActiveMQ and Spring DefaultMessageListenerContainer.
Any ideas? Thanks! =)
Found it. This problem occurs when the Camel context is shut down after the BrokerService. Adding proper lifecycle management, so that Camel is shut down before the broker, resolved the issue. Now everything shuts down cleanly without errors.
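One way to express that ordering in plain Spring, as a sketch only (the bean names and wiring here are assumptions for illustration, not the poster's actual configuration): make the Camel context bean depend on the broker bean, since Spring destroys dependent beans before the beans they depend on.

import org.apache.activemq.broker.BrokerService;
import org.apache.camel.CamelContext;
import org.apache.camel.impl.DefaultCamelContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.DependsOn;

@Configuration
public class ShutdownOrderConfig {

    // Embedded broker, started on context startup and stopped on shutdown.
    @Bean(initMethod = "start", destroyMethod = "stop")
    public BrokerService brokerService() throws Exception {
        BrokerService broker = new BrokerService();
        broker.setBrokerName("my-mq-server");
        broker.setPersistent(false);
        return broker;
    }

    // @DependsOn makes Spring create this bean after the broker and destroy it
    // before the broker, so Camel stops while the broker is still running.
    @Bean(initMethod = "start", destroyMethod = "stop")
    @DependsOn("brokerService")
    public CamelContext camelContext() {
        return new DefaultCamelContext();
    }
}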

connecting to mongo shard via java driver 3.2.2

I am trying to connect to the mongo query router in a test environment (I set up just one query router for testing, pointing at a single config server (instead of three), which in turn points to a two-node shard with no replicas). I can insert/fetch documents using the mongo shell (and have verified that the documents are going to the sharded nodes). However, when I try to test the connection to the mongo database, I get the output copied below (the code being used is also copied underneath). I am using mongo database v3.2.0 and java driver v3.2.2 (I am trying to use the async API).
[info] 14:34:44.562 227 [main] MongoAuthentication INFO - testing 1
[info] 14:34:44.595 260 [main] cluster INFO - Cluster created with settings {hosts=[192.168.0.1:27018], mode=MULTIPLE, requiredClusterType=SHARDED, serverSelectionTimeout='30000 ms', maxWaitQueueSize=30}
[info] 14:34:44.595 260 [main] cluster INFO - Adding discovered server 192.168.0.1:27018 to client view of cluster
[info] 14:34:44.652 317 [main] cluster DEBUG - Updating cluster description to {type=SHARDED, servers=[{address=192.168.0.1:27018, type=UNKNOWN, state=CONNECTING}]
[info] Outputting database names:
[info] 14:34:44.660 325 [main] cluster INFO - No server chosen by ReadPreferenceServerSelector{readPreference=primary} from cluster description ClusterDescription{type=SHARDED, connectionMode=MULTIPLE, all=[ServerDescription{address=192.168.0.1:27018, type=UNKNOWN, state=CONNECTING}]}. Waiting for 30000 ms before timing out
[info] Counting the number of documents
[info] 14:34:44.667 332 [main] cluster INFO - No server chosen by ReadPreferenceServerSelector{readPreference=primaryPreferred} from cluster description ClusterDescription{type=SHARDED, connectionMode=MULTIPLE, all=[ServerDescription{address=192.168.0.1:27018, type=UNKNOWN, state=CONNECTING}]}. Waiting for 30000 ms before timing out
[info] - Count result: 0
[info] 14:34:45.669 1334 [cluster-ClusterId{value='577414c420055e5bc086c255', description='null'}-192.168.0.1:27018] connection DEBUG - Closing connection connectionId{localValue:1}
Part of the code being used:
final MongoClient mongoClient = MongoClientAccessor.INSTANCE.getMongoClientInstance();
final CountDownLatch listDbsLatch = new CountDownLatch(1);
System.out.println("Outputting database names:");
mongoClient.listDatabaseNames().forEach(new Block<String>() {
    @Override
    public void apply(final String name) {
        System.out.println(" - " + name);
    }
}, new SingleResultCallback<Void>() {
    @Override
    public void onResult(final Void result, final Throwable t) {
        listDbsLatch.countDown();
    }
});
The enum being used is responsible for reading config options and passing a MongoClient reference to its caller. The enum itself calls other classes which I can copy as well if needed. I have the following option configured for ReadPreference:
mongo.client.readPreference=PRIMARYPREFERRED
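For context, a minimal sketch of what such an enum-based holder could look like with the async driver; the host, port, and settings below are assumptions taken from the log output above, not the actual MongoClientAccessor code:

import java.util.Collections;

import com.mongodb.ReadPreference;
import com.mongodb.ServerAddress;
import com.mongodb.async.client.MongoClient;
import com.mongodb.async.client.MongoClientSettings;
import com.mongodb.async.client.MongoClients;
import com.mongodb.connection.ClusterSettings;
import com.mongodb.connection.ClusterType;

// Hypothetical enum singleton handing out a single async MongoClient.
public enum MongoClientAccessor {
    INSTANCE;

    private final MongoClient client;

    MongoClientAccessor() {
        ClusterSettings clusterSettings = ClusterSettings.builder()
                .hosts(Collections.singletonList(new ServerAddress("192.168.0.1", 27018))) // the mongos
                .requiredClusterType(ClusterType.SHARDED)
                .build();
        client = MongoClients.create(MongoClientSettings.builder()
                .clusterSettings(clusterSettings)
                .readPreference(ReadPreference.primaryPreferred())
                .build());
    }

    public MongoClient getMongoClientInstance() {
        return client;
    }
}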
Any thoughts on what I might be doing wrong or might have misinterpreted? The goal is to connect to the shard via the mongos (query router) so that I can insert/fetch documents in the Mongo shard.
The mongo shard setup (query router, config servers, and shards with replica sets) was not correctly configured. Ensure that the config server replica set is launched first, that mongos (the query router) is then brought up pointing at those config servers, that the shards are brought up next and registered through the query router (mongos), and finally that the collection is enabled for sharding (see the sketch below). Obviously, also make sure that the driver is connecting to the mongos (query router) process.
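For illustration, a hedged mongo shell sketch of those registration steps, run against the mongos; the host names, database, and shard key are placeholders, not values from the original post:

// Run in the mongo shell connected to the mongos (query router).
sh.addShard("shardhost1:27018")                      // register each shard (or "rsName/host:port" for a replica set)
sh.addShard("shardhost2:27018")
sh.enableSharding("testdb")                          // enable sharding for the database
sh.shardCollection("testdb.documents", { _id: 1 })   // shard the collection on a key
sh.status()                                          // verify shards and sharded collections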
