Hibernate Search with AWS OpenSearch: 403 Forbidden after 10 minutes - java

The past few weeks I've been trying to solve this issue. Indexing works "perfectly" until it doesn't: it is usually with large numbers of entities (10k+) that, at around 70% of the total, it starts throwing 403 responses.
The Setup
The packages (gradle):
implementation group: "org.hibernate.search", name: "hibernate-search-engine", version: "6.1.7.Final"
implementation group: "org.hibernate.search", name: "hibernate-search-mapper-orm", version: "6.1.7.Final"
implementation group: "org.hibernate.search", name: "hibernate-search-backend-lucene", version: "6.1.7.Final"
implementation group: "org.hibernate.search", name: "hibernate-search-backend-elasticsearch", version: "6.1.7.Final"
implementation group: "org.hibernate.search", name: "hibernate-search-backend-elasticsearch-aws", version: "6.1.7.Final"
The hibernate config:
cfg.setProperty("hibernate.search.backend.type", "elasticsearch")
cfg.setProperty("hibernate.search.backend.aws.signing.enabled", "true")
cfg.setProperty("hibernate.search.backend.aws.region",config.getProperty("AWS_REGION"))
cfg.setProperty("hibernate.search.backend.aws.credentials.type", "static")
cfg.setProperty(
    "hibernate.search.backend.aws.credentials.access_key_id",
    config.getProperty("AWS_ACCESS_KEY_ID")
)
cfg.setProperty(
    "hibernate.search.backend.aws.credentials.secret_access_key",
    config.getProperty("AWS_SECRET_ACCESS_KEY")
)
cfg.setProperty("hibernate.search.backend.uris", config.getProperty("AWS_ES_HOSTS"))
cfg.setProperty("hibernate.search.backend.version_check.enabled", "false")
cfg.setProperty("hibernate.search.backend.version",config.getProperty("AWS_ES_VERSION"))
cfg.setProperty("hibernate.search.schema_management.strategy", "create")
cfg.setProperty("hibernate.search.backend.indexing.max_bulk_size", "20")
cfg.setProperty("hibernate.search.backend.indexing.queue_size", "20")
The code:
CompletableFuture.runAsync {
    Search.session(database.factory.openSession())
        .massIndexer()
        .idFetchSize(20)
        .batchSizeToLoadObjects(10)
        .dropAndCreateSchemaOnStart(false)
        .purgeAllOnStart(false)
        .startAndWait()
}
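For reference, a synchronous plain-Java sketch of the same call that closes the session deterministically whether indexing completes or fails (sessionFactory here stands in for database.factory and is an assumption):
import org.hibernate.Session;
import org.hibernate.search.mapper.orm.Search;

try (Session session = sessionFactory.openSession()) {
    Search.session(session)
            .massIndexer()
            .idFetchSize(20)
            .batchSizeToLoadObjects(10)
            .dropAndCreateSchemaOnStart(false)
            .purgeAllOnStart(false)
            .startAndWait(); // blocks until mass indexing completes
} catch (InterruptedException e) {
    Thread.currentThread().interrupt(); // restore the interrupt flag
}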
The output:
At first... it works fine:
PojoMassIndexingLoggingMonitor:100 - HSEARCH000031: Mass indexing progress: 75.74% [128.930679 documents/second].
PojoMassIndexingLoggingMonitor:97 - HSEARCH000030: Mass indexing progress: indexed 33458 entities in 259447 ms.
PojoMassIndexingLoggingMonitor:100 - HSEARCH000031: Mass indexing progress: 75.85% [128.958908 documents/second].
PojoMassIndexingLoggingMonitor:97 - HSEARCH000030: Mass indexing progress: indexed 33508 entities in 259708 ms.
PojoMassIndexingLoggingMonitor:100 - HSEARCH000031: Mass indexing progress: 75.97% [129.021820 documents/second].
PojoMassIndexingLoggingMonitor:97 - HSEARCH000030: Mass indexing progress: indexed 33558 entities in 260016 ms.
PojoMassIndexingLoggingMonitor:100 - HSEARCH000031: Mass indexing progress: 76.08% [129.061295 documents/second].
PojoMassIndexingLoggingMonitor:97 - HSEARCH000030: Mass indexing progress: indexed 33608 entities in 260383 ms.
PojoMassIndexingLoggingMonitor:100 - HSEARCH000031: Mass indexing progress: 76.19% [129.071411 documents/second].
PojoMassIndexingLoggingMonitor:97 - HSEARCH000030: Mass indexing progress: indexed 33658 entities in 260650 ms.
PojoMassIndexingLoggingMonitor:100 - HSEARCH000031: Mass indexing progress: 76.31% [129.131012 documents/second].
PojoMassIndexingLoggingMonitor:97 - HSEARCH000030: Mass indexing progress: indexed 33708 entities in 260961 ms.
PojoMassIndexingLoggingMonitor:100 - HSEARCH000031: Mass indexing progress: 76.42% [129.168732 documents/second].
PojoMassIndexingLoggingMonitor:97 - HSEARCH000030: Mass indexing progress: indexed 33758 entities in 261316 ms.
PojoMassIndexingLoggingMonitor:100 - HSEARCH000031: Mass indexing progress: 76.53% [129.184586 documents/second].
PojoMassIndexingLoggingMonitor:97 - HSEARCH000030: Mass indexing progress: indexed 33808 entities in 261771 ms.
PojoMassIndexingLoggingMonitor:100 - HSEARCH000031: Mass indexing progress: 76.65% [129.151047 documents/second].
And then... it does not:
ERROR LogFailureHandler:36 - HSEARCH000058: Exception occurred org.hibernate.search.util.common.SearchException: HSEARCH400588: Call to the bulk REST API failed: HSEARCH400007: Elasticsearch request failed: HSEARCH400090: Elasticsearch response indicates a failure.
Request: POST /_bulk with parameters {}
Response: 403 'Forbidden' from 'https://redacted.es.amazonaws.com' with body
{
"message": "The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.\n\nThe Canonical String for this request should have been"
}
Failing operation:
Indexing instance of entity 'Entity' during mass indexing
Entities that could not be indexed correctly:
Entity#ac4650e6-2008-4fd5-949f-8541569d6ba9
org.hibernate.search.util.common.SearchException: HSEARCH400588: Call to the bulk REST API failed: HSEARCH400007: Elasticsearch request failed: HSEARCH400090: Elasticsearch response indicates a failure.
Request: POST /_bulk with parameters {}
Response: 403 'Forbidden' from 'https://redacted.es.amazonaws.com' with body
{
"message": "The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.\n\nThe Canonical String for this request should have been"
}
at org.hibernate.search.backend.elasticsearch.orchestration.impl.ElasticsearchDefaultWorkSequenceBuilder$BulkedWorkExecutionState.onBulkWorkComplete(ElasticsearchDefaultWorkSequenceBuilder.java:239) ~[hibernate-search-backend-elasticsearch-6.1.7.Final.jar:6.1.7.Final]
at org.hibernate.search.util.common.impl.Futures.lambda$handler$3(Futures.java:100) ~[hibernate-search-util-common-6.1.7.Final.jar:6.1.7.Final]
at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859) ~[?:?]
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837) ~[?:?]
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506) ~[?:?]
at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2144) ~[?:?]
at org.hibernate.search.backend.elasticsearch.client.impl.ElasticsearchClientImpl$1.onFailure(ElasticsearchClientImpl.java:127) ~[hibernate-search-backend-elasticsearch-6.1.7.Final.jar:6.1.7.Final]
at org.elasticsearch.client.RestClient$FailureTrackingResponseListener.onDefinitiveFailure(RestClient.java:672) ~[elasticsearch-rest-client-7.17.4.jar:7.17.4]
at org.elasticsearch.client.RestClient$1.completed(RestClient.java:408) ~[elasticsearch-rest-client-7.17.4.jar:7.17.4]
at org.elasticsearch.client.RestClient$1.completed(RestClient.java:392) ~[elasticsearch-rest-client-7.17.4.jar:7.17.4]
at org.apache.http.concurrent.BasicFuture.completed(BasicFuture.java:122) ~[httpcore-4.4.15.jar:4.4.15]
at org.apache.http.impl.nio.client.DefaultClientExchangeHandlerImpl.responseCompleted(DefaultClientExchangeHandlerImpl.java:182) ~[httpasyncclient-4.1.5.jar:4.1.5]
at org.apache.http.nio.protocol.HttpAsyncRequestExecutor.processResponse(HttpAsyncRequestExecutor.java:448) ~[httpcore-nio-4.4.15.jar:4.4.15]
at org.apache.http.nio.protocol.HttpAsyncRequestExecutor.inputReady(HttpAsyncRequestExecutor.java:338) ~[httpcore-nio-4.4.15.jar:4.4.15]
at org.apache.http.impl.nio.DefaultNHttpClientConnection.consumeInput(DefaultNHttpClientConnection.java:265) ~[httpcore-nio-4.4.15.jar:4.4.15]
at org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:87) ~[httpasyncclient-4.1.5.jar:4.1.5]
at org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:40) ~[httpasyncclient-4.1.5.jar:4.1.5]
at org.apache.http.impl.nio.reactor.AbstractIODispatch.inputReady(AbstractIODispatch.java:121) ~[httpcore-nio-4.4.15.jar:4.4.15]
at org.apache.http.impl.nio.reactor.BaseIOReactor.readable(BaseIOReactor.java:162) ~[httpcore-nio-4.4.15.jar:4.4.15]
at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvent(AbstractIOReactor.java:337) ~[httpcore-nio-4.4.15.jar:4.4.15]
at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvents(AbstractIOReactor.java:315) ~[httpcore-nio-4.4.15.jar:4.4.15]
at org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:276) ~[httpcore-nio-4.4.15.jar:4.4.15]
at org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:104) ~[httpcore-nio-4.4.15.jar:4.4.15]
at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:591) ~[httpcore-nio-4.4.15.jar:4.4.15]
at java.lang.Thread.run(Thread.java:830) ~[?:?]
Caused by: org.hibernate.search.util.common.SearchException: HSEARCH400007: Elasticsearch request failed: HSEARCH400090: Elasticsearch response indicates a failure.
Request: POST /_bulk with parameters {}
Response: 403 'Forbidden' from 'https://redacted.es.amazonaws.com' with body
{
"message": "The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.\n\nThe Canonical String for this request should have been"
}
at org.hibernate.search.backend.elasticsearch.work.impl.AbstractNonBulkableWork.handleResult(AbstractNonBulkableWork.java:84) ~[hibernate-search-backend-elasticsearch-6.1.7.Final.jar:6.1.7.Final]
at org.hibernate.search.backend.elasticsearch.work.impl.AbstractNonBulkableWork.lambda$execute$3(AbstractNonBulkableWork.java:66) ~[hibernate-search-backend-elasticsearch-6.1.7.Final.jar:6.1.7.Final]
at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:642)
I've sifted through all the documentation I could find and tried different parameters, offsets, ID fetch sizes, etc., to no avail. The one differentiating factor I can't get past is that it works at first (the entities are actually indexed and show up), but stops working after a while... (4-5 minutes). Nothing is to be found about that.
Any feedback, answers or questions are welcome.

Related

endpoint_failure (context canceled) resulting in duplicate call to REST Endpoint

I am invoking a REST endpoint from another service using restTemplate.exchange.
The endpoint that receives the request queries the DB, fetches around 1.5 million records, and stores them in another DB.
Now I am getting the x_cf_routererror below, "endpoint_failure (context canceled)", after invoking the DB. I get this error after about 120+ seconds, and the process continues as is.
After this error I see another call being made to the same endpoint, and this is resulting in duplicates in the target DB.
I am not sure why this is happening; I do not have any retry mechanism in place, and the restTemplate timeout is set to 300 at the client service that invokes it.
Has anyone faced this issue? What is causing this endpoint_failure (context canceled) and the duplicate invocation of the endpoint?
I appreciate your help with this.
Log snippet:
2022-05-12T08:57:18.840-04:00 [APP/PROC/WEB/0] [OUT] 2022-05-12 12:57:18.840 INFO 28 --- [nio-8080-exec-4]
Controller1 : Request received to load all timecard information::RequestedTime=12:57:18.840
2022-05-12T08:59:21.530-04:00 [RTR/17] [OUT] - [2022-05-12T12:57:18.829182975Z] "GET HTTP/1.1" 499 0 22 "-" "Java/1.8.0_332" "" "1" x_forwarded_for:"" x_forwarded_proto:"https" vcap_request_id:"" response_time:122.701301 gorouter_time:0.000164 app_id:"" app_index:"0" instance_id:"" x_cf_routererror:"endpoint_failure (context canceled)" x_b3_traceid:"" x_b3_spanid:"" x_b3_parentspanid:"-" b3:"599552bb012c2adc60adef7187a865e7-60adef7187a865e7"
**Below is the duplicate call**
2022-05-12T08:59:21.777-04:00 [APP/PROC/WEB/0] [OUT] 2022-05-12 12:59:21.777 INFO 28 --- [nio-8080-exec-2]
Controller1 : Request received to load all timecard information::RequestedTime=12:59:21.777
Thanks,
S
The error 499 (or 502) is returned by the PCF gorouter, not by your web app [APP/PROC/WEB/0].
The PCF gorouter will retry the same HTTP request for particular error cases.
For more details please refer to: https://docs.cloudfoundry.org/adminguide/troubleshooting-router-error-responses.html
Error 499 usually means that the call is taking too much time and the client closed the connection. It has been noticed many times when your app is not able to produce a response within a specific time (usually a 2-minute limit). You might need to check whether there is a DB issue or something else delaying the expected response.
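If the client side really is giving up first, raising the RestTemplate timeouts is the usual first step; a minimal sketch (assuming a plain Spring RestTemplate with the simple request factory; the timeout values are illustrative):
import org.springframework.http.client.SimpleClientHttpRequestFactory;
import org.springframework.web.client.RestTemplate;

SimpleClientHttpRequestFactory factory = new SimpleClientHttpRequestFactory();
factory.setConnectTimeout(10_000);  // ms to establish the connection
factory.setReadTimeout(300_000);    // ms to wait for the response; must outlast the slow DB call
RestTemplate restTemplate = new RestTemplate(factory);
Note that this only moves the client-side limit; the gorouter's own backend timeout described above is enforced on the platform side and has to be changed there.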

Apache Flink. Streaming job can not continue processing messages because some threads of job could not recover state from latest successful checkpoint

We have a Flink streaming job which consumes messages from a Kafka topic and saves them to S3 storage using a StreamingFileSink sink. The job runs with a parallelism of 10. At the beginning the job ran without any exceptions for several days. However, we have now got stuck processing messages, because 6 threads cannot recover their state from the latest successful checkpoint.
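For reference, a StreamingFileSink of this shape is typically built like the sketch below (the bucket path, encoder, and rolling policy are placeholders, not our actual code):
import org.apache.flink.api.common.serialization.SimpleStringEncoder;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;
import org.apache.flink.streaming.api.functions.sink.filesystem.rollingpolicies.DefaultRollingPolicy;

static void attachS3Sink(DataStream<String> messages) {
    StreamingFileSink<String> sink = StreamingFileSink
            .forRowFormat(new Path("s3a://my-bucket/RMS"), new SimpleStringEncoder<String>("UTF-8"))
            .withRollingPolicy(DefaultRollingPolicy.builder().build())
            .build();
    messages.addSink(sink).setParallelism(10); // matches the job parallelism of 10
}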
In the UI we see the following exception:
java.io.IOException: Recovering commit failed for object RMS/XXLM_RMS_BATCH_COMP_DPAC/2021-10-14--10/part-9-644. Object does not exist and MultiPart Upload 0005CE4D24D0D208 is not valid.
at org.apache.flink.fs.s3.common.writer.S3Committer.commitAfterRecovery(S3Committer.java:123)
at org.apache.flink.streaming.api.functions.sink.filesystem.OutputStreamBasedPartFileWriter$OutputStreamBasedPendingFile.commitAfterRecovery(OutputStreamBasedPartFileWriter.java:218)
at org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.commitRecoveredPendingFiles(Bucket.java:160)
at org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.<init>(Bucket.java:127)
at org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.restore(Bucket.java:466)
at org.apache.flink.streaming.api.functions.sink.filesystem.DefaultBucketFactoryImpl.restoreBucket(DefaultBucketFactoryImpl.java:67)
at org.apache.flink.streaming.api.functions.sink.filesystem.Buckets.handleRestoredBucketState(Buckets.java:192)
at org.apache.flink.streaming.api.functions.sink.filesystem.Buckets.initializeActiveBuckets(Buckets.java:179)
at org.apache.flink.streaming.api.functions.sink.filesystem.Buckets.initializeState(Buckets.java:163)
at org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSinkHelper.<init>(StreamingFileSinkHelper.java:75)
at org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink.initializeState(StreamingFileSink.java:472)
at org.apache.flink.streaming.util.functions.StreamingFunctionUtils.tryRestoreFunction(StreamingFunctionUtils.java:189)
at org.apache.flink.streaming.util.functions.StreamingFunctionUtils.restoreFunctionState(StreamingFunctionUtils.java:171)
at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.initializeState(AbstractUdfStreamOperator.java:96)
at org.apache.flink.streaming.api.operators.StreamOperatorStateHandler.initializeOperatorState(StreamOperatorStateHandler.java:118)
at org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:290)
at org.apache.flink.streaming.runtime.tasks.OperatorChain.initializeStateAndOpenOperators(OperatorChain.java:436)
at org.apache.flink.streaming.runtime.tasks.StreamTask.restoreGates(StreamTask.java:574)
at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$SynchronizedStreamTaskActionExecutor.call(StreamTaskActionExecutor.java:100)
at org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:554)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:756)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:563)
at java.lang.Thread.run(Thread.java:748) Caused by: org.apache.hadoop.fs.s3a.AWSBadRequestException: Completing multipart commit on RMS/XXLM_RMS_BATCH_COMP_DPAC/2021-10-14--10/part-9-644: com.amazonaws.services.s3.model.AmazonS3Exception: s entity tag. (Service: Amazon S3; Status Code: 400; Error Code: InvalidPart; Request ID: 4a52c48aed3bffc6; S3 Extended Request ID: null; Proxy: null), S3 Extended Request ID: null:InvalidPart: s entity tag. (Service: Amazon S3; Status Code: 400; Error Code: InvalidPart; Request ID: 4a52c48aed3bffc6; S3 Extended Request ID: null; Proxy: null)
at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:212)
at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:111)
at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:260)
at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:317)
at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:256)
at org.apache.hadoop.fs.s3a.WriteOperationHelper.finalizeMultipartUpload(WriteOperationHelper.java:222)
at org.apache.hadoop.fs.s3a.WriteOperationHelper.completeMPUwithRetries(WriteOperationHelper.java:267)
at org.apache.flink.fs.s3hadoop.HadoopS3AccessHelper.commitMultiPartUpload(HadoopS3AccessHelper.java:95)
at org.apache.flink.fs.s3.common.writer.S3Committer.commitAfterRecovery(S3Committer.java:93)
... 22 more Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: s entity tag. (Service: Amazon S3; Status Code: 400; Error Code: InvalidPart; Request ID: 4a52c48aed3bffc6; S3 Extended Request ID: null; Proxy: null), S3 Extended Request ID: null
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1811)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleServiceErrorResponse(AmazonHttpClient.java:1395)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1371)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1145)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:802)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:770)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:744)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:704)
at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:686)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:550)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:530)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5062)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5008)
at com.amazonaws.services.s3.AmazonS3Client.completeMultipartUpload(AmazonS3Client.java:3490)
at org.apache.hadoop.fs.s3a.WriteOperationHelper.lambda$finalizeMultipartUpload$1(WriteOperationHelper.java:229)
at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109)
... 29 more
[Screenshot: the running job]
And if we look inside the bucket on S3, there are no files mentioned in the error message (for instance part-9-644, part-0-644, or part-5-644), but there are some temp files with such prefixes.
[Screenshot: list of objects in the bucket]
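For what it's worth, one way to check whether the multipart upload from the error still exists on the S3 side is to list in-progress uploads (a sketch using the v1 AWS SDK; the bucket name is a placeholder):
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ListMultipartUploadsRequest;
import com.amazonaws.services.s3.model.MultipartUpload;

AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
// List in-progress multipart uploads; the upload ID from the error
// (0005CE4D24D0D208) should appear here if it is still recoverable.
for (MultipartUpload u : s3.listMultipartUploads(
        new ListMultipartUploadsRequest("my-bucket")).getMultipartUploads()) {
    System.out.println(u.getKey() + " -> " + u.getUploadId());
}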
Another thing we want to mention is that the last checkpoint failed with the exception: Checkpoint Coordinator is suspending.
[Screenshot: checkpoints summary]
Has anybody faced the same problem? Or could someone maybe explain the root cause of such a situation?
Thanks.

Unauthorized to call exchange rate service

I'm following Deep Dive 9 with SAP S/4HANA Cloud SDK: Tenant and User Aware Microservices Communication via REST APIs and have already set up communication between the Converter service and the Exchange Rate service.
However, when I try to run
https://approuter-converter-accountID.cfapps.us10.hana.ondemand.com/converter?sum=100&from=EUR&to=USD
it returns Internal Server Error.
I've checked the log in Converter app:
2018-06-19T15:42:15.86+1000 [APP/PROC/WEB/0] OUT Destination: ScpCfDestination(destinationType=HTTP, name=app, description=null, propertiesByName={Type=HTTP, ProxyType=Internet, Authentication=AppToAppSSO, URL=https://approuter-exchangerate-<accountid2>.cfapps.eu10.hana.ondemand.com, Name=app})
2018-06-19T15:42:15.86+1000 [APP/PROC/WEB/0] OUT HttpClient: com.sap.cloud.sdk.cloudplatform.connectivity.HttpClientWrapper#5e49142a
2018-06-19T15:42:16.53+1000 [APP/PROC/WEB/0] OUT HttpResponse: HTTP/1.1 200 OK [Cache-Control: no-cache, no-store, must-revalidate, Content-Length: 512, Date: Tue, 19 Jun 2018 05:42:16 GMT, Set-Cookie: locationAfterLogin=%2Fexchange-rate; Path=/; HttpOnly, X-Frame-Options: SAMEORIGIN, X-Request-Id: jil9hsq1, X-Vcap-Request-Id: cbd2ec1a-e318-4a88-56dd-68b6f5183aa2] org.apache.http.conn.BasicManagedEntity#1d9b5223
2018-06-19T15:42:16.53+1000 [APP/PROC/WEB/0] OUT HttpEntity: org.apache.http.conn.BasicManagedEntity#1d9b5223
2018-06-19T15:42:16.53+1000 [APP/PROC/WEB/0] OUT Rates Json: <html><head><link rel="shortcut icon" href="data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7" /><script>document.cookie="fragmentAfterLogin="+encodeURIComponent(location.hash)+";path=/";location="https://p1942866225trial.authentication.eu10.hana.ondemand.com/oauth/authorize?response_type=code&client_id=sb-exchangerate-<accountId2>!t3509&redirect_uri=https%3A%2F%2Fapprouter-exchangerate-<accountId2>.cfapps.eu10.hana.ondemand.com%2Flogin%2Fcallback"</script></head></html>
I can see that the HttpResponse is successful, but Rates Json (which is the returned entity) does not seem to be working.
I've followed all the steps in the blog but have no idea why it returns an error like this.
Can you please suggest what could be the reason?
Update:
I was using one S-user and one P-user in the scenario.
After I changed to using 2 P-users, I can see that the response is in the converter log. What is wrong with using an S-user?
However, the response is not displayed in the browser. Instead, it shows the error Internal Server Error, and the log says:
{ "written_at":"2018-06-21T02:25:30.359Z","written_ts":39256458075633,"component_id":"e8cb8a72-6ca9-4bcb-86db-c4a3a6addfe0","component_name":"converter","DCComponent":"","organization_name":"-","component_type":"application","space_name":"dev","component_instance":"0","organization_id":"-","correlation_id":"-","CSNComponent":"","space_id":"a17affa3-0b14-4c07-af21-6a2afd1f40bd","Application":"converter","container_id":"10.0.137.152","type":"log","logger":"org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/].[com.mycompany.ConverterServlet]","thread":"http-bio-0.0.0.0-8080-exec-3","level":"ERROR","categories":[],"msg":"Servlet.service() for servlet [com.mycompany.ConverterServlet] in context with path [] threw exception","stacktrace":["java.io.IOException: Attempted read from closed stream."," at org.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:165)"," at org.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:135)"," at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:284)"," at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:326)"," at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:178)"," at java.io.InputStreamReader.read(InputStreamReader.java:184)"," at java.io.Reader.read(Reader.java:140)"," at org.apache.http.util.EntityUtils.toString(EntityUtils.java:225)"," at org.apache.http.util.EntityUtils.toString(EntityUtils.java:306)"," at com.mycompany.ConverterServlet.doGet(ConverterServlet.java:65)"," at javax.servlet.http.HttpServlet.service(HttpServlet.java:624)"," at javax.servlet.http.HttpServlet.service(HttpServlet.java:731)"," at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:303)"," at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)"," at org.apache.tomee.webservices.CXFJAXRSFilter.doFilter(CXFJAXRSFilter.java:83)"," at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)"," at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)"," at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)"," at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)"," at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)"," at com.sap.cloud.sdk.cloudplatform.servlet.RequestContextServletFilter$1.execute(RequestContextServletFilter.java:215)"," at com.sap.cloud.sdk.cloudplatform.servlet.Executable.call(Executable.java:19)"," at com.sap.cloud.sdk.cloudplatform.servlet.Executable.call(Executable.java:9)"," at com.sap.cloud.sdk.cloudplatform.servlet.RequestContextCallable.call(RequestContextCallable.java:78)"," at com.sap.cloud.sdk.cloudplatform.servlet.RequestContextServletFilter.doFilter(RequestContextServletFilter.java:217)"," at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)"," at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)"," at com.sap.cloud.sdk.cloudplatform.security.servlet.HttpCachingHeaderFilter.doFilter(HttpCachingHeaderFilter.java:52)"," at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)"," at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)"," at com.sap.cloud.sdk.cloudplatform.security.servlet.HttpSecurityHeadersFilter.doFilter(HttpSecurityHeadersFilter.java:37)"," 
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)"," at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)"," at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:330)"," at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.invoke(FilterSecurityInterceptor.java:118)"," at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.doFilter(FilterSecurityInterceptor.java:84)"," at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)"," at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:113)"," at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)"," at org.springframework.security.web.session.SessionManagementFilter.doFilter(SessionManagementFilter.java:103)"," at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)"," at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:154)"," at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)"," at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:45)"," at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)"," at org.springframework.security.oauth2.provider.authentication.OAuth2AuthenticationProcessingFilter.doFilter(OAuth2AuthenticationProcessingFilter.java:176)"," at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)"," at org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:50)"," at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)"," at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)"," at org.springframework.security.web.context.SecurityContextPersistenceFilter.doFilter(SecurityContextPersistenceFilter.java:87)"," at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)"," at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:192)"," at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:160)"," at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:344)"," at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:261)"," at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)"," at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)"," at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:219)"," at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:110)"," at org.apache.tomee.catalina.OpenEJBValve.invoke(OpenEJBValve.java:44)"," at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:506)"," at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:169)"," at 
com.sap.xs.java.valves.ErrorReportValve.invoke(ErrorReportValve.java:66)"," at ch.qos.logback.access.tomcat.LogbackValve.invoke(LogbackValve.java:191)"," at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:116)"," at com.sap.xs.security.UserInfoValve.invoke(UserInfoValve.java:23)"," at com.sap.xs.statistics.tomcat.valve.RequestTracingValve.invoke(RequestTracingValve.java:43)"," at com.sap.xs.logging.catalina.RuntimeInfoValve.invoke(RuntimeInfoValve.java:40)"," at org.apache.catalina.valves.RemoteIpValve.invoke(RemoteIpValve.java:683)"," at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:445)"," at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1115)"," at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:637)"," at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:316)"," at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)"," at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)"," at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)"," at java.lang.Thread.run(Thread.java:836)"] }
Please ensure that the called application (exchange rate) is secured properly. To do that, call the app via its app router directly (exchange rate application router URL/exchange-rate) and check whether you get the correct payload after you enter your credentials.
If not, please check and modify your xs-security-exchangerate.json and manifest-approuter-exchangerate.yml accordingly, based on your account data. In case of changes, re-run the deployment and service binding for the exchange rate service.
Best regards,
Ekaterina
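As an aside on the update: java.io.IOException: Attempted read from closed stream is what Apache HttpClient throws when a response entity is consumed more than once (for example, logged once and then passed to EntityUtils.toString again). A minimal sketch of reading it exactly once (the helper name is illustrative):
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import org.apache.http.HttpResponse;
import org.apache.http.util.EntityUtils;

// response is the HttpResponse already obtained from the destination's HttpClient.
static String readBodyOnce(HttpResponse response) throws IOException {
    // Consume the entity exactly once; the underlying stream cannot be re-read,
    // so log and parse this String instead of calling EntityUtils.toString again.
    return EntityUtils.toString(response.getEntity(), StandardCharsets.UTF_8);
}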

Cannot create connection to DynamoDB in Java AWS lambda function

I have a lambda function, which I'm trying to use to connect to a DynamoDB table I have. I'm using this code to establish the connection:
...
context.getLogger().log("Before create client..");
AmazonDynamoDB ddb = AmazonDynamoDBClientBuilder.standard()
        .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(
                "https://dynamodb.ap-southeast-2.amazonaws.com", "ap-southeast-2"))
        .build();
context.getLogger().log("After create client..");
...
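In case it helps isolate the hang, the v1 SDK lets you put hard client-side timeouts on the call, so you get an exception instead of the Lambda-level timeout (a sketch; the timeout values are assumptions):
import com.amazonaws.ClientConfiguration;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;

ClientConfiguration cc = new ClientConfiguration()
        .withConnectionTimeout(2_000)        // ms to establish the TCP connection
        .withClientExecutionTimeout(4_000);  // ms for the whole call, inside the 5 s Lambda limit
AmazonDynamoDB ddb = AmazonDynamoDBClientBuilder.standard()
        .withClientConfiguration(cc)
        .withRegion(Regions.AP_SOUTHEAST_2)  // equivalent to the explicit endpoint above
        .build();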
The output I have from the function is as follows:
==================== FUNCTION OUTPUT ====================
{"errorMessage":"2017-07-28T01:11:34.092Z aeee6505-7331-11e7-b28b-db98038611cc Task timed out after 5.00 seconds"}
==================== FUNCTION LOG OUTPUT ====================
START RequestId: aeee6505-7331-11e7-b28b-db98038611cc Version: $LATEST
Before create client..END RequestId: aeee6505-7331-11e7-b28b-db98038611cc
REPORT RequestId: aeee6505-7331-11e7-b28b-db98038611cc Duration: 5003.51 ms Billed Duration: 5000 ms Memory Size: 256 MB Max Memory Used: 62 MB
2017-07-28T01:11:34.092Z aeee6505-7331-11e7-b28b-db98038611cc Task timed out after 5.00 seconds
As you can see, it times out when trying to build the client and never prints the second log statement. Is there a reason it would time out rather than throwing an exception, e.g. if there's an error with the IAM role or something? The DynamoDB region and lambda region are the same (Sydney, ap-southeast-2), so I'd have thought this would work.
The IAM role the lambda function is using has the following permissions:
AmazonDynamoDBReadOnlyAccess
AmazonS3ReadOnlyAccess
AWSLambdaBasicExecutionRole
Fixed it: bumped the memory of the lambda function up to 1024 MB. Seriously not sure why that was required, given memory used was always around 60-70 MB :/
It's a memory issue only. I changed the lambda function to 1024 MB and it started working fine.
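For context: Lambda allocates CPU power in proportion to the configured memory, so a larger memory setting also means faster client initialization, which is the usual explanation for why the bump helps. For completeness, a sketch of making the same change programmatically (v1 SDK; the function name is a placeholder):
import com.amazonaws.services.lambda.AWSLambda;
import com.amazonaws.services.lambda.AWSLambdaClientBuilder;
import com.amazonaws.services.lambda.model.UpdateFunctionConfigurationRequest;

AWSLambda lambda = AWSLambdaClientBuilder.defaultClient();
// CPU scales with memory on Lambda, so this also speeds up client init.
lambda.updateFunctionConfiguration(new UpdateFunctionConfigurationRequest()
        .withFunctionName("my-function")
        .withMemorySize(1024));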

JMeter test on Netty-based impl produces error for every second request

I've implemented an HTTP service based on the HTTP server example as provided by the netty.io project.
When I execute a GET request to the service URL from command-line (wget) or from a browser, I receive a result as expected.
When I perform a load test using ApacheBench (ab -n 100000 -c 8 http://localhost:9000/path/to/service), I experience no errors (neither on the service side nor on the ab side) and see fair numbers for request processing duration.
Afterwards, I set up a test plan in JMeter having a thread group with 1 thread and a loop count of 2. I inserted an HTTP request sampler where I simply added the server name localhost, the port number 9000 and the path /path/to/service. Then I also added a View Results Tree and a Summary Report listener.
Finally, I executed the test plan and received one valid response and one error showing the following content:
Thread Name: Thread Group 1-1
Sample Start: 2015-06-04 09:23:12 CEST
Load time: 0
Connect Time: 0
Latency: 0
Size in bytes: 2068
Headers size in bytes: 0
Body size in bytes: 2068
Sample Count: 1
Error Count: 1
Response code: Non HTTP response code: org.apache.http.NoHttpResponseException
Response message: Non HTTP response message: The target server failed to respond
Response headers:
HTTPSampleResult fields:
ContentType:
DataEncoding: null
The associated exception found in the response data tab showed the following content:
org.apache.http.NoHttpResponseException: The target server failed to respond
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:95)
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:61)
at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:254)
at org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:289)
at org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:252)
at org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:191)
at org.apache.jmeter.protocol.http.sampler.MeasuringConnectionManager$MeasuredConnection.receiveResponseHeader(MeasuringConnectionManager.java:201)
at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:300)
at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:127)
at org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:715)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:520)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
at org.apache.jmeter.protocol.http.sampler.HTTPHC4Impl.executeRequest(HTTPHC4Impl.java:517)
at org.apache.jmeter.protocol.http.sampler.HTTPHC4Impl.sample(HTTPHC4Impl.java:331)
at org.apache.jmeter.protocol.http.sampler.HTTPSamplerProxy.sample(HTTPSamplerProxy.java:74)
at org.apache.jmeter.protocol.http.sampler.HTTPSamplerBase.sample(HTTPSamplerBase.java:1146)
at org.apache.jmeter.protocol.http.sampler.HTTPSamplerBase.sample(HTTPSamplerBase.java:1135)
at org.apache.jmeter.threads.JMeterThread.process_sampler(JMeterThread.java:434)
at org.apache.jmeter.threads.JMeterThread.run(JMeterThread.java:261)
at java.lang.Thread.run(Thread.java:745)
As I have a similar service already running which receives and processes web tracking data and shows no errors, it might be a problem within my test plan or JMeter... but I am not sure :-(
Did anyone experience similar behavior? Thanks in advance ;-)
The issue can be related to Keep-Alive management.
Read those:
https://bz.apache.org/bugzilla/show_bug.cgi?id=57921
https://wiki.apache.org/jmeter/JMeterSocketClosed
So your solution is one of these:
If you're sure it's a keep-alive issue:
Try a JMeter nightly build (http://jmeter.apache.org/nightly.html):
Download the _bin and _lib files
Unpack the archives into the same directory structure
The other archives are not needed to run JMeter.
Then adapt the value of httpclient4.idletimeout
A workaround is to increase the retry count or add a connection stale check, as per:
https://wiki.apache.org/jmeter/JMeterSocketClosed
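If it turns out to be keep-alive handling on the server side instead, the Netty HTTP server example needs an explicit Content-Length and Connection header for the connection to be reusable across samples. A minimal sketch of keep-alive-aware response writing (assuming Netty 4.1; the method and variable names are illustrative):
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.http.*;

// Call this from the handler's channelRead0 with the decoded request
// and a prepared FullHttpResponse.
static void writeResponse(ChannelHandlerContext ctx, HttpRequest request, FullHttpResponse response) {
    response.headers().setInt(HttpHeaderNames.CONTENT_LENGTH, response.content().readableBytes());
    if (HttpUtil.isKeepAlive(request)) {
        // Advertise keep-alive and leave the connection open for the next request.
        response.headers().set(HttpHeaderNames.CONNECTION, HttpHeaderValues.KEEP_ALIVE);
        ctx.writeAndFlush(response);
    } else {
        // Close only when the client did not ask for keep-alive.
        ctx.writeAndFlush(response).addListener(ChannelFutureListener.CLOSE);
    }
}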
