I've run into some trouble using OpenFeign.
I'm using Spring Cloud Hoxton.RELEASE with Spring Boot 2.2.1.RELEASE.
Caused by: feign.FeignException: status 400 reading Service#method(List,String); content:
at feign.FeignException.errorStatus(FeignException.java:62)
at feign.codec.ErrorDecoder$Default.decode(ErrorDecoder.java:91)
at feign.SynchronousMethodHandler.executeAndDecode(SynchronousMethodHandler.java:134)
at feign.SynchronousMethodHandler.invoke(SynchronousMethodHandler.java:76)
at feign.hystrix.HystrixInvocationHandler$1.run(HystrixInvocationHandler.java:108)
at com.netflix.hystrix.HystrixCommand$2.call(HystrixCommand.java:301)
at com.netflix.hystrix.HystrixCommand$2.call(HystrixCommand.java:297)
at rx.internal.operators.OnSubscribeDefer.call(OnSubscribeDefer.java:46)
... 26 common frames omitted
The error occurs when the list is large; if the list is small, the call succeeds.
Thank you!
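A likely cause, assuming the List parameter is sent as a query parameter (the request mapping isn't shown in the question), is that a large list expands into a URL long enough for the server to reject the request with 400. One client-side workaround is to split the call into batches. A minimal sketch; 'Service' is the Feign interface from the stack trace, while the element type, return type, and batch size are illustrative assumptions:

import java.util.ArrayList;
import java.util.List;

public class BatchingCaller {
    // Assumption: 100 items keeps the expanded query string under the server's URL limit.
    private static final int BATCH_SIZE = 100;

    // 'Service' is the Feign interface from the stack trace; the String element
    // and return types are placeholders for whatever the real method uses.
    static List<String> callInBatches(Service client, List<String> ids, String param) {
        List<String> results = new ArrayList<>();
        for (int i = 0; i < ids.size(); i += BATCH_SIZE) {
            // each subList becomes its own, much shorter HTTP request
            results.addAll(client.method(ids.subList(i, Math.min(i + BATCH_SIZE, ids.size())), param));
        }
        return results;
    }
}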
My application works fine locally, and I'm able to connect to GCP Datastore from my machine. But when it's deployed to a server, I get the exception below.
Caused by: com.google.datastore.v1.client.DatastoreException: I/O error
at com.google.datastore.v1.client.RemoteRpc.makeException(RemoteRpc.java:136)
at com.google.datastore.v1.client.RemoteRpc.call(RemoteRpc.java:105)
at com.google.datastore.v1.client.Datastore.beginTransaction(Datastore.java:79)
at com.google.cloud.datastore.spi.v1.HttpDatastoreRpc.beginTransaction(HttpDatastoreRpc.java:153)
... 92 common frames omitted
Caused by: java.io.IOException: Error getting access token for service account: connect timed out
at com.google.auth.oauth2.ServiceAccountCredentials.refreshAccessToken(ServiceAccountCredentials.java:444)
at com.google.auth.oauth2.OAuth2Credentials.refresh(OAuth2Credentials.java:157)
at com.google.auth.oauth2.OAuth2Credentials.getRequestMetadata(OAuth2Credentials.java:145)
at com.google.auth.oauth2.ServiceAccountCredentials.getRequestMetadata(ServiceAccountCredentials.java:603)
at com.google.auth.http.HttpCredentialsAdapter.initialize(HttpCredentialsAdapter.java:91)
at com.google.cloud.http.HttpTransportOptions$1.initialize(HttpTransportOptions.java:159)
at com.google.cloud.http.CensusHttpModule$CensusHttpRequestInitializer.initialize(CensusHttpModule.java:109)
at com.google.cloud.datastore.spi.v1.HttpDatastoreRpc$1.initialize(HttpDatastoreRpc.java:91)
at com.google.datastore.v1.client.RemoteRpc.call(RemoteRpc.java:91)
... 94 common frames omitted
Caused by: java.net.SocketTimeoutException: connect timed out
at java.base/java.net.PlainSocketImpl.socketConnect(Native Method)
at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:399)
at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:242)
at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:224)
at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:403)
at java.base/java.net.Socket.connect(Socket.java:591)
at java.base/sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:285)
at java.base/sun.net.NetworkClient.doConnect(NetworkClient.java:177)
at java.base/sun.net.www.http.HttpClient.openServer(HttpClient.java:474)
at java.base/sun.net.www.http.HttpClient.openServer(HttpClient.java:569)
at java.base/sun.net.www.protocol.https.HttpsClient.<init>(HttpsClient.java:265)
at java.base/sun.net.www.protocol.https.HttpsClient.New(HttpsClient.java:372)
at java.base/sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.getNewHttpClient(AbstractDelegateHttpsURLConnection.java:191)
at java.base/sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1181)
at java.base/sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1075)
at java.base/sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:177)
at java.base/sun.net.www.protocol.http.HttpURLConnection.getOutputStream0(HttpURLConnection.java:1356)
at java.base/sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1331)
at java.base/sun.net.www.protocol.https.HttpsURLConnectionImpl.getOutputStream(HttpsURLConnectionImpl.java:241)
at com.google.api.client.http.javanet.NetHttpRequest.execute(NetHttpRequest.java:113)
at com.google.api.client.http.javanet.NetHttpRequest.execute(NetHttpRequest.java:84)
at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:1012)
at com.google.auth.oauth2.ServiceAccountCredentials.refreshAccessToken(ServiceAccountCredentials.java:441)
Edit: the scopes under this credential are:
[https://www.googleapis.com/auth/pubsub,
https://www.googleapis.com/auth/spanner.admin,
https://www.googleapis.com/auth/spanner.data,
https://www.googleapis.com/auth/datastore,
https://www.googleapis.com/auth/sqlservice.admin,
https://www.googleapis.com/auth/devstorage.read_only,
https://www.googleapis.com/auth/devstorage.read_write,
https://www.googleapis.com/auth/cloudruntimeconfig,
https://www.googleapis.com/auth/trace.append,
https://www.googleapis.com/auth/cloud-platform,
https://www.googleapis.com/auth/cloud-vision,
https://www.googleapis.com/auth/bigquery,
https://www.googleapis.com/auth/monitoring.write]
Any leads would be appreciated.
Thank you in advance!
It turned out to be a connectivity issue: our server (in AWS) didn't have permission to reach Datastore.
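In case someone hits the same symptom: since the timeout happens inside ServiceAccountCredentials.refreshAccessToken, a plain socket probe run from the affected server tells you quickly whether the OAuth endpoint is reachable at all. A minimal diagnostic sketch (oauth2.googleapis.com is the default token host in current google-auth-library versions; older setups may use accounts.google.com):

import java.net.InetSocketAddress;
import java.net.Socket;

public class OAuthReachabilityProbe {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket()) {
            // the same kind of TCP connect the credentials refresh performs
            socket.connect(new InetSocketAddress("oauth2.googleapis.com", 443), 5_000);
            System.out.println("TCP connect OK");
        }
        // a SocketTimeoutException here points at egress/firewall/VPC rules,
        // not at the Datastore client itself
    }
}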
We are running an Ignite cluster with 3 nodes that pre-loads data from a third-party database (using a custom cache store). When we connect to the cluster using the Java thin client and a request reaches the cluster before data loading has completed, we get an "Unknown pair" exception and some unstable behavior.
Is there any way to block client requests (the TCP socket connection) until data loading has completed?
I tried different lifecycle events (NODE_START_COMPLETED) but had no luck.
Stack trace
Caused by: org.apache.ignite.binary.BinaryInvalidTypeException: Unknown pair [platformId=0, typeId=-845247802]
at org.apache.ignite.internal.binary.BinaryContext.descriptorForTypeId(BinaryContext.java:707)
at org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1757)
at org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1716)
at org.apache.ignite.internal.binary.BinaryObjectImpl.deserializeValue(BinaryObjectImpl.java:798)
at org.apache.ignite.internal.binary.BinaryObjectImpl.value(BinaryObjectImpl.java:143)
at org.apache.ignite.internal.processors.cache.CacheObjectUtils.unwrapBinary(CacheObjectUtils.java:177)
at org.apache.ignite.internal.processors.cache.CacheObjectUtils.unwrapBinaryIfNeeded(CacheObjectUtils.java:67)
at org.apache.ignite.internal.processors.cache.CacheObjectContext.unwrapBinaryIfNeeded(CacheObjectContext.java:125)
at org.apache.ignite.internal.processors.cache.GridCacheContext.unwrapBinaryIfNeeded(GridCacheContext.java:1773)
at org.apache.ignite.internal.processors.cache.GridCacheContext.unwrapBinaryIfNeeded(GridCacheContext.java:1761)
at org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.put(GridCacheStoreManagerAdapter.java:573)
at org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.putAll(GridCacheStoreManagerAdapter.java:627)
at org.apache.ignite.internal.processors.cache.transactions.IgniteTxAdapter.batchStoreCommit(IgniteTxAdapter.java:1507)
at org.apache.ignite.internal.processors.cache.transactions.IgniteTxLocalAdapter.userCommit(IgniteTxLocalAdapter.java:589)
at org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.localFinish(GridNearTxLocal.java:3646)
at org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxFinishFuture.doFinish(GridNearTxFinishFuture.java:475)
... 41 common frames omitted
Caused by: java.lang.ClassNotFoundException: Unknown pair [platformId=0, typeId=-845247802]
at org.apache.ignite.internal.MarshallerContextImpl.getClassName(MarshallerContextImpl.java:394)
at org.apache.ignite.internal.MarshallerContextImpl.getClass(MarshallerContextImpl.java:344)
at org.apache.ignite.internal.binary.BinaryContext.descriptorForTypeId(BinaryContext.java:698)
... 56 common frames omitted
There is currently no way to forbid thin clients from connecting to a cluster through the Ignite API. I created a JIRA ticket for this improvement: https://issues.apache.org/jira/browse/IGNITE-12237
The "Unknown pair" exception doesn't seem to be caused by thin clients connecting at the wrong time, though. It's usually caused by a missing marshaller directory in the work path.
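Until such an API exists, a common application-level workaround is to publish a readiness flag: server nodes put a marker entry into a small cache once pre-loading finishes, and thin clients poll it before doing real work. A sketch under those assumptions; the "status" cache, the "ready" key, and the address are illustrative, not Ignite built-ins:

import org.apache.ignite.Ignition;
import org.apache.ignite.client.ClientCache;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;

public class ReadyAwareThinClient {
    public static void main(String[] args) throws Exception {
        ClientConfiguration cfg = new ClientConfiguration().setAddresses("127.0.0.1:10800");
        try (IgniteClient client = Ignition.startClient(cfg)) {
            // The server side puts Boolean.TRUE here after its cache store
            // finishes pre-loading, e.g. right after loadCache(...) returns.
            ClientCache<String, Boolean> status = client.getOrCreateCache("status");
            while (!Boolean.TRUE.equals(status.get("ready"))) {
                Thread.sleep(500); // back off until the cluster reports it is loaded
            }
            // ... safe to issue real cache operations from here ...
        }
    }
}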
I'm trying to run one of the examples that ship with the OPC UA Java Stack, and I get the following exception.
Exception in thread "main" org.opcfoundation.ua.common.RuntimeServiceResultException: org.opcfoundation.ua.common.ServiceResultException: Bad_CertificateInvalid (code=0x80120000, description="2148663296, java.io.IOException: Duplicate extensions not allowed")
at org.opcfoundation.ua.transport.TransportChannelSettings.getServerCertificate(TransportChannelSettings.java:114)
at org.opcfoundation.ua.transport.tcp.io.TcpConnection.initialize(TcpConnection.java:376)
at org.opcfoundation.ua.transport.tcp.io.SecureChannelTcp.initialize(SecureChannelTcp.java:273)
at org.opcfoundation.ua.transport.tcp.io.SecureChannelTcp.initialize(SecureChannelTcp.java:246)
at org.opcfoundation.ua.application.Client.createSecureChannel(Client.java:640)
at org.opcfoundation.ua.application.Client.createSecureChannel(Client.java:555)
at org.opcfoundation.ua.application.Client.createSessionChannel(Client.java:370)
at org.opcfoundation.ua.application.Client.createSessionChannel(Client.java:345)
at org.opcfoundation.ua.examples.SampleClient.main(SampleClient.java:109)
Caused by: org.opcfoundation.ua.common.ServiceResultException: Bad_CertificateInvalid (code=0x80120000, description="2148663296, java.io.IOException: Duplicate extensions not allowed")
at org.opcfoundation.ua.transport.security.Cert.<init>(Cert.java:143)
at org.opcfoundation.ua.transport.TransportChannelSettings.getServerCertificate(TransportChannelSettings.java:112)
... 8 more
Caused by: java.security.cert.CertificateParsingException: java.io.IOException: Duplicate extensions not allowed
at sun.security.x509.X509CertInfo.<init>(X509CertInfo.java:169)
at sun.security.x509.X509CertImpl.parse(X509CertImpl.java:1804)
at sun.security.x509.X509CertImpl.<init>(X509CertImpl.java:195)
at sun.security.provider.X509Factory.engineGenerateCertificate(X509Factory.java:102)
at java.security.cert.CertificateFactory.generateCertificate(CertificateFactory.java:339)
at org.opcfoundation.ua.utils.CertificateUtils.decodeX509Certificate(CertificateUtils.java:193)
at org.opcfoundation.ua.transport.security.Cert.<init>(Cert.java:136)
... 9 more
Caused by: java.io.IOException: Duplicate extensions not allowed
at sun.security.x509.CertificateExtensions.parseExtension(CertificateExtensions.java:115)
at sun.security.x509.CertificateExtensions.init(CertificateExtensions.java:88)
at sun.security.x509.CertificateExtensions.<init>(CertificateExtensions.java:78)
at sun.security.x509.X509CertInfo.parse(X509CertInfo.java:702)
at sun.security.x509.X509CertInfo.<init>(X509CertInfo.java:167)
... 15 more
I tried running another client (C#), and it works.
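The root cause is the JDK's X.509 parser refusing a certificate that carries the same extension OID twice; other stacks (like the C# client) are more lenient. To find which extension is duplicated, you can walk the raw ASN.1 with BouncyCastle, which preserves the duplicate the JDK rejects. A diagnostic sketch, assuming bcprov is on the classpath and the server certificate has been saved in DER form:

import java.nio.file.Files;
import java.nio.file.Paths;
import org.bouncycastle.asn1.ASN1Encodable;
import org.bouncycastle.asn1.ASN1Sequence;
import org.bouncycastle.asn1.ASN1TaggedObject;

public class ListCertExtensions {
    public static void main(String[] args) throws Exception {
        // args[0]: path to the DER-encoded server certificate
        ASN1Sequence cert = ASN1Sequence.getInstance(Files.readAllBytes(Paths.get(args[0])));
        ASN1Sequence tbs = ASN1Sequence.getInstance(cert.getObjectAt(0));
        for (ASN1Encodable field : tbs) {
            // extensions live in the [3] EXPLICIT tagged field of TBSCertificate
            if (field instanceof ASN1TaggedObject && ((ASN1TaggedObject) field).getTagNo() == 3) {
                ASN1Sequence exts = ASN1Sequence.getInstance(((ASN1TaggedObject) field).getObject());
                for (ASN1Encodable ext : exts) {
                    // each Extension is a SEQUENCE whose first element is its OID;
                    // an OID that prints twice is the duplicate the JDK rejects
                    System.out.println(ASN1Sequence.getInstance(ext).getObjectAt(0));
                }
            }
        }
    }
}

Once the offending OID is known, the fix is to regenerate the server certificate without the duplicated extension.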
For a short time today, my code running in the Flexible Environment "compat" runtime and using the Google Cloud Datastore API experienced java.net.SocketTimeoutException: Timeout while fetching URL when inserting an Item into Datastore in another GAE project.
In addition, Dataflow failed to insert Items at that time; almost certainly the same problem.
This also occurs on simple key queries, so it is not a problem of heavy data.
Before and after this, a lot of other data did get inserted correctly, including a rerun on this same data.
Googling the error suggests it can be caused by downtime in the Google Cloud, but the Google Cloud Status Dashboard shows green.
What caused this? How can we avoid it in the future?
com.freightos.backup.datastore.gcloudapi.GCloudApiDSBackup$CopyEntities run: ERROR in CopyEntities(commerceDocs/RFQ)
java.lang.RuntimeException: com.google.cloud.datastore.DatastoreException: I/O error
at com.freightos.backup.datastore.gcloudapi.GCloudApiDSBackup$CopyEntities.retryIfAllowed(GCloudApiDSBackup.java:963)
at com.freightos.backup.datastore.gcloudapi.GCloudApiDSBackup$CopyEntities.putEntities(GCloudApiDSBackup.java:857)
at com.freightos.backup.datastore.gcloudapi.GCloudApiDSBackup$CopyEntities.retryIfAllowed(GCloudApiDSBackup.java:959)
at com.freightos.backup.datastore.gcloudapi.GCloudApiDSBackup$CopyEntities.putEntities(GCloudApiDSBackup.java:857)
at com.freightos.backup.datastore.gcloudapi.GCloudApiDSBackup$CopyEntities.putEntitiesByParts(GCloudApiDSBackup.java:991)
at com.freightos.backup.datastore.gcloudapi.GCloudApiDSBackup$CopyEntities.run(GCloudApiDSBackup.java:801)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: com.google.cloud.datastore.DatastoreException: I/O error
at com.google.cloud.datastore.spi.DefaultDatastoreRpc.translate(DefaultDatastoreRpc.java:105)
at com.google.cloud.datastore.spi.DefaultDatastoreRpc.commit(DefaultDatastoreRpc.java:133)
at com.google.cloud.datastore.DatastoreImpl$4.call(DatastoreImpl.java:390)
at com.google.cloud.datastore.DatastoreImpl$4.call(DatastoreImpl.java:387)
at com.google.cloud.RetryHelper.doRetry(RetryHelper.java:179)
at com.google.cloud.RetryHelper.runWithRetries(RetryHelper.java:244)
at com.google.cloud.datastore.DatastoreImpl.commit(DatastoreImpl.java:386)
at com.google.cloud.datastore.DatastoreImpl.commitMutation(DatastoreImpl.java:380)
at com.google.cloud.datastore.DatastoreImpl.put(DatastoreImpl.java:340)
at com.freightos.backup.datastore.gcloudapi.GCloudApiDSBackup$CopyEntities.putEntities(GCloudApiDSBackup.java:836)
... 9 more
Caused by: com.google.datastore.v1.client.DatastoreException: I/O error, code=UNAVAILABLE
at com.google.datastore.v1.client.RemoteRpc.makeException(RemoteRpc.java:126)
at com.google.datastore.v1.client.RemoteRpc.call(RemoteRpc.java:95)
at com.google.datastore.v1.client.Datastore.commit(Datastore.java:84)
at com.google.cloud.datastore.spi.DefaultDatastoreRpc.commit(DefaultDatastoreRpc.java:131)
... 17 more
Caused by: java.net.SocketTimeoutException: Timeout while fetching URL: https://datastore.googleapis.com/v1/projects/freightos-prod-backup2:commit
at com.google.appengine.api.urlfetch.URLFetchServiceImpl.convertApplicationException(URLFetchServiceImpl.java:173)
at com.google.appengine.api.urlfetch.URLFetchServiceImpl.fetch(URLFetchServiceImpl.java:45)
at com.google.api.client.extensions.appengine.http.UrlFetchRequest.execute(UrlFetchRequest.java:74)
at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:981)
at com.google.datastore.v1.client.RemoteRpc.call(RemoteRpc.java:87)
... 19 more
It looks like UrlFetchTransport is being used when it shouldn't be.
I've filed this issue to fix the google-cloud-java library.
In the meantime, you should be able to force it to use NetHttpTransport when you construct the DatastoreOptions object:
import com.google.api.client.http.HttpTransport;
import com.google.api.client.http.javanet.NetHttpTransport;
import com.google.auth.http.HttpTransportFactory;
import com.google.cloud.datastore.DatastoreOptions;

DatastoreOptions.newBuilder()
    .setHttpTransportFactory(new HttpTransportFactory() {
      @Override
      public HttpTransport create() {
        return new NetHttpTransport();
      }
    })
    ...
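Since HttpTransportFactory declares a single create() method, on Java 8+ the anonymous class can be shortened to a method reference: .setHttpTransportFactory(NetHttpTransport::new).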
I have a big problem with ES. :(
When I bulk-insert a lot of documents into ES (bulk size = 20), the ES server throws the exceptions below.
I've found many topics discussing this, but nothing helped. Can anyone tell me what actually happened? Thanks so much, and sorry for my bad English.
I'm using ES 2.3 with Transport client 2.2.1.
Server config
http.port: 9200
http.max_content_length: 100mb
node.name: "es_test"
node.master: true
node.data: true
index.store.type: niofs
index.number_of_shards: 5
index.number_of_replicas: 0
discovery.zen.ping.multicast.enabled: false
script.inline: on
script.indexed: on
bootstrap.mlockall: true
Error 1
[2016-03-31 07:45:02,601][ERROR][index.engine ] [es_test] [my_index][1] failed to merge
java.io.EOFException: read past EOF: NIOFSIndexInput(path="/data/es_test/data/es_test/nodes/0/indices/my_index/1/index/_190.fnm")
at org.apache.lucene.store.BufferedIndexInput.refill(BufferedIndexInput.java:336)
at org.apache.lucene.store.BufferedIndexInput.readByte(BufferedIndexInput.java:54)
at org.apache.lucene.store.BufferedChecksumIndexInput.readByte(BufferedChecksumIndexInput.java:41)
at org.apache.lucene.store.DataInput.readInt(DataInput.java:101)
at org.apache.lucene.codecs.CodecUtil.checkHeader(CodecUtil.java:195)
at org.apache.lucene.codecs.CodecUtil.checkIndexHeader(CodecUtil.java:256)
at org.apache.lucene.codecs.lucene50.Lucene50FieldInfosFormat.read(Lucene50FieldInfosFormat.java:115)
at org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:99)
at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:65)
at org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:145)
at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4233)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3664)
at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:588)
at org.elasticsearch.index.engine.ElasticsearchConcurrentMergeScheduler.doMerge(ElasticsearchConcurrentMergeScheduler.java:94)
at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:626)
Suppressed: org.apache.lucene.index.CorruptIndexException: checksum status indeterminate: remaining=0, please run checkindex for more details (resource=BufferedChecksumIndexInput(NIOFSIndexInput(path="/data/es_test/data/es_test/nodes/0/indices/my_index/1/index/_190.fnm")))
at org.apache.lucene.codecs.CodecUtil.checkFooter(CodecUtil.java:371)
at org.apache.lucene.codecs.lucene50.Lucene50FieldInfosFormat.read(Lucene50FieldInfosFormat.java:164)
... 8 more
[2016-03-31 07:45:02,608][WARN ][index.engine ] [es_test] [my_index][1] failed engine [already closed by tragic event on the index writer]
java.io.EOFException: read past EOF: NIOFSIndexInput(path="/data/es_test/data/es_test/nodes/0/indices/my_index/1/index/_190.fnm")
at org.apache.lucene.store.BufferedIndexInput.refill(BufferedIndexInput.java:336)
at org.apache.lucene.store.BufferedIndexInput.readByte(BufferedIndexInput.java:54)
at org.apache.lucene.store.BufferedChecksumIndexInput.readByte(BufferedChecksumIndexInput.java:41)
at org.apache.lucene.store.DataInput.readInt(DataInput.java:101)
at org.apache.lucene.codecs.CodecUtil.checkHeader(CodecUtil.java:195)
at org.apache.lucene.codecs.CodecUtil.checkIndexHeader(CodecUtil.java:256)
at org.apache.lucene.codecs.lucene50.Lucene50FieldInfosFormat.read(Lucene50FieldInfosFormat.java:115)
at org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:99)
at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:65)
at org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:145)
at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4233)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3664)
at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:588)
at org.elasticsearch.index.engine.ElasticsearchConcurrentMergeScheduler.doMerge(ElasticsearchConcurrentMergeScheduler.java:94)
at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:626)
Suppressed: org.apache.lucene.index.CorruptIndexException: checksum status indeterminate: remaining=0, please run checkindex for more details (resource=BufferedChecksumIndexInput(NIOFSIndexInput(path="/data/es_test/data/es_test/nodes/0/indices/my_index/1/index/_190.fnm")))
at org.apache.lucene.codecs.CodecUtil.checkFooter(CodecUtil.java:371)
at org.apache.lucene.codecs.lucene50.Lucene50FieldInfosFormat.read(Lucene50FieldInfosFormat.java:164)
... 8 more
[2016-03-31 07:45:02,609][ERROR][index.engine ] [es_test] [my_index][4] failed to merge
java.io.EOFException: read past EOF: NIOFSIndexInput(path="/data/es_test/data/es_test/nodes/0/indices/my_index/4/index/_190.fdx")
at org.apache.lucene.store.BufferedIndexInput.refill(BufferedIndexInput.java:336)
at org.apache.lucene.store.BufferedIndexInput.readByte(BufferedIndexInput.java:54)
at org.apache.lucene.store.BufferedChecksumIndexInput.readByte(BufferedChecksumIndexInput.java:41)
at org.apache.lucene.store.DataInput.readInt(DataInput.java:101)
at org.apache.lucene.codecs.CodecUtil.checkHeader(CodecUtil.java:195)
at org.apache.lucene.codecs.CodecUtil.checkIndexHeader(CodecUtil.java:256)
at org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.<init>(CompressingStoredFieldsReader.java:133)
at org.apache.lucene.codecs.compressing.CompressingStoredFieldsFormat.fieldsReader(CompressingStoredFieldsFormat.java:121)
at org.apache.lucene.codecs.lucene50.Lucene50StoredFieldsFormat.fieldsReader(Lucene50StoredFieldsFormat.java:173)
at org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:117)
at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:65)
at org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:145)
at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4233)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3664)
at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:588)
at org.elasticsearch.index.engine.ElasticsearchConcurrentMergeScheduler.doMerge(ElasticsearchConcurrentMergeScheduler.java:94)
at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:626)
Suppressed: org.apache.lucene.index.CorruptIndexException: checksum status indeterminate: remaining=0, please run checkindex for more details (resource=BufferedChecksumIndexInput(NIOFSIndexInput(path="/data/es_test/data/es_test/nodes/0/indices/my_index/4/index/_190.fdx")))
at org.apache.lucene.codecs.CodecUtil.checkFooter(CodecUtil.java:371)
at org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.<init>(CompressingStoredFieldsReader.java:140)
... 10 more
Error 2
[2016-03-31 20:04:07,419][DEBUG][action.admin.cluster.node.stats] [es_test] failed to execute on node [mplUA6JET92RPgmNx-DPMA]
RemoteTransportException[[es_test][ip:9300][cluster:monitor/nodes/stats[n]]]; nested: AlreadyClosedException[this IndexReader is closed];
Caused by: org.apache.lucene.store.AlreadyClosedException: this IndexReader is closed
at org.apache.lucene.index.IndexReader.ensureOpen(IndexReader.java:274)
at org.apache.lucene.index.CompositeReader.getContext(CompositeReader.java:101)
at org.apache.lucene.index.CompositeReader.getContext(CompositeReader.java:55)
at org.apache.lucene.index.IndexReader.leaves(IndexReader.java:438)
at org.elasticsearch.search.suggest.completion.Completion090PostingsFormat.completionStats(Completion090PostingsFormat.java:330)
at org.elasticsearch.index.shard.IndexShard.completionStats(IndexShard.java:765)
at org.elasticsearch.action.admin.indices.stats.CommonStats.<init>(CommonStats.java:164)
at org.elasticsearch.indices.IndicesService.stats(IndicesService.java:253)
at org.elasticsearch.node.service.NodeService.stats(NodeService.java:157)
at org.elasticsearch.action.admin.cluster.node.stats.TransportNodesStatsAction.nodeOperation(TransportNodesStatsAction.java:82)
at org.elasticsearch.action.admin.cluster.node.stats.TransportNodesStatsAction.nodeOperation(TransportNodesStatsAction.java:44)
at org.elasticsearch.action.support.nodes.TransportNodesAction.nodeOperation(TransportNodesAction.java:92)
at org.elasticsearch.action.support.nodes.TransportNodesAction$NodeTransportHandler.messageReceived(TransportNodesAction.java:230)
at org.elasticsearch.action.support.nodes.TransportNodesAction$NodeTransportHandler.messageReceived(TransportNodesAction.java:226)
at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:75)
at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:376)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
I suspect you've been getting this error since upgrading to ES 2.3?
The most likely cause is that your transport client version (2.2.1) is older than your cluster's version (2.3).
When using the transport client, the versions have to be compatible.
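Concretely, the client-side org.elasticsearch:elasticsearch dependency should be a 2.3.x version to match the cluster. A minimal sketch of a 2.3-era transport client; the host, port 9300, and cluster name are assumptions to adapt to your setup:

import java.net.InetAddress;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;

public class Es23Client {
    public static void main(String[] args) throws Exception {
        // cluster.name must match the server's setting; "elasticsearch" is the
        // default and an assumption here
        Settings settings = Settings.settingsBuilder()
                .put("cluster.name", "elasticsearch")
                .build();
        // the client jar version should be 2.3.x, matching the cluster
        try (TransportClient client = TransportClient.builder().settings(settings).build()
                .addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName("localhost"), 9300))) {
            System.out.println("connected nodes: " + client.connectedNodes());
        }
    }
}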