I have a big problem with ES :(
When I bulk-insert a lot of documents into ES (bulk size = 20), the ES server throws the exception below.
I have found many topics discussing this, but nothing has helped. :sosad: Can anyone tell me what is actually happening? Thanks so much.
Sorry for my bad English.
I am using ES 2.3 with Transport client 2.2.1.
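For reference, here is a minimal sketch of roughly what the bulk insert looks like (the index, type, and document contents are placeholders, not my real code):

// Sketch of a bulk insert of ~20 documents with the 2.x Transport client.
// "my_index" / "my_type" and the document maps are placeholders.
import java.util.List;
import java.util.Map;

import org.elasticsearch.action.bulk.BulkRequestBuilder;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.client.Client;

public class BulkInsertSketch {
    public static void bulkIndex(Client client, List<Map<String, Object>> batch) {
        BulkRequestBuilder bulk = client.prepareBulk();
        for (Map<String, Object> doc : batch) {
            bulk.add(client.prepareIndex("my_index", "my_type").setSource(doc));
        }
        BulkResponse response = bulk.get();
        if (response.hasFailures()) {
            System.err.println(response.buildFailureMessage());
        }
    }
}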
Server config
http.port: 9200
http.max_content_length: 100mb
node.name: "es_test"
node.master: true
node.data: true
index.store.type: niofs
index.number_of_shards: 5
index.number_of_replicas: 0
discovery.zen.ping.multicast.enabled: false
script.inline: on
script.indexed: on
bootstrap.mlockall: true
Error 1
[2016-03-31 07:45:02,601][ERROR][index.engine ] [es_test] [my_index][1] failed to merge
java.io.EOFException: read past EOF: NIOFSIndexInput(path="/data/es_test/data/es_test/nodes/0/indices/my_index/1/index/_190.fnm")
at org.apache.lucene.store.BufferedIndexInput.refill(BufferedIndexInput.java:336)
at org.apache.lucene.store.BufferedIndexInput.readByte(BufferedIndexInput.java:54)
at org.apache.lucene.store.BufferedChecksumIndexInput.readByte(BufferedChecksumIndexInput.java:41)
at org.apache.lucene.store.DataInput.readInt(DataInput.java:101)
at org.apache.lucene.codecs.CodecUtil.checkHeader(CodecUtil.java:195)
at org.apache.lucene.codecs.CodecUtil.checkIndexHeader(CodecUtil.java:256)
at org.apache.lucene.codecs.lucene50.Lucene50FieldInfosFormat.read(Lucene50FieldInfosFormat.java:115)
at org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:99)
at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:65)
at org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:145)
at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4233)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3664)
at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:588)
at org.elasticsearch.index.engine.ElasticsearchConcurrentMergeScheduler.doMerge(ElasticsearchConcurrentMergeScheduler.java:94)
at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:626)
Suppressed: org.apache.lucene.index.CorruptIndexException: checksum status indeterminate: remaining=0, please run checkindex for more details (resource=BufferedChecksumIndexInput(NIOFSIndexInput(path="/data/es_test/data/es_test/nodes/0/indices/my_index/1/index/_190.fnm")))
at org.apache.lucene.codecs.CodecUtil.checkFooter(CodecUtil.java:371)
at org.apache.lucene.codecs.lucene50.Lucene50FieldInfosFormat.read(Lucene50FieldInfosFormat.java:164)
... 8 more
[2016-03-31 07:45:02,608][WARN ][index.engine ] [es_test] [my_index][1] failed engine [already closed by tragic event on the index writer]
java.io.EOFException: read past EOF: NIOFSIndexInput(path="/data/es_test/data/es_test/nodes/0/indices/my_index/1/index/_190.fnm")
at org.apache.lucene.store.BufferedIndexInput.refill(BufferedIndexInput.java:336)
at org.apache.lucene.store.BufferedIndexInput.readByte(BufferedIndexInput.java:54)
at org.apache.lucene.store.BufferedChecksumIndexInput.readByte(BufferedChecksumIndexInput.java:41)
at org.apache.lucene.store.DataInput.readInt(DataInput.java:101)
at org.apache.lucene.codecs.CodecUtil.checkHeader(CodecUtil.java:195)
at org.apache.lucene.codecs.CodecUtil.checkIndexHeader(CodecUtil.java:256)
at org.apache.lucene.codecs.lucene50.Lucene50FieldInfosFormat.read(Lucene50FieldInfosFormat.java:115)
at org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:99)
at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:65)
at org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:145)
at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4233)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3664)
at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:588)
at org.elasticsearch.index.engine.ElasticsearchConcurrentMergeScheduler.doMerge(ElasticsearchConcurrentMergeScheduler.java:94)
at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:626)
Suppressed: org.apache.lucene.index.CorruptIndexException: checksum status indeterminate: remaining=0, please run checkindex for more details (resource=BufferedChecksumIndexInput(NIOFSIndexInput(path="/data/es_test/data/es_test/nodes/0/indices/my_index/1/index/_190.fnm")))
at org.apache.lucene.codecs.CodecUtil.checkFooter(CodecUtil.java:371)
at org.apache.lucene.codecs.lucene50.Lucene50FieldInfosFormat.read(Lucene50FieldInfosFormat.java:164)
... 8 more
[2016-03-31 07:45:02,609][ERROR][index.engine ] [es_test] [my_index][4] failed to merge
java.io.EOFException: read past EOF: NIOFSIndexInput(path="/data/es_test/data/es_test/nodes/0/indices/my_index/4/index/_190.fdx")
at org.apache.lucene.store.BufferedIndexInput.refill(BufferedIndexInput.java:336)
at org.apache.lucene.store.BufferedIndexInput.readByte(BufferedIndexInput.java:54)
at org.apache.lucene.store.BufferedChecksumIndexInput.readByte(BufferedChecksumIndexInput.java:41)
at org.apache.lucene.store.DataInput.readInt(DataInput.java:101)
at org.apache.lucene.codecs.CodecUtil.checkHeader(CodecUtil.java:195)
at org.apache.lucene.codecs.CodecUtil.checkIndexHeader(CodecUtil.java:256)
at org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.<init>(CompressingStoredFieldsReader.java:133)
at org.apache.lucene.codecs.compressing.CompressingStoredFieldsFormat.fieldsReader(CompressingStoredFieldsFormat.java:121)
at org.apache.lucene.codecs.lucene50.Lucene50StoredFieldsFormat.fieldsReader(Lucene50StoredFieldsFormat.java:173)
at org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:117)
at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:65)
at org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:145)
at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4233)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3664)
at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:588)
at org.elasticsearch.index.engine.ElasticsearchConcurrentMergeScheduler.doMerge(ElasticsearchConcurrentMergeScheduler.java:94)
at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:626)
Suppressed: org.apache.lucene.index.CorruptIndexException: checksum status indeterminate: remaining=0, please run checkindex for more details (resource=BufferedChecksumIndexInput(NIOFSIndexInput(path="/data/es_test/data/es_test/nodes/0/indices/my_index/4/index/_190.fdx")))
at org.apache.lucene.codecs.CodecUtil.checkFooter(CodecUtil.java:371)
at org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.<init>(CompressingStoredFieldsReader.java:140)
... 10 more
Error 2
[2016-03-31 20:04:07,419][DEBUG][action.admin.cluster.node.stats] [es_test] failed to execute on node [mplUA6JET92RPgmNx-DPMA]
RemoteTransportException[[es_test][ip:9300][cluster:monitor/nodes/stats[n]]]; nested: AlreadyClosedException[this IndexReader is closed];
Caused by: org.apache.lucene.store.AlreadyClosedException: this IndexReader is closed
at org.apache.lucene.index.IndexReader.ensureOpen(IndexReader.java:274)
at org.apache.lucene.index.CompositeReader.getContext(CompositeReader.java:101)
at org.apache.lucene.index.CompositeReader.getContext(CompositeReader.java:55)
at org.apache.lucene.index.IndexReader.leaves(IndexReader.java:438)
at org.elasticsearch.search.suggest.completion.Completion090PostingsFormat.completionStats(Completion090PostingsFormat.java:330)
at org.elasticsearch.index.shard.IndexShard.completionStats(IndexShard.java:765)
at org.elasticsearch.action.admin.indices.stats.CommonStats.<init>(CommonStats.java:164)
at org.elasticsearch.indices.IndicesService.stats(IndicesService.java:253)
at org.elasticsearch.node.service.NodeService.stats(NodeService.java:157)
at org.elasticsearch.action.admin.cluster.node.stats.TransportNodesStatsAction.nodeOperation(TransportNodesStatsAction.java:82)
at org.elasticsearch.action.admin.cluster.node.stats.TransportNodesStatsAction.nodeOperation(TransportNodesStatsAction.java:44)
at org.elasticsearch.action.support.nodes.TransportNodesAction.nodeOperation(TransportNodesAction.java:92)
at org.elasticsearch.action.support.nodes.TransportNodesAction$NodeTransportHandler.messageReceived(TransportNodesAction.java:230)
at org.elasticsearch.action.support.nodes.TransportNodesAction$NodeTransportHandler.messageReceived(TransportNodesAction.java:226)
at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:75)
at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:376)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Did you start seeing this error after upgrading to ES 2.3?
The cause is most likely that your Transport client version is older than your cluster's version.
When using the Transport client, the versions have to be compatible.
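For example, a rough sketch of building a Transport client that matches the cluster version (the cluster name and host below are placeholders); the important part is depending on a 2.3.x client jar (e.g. org.elasticsearch:elasticsearch:2.3.x) rather than 2.2.1:

// Sketch: pair an ES 2.3 cluster with a 2.3.x Transport client.
import java.net.InetAddress;

import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;

public class ClientFactory {
    public static TransportClient create() throws Exception {
        Settings settings = Settings.settingsBuilder()
                .put("cluster.name", "my_cluster")   // placeholder cluster name
                .build();
        return TransportClient.builder()
                .settings(settings)
                .build()
                .addTransportAddress(new InetSocketTransportAddress(
                        InetAddress.getByName("127.0.0.1"), 9300)); // placeholder host
    }
}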
Related
I have had some trouble while using OpenFeign.
I'm using Hoxton.RELEASE with Spring Boot 2.2.1.RELEASE.
Caused by: feign.FeignException: status 400 reading Service#method(List,String); content:
at feign.FeignException.errorStatus(FeignException.java:62)
at feign.codec.ErrorDecoder$Default.decode(ErrorDecoder.java:91)
at feign.SynchronousMethodHandler.executeAndDecode(SynchronousMethodHandler.java:134)
at feign.SynchronousMethodHandler.invoke(SynchronousMethodHandler.java:76)
at feign.hystrix.HystrixInvocationHandler$1.run(HystrixInvocationHandler.java:108)
at com.netflix.hystrix.HystrixCommand$2.call(HystrixCommand.java:301)
at com.netflix.hystrix.HystrixCommand$2.call(HystrixCommand.java:297)
at rx.internal.operators.OnSubscribeDefer.call(OnSubscribeDefer.java:46)
... 26 common frames omitted
This error occurs when the list is too big.
If the list is small, the call succeeds.
Thank you!
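One workaround sketch, assuming the remote service accepts smaller batches (the Service interface, method signature, and chunk size below are hypothetical stand-ins for the real Feign client):

// Hypothetical sketch: split a large list into smaller chunks before calling
// the Feign client, since the 400 only appears when the list is large.
import java.util.ArrayList;
import java.util.List;

public class BatchedClientCall {
    private static final int CHUNK_SIZE = 100; // hypothetical batch size

    static <T> List<List<T>> partition(List<T> items, int size) {
        List<List<T>> chunks = new ArrayList<>();
        for (int i = 0; i < items.size(); i += size) {
            chunks.add(items.subList(i, Math.min(i + size, items.size())));
        }
        return chunks;
    }

    interface Service {                          // hypothetical Feign interface
        void method(List<String> ids, String param);
    }

    static void callInChunks(Service service, List<String> ids, String param) {
        for (List<String> chunk : partition(ids, CHUNK_SIZE)) {
            service.method(chunk, param);        // stands in for the real Feign call
        }
    }
}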
I wrote a MapReduce job that scans an HBase table over a certain time range to count certain elements we need for analysis.
The mappers in the MR job keep failing, but I don't know why. Each time I run the job, a different number of mappers fail. The YARN log (see below) from Cloudera Manager isn't helpful in pinpointing the problem, although someone said I might be running out of memory.
It seems to retry multiple times, but each time it fails. What do I need to do to stop it failing, or how can I log things to better determine what is happening?
Below is a YARN log for one of the mappers that failed.
Error: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions: Thu Jun 15 16:26:57 PDT 2017, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=60301: row '152_p3401.db161139.sjc102.dbi_1496271480' on table 'dbi_based_data' at region=dbi_based_data,151_p3413.db162024.iad4.dbi_1476974340,1486675565213.d83250d0682e648d165872afe5abd60e., hostname=hslave35118.ams9.mysecretdomain.com,60020,1483570489305, seqNum=19308931
    at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:276)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:207)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
    at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:320)
    at org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:403)
    at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:364)
    at org.apache.hadoop.hbase.mapreduce.TableRecordReaderImpl.nextKeyValue(TableRecordReaderImpl.java:236)
    at org.apache.hadoop.hbase.mapreduce.TableRecordReader.nextKeyValue(TableRecordReader.java:147)
    at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase$1.nextKeyValue(TableInputFormatBase.java:216)
    at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:556)
    at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
    at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.net.SocketTimeoutException: callTimeout=60000, callDuration=60301: row '152_p3401.db161139.sjc102.dbi_1496271480' on table 'dbi_based_data' at region=dbi_based_data,151_p3413.db162024.iad4.dbi_1476974340,1486675565213.d83250d0682e648d165872afe5abd60e., hostname=hslave35118.ams9.mysecretdomain.com,60020,1483570489305, seqNum=19308931
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:159)
    at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:65)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Call to hslave35118.ams9.mysecretdomain.com/10.216.35.118:60020 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=12, waitTime=60001, operationTimeout=60000 expired.
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.wrapException(AbstractRpcClient.java:291)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1272)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:226)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:331)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:34094)
    at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:219)
    at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:64)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:360)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:334)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
    ... 4 more
Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=12, waitTime=60001, operationTimeout=60000 expired.
    at org.apache.hadoop.hbase.ipc.Call.checkAndSetTimeout(Call.java:73)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1246)
    ... 13 more
So it looks like for my case I needed to extend the timeout setting. In my Java program I had to add the following lines to make the exception go away:
conf.set("hbase.rpc.timeout","90000");
conf.set("hbase.client.scanner.timeout.period","90000");
The answer was found on this link on Cloudera's site
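For completeness, a sketch of roughly where those settings fit into the job setup (the table name comes from the question; the mapper, output types, and caching value are placeholders):

// Sketch: apply the longer HBase RPC/scanner timeouts when building the scan job.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.mapreduce.Job;

public class CountJob {
    // Placeholder mapper standing in for the real counting mapper.
    public static class CountMapper extends TableMapper<ImmutableBytesWritable, LongWritable> {
    }

    public static Job buildJob() throws Exception {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.rpc.timeout", "90000");                   // from the answer above
        conf.set("hbase.client.scanner.timeout.period", "90000"); // from the answer above

        Scan scan = new Scan();
        scan.setCaching(500);        // placeholder: fewer rows per RPC also reduces timeouts
        scan.setCacheBlocks(false);  // recommended for MapReduce scans

        Job job = Job.getInstance(conf, "count-elements");
        job.setJarByClass(CountJob.class);
        TableMapReduceUtil.initTableMapperJob(
                "dbi_based_data",    // table from the question
                scan,
                CountMapper.class,
                ImmutableBytesWritable.class,
                LongWritable.class,
                job);
        return job;
    }
}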
I have the route "imap://%s#%s?password=%s&folderName=%s&unseen=true&delete=true&skipFailedMessage=true" to poll emails and skip failed ones. The property skipFailedMessage=true is not being honored, or I am misunderstanding its use.
I am reading emails from 5 different mailboxes with different placeholders, but no emails get read when I encounter "org.apache.camel.RuntimeCamelException: Failed to extract body due to: BASE64Decoder: Error in encoded stream: found valid base64 character after a padding character (=)" on one of the emails. I can only read the other emails in the other mailboxes once the failing message is deleted. Please help. I tried versions 2.17.3 and 2.18 and both behave the same way.
Here is the stack trace:
org.apache.camel.RuntimeCamelException: Failed to extract body due to: BASE64Decoder: Error in encoded stream: found valid base64 character after a padding character (=), the 10 most recent characters were: "xmlns:v="u". Exchange: Exchange[]. Message: com.sun.mail.imap.IMAPMessage#7883ab8c
    at org.apache.camel.component.mail.MailBinding.extractBodyFromMail(MailBinding.java:278)
    at org.apache.camel.component.mail.MailMessage.createBody(MailMessage.java:105)
    at org.apache.camel.impl.MessageSupport.getBody(MessageSupport.java:47)
    at org.apache.camel.component.mail.MailConsumer.createExchanges(MailConsumer.java:354)
    at org.apache.camel.component.mail.MailConsumer.poll(MailConsumer.java:128)
    at org.apache.camel.impl.ScheduledPollConsumer.doRun(ScheduledPollConsumer.java:175)
    at org.apache.camel.impl.ScheduledPollConsumer.run(ScheduledPollConsumer.java:102)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: com.sun.mail.util.DecodingException: BASE64Decoder: Error in encoded stream: found valid base64 character after a padding character (=), the 10 most recent characters were: "xmlns:v="u"
    at com.sun.mail.util.BASE64DecoderStream.decode(BASE64DecoderStream.java:309)
    at com.sun.mail.util.BASE64DecoderStream.read(BASE64DecoderStream.java:144)
    at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:284)
    at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:326)
    at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:178)
    at java.io.InputStreamReader.read(InputStreamReader.java:184)
    at com.sun.mail.handlers.text_plain.getContent(text_plain.java:98)
    at javax.activation.DataSourceDataContentHandler.getContent(DataHandler.java:795)
    at javax.activation.DataHandler.getContent(DataHandler.java:542)
    at javax.mail.internet.MimeMessage.getContent(MimeMessage.java:1454)
    at org.apache.camel.component.mail.MailBinding.extractBodyFromMail(MailBinding.java:250)
    ... 13 common frames omitted
The error is coming from JavaMail, probably due to an incorrectly formatted message. You can tell JavaMail to ignore such errors by setting the System property "mail.mime.base64.ignoreerrors" to "true".
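For example (a sketch; the property has to be set before the mail session is created, either in code or as a -D JVM flag):

// Sketch: tell JavaMail to ignore malformed base64 parts, as suggested above.
// Equivalent JVM flag: -Dmail.mime.base64.ignoreerrors=true
public class MailDecodingWorkaround {
    public static void apply() {
        System.setProperty("mail.mime.base64.ignoreerrors", "true");
    }
}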
Thanks @Claus Ibsen for logging the issue. The problem was reported as fixed in versions 2.17.5, 2.18.1, and 2.19.0.
I verified that it was fixed in version 2.18.1.
I am working with a route which downloads files from a remote server to a local directory using SFTP.
Apache Camel version: 2.15.2
From endpoint: sftp://xxx.xxx.xxx.xxx:xx//User/User01?delay=30s&include=File.*.csv&initialDelay=1m&password=xxxxxx&stepwise=false&streamDownload=true&username=User01
To endpoint: file:///var/opt/myfolder/incoming?doneFileName=${file:name}.done
The remote location has more than 10 files available for download. After downloading 2-3 files, the route gets stuck and after around 30 seconds I get the below error in logs:
DEBUG 04/11/15 07:45:24,183 org.apache.camel.component.file.FileOperations :Using InputStream to write file: /var/opt/myfolder/incoming/File01.csv
...
around 30 secs gap
...
INFO 04/11/15 07:49:53,820 org.apache.camel.component.file.remote.SftpOperations$JSchLogger :JSCH -> Caught an exception, leaving main loop due to Connection reset
INFO 04/11/15 07:49:53,821 org.apache.camel.component.file.remote.SftpOperations$JSchLogger :JSCH -> Disconnecting from xxx.xxx.xxx.xxx port xx
WARN 04/11/15 07:49:53,823 org.apache.camel.util.IOHelper :Cannot close: File01.csv. Reason: Pipe closed
java.io.IOException: Pipe closed
at java.io.PipedInputStream.read(PipedInputStream.java:308)
at java.io.PipedInputStream.read(PipedInputStream.java:378)
at java.io.InputStream.skip(InputStream.java:222)
at com.jcraft.jsch.ChannelSftp.skip(ChannelSftp.java:2894)
at com.jcraft.jsch.ChannelSftp.access$600(ChannelSftp.java:36)
at com.jcraft.jsch.ChannelSftp$RequestQueue.cancel(ChannelSftp.java:1246)
at com.jcraft.jsch.ChannelSftp$2.close(ChannelSftp.java:1503)
at org.apache.camel.util.IOHelper.close(IOHelper.java:326)
at org.apache.camel.component.file.FileOperations.writeFileByStream(FileOperations.java:404)
at org.apache.camel.component.file.FileOperations.storeFile(FileOperations.java:274)
at org.apache.camel.component.file.GenericFileProducer.writeFile(GenericFileProducer.java:277)
at org.apache.camel.component.file.GenericFileProducer.processExchange(GenericFileProducer.java:165)
at org.apache.camel.component.file.GenericFileProducer.process(GenericFileProducer.java:79)
at org.apache.camel.util.AsyncProcessorConverterHelper$ProcessorToAsyncProcessorBridge.process(AsyncProcessorConverterHelper.java:61)
at org.apache.camel.processor.SendProcessor.process(SendProcessor.java:129)
at org.apache.camel.management.InstrumentationProcessor.process(InstrumentationProcessor.java:77)
at org.apache.camel.processor.RedeliveryErrorHandler.process(RedeliveryErrorHandler.java:448)
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:191)
at org.apache.camel.processor.Pipeline.process(Pipeline.java:118)
at org.apache.camel.processor.Pipeline.process(Pipeline.java:80)
at org.apache.camel.processor.ChoiceProcessor.process(ChoiceProcessor.java:111)
at org.apache.camel.management.InstrumentationProcessor.process(InstrumentationProcessor.java:77)
at org.apache.camel.processor.RedeliveryErrorHandler.process(RedeliveryErrorHandler.java:448)
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:191)
at org.apache.camel.processor.Pipeline.process(Pipeline.java:118)
at org.apache.camel.processor.Pipeline.process(Pipeline.java:80)
at org.apache.camel.util.AsyncProcessorHelper.process(AsyncProcessorHelper.java:109)
at org.apache.camel.processor.Pipeline.process(Pipeline.java:60)
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:166)
at org.apache.camel.component.file.GenericFileConsumer.processExchange(GenericFileConsumer.java:435)
at org.apache.camel.component.file.remote.RemoteFileConsumer.processExchange(RemoteFileConsumer.java:137)
at org.apache.camel.component.file.GenericFileConsumer.processBatch(GenericFileConsumer.java:211)
at org.apache.camel.component.file.GenericFileConsumer.poll(GenericFileConsumer.java:175)
at org.apache.camel.impl.ScheduledPollConsumer.doRun(ScheduledPollConsumer.java:174)
at org.apache.camel.impl.ScheduledPollConsumer.run(ScheduledPollConsumer.java:101)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
I have set up redelivery so it re-attempts the download. The re-attempt happens for the last file and, per the logs, the file is downloaded. But when I check the folder, the file size is 0 even though the .done file is created; the actual file size on the remote server is 28 KB.
For the rest of the files I get the error below for each file, and none of them is downloaded:
WARN 04/11/15 07:49:58,877 org.slf4j.helpers.MarkerIgnoringBase :Error processing file RemoteFile[/User/User01/File02.csv] due to Cannot retrieve file: /User/User01/File02.csv. Caused by: [org.apache.camel.component.file.GenericFileOperationFailedException - Cannot retrieve file: /User/User01/File02.csv]
org.apache.camel.component.file.GenericFileOperationFailedException: Cannot retrieve file: /User/User01/File02.csv
at org.apache.camel.component.file.remote.SftpOperations.retrieveFileToStreamInBody(SftpOperations.java:651)
at org.apache.camel.component.file.remote.SftpOperations.retrieveFile(SftpOperations.java:594)
at org.apache.camel.component.file.GenericFileConsumer.processExchange(GenericFileConsumer.java:396)
at org.apache.camel.component.file.remote.RemoteFileConsumer.processExchange(RemoteFileConsumer.java:137)
at org.apache.camel.component.file.GenericFileConsumer.processBatch(GenericFileConsumer.java:211)
at org.apache.camel.component.file.GenericFileConsumer.poll(GenericFileConsumer.java:175)
at org.apache.camel.impl.ScheduledPollConsumer.doRun(ScheduledPollConsumer.java:174)
at org.apache.camel.impl.ScheduledPollConsumer.run(ScheduledPollConsumer.java:101)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Caused by: 4:
at com.jcraft.jsch.ChannelSftp.get(ChannelSftp.java:1513)
at com.jcraft.jsch.ChannelSftp.get(ChannelSftp.java:1266)
at org.apache.camel.component.file.remote.SftpOperations.retrieveFileToStreamInBody(SftpOperations.java:636)
... 14 more
Caused by: java.io.IOException: Pipe closed
at java.io.PipedInputStream.read(PipedInputStream.java:308)
at com.jcraft.jsch.Channel$MyPipedInputStream.updateReadSide(Channel.java:362)
at com.jcraft.jsch.ChannelSftp.get(ChannelSftp.java:1287)
... 16 more
I tried both disconnect=true and disconnect=false; the issue happens in both cases.
Any suggestions on what could be wrong?
You have used both initialDelay, which is the delay before the consumer's first poll, and delay, which is the delay before each subsequent poll. Together they create a time buffer of roughly:
initialDelay + download time + delay + time to write to the destination directory
Try using only "delay".
Try appending a timestamp to the file name stored in the destination (see the sketch below).
Try camel-ftp2.
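A sketch of the simplified route with only "delay" and a timestamped destination file name (host, port, credentials, and paths are the placeholders from the question):

// Sketch: same SFTP-to-file route, keeping only "delay" and appending a timestamp
// to the stored file name. All connection details are placeholders.
import org.apache.camel.builder.RouteBuilder;

public class SftpDownloadRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("sftp://xxx.xxx.xxx.xxx:xx//User/User01"
                + "?username=User01&password=xxxxxx"
                + "&include=File.*.csv"
                + "&stepwise=false&streamDownload=true"
                + "&delay=30s")                    // only "delay", no initialDelay
            .to("file:///var/opt/myfolder/incoming"
                + "?fileName=${file:name.noext}-${date:now:yyyyMMddHHmmss}.${file:ext}"
                + "&doneFileName=${file:name}.done");
    }
}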
I am trying to use sstableloader to stream data to a Cassandra database, which is in fact on the same node. It used to work when I was using DSE 2.2, but when I upgraded to DSE 4.5 and made all the relevant changes in cassandra.yaml, it stopped working and now it throws an error like this:
Established connection to initial hosts
Opening sstables and calculating sections to stream
Streaming relevant part of demo/test_yale/demo-test_yale-jb-2-Data.db demo/test_yale/demo-test_yale-jb-1-Data.db to [/127.0.0.1]
Streaming session ID: 02225ef0-1c17-11e4-a1ea-5f2d4f6a32c1
progress: [/127.0.0.1 1/2 (88%)] [total: 88% - 2147483647MB/s (avg: 14MB/s)]ERROR 16:36:29,029 [Stream #02225ef0-1c17-11e4-a1ea-5f2d4f6a32c1] Streaming error occurred
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
at sun.nio.ch.IOUtil.write(IOUtil.java:65)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:487)
at java.nio.channels.Channels.writeFullyImpl(Channels.java:78)
at java.nio.channels.Channels.writeFully(Channels.java:98)
at java.nio.channels.Channels.access$000(Channels.java:61)
at java.nio.channels.Channels$1.write(Channels.java:174)
at com.ning.compress.lzf.LZFChunk.writeCompressedHeader(LZFChunk.java:77)
at com.ning.compress.lzf.ChunkEncoder.encodeAndWriteChunk(ChunkEncoder.java:132)
at com.ning.compress.lzf.LZFOutputStream.writeCompressedBlock(LZFOutputStream.java:203)
at com.ning.compress.lzf.LZFOutputStream.write(LZFOutputStream.java:97)
at org.apache.cassandra.streaming.StreamWriter.write(StreamWriter.java:151)
at org.apache.cassandra.streaming.StreamWriter.write(StreamWriter.java:101)
at org.apache.cassandra.streaming.messages.OutgoingFileMessage$1.serialize(OutgoingFileMessage.java:59)
at org.apache.cassandra.streaming.messages.OutgoingFileMessage$1.serialize(OutgoingFileMessage.java:42)
at org.apache.cassandra.streaming.messages.StreamMessage.serialize(StreamMessage.java:45)
at org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.sendMessage(ConnectionHandler.java:383)
at org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.run(ConnectionHandler.java:363)
at java.lang.Thread.run(Thread.java:745)
WARN 16:36:29,032 [Stream #02225ef0-1c17-11e4-a1ea-5f2d4f6a32c1] Stream failed
Streaming to the following hosts failed:
[/127.0.0.1]
java.util.concurrent.ExecutionException: org.apache.cassandra.streaming.StreamException: Stream failed
I have even tried assigning the actual IP address of the node to listen_address, broadcast_address, and rpc_address in cassandra.yaml, but the same error occurs.
Can anyone be of assistance, please?
It's worth looking at your system.log (its location is specified in cassandra/conf/logback.xml), as suggested by Zanson.
In my case the issue was simply that the node had run out of disk space:
ERROR [STREAM-IN-/xx.xx.xx.xx] 2016-08-02 10:50:31,125 StreamSession.java:505 - [Stream #8420bfa0-589c-11e6-9512-235b1f79cf1b] Streaming error occurred
java.io.IOException: No space left on device
at java.io.RandomAccessFile.writeBytes(Native Method) ~[na:1.8.0_101]
at java.io.RandomAccessFile.write(RandomAccessFile.java:525) ~[na:1.8.0_101]