I have the route "imap://%s#%s?password=%s&folderName=%s&unseen=true&delete=true&skipFailedMessage=true" to poll emails and skip failed ones. The property skipFailedMessage=true is not being honored, or I am misunderstanding its use.
I am reading emails from 5 different mailboxes with different placeholder values, but no emails get read when I encounter "org.apache.camel.RuntimeCamelException: Failed to extract body due to: BASE64Decoder: Error in encoded stream: found valid base64 character after a padding character (=)" on one of the emails. I can only read the other emails in the other mailboxes once the failing message is deleted. Please help. I tried versions 2.17.3 and 2.18, and both behave the same way.
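For context, here is a simplified sketch of how a route like this might be set up in the Java DSL. The host, credentials, and folder values are placeholders substituted at runtime, and the usual user@host separator is assumed:

import org.apache.camel.builder.RouteBuilder;

public class MailPollingRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Placeholder values; in the real setup each of the 5 mailboxes gets its own set.
        String uri = String.format(
                "imap://%s@%s?password=%s&folderName=%s"
                        + "&unseen=true&delete=true&skipFailedMessage=true",
                "user", "imap.example.com", "secret", "INBOX");
        from(uri)
                .to("log:polledMail"); // downstream processing omitted
    }
}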
Here is the stack trace:
org.apache.camel.RuntimeCamelException: Failed to extract body due to: BASE64Decoder: Error in encoded stream: found valid base64 character after a padding character (=), the 10 most recent characters were: "xmlns:v="u". Exchange: Exchange[]. Message: com.sun.mail.imap.IMAPMessage#7883ab8c
    at org.apache.camel.component.mail.MailBinding.extractBodyFromMail(MailBinding.java:278)
    at org.apache.camel.component.mail.MailMessage.createBody(MailMessage.java:105)
    at org.apache.camel.impl.MessageSupport.getBody(MessageSupport.java:47)
    at org.apache.camel.component.mail.MailConsumer.createExchanges(MailConsumer.java:354)
    at org.apache.camel.component.mail.MailConsumer.poll(MailConsumer.java:128)
    at org.apache.camel.impl.ScheduledPollConsumer.doRun(ScheduledPollConsumer.java:175)
    at org.apache.camel.impl.ScheduledPollConsumer.run(ScheduledPollConsumer.java:102)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: com.sun.mail.util.DecodingException: BASE64Decoder: Error in encoded stream: found valid base64 character after a padding character (=), the 10 most recent characters were: "xmlns:v="u"
    at com.sun.mail.util.BASE64DecoderStream.decode(BASE64DecoderStream.java:309)
    at com.sun.mail.util.BASE64DecoderStream.read(BASE64DecoderStream.java:144)
    at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:284)
    at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:326)
    at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:178)
    at java.io.InputStreamReader.read(InputStreamReader.java:184)
    at com.sun.mail.handlers.text_plain.getContent(text_plain.java:98)
    at javax.activation.DataSourceDataContentHandler.getContent(DataHandler.java:795)
    at javax.activation.DataHandler.getContent(DataHandler.java:542)
    at javax.mail.internet.MimeMessage.getContent(MimeMessage.java:1454)
    at org.apache.camel.component.mail.MailBinding.extractBodyFromMail(MailBinding.java:250)
    ... 13 common frames omitted
The error is coming from JavaMail, probably due to an incorrectly formatted message. You can tell JavaMail to ignore such errors by setting the System property "mail.mime.base64.ignoreerrors" to "true".
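For example, the property can be set programmatically before the first poll (a minimal sketch; where exactly you set it depends on how your application starts up):

// Tell JavaMail to tolerate malformed base64 content instead of throwing DecodingException.
// Must run before the first message body is decoded, e.g. early in application startup.
System.setProperty("mail.mime.base64.ignoreerrors", "true");

Alternatively, it can be passed as a JVM argument: -Dmail.mime.base64.ignoreerrors=true.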
Thanks @Claus Ibsen for logging the issue. The problem was reported as fixed in versions 2.17.5, 2.18.1, and 2.19.0.
I verified that it was fixed in version 2.18.1.
Related
I am trying to read from BigQuery using the Java BigQueryIO.read method, but I am getting the error below.
public POutput expand(PBegin pBegin) {
    // Read rows from BigQuery via a SQL query (placeholder query string).
    final String queryOperation = "select query";
    return pBegin
            .apply(BigQueryIO.readTableRows().fromQuery(queryOperation));
}
2020-06-08 19:32:01.391 IST Error message from worker: java.io.IOException: Failed to start reading from source: org.apache.beam.runners.core.construction.UnboundedReadFromBoundedSource$BoundedToUnboundedSourceAdapter#77f0db34
    org.apache.beam.runners.dataflow.worker.WorkerCustomSources$UnboundedReaderIterator.start(WorkerCustomSources.java:792)
    org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.start(ReadOperation.java:361)
    org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:194)
    org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:159)
    org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:77)
    org.apache.beam.runners.dataflow.worker.StreamingDataflowWorker.process(StreamingDataflowWorker.java:1320)
    org.apache.beam.runners.dataflow.worker.StreamingDataflowWorker.access$1000(StreamingDataflowWorker.java:151)
    org.apache.beam.runners.dataflow.worker.StreamingDataflowWorker$6.run(StreamingDataflowWorker.java:1053)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.UnsupportedOperationException: BigQuery source must be split before being read
    org.apache.beam.sdk.io.gcp.bigquery.BigQuerySourceBase.createReader(BigQuerySourceBase.java:173)
    org.apache.beam.runners.core.construction.UnboundedReadFromBoundedSource$BoundedToUnboundedSourceAdapter$ResidualSource.advance(UnboundedReadFromBoundedSource.java:467)
    org.apache.beam.runners.core.construction.UnboundedReadFromBoundedSource$BoundedToUnboundedSourceAdapter$ResidualSource.access$300(UnboundedReadFromBoundedSource.java:446)
    org.apache.beam.runners.core.construction.UnboundedReadFromBoundedSource$BoundedToUnboundedSourceAdapter$Reader.advance(UnboundedReadFromBoundedSource.java:298)
    org.apache.beam.runners.core.construction.UnboundedReadFromBoundedSource$BoundedToUnboundedSourceAdapter$Reader.start(UnboundedReadFromBoundedSource.java:291)
    org.apache.beam.runners.dataflow.worker.WorkerCustomSources$UnboundedReaderIterator.start(WorkerCustomSources.java:787)
    org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.start(ReadOperation.java:361)
    org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:194)
    org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:159)
    org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:77)
    org.apache.beam.runners.dataflow.worker.StreamingDataflowWorker.process(StreamingDataflowWorker.java:1320)
    org.apache.beam.runners.dataflow.worker.StreamingDataflowWorker.access$1000(StreamingDataflowWorker.java:151)
    org.apache.beam.runners.dataflow.worker.StreamingDataflowWorker$6.run(StreamingDataflowWorker.java:1053)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    java.lang.Thread.run(Thread.java:748)
I suppose this issue might be connected with a missing tempLocation pipeline execution parameter when you are using DataflowRunner for cloud execution.
According to the documentation:
If tempLocation is not specified and gcpTempLocation is, tempLocation will not be populated.
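As an illustration, here is a minimal sketch of setting tempLocation explicitly when constructing the pipeline options; the bucket path is an assumption, so adjust it to a GCS location your job can write to:

import org.apache.beam.runners.dataflow.DataflowRunner;
import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class PipelineSetup {
    public static void main(String[] args) {
        DataflowPipelineOptions options = PipelineOptionsFactory.fromArgs(args)
                .withValidation()
                .as(DataflowPipelineOptions.class);
        options.setRunner(DataflowRunner.class);
        // Explicitly set a temp location; BigQueryIO uses it to stage query/export results.
        options.setTempLocation("gs://my-bucket/temp"); // bucket name is an assumption
        Pipeline pipeline = Pipeline.create(options);
        // ... apply BigQueryIO.readTableRows().fromQuery(...) and the rest of the pipeline ...
        pipeline.run();
    }
}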
Since this is just my presumption, I would also encourage you to inspect the native Apache Beam runtime logs to gather more evidence about the issue, since the Stackdriver logs may not reflect the full picture of the problem.
A separate Jira issue, BEAM-9043, was raised about this vaguely worded error message.
Feel free to add more specific information to your original question for any further concerns or essential updates.
I have a big problem with Elasticsearch.
When I bulk-insert (bulk size = 20) a lot of documents into ES, the server throws the exceptions below.
I have found many topics discussing this, but nothing helped. Can anyone help me understand what actually happened? Thanks so much.
Sorry for my bad English.
I am using ES 2.3 with the Transport client 2.2.1.
Server config
http.port: 9200
http.max_content_length: 100mb
node.name: "es_test"
node.master: true
node.data: true
index.store.type: niofs
index.number_of_shards: 5
index.number_of_replicas: 0
discovery.zen.ping.multicast.enabled: false
script.inline: on
script.indexed: on
bootstrap.mlockall: true
Errors 1
[2016-03-31 07:45:02,601][ERROR][index.engine ] [es_test] [my_index][1] failed to merge
java.io.EOFException: read past EOF: NIOFSIndexInput(path="/data/es_test/data/es_test/nodes/0/indices/my_index/1/index/_190.fnm")
at org.apache.lucene.store.BufferedIndexInput.refill(BufferedIndexInput.java:336)
at org.apache.lucene.store.BufferedIndexInput.readByte(BufferedIndexInput.java:54)
at org.apache.lucene.store.BufferedChecksumIndexInput.readByte(BufferedChecksumIndexInput.java:41)
at org.apache.lucene.store.DataInput.readInt(DataInput.java:101)
at org.apache.lucene.codecs.CodecUtil.checkHeader(CodecUtil.java:195)
at org.apache.lucene.codecs.CodecUtil.checkIndexHeader(CodecUtil.java:256)
at org.apache.lucene.codecs.lucene50.Lucene50FieldInfosFormat.read(Lucene50FieldInfosFormat.java:115)
at org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:99)
at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:65)
at org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:145)
at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4233)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3664)
at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:588)
at org.elasticsearch.index.engine.ElasticsearchConcurrentMergeScheduler.doMerge(ElasticsearchConcurrentMergeScheduler.java:94)
at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:626)
Suppressed: org.apache.lucene.index.CorruptIndexException: checksum status indeterminate: remaining=0, please run checkindex for more details (resource=BufferedChecksumIndexInput(NIOFSIndexInput(path="/data/es_test/data/es_test/nodes/0/indices/my_index/1/index/_190.fnm")))
at org.apache.lucene.codecs.CodecUtil.checkFooter(CodecUtil.java:371)
at org.apache.lucene.codecs.lucene50.Lucene50FieldInfosFormat.read(Lucene50FieldInfosFormat.java:164)
... 8 more
[2016-03-31 07:45:02,608][WARN ][index.engine ] [es_test] [my_index][1] failed engine [already closed by tragic event on the index writer]
java.io.EOFException: read past EOF: NIOFSIndexInput(path="/data/es_test/data/es_test/nodes/0/indices/my_index/1/index/_190.fnm")
at org.apache.lucene.store.BufferedIndexInput.refill(BufferedIndexInput.java:336)
at org.apache.lucene.store.BufferedIndexInput.readByte(BufferedIndexInput.java:54)
at org.apache.lucene.store.BufferedChecksumIndexInput.readByte(BufferedChecksumIndexInput.java:41)
at org.apache.lucene.store.DataInput.readInt(DataInput.java:101)
at org.apache.lucene.codecs.CodecUtil.checkHeader(CodecUtil.java:195)
at org.apache.lucene.codecs.CodecUtil.checkIndexHeader(CodecUtil.java:256)
at org.apache.lucene.codecs.lucene50.Lucene50FieldInfosFormat.read(Lucene50FieldInfosFormat.java:115)
at org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:99)
at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:65)
at org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:145)
at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4233)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3664)
at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:588)
at org.elasticsearch.index.engine.ElasticsearchConcurrentMergeScheduler.doMerge(ElasticsearchConcurrentMergeScheduler.java:94)
at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:626)
Suppressed: org.apache.lucene.index.CorruptIndexException: checksum status indeterminate: remaining=0, please run checkindex for more details (resource=BufferedChecksumIndexInput(NIOFSIndexInput(path="/data/es_test/data/es_test/nodes/0/indices/my_index/1/index/_190.fnm")))
at org.apache.lucene.codecs.CodecUtil.checkFooter(CodecUtil.java:371)
at org.apache.lucene.codecs.lucene50.Lucene50FieldInfosFormat.read(Lucene50FieldInfosFormat.java:164)
... 8 more
[2016-03-31 07:45:02,609][ERROR][index.engine ] [es_test] [my_index][4] failed to merge
java.io.EOFException: read past EOF: NIOFSIndexInput(path="/data/es_test/data/es_test/nodes/0/indices/my_index/4/index/_190.fdx")
at org.apache.lucene.store.BufferedIndexInput.refill(BufferedIndexInput.java:336)
at org.apache.lucene.store.BufferedIndexInput.readByte(BufferedIndexInput.java:54)
at org.apache.lucene.store.BufferedChecksumIndexInput.readByte(BufferedChecksumIndexInput.java:41)
at org.apache.lucene.store.DataInput.readInt(DataInput.java:101)
at org.apache.lucene.codecs.CodecUtil.checkHeader(CodecUtil.java:195)
at org.apache.lucene.codecs.CodecUtil.checkIndexHeader(CodecUtil.java:256)
at org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.<init>(CompressingStoredFieldsReader.java:133)
at org.apache.lucene.codecs.compressing.CompressingStoredFieldsFormat.fieldsReader(CompressingStoredFieldsFormat.java:121)
at org.apache.lucene.codecs.lucene50.Lucene50StoredFieldsFormat.fieldsReader(Lucene50StoredFieldsFormat.java:173)
at org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:117)
at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:65)
at org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:145)
at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4233)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3664)
at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:588)
at org.elasticsearch.index.engine.ElasticsearchConcurrentMergeScheduler.doMerge(ElasticsearchConcurrentMergeScheduler.java:94)
at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:626)
Suppressed: org.apache.lucene.index.CorruptIndexException: checksum status indeterminate: remaining=0, please run checkindex for more details (resource=BufferedChecksumIndexInput(NIOFSIndexInput(path="/data/es_test/data/es_test/nodes/0/indices/my_index/4/index/_190.fdx")))
at org.apache.lucene.codecs.CodecUtil.checkFooter(CodecUtil.java:371)
at org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.<init>(CompressingStoredFieldsReader.java:140)
... 10 more
Errors 2
[2016-03-31 20:04:07,419][DEBUG][action.admin.cluster.node.stats] [es_test] failed to execute on node [mplUA6JET92RPgmNx-DPMA]
RemoteTransportException[[es_test][ip:9300][cluster:monitor/nodes/stats[n]]]; nested: AlreadyClosedException[this IndexReader is closed];
Caused by: org.apache.lucene.store.AlreadyClosedException: this IndexReader is closed
at org.apache.lucene.index.IndexReader.ensureOpen(IndexReader.java:274)
at org.apache.lucene.index.CompositeReader.getContext(CompositeReader.java:101)
at org.apache.lucene.index.CompositeReader.getContext(CompositeReader.java:55)
at org.apache.lucene.index.IndexReader.leaves(IndexReader.java:438)
at org.elasticsearch.search.suggest.completion.Completion090PostingsFormat.completionStats(Completion090PostingsFormat.java:330)
at org.elasticsearch.index.shard.IndexShard.completionStats(IndexShard.java:765)
at org.elasticsearch.action.admin.indices.stats.CommonStats.<init>(CommonStats.java:164)
at org.elasticsearch.indices.IndicesService.stats(IndicesService.java:253)
at org.elasticsearch.node.service.NodeService.stats(NodeService.java:157)
at org.elasticsearch.action.admin.cluster.node.stats.TransportNodesStatsAction.nodeOperation(TransportNodesStatsAction.java:82)
at org.elasticsearch.action.admin.cluster.node.stats.TransportNodesStatsAction.nodeOperation(TransportNodesStatsAction.java:44)
at org.elasticsearch.action.support.nodes.TransportNodesAction.nodeOperation(TransportNodesAction.java:92)
at org.elasticsearch.action.support.nodes.TransportNodesAction$NodeTransportHandler.messageReceived(TransportNodesAction.java:230)
at org.elasticsearch.action.support.nodes.TransportNodesAction$NodeTransportHandler.messageReceived(TransportNodesAction.java:226)
at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:75)
at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:376)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
I suspect you started receiving this error after upgrading to ES 2.3?
The most likely cause is that your transport client version is older than your cluster's version.
When using the transport client, the versions have to be compatible.
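For illustration, a minimal sketch of building a transport client from the 2.3.x client library so it matches the 2.3 server; the cluster name, host, and port are assumptions:

import java.net.InetAddress;

import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;

public class Es23TransportClientExample {
    public static void main(String[] args) throws Exception {
        Settings settings = Settings.settingsBuilder()
                .put("cluster.name", "elasticsearch") // must match cluster.name on the server
                .build();
        // Use the elasticsearch 2.3.x client JAR so the client and server versions are compatible.
        TransportClient client = TransportClient.builder()
                .settings(settings)
                .build()
                .addTransportAddress(new InetSocketTransportAddress(
                        InetAddress.getByName("127.0.0.1"), 9300));
        // ... issue bulk requests here ...
        client.close();
    }
}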
I am very new to Camus and Hadoop, and am running into an exception. I am trying to write some Avro files to HDFS, and keep getting the following error block:
[EtlMultiOutputRecordWriter] - ExceptionWritable key: topic=_schemas partition=0leaderId=0 server= service= beginOffset=0 offset=0 msgSize=1024 server= checksum=0 time=1450371931447 value: java.lang.Exception
at com.linkedin.camus.etl.kafka.common.KafkaReader.getNext(KafkaReader.java:108)
at com.linkedin.camus.etl.kafka.mapred.EtlRecordReader.nextKeyValue(EtlRecordReader.java:232)
at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:556)
at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException
... 14 more
I looked up line 108 in com.linkedin.camus.etl.kafka.common.KafkaReader.getNext and found it to be this: MessageAndOffset msgAndOffset = messageIter.next();.
I am using io.confluent.camus.etl.kafka.coders.AvroMessageDecoder for my decoder and com.linkedin.camus.example.DummySchemaRegistry for my schema registry.
At the end of the logs I get another line indicating an error from one of the HDFS files: Error from file [hdfs://localhost:9000/user/username/exec/2015-12-17-17-05-25/errors-m-00000]. The errors-m-00000 file starts with a somewhat readable beginning, but then changes to an undecipherable string:
SEQ*com.linkedin.camus.etl.kafka.common.EtlKey5com.linkedin.camus.etl.kafka.common.ExceptionWritable*org.apache.hadoop.io.compress.DefaultCodec|Ò ∫±ß˝}pºHí$ò¸·:0schemasQ∞∆øÿxúïîÀN√0E7l‡+∫»¢lFMõ>á*êxU®™ËzÍmàc[ÆÕ„XÚÕÿqZ%#[ÿD±gÓô…¯∆üGœ¯Ç¿Q,·Úçë2ô'«hZL¿3ëSöXÿ5ê·ê„Sé‡ÇÖpÎS¬î4,…LËÕ¥Î{û}wFßáâ*M)>%&uZÑCfi“˚#rKÌÔ¡flÌu^Í%†B∂"Xa*•⁄0ÔQÕpùGzùidy&ñªkT…śԈ≥-#0>›…∆RG∫.ˇÅ¨«JÚ®sÃ≥Ö¡\£Rîfi˚ßéT≥D#%T8ãW®ÚµÌ∫4N˙©W∫©mst√—Ô嶥óhÓ$C~#S+Ñâ{ãÇfl¡ßí⁄L´ÏíÙºÙΩ5wfÃjM¬∏_Äò5RØ£
Ë"Eeúÿëx{ÆÏ«{XW÷XM€O¨-C#É¡Òl•ù9§‰õö2ó:wɲ%Œ-N∫ˇbFXˆ∑:àá5fyQÑ‘ö™:roõ1⁄5•≠≈˚yM0±ú?»ÃW◊.h≈I´êöNæ
[û3
At the end, it appears that a Hadoop job has run but a commit never takes place, based on the timing report:
Job time (seconds):
pre setup 1.0 (11%)
get splits 1.0 (11%)
hadoop job 4.0 (44%)
commit 0.0 (0%)
Total: 0 minutes 9 seconds
Any help or an idea of where to look to resolve this would be greatly appreciated. Thank you.
We are seeing an intermittent error in CXF. The response is fairly large (several hundred KB), MTOM is enabled, and enabling DEBUG for the CXF request/response logging interceptors fixes the issue, similar to this post (which is unresolved). Our project uses CXF version 2.2.9.
javax.xml.ws.soap.SOAPFaultException: Unmarshalling Error: [was class java.io.IOException] Strange I/O stream, returned 0 bytes on read
at org.apache.cxf.jaxws.JaxWsClientProxy.invoke(JaxWsClientProxy.java:146)
at com.sun.proxy.$Proxy751.browseFiles(Unknown Source)
…
at java.lang.Thread.run(Unknown Source)
Caused by: java.lang.RuntimeException: [was class java.io.IOException] Strange I/O stream, returned 0 bytes on read
at com.ctc.wstx.util.ExceptionUtil.throwRuntimeException(ExceptionUtil.java:18)
at com.ctc.wstx.sr.StreamScanner.throwLazyError(StreamScanner.java:731)
at com.ctc.wstx.sr.BasicStreamReader.safeFinishToken(BasicStreamReader.java:3657)
at com.ctc.wstx.sr.BasicStreamReader.getTextCharacters(BasicStreamReader.java:830)
at com.sun.xml.bind.v2.runtime.unmarshaller.StAXStreamConnector.handleCharacters(StAXStreamConnector.java:323)
…
at org.apache.cxf.jaxws.JaxWsClientProxy.invoke(JaxWsClientProxy.java:124)
... 51 more
Caused by: java.io.IOException: Strange I/O stream, returned 0 bytes on read
at com.ctc.wstx.io.BaseReader.reportStrangeStream(BaseReader.java:148)
at com.ctc.wstx.io.UTF8Reader.loadMore(UTF8Reader.java:373)
…
I initially thought this was caused by a bad/invalid character (or encoding issue) in the response data; however, it now looks more like a network issue. It is very odd: the service had been operating for years without issue before we ran into this problem.
Why does this error occur? Is there a way to resolve this without enabling debug logging?
An upgrade to a newer version of CXF will likely fix this. There were some bugs in the CXF MIME streams. In particular, this looks very similar to:
https://issues.apache.org/jira/browse/CXF-3068
Based on this, and considering the source code of com.ctc.wstx.io.UTF8Reader.loadMore, the problem can occur when the buffer passed to mBuffer through BaseReader(ReaderConfig, InputStream, byte[], int, int) has zero length.
Is the loadMore() method implementation correct? Should it behave that way if read() returns 0?
Is there a maximum content length for text sent over CXF with SOAP 1.1? I'm running into an issue where one SOAP request fails while another succeeds. The only difference I have pinpointed between these requests so far is the amount of text (in bytes) I am sending.
I see an error like the following:
checkException (UnexpectedServiceExceptionCheckImpl.java:35) - An unexpected exception was found from source=[DesignService.generate] type=[class javax.xml.ws.soap.SOAPFaultException] message=[Unmarshalling Error: [was class java.io.IOException] Strange I/O stream, returned 0 bytes on read ]:
javax.xml.ws.soap.SOAPFaultException: Unmarshalling Error: [was class java.io.IOException] Strange I/O stream, returned 0 bytes on read
at org.apache.cxf.jaxws.JaxWsClientProxy.invoke(JaxWsClientProxy.java:145)
at $Proxy146.generate(Unknown Source)
I don't believe there are any maximums except for memory usage. Are you seeing any errors? Can you trace the transaction using Wireshark or something similar?