Timeout while writing to the queue-based output stream - java

My GAE application works fine, but at some point it throws the error below. I am trying to figure out the root cause of the "Timeout while writing to the queue-based output stream" error:
java.io.IOException: Timeout while writing to the queue-based output stream
org.restlet.engine.io.PipeStream$2.write(PipeStream.java:99)
java.io.OutputStream.write(OutputStream.java:116)
com.dropbox.core.util.IOUtil.copyStreamToStream(IOUtil.java:52)
com.dropbox.core.util.IOUtil.copyStreamToStream(IOUtil.java:63)
com.dropbox.core.util.IOUtil.copyStreamToStream(IOUtil.java:34)
com.dropbox.core.v1.DbxClientV1$Downloader.copyBodyAndClose(DbxClientV1.java:535)
com.dropbox.core.v1.DbxClientV1.getFile(DbxClientV1.java:427)
com.myapp.MyServerResource.getFile(MyServerResource.java:268)
com.myapp.MyServerResource$1.write(MyServerResource.java:140)
org.restlet.engine.io.IoUtils$2.run(IoUtils.java:537)
org.restlet.engine.Engine$1.run(Engine.java:158)
com.google.appengine.tools.development.RequestThreadFactory$1$1$2.run(RequestThreadFactory.java:110)
java.security.AccessController.doPrivileged(Native Method)
com.google.appengine.tools.development.RequestThreadFactory$1$1.run(RequestThreadFactory.java:107)
Is this really a timeout issue with the URL fetch, or an issue with the output stream copy?
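For reference, the failing code follows a pattern like the one below (a simplified reconstruction from the stack trace, not the exact source; the Dropbox path and variable names are made up). Per the trace, Restlet transfers the representation's write through an internal queue-based pipe with its own timeout, so if bytes are produced or consumed too slowly, the pipe write can time out even when the underlying URL fetch succeeds.
// Stream a Dropbox file straight into the Restlet response.
Representation rep = new OutputRepresentation(MediaType.APPLICATION_OCTET_STREAM) {
    @Override
    public void write(OutputStream out) throws IOException {
        try {
            dbxClient.getFile("/some/file.bin", null, out); // copies the download into 'out'
        } catch (DbxException e) {
            throw new IOException(e);
        }
    }
};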

Related

Apache Flink Quickstart - reading CSV file error: Futures timed out after [10000 milliseconds]

I want to read a CSV file locally using the Flink API, with the following code:
final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
String csvPath = "data/weather.csv";
List<Tuple2<String, Double>> csv = env.readCsvFile(csvPath)
        .types(String.class, Double.class)
        .collect();
I tried files of different sizes (from 800 MB to 6 GB). Sometimes the operation completes successfully and sometimes it does not, failing with the following timeout exception:
Exception in thread "main" java.util.concurrent.TimeoutException: Futures timed out after [10000 milliseconds]
at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:153)
at scala.concurrent.Await$$anonfun$ready$1.apply(package.scala:169)
at scala.concurrent.Await$$anonfun$ready$1.apply(package.scala:169)
at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
at scala.concurrent.Await$.ready(package.scala:169)
at org.apache.flink.runtime.minicluster.FlinkMiniCluster.shutdown(FlinkMiniCluster.scala:439)
at org.apache.flink.runtime.minicluster.FlinkMiniCluster.stop(FlinkMiniCluster.scala:408)
at org.apache.flink.client.LocalExecutor.stop(LocalExecutor.java:127)
at org.apache.flink.client.LocalExecutor.executePlan(LocalExecutor.java:195)
at org.apache.flink.api.java.LocalEnvironment.execute(LocalEnvironment.java:91)
at org.apache.flink.api.java.ExecutionEnvironment.execute(ExecutionEnvironment.java:923)
at org.apache.flink.api.java.DataSet.collect(DataSet.java:410)
at org.apache.flink.simpleCSV.run(simpleCSV.java:83)
How can I fix this problem? Can I increase this timeout programmatically, or should I put a config file somewhere? Is there a specific heap size I should set based on the file size?
collect() transfers the data from the cluster to the local client. This only works for very small data sets (< 10 MB).
If you have larger data sets, you need to process them on the cluster and emit the results through an output format, e.g., write them to a file, as in the sketch below.
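A minimal sketch of that approach, reusing the CSV input from the question (the output path and job name are illustrative):
import org.apache.flink.core.fs.FileSystem;

// Write the result on the cluster instead of collecting it into the client JVM.
env.readCsvFile(csvPath)
   .types(String.class, Double.class)
   .writeAsCsv("file:///tmp/weather-out", FileSystem.WriteMode.OVERWRITE);
env.execute("weather job"); // triggers execution; no data is shipped back to the client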
If you are debugging this program, you can set a breakpoint at the constructor of org.apache.flink.api.java.LocalEnvironment (the constructor that takes a config) and run the following command to change the timeout to 200 seconds (Alt+F8 in IntelliJ IDEA):
config.setString("akka.ask.timeout", "200 s")
To find the LocalEnvironment class in IntelliJ IDEA, press Ctrl+N, check "Include non-project classes" in the pop-up window, then type "LocalEnvironment" in the edit box.
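If you would rather not patch the value in a debugger, the same setting can also be applied programmatically when creating the local environment (a sketch; the key is the same one used above):
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.configuration.Configuration;

Configuration config = new Configuration();
config.setString("akka.ask.timeout", "200 s"); // raise the actor-system ask timeout
ExecutionEnvironment env = ExecutionEnvironment.createLocalEnvironment(config);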

I have "broken pipe" exception using Sockets and Netty. What's wrong?

Hi guys!
I have some protobuf objects to send to a server.
The server uses Netty.
When I use Netty for the client with Netty's ProtobufDecoder, everything works fine.
But when I try to send a protobuf object through a vanilla Java Socket, I get a "broken pipe" exception. The Socket object stays connected, though, and I can send another object and get the exception again. =(
Here is the Netty-based client's pipeline setup:
ChannelPipeline pipeline = channel.pipeline();
pipeline.addLast("frameDecoder", new LengthFieldBasedFrameDecoder(134217728, 0, 4, 0, 4));
pipeline.addLast("protobufDecoder", new ProtobufDecoder(mSocketListener.getInMessage()));
I don't know where to specify maxFrameLength and the other parameters when writing to a plain socket.
Here is the error log:
Exception in thread "main" java.net.SocketException: Broken pipe
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:109)
at java.net.SocketOutputStream.write(SocketOutputStream.java:141)
at SocketClientMain.main(SocketClientMain.java:28)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)
What is going wrong?
Is the mistake on my side?
Or do I need to ask the server owner for some info?
java.net.SocketException: Broken pipe is caused by writing to a connection that the other side has already closed. It was not you who closed the connection; otherwise a different exception would have been thrown. You cannot recover the connection; you have to open a new one.
If you keep getting the exception, it simply means the application protocol is poorly defined or poorly implemented.
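Given the client pipeline above, one likely culprit (an educated guess, since the server code is not shown): the server expects every message to be prefixed with a 4-byte length field, which is exactly what LengthFieldBasedFrameDecoder(134217728, 0, 4, 0, 4) strips on the receiving side, so a raw protobuf write without that prefix can make the server close the connection. A minimal sketch of a length-prefixed write over a plain socket (host, port, and the message variable are placeholders):
import java.io.DataOutputStream;
import java.net.Socket;

byte[] body = message.toByteArray(); // any generated protobuf message
try (Socket socket = new Socket("example.com", 9000)) {
    DataOutputStream out = new DataOutputStream(socket.getOutputStream());
    out.writeInt(body.length); // 4-byte big-endian length prefix, matching the decoder config
    out.write(body);           // serialized protobuf payload
    out.flush();
}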

SSTable loader streaming failed giving java.io.IOException: Connection reset by peer

I am trying to use sstableloader to stream data to a Cassandra database, which is in fact on the same node. It used to work when I was using DSE 2.2, but when I upgraded to DSE 4.5 and made all the relevant changes in the cassandra.yaml file, it stopped working, and now it throws an error like this:
Established connection to initial hosts
Opening sstables and calculating sections to stream
Streaming relevant part of demo/test_yale/demo-test_yale-jb-2-Data.db demo/test_yale/demo-test_yale-jb-1-Data.db to [/127.0.0.1]
Streaming session ID: 02225ef0-1c17-11e4-a1ea-5f2d4f6a32c1
progress: [/127.0.0.1 1/2 (88%)] [total: 88% - 2147483647MB/s (avg: 14MB/s)]ERROR 16:36:29,029 [Stream #02225ef0-1c17-11e4-a1ea-5f2d4f6a32c1] Streaming error occurred
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
at sun.nio.ch.IOUtil.write(IOUtil.java:65)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:487)
at java.nio.channels.Channels.writeFullyImpl(Channels.java:78)
at java.nio.channels.Channels.writeFully(Channels.java:98)
at java.nio.channels.Channels.access$000(Channels.java:61)
at java.nio.channels.Channels$1.write(Channels.java:174)
at com.ning.compress.lzf.LZFChunk.writeCompressedHeader(LZFChunk.java:77)
at com.ning.compress.lzf.ChunkEncoder.encodeAndWriteChunk(ChunkEncoder.java:132)
at com.ning.compress.lzf.LZFOutputStream.writeCompressedBlock(LZFOutputStream.java:203)
at com.ning.compress.lzf.LZFOutputStream.write(LZFOutputStream.java:97)
at org.apache.cassandra.streaming.StreamWriter.write(StreamWriter.java:151)
at org.apache.cassandra.streaming.StreamWriter.write(StreamWriter.java:101)
at org.apache.cassandra.streaming.messages.OutgoingFileMessage$1.serialize(OutgoingFileMessage.java:59)
at org.apache.cassandra.streaming.messages.OutgoingFileMessage$1.serialize(OutgoingFileMessage.java:42)
at org.apache.cassandra.streaming.messages.StreamMessage.serialize(StreamMessage.java:45)
at org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.sendMessage(ConnectionHandler.java:383)
at org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.run(ConnectionHandler.java:363)
at java.lang.Thread.run(Thread.java:745)
WARN 16:36:29,032 [Stream #02225ef0-1c17-11e4-a1ea-5f2d4f6a32c1] Stream failed
Streaming to the following hosts failed:
[/127.0.0.1]
java.util.concurrent.ExecutionException: org.apache.cassandra.streaming.StreamException: Stream failed
I have even tried assigning the actual IP address of the node to listen_address, broadcast_address, and rpc_address in the cassandra.yaml file, but the same error occurs.
Can anyone be of assistance, please?
It's worth looking at your system.log (its location is specified in cassandra/conf/logback.xml), as suggested by Zanson.
In my case the issue was simply exhausted disk space on the node:
ERROR [STREAM-IN-/xx.xx.xx.xx] 2016-08-02 10:50:31,125 StreamSession.java:505 - [Stream #8420bfa0-589c-11e6-9512-235b1f79cf1b] Streaming error occurred
java.io.IOException: No space left on device
at java.io.RandomAccessFile.writeBytes(Native Method) ~[na:1.8.0_101]
at java.io.RandomAccessFile.write(RandomAccessFile.java:525) ~[na:1.8.0_101]
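As a quick sanity check before re-running the loader, the usable space on the Cassandra data directory can be verified programmatically (a sketch; the path is an assumption, adjust it to your data_file_directories setting):
import java.io.File;

File dataDir = new File("/var/lib/cassandra/data"); // assumed default data directory
long freeGb = dataDir.getUsableSpace() / (1024L * 1024 * 1024);
System.out.println("Usable space: " + freeGb + " GB");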

"IOException: Strange I/O stream" in CXF service client

We are seeing an intermittent error in CXF. The response is fairly large (several hundred KB), MTOM is enabled, and enabling DEBUG for the CXF request/response logging interceptors fixes the issue, similarly to this post (which is unresolved). Our project uses CXF version 2.2.9.
javax.xml.ws.soap.SOAPFaultException: Unmarshalling Error: [was class java.io.IOException] Strange I/O stream, returned 0 bytes on read
at org.apache.cxf.jaxws.JaxWsClientProxy.invoke(JaxWsClientProxy.java:146)
at com.sun.proxy.$Proxy751.browseFiles(Unknown Source)
…
at java.lang.Thread.run(Unknown Source)
Caused by: java.lang.RuntimeException: [was class java.io.IOException] Strange I/O stream, returned 0 bytes on read
at com.ctc.wstx.util.ExceptionUtil.throwRuntimeException(ExceptionUtil.java:18)
at com.ctc.wstx.sr.StreamScanner.throwLazyError(StreamScanner.java:731)
at com.ctc.wstx.sr.BasicStreamReader.safeFinishToken(BasicStreamReader.java:3657)
at com.ctc.wstx.sr.BasicStreamReader.getTextCharacters(BasicStreamReader.java:830)
at com.sun.xml.bind.v2.runtime.unmarshaller.StAXStreamConnector.handleCharacters(StAXStreamConnector.java:323)
…
at org.apache.cxf.jaxws.JaxWsClientProxy.invoke(JaxWsClientProxy.java:124)
... 51 more
Caused by: java.io.IOException: Strange I/O stream, returned 0 bytes on read
at com.ctc.wstx.io.BaseReader.reportStrangeStream(BaseReader.java:148)
at com.ctc.wstx.io.UTF8Reader.loadMore(UTF8Reader.java:373)
…
I initially thought this was caused by a bad/invalid character (encoding?) in the response data; however, it now looks more like a network issue. It is very odd: the service had been operating for years without issue before running into this problem.
Why does this error occur? Is there a way to resolve it without enabling debug logging?
An upgrade to a newer version of CXF will likely fix this. There were some bugs in the CXF MIME streams. In particular, this looks very similar to:
https://issues.apache.org/jira/browse/CXF-3068
Based on this, and considering the source code of com.ctc.wstx.io.UTF8Reader.loadMore, the problem can occur when the buffer passed to mBuffer through BaseReader(ReaderConfig, InputStream, byte[], int, int) has zero length.
Is the loadMore() implementation correct? Should it fail like that when read() returns 0?
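For what it's worth, the stream contract at issue is easy to demonstrate: a blocking InputStream.read(byte[], int, int) returns 0 only when the requested length is 0, so a reader handed a zero-length buffer gets 0 back, which Woodstox then reports as a "strange" stream. A minimal illustration:
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ZeroLengthRead {
    public static void main(String[] args) throws IOException {
        InputStream in = new ByteArrayInputStream(new byte[] {1, 2, 3});
        // Per the InputStream contract, a zero-length request reads no bytes
        // and returns 0 -- exactly the value UTF8Reader.loadMore rejects.
        System.out.println(in.read(new byte[0], 0, 0)); // prints 0
    }
}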

Problem getting Java Streams in HP Tandem (Non-Stop)

We are porting a simple Java application between Tandem NonStop systems, from G-Series to H-Series. The Java version is 1.5.0_02.
When performing basic I/O tasks, like getting an output stream from an HttpURLConnection or opening a client socket, we receive exceptions like
java.io.IOException: Value out of range
or
java.net.SocketException: Value out of range
("value out of range" is Tandem native jargon for, well, quite everything I suppose).
Has anybody got similar issues? i.e. I/O corruption while for example messing with JNI?
I suppose there is something wrong with the system, but where might it be?
Thank you.
EDIT:
adding snippets as requested
Sample snippet (a), using Runtime.exec() (adapted):
Properties envVars = new Properties();
Runtime r = Runtime.getRuntime();
Process p = r.exec("/bin/env");
envVars.load(p.getInputStream());
Stack trace (a):
java.io.IOException: Value out of range (errno:4034)
at java.io.FileInputStream.readBytes(Native Method)
at java.io.FileInputStream.read(FileInputStream.java:194)
at java.lang.UNIXProcess$DeferredCloseInputStream.read(UNIXProcess.java:221)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:254)
at java.io.BufferedInputStream.read(BufferedInputStream.java:313)
at sun.nio.cs.StreamDecoder$CharsetSD.readBytes(StreamDecoder.java:411)
at sun.nio.cs.StreamDecoder$CharsetSD.implRead(StreamDecoder.java:453)
at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:183)
at java.io.InputStreamReader.read(InputStreamReader.java:167)
at java.io.BufferedReader.fill(BufferedReader.java:136)
at java.io.BufferedReader.readLine(BufferedReader.java:299)
at java.io.BufferedReader.readLine(BufferedReader.java:362)
at util.Environment.getVariables(Environment.java:39)
The last line fails, and the output gets redirected to the console (!).
Sample snippet (b), using HttpURLConnection:
public WorkerThread (HttpURLConnection conn, String requestData, Logger logger)
{
    this.conn = conn;
    ...
}
public void run ()
{
    try {
        OutputStream out = conn.getOutputStream (); // fails with "Value out of range"
    } catch (IOException e) {
        ...
    }
}
Stack trace (b):
java.net.SocketException: Value out of range (errno:4034)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333)
at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195)
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182)
at java.net.Socket.connect(Socket.java:507)
at sun.net.NetworkClient.doConnect(NetworkClient.java:155)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:365)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:477)
at sun.net.www.protocol.https.HttpsClient.<init>(HttpsClient.java:280)
at sun.net.www.protocol.https.HttpsClient.New(HttpsClient.java:337)
at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.getNewHttpClient(AbstractDelegateHttpsURLConnection.java:176)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:736)
at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:162)
at sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:828)
at sun.net.www.protocol.https.HttpsURLConnectionImpl.getOutputStream(HttpsURLConnectionImpl.java:230)
Case (a) can be avoided, because it was a workaround for other issues with a previous JRE version (!), but the same behaviour with sockets is really nasty.
Error code 4034 seems to indicate that a specific server is not running in your NonStop cluster. Are you sure that your system is set up properly?
Update: The problem was caused by a spurious .so library.
