Nextflow pull Singularity image errors - java

I'm running into errors pulling a Singularity image in a Nextflow pipeline that has worked on other clusters. I believe the error has something to do with Java, though I'm not sure how to troubleshoot it; I'm running up-to-date Java and Nextflow versions. Any suggestions on how to troubleshoot errors related to a Singularity pull would be great. Thank you in advance!
Pulling Singularity image docker://ksw9/mtb-call [cache /labs/test/Pipeline/tmp/ksw9-mtb-call.img]
Exception in thread "Thread-2" groovy.lang.GroovyRuntimeException: exception while reading process stream
at org.codehaus.groovy.runtime.ProcessGroovyMethods$TextDumper.run(ProcessGroovyMethods.java:500)
at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: java.io.IOException: Stream closed
at java.base/java.io.BufferedInputStream.getBufIfOpen(BufferedInputStream.java:168)
at java.base/java.io.BufferedInputStream.read1(BufferedInputStream.java:281)
at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:343)
at java.base/sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:270)
at java.base/sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:313)
at java.base/sun.nio.cs.StreamDecoder.read(StreamDecoder.java:188)
at java.base/java.io.InputStreamReader.read(InputStreamReader.java:176)
at java.base/java.io.BufferedReader.fill(BufferedReader.java:162)
at java.base/java.io.BufferedReader.readLine(BufferedReader.java:329)
at java.base/java.io.BufferedReader.readLine(BufferedReader.java:396)
at org.codehaus.groovy.runtime.ProcessGroovyMethods$TextDumper.run(ProcessGroovyMethods.java:493)

This looks like a network connection issue rather than a problem with Java or Nextflow as such. The stream appears to have been closed unexpectedly:
Caused by: java.io.IOException: Stream closed
Usually this happens when your network connection is lost or the remote service terminates the connection for whatever reason. If your university (or research institute) imposes resource limits such as a download quota, exceeding that limit could also cause the stream to be closed unexpectedly.
With Nextflow, it is recommended to provide a centralized cache directory, either with the NXF_SINGULARITY_CACHEDIR environment variable or by defining singularity.cacheDir in your nextflow.config. For example:
singularity {
    cacheDir = '/path/to/container/images'
}
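Equivalently, you can set the cache location with the environment variable before launching the pipeline (the path is a placeholder):
export NXF_SINGULARITY_CACHEDIR=/path/to/container/images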
If you're unable to pull the images directly, another option is to pull them on your local machine and scp the images into the cacheDir above.
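For example, a minimal sketch of that workaround (user, host, and paths are placeholders; the image URI comes from the log above):
# on a machine with unrestricted network access
singularity pull ksw9-mtb-call.img docker://ksw9/mtb-call
# copy the image into the cluster's cache directory
scp ksw9-mtb-call.img user@cluster:/path/to/container/images/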

Related

Getting 'java.sql.SQLRecoverableException: IO Error: The Network Adapter could not establish the connection' while connecting oracle database

I'm facing an Oracle database connectivity issue while running my automation script through a Jenkins pipeline, whereas it works fine when I run the script locally.
Error Log:
java.sql.SQLRecoverableException: IO Error: The Network Adapter could not establish the connection
Caused by: java.net.UnknownHostException: *********: Name or service not known
After googling, I understand the cause could be one of these: firewall blocking, a disabled port, or a proxy issue, but I'm not sure how to confirm which.
Please help me fix this issue.
Thanks,
Karunagara Pandi G
The "Caused by" gives you the answer: the configured database host is unknown!
Either you have a typo in the configuration ("hoost" instead of "host"), the machine in question is (or was) currently offline, or the (local) DNS has (or had) lost the name of the machine. Another option is that someone renamed the database host (similar to the typo case, only it wasn't really your fault).
Determine which option (maybe there are more …) fits your situation and fix it.
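To confirm which case applies, one option is to test name resolution from the machine that actually runs the script (the Jenkins node). A minimal sketch, where the hostname is a placeholder for your configured DB host:
import java.net.InetAddress;
import java.net.UnknownHostException;

public class HostCheck {
    public static void main(String[] args) {
        String host = "db.example.com"; // placeholder: your configured DB host
        try {
            // succeeds only if DNS (or /etc/hosts) knows the name
            InetAddress addr = InetAddress.getByName(host);
            System.out.println(host + " resolves to " + addr.getHostAddress());
        } catch (UnknownHostException e) {
            // the same failure the JDBC driver wraps as UnknownHostException
            System.out.println(host + " does not resolve: " + e.getMessage());
        }
    }
}
If this fails on the Jenkins node but succeeds locally, the problem is the node's DNS/network configuration, not your script.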

FTPClient.storeFile() Began Failing [duplicate]

This question already has answers here:
FtpClient storeFile always return False
(5 answers)
Closed 1 year ago.
I have a Java program that uploads new/changed files to my Web site via FTP. It currently uses the Apache Commons Net Library, version 3.8.0.
After moving to a new city, the program, which I’ve been using for almost 20 years, began failing. It still connects to the FTP server and signs in successfully. But when it tries to upload a file, it pauses for 20-30 seconds, then fails. It always fails on the first file, 100% of the time.
The failing call is to org.apache.commons.net.ftp.FTPClient.storeFile(). The documentation says storeFile() returns true if successfully completed, false if not. Curiously, the method is also documented to throw various forms of IOException. The documentation doesn't say when or why the method decides to return a boolean rather than throw an exception.
My problem is that storeFile() returns false (indicating failure) and never throws an exception, so I have no error message to tell me what caused the failure. The file name and path look OK. The Web hosting company tried to determine why the failure was occurring, but was unsuccessful.
This problem has been going on for weeks now. Anyone have any ideas on how to debug this?
If the cause of your problem is moving to a new city, and you can still open the control connection, the most likely culprit is a change in your underlying ISP and network that is blocking the data transfer stream from opening.
FTP port 21 is used for opening connections and is normally allowed by all networks, but then a new, random, unprivileged port is negotiated over the control connection and used for the actual data transfers. I bet your storeFile() is trying to open a data connection and hitting a block, which is probably causing a timeout. You may be interpreting this as "never throws an exception", but in reality it might throw a timeout exception after you wait long enough.
One thing I would recommend is finding a way to have your FTP client use PASSIVE mode for the data transfer. This is designed into the protocol to avoid exactly these kinds of problems. You can read about it in detail on Wikipedia (https://en.wikipedia.org/wiki/File_Transfer_Protocol) under "Communications and Data Transfer".
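With Apache Commons Net (the library the question uses), passive mode is a single call. A minimal sketch, where the host, credentials, and file names are placeholders, and the server's reply is printed when storeFile() returns false:
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import org.apache.commons.net.ftp.FTP;
import org.apache.commons.net.ftp.FTPClient;

public class PassiveUpload {
    public static void main(String[] args) throws Exception {
        FTPClient ftp = new FTPClient();
        ftp.connect("ftp.example.com");     // placeholder host
        ftp.login("user", "password");      // placeholder credentials
        ftp.enterLocalPassiveMode();        // client opens the data connection
        ftp.setFileType(FTP.BINARY_FILE_TYPE);
        try (InputStream in = Files.newInputStream(Paths.get("index.html"))) {
            if (!ftp.storeFile("index.html", in)) {
                // the reply code/string explain why the transfer was refused
                System.err.println(ftp.getReplyCode() + " " + ftp.getReplyString());
            }
        }
        ftp.logout();
        ftp.disconnect();
    }
}
Even if passive mode doesn't fix the block, getReplyString() after a false return gives you the error message that storeFile() itself withholds.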

SFTP connect error - The message store has reached EOF

I am getting the exception "The message store has reached EOF" when I try to connect to a remote host for SFTP using the "com.enterprisedt.net.ftp.ssh.SSHFTPClient" class (edtFTPj/PRO, a commercial Java file transfer client).
I am able to connect successfully to another remote host for SFTP with the same code.
Is there any configuration specific to this particular host that could be causing the problem? If so, is there any way to confirm it? I don't have any log other than this exception message, but I can change the code to try any suggested debugging options.
Please note that I can't use another SFTP library, as my existing code already uses this one.
com.enterprisedt.net.j2ssh.authentication.AuthenticationProtocolException: Failed to read messages
at com.enterprisedt.net.j2ssh.authentication.AuthenticationProtocolClient.a(AuthenticationProtocolClient.java:265)
Caused by: com.enterprisedt.net.j2ssh.transport.MessageStoreEOFException: The message store has reached EOF
at com.enterprisedt.net.j2ssh.transport.SshMessageStore.getMessage(SshMessageStore.java:177)
at com.enterprisedt.net.j2ssh.transport.SshMessageStore.getMessage(SshMessageStore.java:110)
at com.enterprisedt.net.j2ssh.authentication.AuthenticationProtocolClient.a(AuthenticationProtocolClient.java:261)
... 31 more
I think there is a problem with the library itself: a concurrency problem. I could consistently reproduce this and other random errors when opening many connections at the same time (in the same millisecond), for example from multiple threads. This happens even when using several different SSHFTPClient instances, so I suspect there is some static state inside those classes.
I had to synchronize access to all SSHFTPClient instances using a shared lock, like this:
// a single shared lock object (an Object is safer than an interned string literal)
private final static Object sincronizadorUnico = new Object();
synchronized (sincronizadorUnico) {
    // use the code to connect
}
Then it will work. I will report the bug to the library vendor. I'm using version 7.1.0 of the library.
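Put together, a hedged sketch of the workaround (the host/credential setup calls are elided because they depend on your configuration, and client.connect() is an assumption about the edtFTPj API):
import com.enterprisedt.net.ftp.ssh.SSHFTPClient;

public class SafeConnector {
    // one lock shared by every thread and every SSHFTPClient instance
    private static final Object CONNECT_LOCK = new Object();

    public void connect(SSHFTPClient client) throws Exception {
        synchronized (CONNECT_LOCK) {
            // set host and credentials on 'client' here, then:
            client.connect();
        }
        // only the connect phase needs to be serialized; transfers
        // after connection did not trigger the race for me
    }
}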

Program never throws exception when debugger is attached

I have a Java-based service that throws an unexpected SSL exception, "Socket is closed"... or sometimes "Data received in non-data state", when I run it.
When I configure a remote debugger by adding the JVM args -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=5050 and then run it, it never throws this exception. Is there something about this option that modifies the behaviour of the service?
Exception:
javax.net.ssl.SSLProtocolException: Data received in non-data state: 6
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1061)
at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:884)
at sun.security.ssl.AppInputStream.read(AppInputStream.java:102)
at org.apache.http.impl.io.AbstractSessionInputBuffer.fillBuffer(AbstractSessionInputBuffer.java:149)
at org.apache.http.impl.io.SocketInputBuffer.fillBuffer(SocketInputBuffer.java:110)
at org.apache.http.impl.io.AbstractSessionInputBuffer.read(AbstractSessionInputBuffer.java:191)
at org.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:164)
at org.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:138)
at java.security.DigestInputStream.read(DigestInputStream.java:161)
at java.io.FilterInputStream.read(FilterInputStream.java:133)
at com.amazonaws.services.s3.internal.ChecksumValidatingInputStream.read(ChecksumValidatingInputStream.java:97)
at java.io.FilterInputStream.read(FilterInputStream.java:133)
at java.io.FilterInputStream.read(FilterInputStream.java:107)
at javax.crypto.CipherInputStream.getMoreData(CipherInputStream.java:103)
at javax.crypto.CipherInputStream.read(CipherInputStream.java:224)
at java.io.FilterInputStream.read(FilterInputStream.java:133)
at java.io.FilterInputStream.read(FilterInputStream.java:107)
at <mypackagenameremovedforanonymity>.GetObjectActivity.enact(GetObjectActivity.java:118)
Context: I am reading from an InputStream that wraps the SSL socket
This may be an issue that others have seen with the AWS SDK and garbage collection. I had the same kind of issue: reading from S3 input streams would fail with various socket/SSL errors, and when I tried to isolate or debug it, the problem would go away. It turned out the S3 client connection was getting garbage collected because the input stream was not holding on to it. I found the following link and it solved my problem.
https://forums.aws.amazon.com/thread.jspa?messageID=438171
Rick
P.S. Just to be clear, the above link is about Android, but the problem and solution are generic across all platforms (I ran into it on JDK 7 running on Windows).
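A minimal sketch of the takeaway, assuming the AWS SDK for Java v1 (bucket and key names are placeholders): keep a strong reference to the client for as long as any stream obtained from it is open.
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.S3Object;
import java.io.InputStream;

public class S3Reader {
    // a field reference keeps the client (and its connection pool)
    // reachable for as long as this object lives
    private final AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

    public void read() throws Exception {
        S3Object obj = s3.getObject("my-bucket", "my-key"); // placeholders
        try (InputStream in = obj.getObjectContent()) {
            byte[] buf = new byte[8192];
            while (in.read(buf) != -1) {
                // process data; 's3' stays strongly referenced via 'this',
                // so the GC cannot collect it mid-read
            }
        }
    }
}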
If you are running your application on Linux, try this:
run your app, then check the launch command using ps aux | grep java
run it in debug mode, then check the launch command again using ps aux | grep java
This will make sure the same JVM is being used in both cases.
Another very important thing is to check your system properties by adding the following code to your app:
System.getProperties().list(System.out);
After that, compare the output of running your app normally and in debug mode.
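For example, one possible way to capture and compare the two runs (MyApp is a placeholder; the debug options are the ones from the question):
java MyApp > props-normal.txt
java -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=5050 MyApp > props-debug.txt
diff props-normal.txt props-debug.txt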
When the error "Data received in non-data state" is thrown, check whether the socket's output stream has been closed while the input stream is still open and sending/receiving data on a socket that is also still open. Closing both of the socket's streams, along with the socket itself, at the end could resolve the issue.
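A minimal sketch of that cleanup, using try-with-resources so the streams and the socket always close together (host and port are placeholders):
import java.io.InputStream;
import java.io.OutputStream;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class SslClient {
    public static void main(String[] args) throws Exception {
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        try (SSLSocket socket = (SSLSocket) factory.createSocket("example.com", 443);
             OutputStream out = socket.getOutputStream();
             InputStream in = socket.getInputStream()) {
            // exchange data here; all three resources are closed together
            // (in reverse order) when the block exits, so neither stream
            // is left half-closed while the other keeps going
        }
    }
}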

File pool (like Connection Pool)

My English is like a 3-year-old baby's.
Recently, I made a website that does a lot of file access.
Unfortunately, my Tomcat gave me the following error message:
Fatal: Socket accept failed
java.net.SocketException: Too many open files
at java.net.PlainSocketImpl.socketAccept(Native Method)
at java.net.PlainSocketImpl.accept(PlainSocketImpl.java:408)
at java.net.ServerSocket.implAccept(ServerSocket.java:462)
at java.net.ServerSocket.accept(ServerSocket.java:430)
at org.apache.tomcat.util.net.DefaultServerSocketFactory.acceptSocket(DefaultServerSocketFactory.java:61)
at org.apache.tomcat.util.net.JIoEndpoint$Acceptor.run(JIoEndpoint.java:352)
at java.lang.Thread.run(Thread.java:662)
org.apache.tomcat.util.net.JIoEndpoint$Acceptor run
This happens when I send many requests in a short time; I guess too many streams are opened for this job.
Does anybody know how to solve this problem?
My environment is { tomcat 6.0.35, java 1.6.0_31, centos 5 }.
Ah, this only happens on Linux.
Thank you.
Check the limit allocated by the system:
cat /proc/sys/fs/file-nr
(the last number is the system-wide maximum)
Allocate more if needed:
Edit /etc/sysctl.conf and add/change fs.file-max = xxxxx
Apply the changes with sysctl -p
Check with cat /proc/sys/fs/file-max
You may also have user limits set.
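For example, to check and raise the per-user limit (the user name and numbers are placeholders):
# current soft limit on open files for this user/shell
ulimit -n
# raise it persistently by adding lines like these to /etc/security/limits.conf:
tomcat soft nofile 8192
tomcat hard nofile 16384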
It's quite possible that you are exceeding the default maximum number of file descriptors.
Explanation and how to increase the values:
http://honglus.blogspot.com.au/2010/08/tune-max-open-files-parameter-in-linux.html
http://www.cyberciti.biz/faq/linux-increase-the-maximum-number-of-open-files/
