FTPSClient not working properly on new FTPS server - Java

We will be moving from a customer's FTPS server to our own in the near future. I just tried the existing code, but it doesn't work correctly with our new server.
The login seems to work, and printWorkingDirectory() shows that I'm in the correct directory on the server. But we can't upload files or list the files in the working directory: the listing calls return an empty string or an empty array, although there are files in the working directory.
IIRC we had similar issues on the old FTPS server and could fix them by entering passive mode. But neither enterLocalPassiveMode() nor enterLocalActiveMode() helps, and leaving that line out completely doesn't help either.
The server itself works fine when connecting with FileZilla, so I guess the problem lies within our code; but since we don't get an error message, it's hard to tell what the problem is.
Here's the version that worked on the old server (using cfscript on Lucee 5.1.2.24):
var oFTPSclient = CreateObject( "java", "org.apache.commons.net.ftp.FTPSClient" ).init( "ssl", false );
oFTPSclient.connect( 'our.ftp.server' );
oFTPSclient.enterLocalPassiveMode();
oFTPSclient.login( 'user', 'password' );
var qFiles = DirectoryList(
    path = '/path/to/files/to/upload',
    filter = "*.jpg",
    listInfo = "query"
);
// Upload files
for ( var oFile IN qFiles ) {
    // Create path to source file
    var sSourceFilePath = ExpandPath( oFile.directory & '/' & oFile.name );
    // Create Java file object
    var oJavaFile = CreateObject( "java", "java.io.File" ).init( sSourceFilePath );
    // Open input stream from file
    var oFileInputStream = CreateObject( "java", "java.io.FileInputStream" ).init( oJavaFile );
    // Set file transfer to binary
    oFTPSclient.setFileType( oFTPSclient.BINARY_FILE_TYPE );
    // Store current file to server
    oFTPSclient.storeFile( oFile.name, oFileInputStream );
    oFileInputStream.close();
}
oFTPSclient.disconnect();
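One thing worth noting about the code above: storeFile() reports failure through its boolean return value rather than by throwing, so a failed transfer can pass silently. A minimal check (shown in Java; the variable names are illustrative, and the same methods are available on the Lucee-created object):

// storeFile() returns false on failure instead of throwing, so check it;
// getReplyString() shows the server's last reply, which usually names the problem.
boolean stored = oFTPSclient.storeFile(remoteName, fileInputStream);
if (!stored) {
    System.err.println("Upload failed: " + oFTPSclient.getReplyString());
}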
When we try to upload files, the loop finishes, but way too quickly (our connection is rather slow here, so a few megabytes should not be uploaded instantly), and the disconnect() call at the end times out.
Any ideas where to start debugging? Or is my initial assumption that it's a problem with the code wrong and it's a problem with the server and we should rather look into that?
Update:
Due to the length, I have posted FileZilla's verbose log here: http://pastebin.com/b9TfR7cg
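For what it's worth, one common difference from FileZilla: FileZilla negotiates protection for the data channel automatically, while Commons Net's FTPSClient leaves the data channel unprotected unless told otherwise, and many servers then refuse listings and uploads even though the control connection (login, printWorkingDirectory()) works. A minimal sketch of requesting data-channel encryption after login, assuming Apache Commons Net (host and credentials are placeholders):

import org.apache.commons.net.ftp.FTPSClient;

FTPSClient ftps = new FTPSClient("ssl", false); // explicit FTPS, as in the code above
ftps.connect("our.ftp.server");
ftps.login("user", "password");
ftps.execPBSZ(0);   // protection buffer size; 0 is required for TLS
ftps.execPROT("P"); // "P" = private: encrypt the data channel (FileZilla does this implicitly)
ftps.enterLocalPassiveMode();
// ... listFiles() / storeFile() as before ...
ftps.logout();
ftps.disconnect();

If that changes nothing, comparing Commons Net's command sequence against the FileZilla log above should show where the two clients diverge.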

Related

Delete file after starting connection using FileInputStream

I have a temporary file which I want to send to the client from the controller in the Play Framework. Can I delete the file after opening a connection using FileInputStream? For example, can I do something like this:
File file = getFile();
InputStream is = new FileInputStream(file);
file.delete();
renderBinary(is, "name.txt");
What if the file is large? If I delete the file, will subsequent read()s on the InputStream give an error? I have tried with files of around 1 MB and I don't get an error.
Sorry if this is a very naive question, but I could not find anything related to this and I am pretty new to Java.
I just encountered this exact same scenario in some code I was asked to work on. The programmer was creating a temp file, getting an input stream on it, deleting the temp file and then calling renderBinary. It seems to work fine even for very large files, even into the gigabytes.
I was surprised by this and am still looking for some documentation that indicates why this works.
UPDATE: We did finally encounter a file that caused this thing to bomb. I think it was over 3 GB. At that point, it became necessary to NOT delete the file while the rendering was in progress. I actually ended up using the Amazon Queue service to queue up messages for these files. The messages are then retrieved by a scheduled deletion job. Works out nicely, even with clustered servers on a load balancer.
It seems counter-intuitive that the FileInputStream can still read after the file is removed.
DiskLruCache, a popular library in the Android world originating from the libcore of the Android platform, even relies on this "feature", as follows:
// Open all streams eagerly to guarantee that we see a single published
// snapshot. If we opened streams lazily then the streams could come
// from different edits.
InputStream[] ins = new InputStream[valueCount];
try {
    for (int i = 0; i < valueCount; i++) {
        ins[i] = new FileInputStream(entry.getCleanFile(i));
    }
} catch (FileNotFoundException e) {
    ....
As @EJP pointed out in his comment on a similar question, "That's how Unix and Linux behave. Deleting a file is really deleting its name from the directory: the inode and the data persist while any processes have it open."
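A minimal, self-contained demonstration of that behavior (a sketch: on a Unix-like system the delete succeeds and the read keeps working, while on Windows the delete typically fails while the stream is open):

import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;

public class ReadAfterDelete {
    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("demo", ".bin");
        FileOutputStream out = new FileOutputStream(f);
        out.write(new byte[] { 1, 2, 3 });
        out.close();
        FileInputStream in = new FileInputStream(f);
        System.out.println("deleted: " + f.delete());   // true on Unix-like systems
        System.out.println("first byte: " + in.read()); // still readable: the data outlives the name
        in.close();
    }
}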
But I don't think it is a good idea to rely on it.

Java ftp.listFiles() returns an empty array

I get an empty array back from ftp.listFiles(). I have tried a few things. If I switch to passive mode, I get the same result: the array is empty. If I run the code on another machine, the problem is still the same. If I use a Windows FTP client (LeechFTP or the Windows command line), I can browse and get the directory listing. If I run the code without the changeWorkingDirectory call, I get the file list from the FTP root, but I don't get the list from subdirectories.
import org.apache.commons.net.ftp.FTP;
import org.apache.commons.net.ftp.FTPClient;
import org.apache.commons.net.ftp.FTPFile;

ftp = new FTPClient();
ftp.setDefaultPort(21);
ftp.connect("ftp.myftpsite.com");
ftp.enterLocalPassiveMode();
ftp.login("username", "password");
ftp.setFileType(FTP.BINARY_FILE_TYPE);
ftp.changeWorkingDirectory("pub/inbound");
FTPFile[] files = ftp.listFiles();
System.out.println(files.length);
ftp.changeWorkingDirectory() returns true.
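When a listing fails silently like this, it helps to see the actual FTP dialogue. Commons Net can echo every command and reply; a minimal sketch, assuming Apache Commons Net:

import java.io.PrintWriter;
import org.apache.commons.net.PrintCommandListener;
import org.apache.commons.net.ftp.FTPReply;

// Attach before connect() to log the whole client/server conversation to stdout.
ftp.addProtocolCommandListener(new PrintCommandListener(new PrintWriter(System.out)));
// ... connect, login, changeWorkingDirectory, listFiles as above ...
// The last reply is also available directly after any call:
System.out.println(ftp.getReplyString() + " ok: " + FTPReply.isPositiveCompletion(ftp.getReplyCode()));

The log will show whether LIST opens its data connection at all, which is usually where passive/active problems surface.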
If:
you're using an old version (pre 1.5) of Apache Commons Net;
the files that you can't list have yesterday's (29 Feb) timestamp;
then you're probably running into this bug: FTPClient#listFiles returns null element when file's timestamp is "02/29".
Upgrading Commons Net should make the problem go away.

Java file IO and "access denied" errors

I have been tearing my hair out over this, so I am looking for some help.
I have a loop of code that performs the following:
// imports omitted
public void afterPropertiesSet() throws Exception {
    // building of URL list omitted
    // urlMap is a HashMap<String, String> created and populated just prior
    for (String urlVar : urlMap.keySet()) {
        String myURLvar = urlMap.get(urlVar);
        System.out.println("URL is " + myURLvar);
        // URL confirmed to be valid even for executions that fail
        BufferedImage imageVar = ImageIO.read(new URL(myURLvar));
        String fileName2Save = "filepath"; // a valid file path
        System.out.println("Target path is " + fileName2Save);
        File file2Save = new File(fileName2Save);
        file2Save.setWritable(true); // set these just to be sure
        file2Save.setReadable(true);
        try {
            ImageIO.write(imageVar, "png", file2Save); // error thrown here
        } catch (Exception e) {
            System.out.println("R: " + file2Save.canRead() + " W: " + file2Save.canWrite()
                    + " E: " + file2Save.canExecute() + " Exists: " + file2Save.exists()
                    + " is a file: " + file2Save.isFile());
            System.out.println("parent Directory perms"); // same as above except on parent directory of destination
        } // end try
    } // end for
}
This all runs on Windows 7 with JDK 1.6.26, NetBeans, and Tomcat 7.0.14. The target directory is actually inside my NetBeans project directory, in a folder for a normal web app (outside WEB-INF), where I would normally expect to have permission to write files.
When the error occurs I get one of two results for the file: (a) all false, or (b) all true. The parent directory permissions never change: all true, except for isFile().
The error thrown (a java.io error with "access denied") does not occur every time; in fact, 60% of the time the loop runs it throws no error. The remaining 40% of the time I get the error on one of the 60+ files it writes, infrequently the same one. The order in which the URLs are processed changes every time, so the order in which the files are written is variable. The files have short, concise names like "1.png". The images are small, less than 8 KB.
In order to make sure the permissions are correct, I have:
Given "full control" to EVERYONE from the NetBeans project directory down
Run the JDK, JRE, and NetBeans as Administrator
Disabled UAC
Yet the error persists. Google searches for this seem to run the gamut and often read like voodoo. Clearly I (and Java and NetBeans etc.) should have permission to write a file to the directory.
Anyone have any insight? This is all (code and the web server hosting the URL) on a closed system, so I can't cut and paste code or stack traces.
Update: I confirmed the image URL is valid by doing a println and toString prior to each read. I then confirmed that (a) the web server hosting the target URL returned the image with an HTTP 200 code, and (b) the URL returned the image when tested in a web browser. In testing I also put in a check after the read to confirm that the value was not null or empty. I also put in tests for null on all the other values; they are always as expected, even for a failure. The error always occurs inside the try block. The destination directory is the same every execution, and prior to every execution the directory is empty.
Update 2: Here is one of the stack traces (in this case the perms for file2Save are R: true, W: true, E: true, isFile: true, exists: true):
java.io.FileNotFoundException <fullFilepathhere> (Access is denied)
    at java.io.RandomAccessFile.open(Native Method)
    at java.io.RandomAccessFile.<init>(RandomAccessFile.java:212)
    at javax.imageio.stream.FileImageOutputStream.<init>(FileImageOutputStream.java:53)
    at com.sun.imageio.spi.FileImageOutputStreamSpi.createOutputStreamInstance(FileImageOutputStreamSpi.java:37)
    at javax.imageio.ImageIO.createImageOutputStream(ImageIO.java:393)
    at javax.imageio.ImageIO.write(ImageIO.java:1514)
    at myPackage.myClass.afterPropertiesSet(thisClassexample.java:204) // 204 is the line number of the ImageIO write
This may not answer your problem, since there can be many other possibilities given the limited information.
One common possibility for not being able to write a file in a web application is the file-locking issue on Windows, which appears when the following four conditions are met simultaneously:
the target file exists under web root, e.g. WEB-INF folder and
the target file is served by the default servlet and
the target file has been requested at least once by client and
you are running under Windows
If you are trying to replace such a file that meets all four conditions, you will not be able to, because some servlet containers, such as Tomcat and Jetty, buffer static content and lock the files, so you are unable to replace or change them.
If your web application has exactly this problem, you should not use the default servlet to serve the file contents; see the sketch below. The default servlet is designed to serve static content which you do not want to change, e.g. CSS files, JavaScript files, background images, etc.
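A minimal sketch of serving such files through your own servlet instead of the default one, so the container never caches or locks them (the class name and layout are illustrative; Java 7+ syntax, standard Servlet API):

import java.io.*;
import javax.servlet.ServletException;
import javax.servlet.http.*;

public class ImageServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // Resolve the requested file inside the web app (validation of the
        // "name" parameter against path traversal is omitted for brevity).
        File file = new File(getServletContext().getRealPath("/images"), req.getParameter("name"));
        resp.setContentType("image/png");
        resp.setContentLength((int) file.length());
        try (InputStream in = new FileInputStream(file);
             OutputStream out = resp.getOutputStream()) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
        }
    }
}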
There is a trick to solve the file-locking issue on Windows for Jetty by disabling NIO: http://docs.codehaus.org/display/JETTY/Files+locked+on+Windows
The trick is useful during development, e.g. when you want to edit a CSS file and see the change without restarting your web application, but it is not recommended for production. If your web application relies on this trick in production, then you should seriously consider redesigning your code.
I cannot tell you what's going on or why... I have a feeling that it's something dependent on the way ImageIO tries to save the image. What you could do is save the BufferedImage by leveraging a ByteArrayOutputStream, as described below:
BufferedImage bufferedImage = ImageIO.read(new File("sample_image.gif"));
ByteArrayOutputStream baos = new ByteArrayOutputStream();
ImageIO.write( bufferedImage, "gif", baos );
baos.flush(); //Is this necessary??
byte[] resultImageAsRawBytes = baos.toByteArray();
baos.close(); //Not sure how important this is...
OutputStream out = new FileOutputStream("myImageFile.gif");
out.write(resultImageAsRawBytes);
out.close();
I'm not really familiar with the ByteArrayOutputStream, but I guess its reset() function could be handy when dealing with saving multiple files. You could also try using its writeTo(OutputStream out) if you prefer. Documentation here.
Let me know how it goes...

REST - Why is the generated Excel file not actually created when run as a web application?

I created a simple REST web app that should create an Excel file and put it into the current directory.
But somehow the generated Excel file is not in the directory; actually, it is not even being created.
The output "done" is shown in the GlassFish server log, so the process actually gets to the end without any error. The only thing I suspect is the file path I'm specifying for "myExcelFileThatWouldNotShowUp".
I tried every full and relative path I could think of, but the Excel file is still not showing up.
Interestingly, if I don't run this as a web app (i.e. put the code in a local main() function and run it), it works.
Thus I think it has something to do with GlassFish, but I can't really figure it out :(
GlassFish v3
REST / JAX-RS
Excella framework to generate Excel spreadsheet from template (myTemplate.xls)
Code snippet
#Path("horizontalProcess")
#GET
#Produces("application/xml")
public String getProcessHorizontally() {
try {
URL templateFileUrl = this.getClass().getResource("myTemplate.xls");
// getPath() outputs...
// /C:/Users/m-takayashiki/Documents/NetBeansProjects/KogaAlpha/build/web/WEB-INF/classes/local/test/jaxrs/myTemplate.xls
System.out.println(templateFileUrl.getPath());
String templateFilePath = URLDecoder.decode(templateFileUrl.getPath(), "UTF-8");
//specify output path which is current dir and should create
//myExcelFileThatWouldNotShowup.xls but it is not..
String outputFileDir = "myExcelFileThatWouldNotShowUp";
//<<template path>>, <<output path>>, <<file format>>
ReportBook outputBook = new ReportBook(templateFilePath, outputFileDir, ExcelExporter.FORMAT_TYPE);
ReportSheet outputSheet = new ReportSheet("myExcelSheet");
outputBook.addReportSheet(outputSheet);
//this is printed out so process actually gets here
System.out.println("done!!");
}
catch(Exception e) {
System.out.println(e);
}
return null;
}//end method
That's why one shouldn't code when tired or overworked...
I forgot to add the two lines of code at the end that actually generate the Excel file.
It cost me a couple of hours of debugging... (don't ask how I was debugging :p)
ReportProcessor reportProcessor = new ReportProcessor();
reportProcessor.process(outputBook);
By the way, generated files are stored in the directory below by default if you don't specify an absolute path:
C:\Users\m-takayashiki\.netbeans\6.9\config\GF3\domain1
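That default is simply the application server's working directory, which is why a relative output path lands somewhere unexpected. A quick way to see where a relative name will actually resolve, as a sketch to drop into the resource method:

import java.io.File;

// Prints the absolute location a relative name resolves to inside the server,
// e.g. the GlassFish domain directory rather than your project folder.
System.out.println(new File("myExcelFileThatWouldNotShowUp").getAbsolutePath());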

How to create a directory with multiple levels in one call in Java using FTP

I am using the FTPClient library from Apache and cannot figure out a simple way to create a new directory that is more than one level deep. Am I missing something?
Assuming the directory /tmp already exists on my remote host, the following command succeeds in creating /tmp/xxx
String path = "/tmp/xxx";
FTPClient ftpc = new FTPClient();
... // establish connection and login
ftpc.makeDirectory(path);
but the following fails:
String path = "/tmp/yyy/zzz";
FTPClient ftpc = new FTPClient();
... // establish connection and login
ftpc.makeDirectory(path);
In that latter case, even /tmp/yyy isn't created.
I know I can create /tmp/yyy and then create /tmp/yyy/zzz, but I can't figure out how to create directly /tmp/yyy/zzz.
Am I missing something obvious? Using mkd instead of makeDirectory didn't help.
Also, is it possible in one call to upload a file to /tmp/yyy/zzz/test.txt if the directory /tmp/yyy/zzz/ doesn't exist already?
You need to do them one at a time: first /tmp/yyy and then /tmp/yyy/zzz. There is no shortcut mechanism for what you want to do.
FTP servers typically only allow you to create one level of a directory at a time. Thus you'll have to break up the path yourself and issue one makeDirectory() call for each component, as in the sketch below.
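A minimal sketch of that approach, assuming Apache Commons Net (ftpc is a connected, logged-in FTPClient, and the helper name is illustrative):

import java.io.IOException;
import org.apache.commons.net.ftp.FTPClient;

// Create each component of an absolute path in turn, entering it as we go.
static void makeDirectories(FTPClient ftpc, String path) throws IOException {
    for (String dir : path.split("/")) {
        if (dir.isEmpty()) continue;  // skip the empty token before the leading "/"
        ftpc.makeDirectory(dir);      // may fail harmlessly if the directory already exists
        if (!ftpc.changeWorkingDirectory(dir)) {
            throw new IOException("Cannot create or enter " + dir + ": " + ftpc.getReplyString());
        }
    }
}

Note that this leaves the working directory at the deepest level, which is convenient if an upload follows immediately.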
No: the FTP protocol doesn't permit this, so you can't create a directory with multiple levels in one call, and likewise you can't upload to a path whose directory doesn't exist yet.
