Is there any way to download a folder (all the files within it and its subfolders) in Liferay 6.2 without looping through every file in the folder?
I need to do it programmatically.
Example:
Folder to download: "XFolder"
XFolder
- SubFolder1
  - File11
  - File12
- SubFolder2
  - File21
  - File22
- File1
- File2
When choosing to download XFolder, the system searches for the folder in Documents and Media and saves all of the folder's content in a .zip file on disk.
The content should have the same structure above.
Thank you for your help.
You can try to use the treePath value of the folder and its entries to find the files, but you will still need some looping.
You will probably need dynamic queries for this.
The algorithm should go something like this:
Find the ID of your folder.
Query the treePath property of the Folder table and get a list of all the paths you are interested in.
Get all the folder IDs.
Loop through the folders you are interested in and load their respective files. (You could probably also write a single query that collects everything in one go.)
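The steps above still need some Java glue. As a hedged sketch, here is the looping and zip-building part against a plain local folder tree, using only java.util.zip; in an actual Liferay 6.2 portlet the File listing would be replaced by Document and Media lookups (something like DLAppServiceUtil.getFolders / getFileEntries, to the best of my recollection of the 6.2 API), but the archive-building logic stays the same. The class and method names are mine:

```java
import java.io.*;
import java.util.zip.*;

public class FolderZipper {

    /** Recursively add every file under 'dir' to the zip, prefixed with 'path'. */
    static void addFolder(File dir, String path, ZipOutputStream zos) throws IOException {
        File[] children = dir.listFiles();
        if (children == null) return;
        for (File child : children) {
            String entryName = path.isEmpty() ? child.getName() : path + "/" + child.getName();
            if (child.isDirectory()) {
                addFolder(child, entryName, zos);          // recurse into subfolder
            } else {
                zos.putNextEntry(new ZipEntry(entryName)); // entry name keeps the folder structure
                try (FileInputStream in = new FileInputStream(child)) {
                    byte[] buf = new byte[8192];
                    int n;
                    while ((n = in.read(buf)) != -1) {
                        zos.write(buf, 0, n);
                    }
                }
                zos.closeEntry();
            }
        }
    }

    /** Zip the whole folder tree rooted at 'folder' into 'zipFile'. */
    public static void zipFolder(File folder, File zipFile) throws IOException {
        try (ZipOutputStream zos = new ZipOutputStream(new FileOutputStream(zipFile))) {
            addFolder(folder, "", zos);
        }
    }
}
```

Usage would be `FolderZipper.zipFolder(new File("XFolder"), new File("XFolder.zip"))`; the resulting archive preserves the SubFolder1/File11 structure from the example.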
I have many XML files on HDFS which I extracted from sequence files using a Java program.
Initially there were only a few files, so I copied the extracted XML files to my local machine and ran the Unix zip command to pack the XMLs into a single .zip file.
The number of XML files has now grown, and I can no longer copy them to local because I will run out of space.
What I need is to zip all of those XML files (on HDFS) into a single zipped file (on HDFS) without copying them to local.
I couldn't find any lead to get started. Can anyone give me a starting point or any code (even Java MR) so I can go further? I can see this could be done with MapReduce, but I have never programmed in it, which is why I am trying other ways.
Thanks in advance.
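One possible starting point without MapReduce: java.util.zip can write the archive as a pure stream, so no file ever has to land on the local disk. The sketch below shows the streaming core using plain java.io streams; in an actual Hadoop program the output stream would come from `FileSystem.create(...)` and each input from `FileSystem.open(...)` (org.apache.hadoop.fs.FileSystem), while the zipping code stays identical. The class and method names are mine, not from any library:

```java
import java.io.*;
import java.util.zip.*;

public class StreamingZipper {

    /**
     * Copy each named input stream into the zip under its entry name,
     * using a small fixed buffer so no file is ever held fully in memory.
     * In a Hadoop job, 'out' would come from FileSystem.create(new Path("/out.zip"))
     * and each input from FileSystem.open(...); only the streams differ.
     */
    public static void zipStreams(java.util.Map<String, InputStream> inputs, OutputStream out)
            throws IOException {
        byte[] buf = new byte[64 * 1024];
        try (ZipOutputStream zos = new ZipOutputStream(out)) {
            for (java.util.Map.Entry<String, InputStream> e : inputs.entrySet()) {
                zos.putNextEntry(new ZipEntry(e.getKey()));  // one entry per XML file
                try (InputStream in = e.getValue()) {
                    int n;
                    while ((n = in.read(buf)) != -1) {
                        zos.write(buf, 0, n);
                    }
                }
                zos.closeEntry();
            }
        }
    }
}
```

One caveat worth knowing: the standard zip format's central directory is written at the end, so the output stream only needs to support sequential writes, which HDFS output streams do.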
I have a Java program (running under Java 6) that monitors a directory, parses the names of the files it finds, and runs actions (including copying the file) according to metadata and file content; then, depending on the success or failure of the process, it moves the files to an OK or KO directory.
I run my program as a simple user.
I tried, for the test, to put files belonging to root in my monitored directory.
Furthermore, I gave them 000 permissions.
The program would find the files but fail to copy them.
For the record, the actual copy is done on this model:
FileInputStream fin = new FileInputStream(srcFile);
FileOutputStream fos = new FileOutputStream(destFile);
byte[] buffer = new byte[bufferSize];
int nbRead;
while (-1 != (nbRead = fin.read(buffer)))
    fos.write(buffer, 0, nbRead);
So far, seeing the program fail is exactly what I expected: 000 permissions on a file owned by someone else means it cannot be read.
But what is strange is that my files were moved to the KO box.
The move is done with
File failedFileName = new File(KOdirectory, myFile.getName());
myFile.renameTo(failedFileName);
Should that work, given that the files are owned by root and have 000 permissions?
They end up in the KO directory, still owned by root with 000 permissions.
When I add read permissions (so my files are 444 root-owned) and reinject them into the monitored folder, the whole process runs smoothly and files end up in the OK directory (still root-owned and 444 permissions).
How is it possible to move files to which one has only read access, or none at all?
How do reading, moving, and deleting work depending on the OS? On the distro?
Maybe I should add that I run this on Ubuntu, whose slightly unusual root account (it exists, but is disabled for direct login) might be messing with this.
Moving and renaming files does nothing to the file contents; instead, it changes the directory entries. So you need write permission on the directory, not the file itself.
Try it: if you remove write permission from the directory and give write permission to the file, you will no longer be able to rename or move the file.
There are commands like mv or rm that do check the file permissions and ask for confirmation before moving or deleting a write-protected file. But that is extra code in the command itself and does not come from the operating system.
This is the same on all Linux/Unix systems. Reading or changing a file's content checks the permissions on the file; changing the file's name or moving it to a different directory checks the permissions on the directory (or directories) involved. This does not depend on the distro; it is the same on all Linux systems, as well as Solaris, AIX, HP-UX, and whatever other commercial Unixes there are.
Moving a file from one directory to another only requires modification of directory entries for the directories in question. This means that you need only write and search permissions to the directories. The permissions or the owner of the file being moved do not matter.
You can read more about this in the appropriate man pages, such as the page for rename(2) and path_resolution(7).
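You can see this for yourself from Java with a small demo (sketched against a file you own, since creating root-owned files needs privileges; the class and method names are made up for the demo): the rename succeeds even though the file itself has no permission bits set, because only the directory is consulted.

```java
import java.io.File;
import java.io.IOException;

public class RenameDemo {

    /** Create a file, drop all its permission bits (chmod 000), and try to rename it. */
    public static boolean renameUnreadableFile(File dir) throws IOException {
        File f = new File(dir, "locked" + System.nanoTime() + ".txt");
        if (!f.createNewFile()) throw new IOException("could not create " + f);
        // Equivalent of chmod 000: no read/write/execute for anyone.
        f.setReadable(false, false);
        f.setWritable(false, false);
        f.setExecutable(false, false);
        File target = new File(dir, f.getName() + ".renamed");
        boolean moved = f.renameTo(target);   // needs only write+search permission on 'dir'
        // Clean up so the demo leaves nothing behind (delete also only needs dir permission).
        (moved ? target : f).delete();
        return moved;
    }

    public static void main(String[] args) throws IOException {
        File tmp = new File(System.getProperty("java.io.tmpdir"));
        System.out.println("renamed: " + renameUnreadableFile(tmp));
    }
}
```

Running it in any directory you can write to prints `renamed: true`, matching the behavior you observed with the root-owned files.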
A file has permission and this determines if you can read, modify or execute this file.
A file exists in one or more directories and it is the permission of the directory, not the file, which determines if the directory can be listed, modified or used.
So when you move a file, you are changing the directory, not the file.
I have a Java program that copies a folder (along with all the files in it) from one location to another programmatically. Now assume a user pastes a zip file into this folder and then unzips it. If my program starts in the meantime, it copies only the files that have been unzipped so far. I want to wait until the unzipping of the files is finished.
Hence I am looking for a programmatic way to detect that the zip file has been completely uncompressed, and only then resume the normal copying of files. Is there any way to do this?
See if you can rename the file that is being unpacked. If it is not renamable, then the file is probably locked.
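A sketch of that probe, with an important caveat: on Linux/Unix an open file can still be renamed, so this check only detects Windows-style mandatory locks. The helper name is made up:

```java
import java.io.File;

public class UnlockProbe {

    /**
     * Windows-style lock probe: try to rename the file to a temporary sibling
     * name and back. On Windows a file held open by an unzipper cannot be
     * renamed, so a failed rename suggests it is still being written.
     * On Linux/Unix an open file CAN be renamed, so this proves nothing there.
     */
    public static boolean isRenamable(File f) {
        File probe = new File(f.getParentFile(), f.getName() + ".probe");
        boolean ok = f.renameTo(probe);
        if (ok) {
            probe.renameTo(f);   // put the file back under its original name
        }
        return ok;
    }
}
```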
There is no reliable way to do this because you have no control over the unzipper; for example, the user could decide to unzip only some of the files. Since the unzipper won't tell you what the user selected in the UI, there is no reliable way to know when "all" files have been unpacked.
Workarounds:
Users must unzip files in a different folder and then move the files into the folder that you watch. Move operations on the same disk are atomic, so this will make sure you get only complete files.
Accept ZIP archives as input and unpack them yourself. That way, you have full control over the unpack process and can make sure you only process complete files. Also, since the ZIP "table of contents" is at the end of the ZIP archive, this will also make sure that you only process complete ZIP archives.
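A minimal sketch of that second workaround, unpacking the archive yourself with java.util.zip.ZipInputStream (class and method names are illustrative):

```java
import java.io.*;
import java.util.zip.*;

public class ZipIntake {

    /** Unpack a zip archive into 'destDir', creating parent folders as needed. */
    public static void unpack(InputStream zipData, File destDir) throws IOException {
        byte[] buf = new byte[8192];
        try (ZipInputStream zis = new ZipInputStream(zipData)) {
            ZipEntry entry;
            while ((entry = zis.getNextEntry()) != null) {
                File out = new File(destDir, entry.getName());
                // Guard against "zip slip": refuse entries that escape destDir via "..".
                if (!out.getCanonicalPath().startsWith(destDir.getCanonicalPath() + File.separator)) {
                    throw new IOException("illegal entry: " + entry.getName());
                }
                if (entry.isDirectory()) {
                    out.mkdirs();
                } else {
                    out.getParentFile().mkdirs();
                    try (FileOutputStream fos = new FileOutputStream(out)) {
                        int n;
                        while ((n = zis.read(buf)) != -1) {
                            fos.write(buf, 0, n);
                        }
                    }
                }
                zis.closeEntry();
            }
        }
    }
}
```

Since getNextEntry() returning null means the archive ended cleanly, a file is only handed to the rest of your pipeline after unpack() returns without an exception, which is exactly the "only process complete files" guarantee described above.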
I would like a zip file, Test.zip, containing two folders, say A and B, to be unzipped outside of the Test folder.
For now A and B are unzipped within a Test folder, i.e. Test->A and Test->B, whereas I want them in a different folder, like Test2->A and Test2->B. Right now I am getting output like Test2->Test->A.
How can I achieve this? Please help.
It sounds to me like your Test.zip file simply contains a folder named "Test" that in turn contains A and B. Could you verify if this is the case?
If that's so, maybe you could detect whether the zip file contains a single top-level directory with the same name as the file. If so, extract from that subdirectory into your target; if not, extract directly from the zip root.
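A hedged sketch of that detection, using java.util.zip.ZipFile; the class name and the rule of matching the base name against a `.zip` suffix are my assumptions about your naming convention:

```java
import java.io.File;
import java.util.Enumeration;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;

public class ZipRootDetector {

    /**
     * If every entry of the archive lives under a single top-level folder whose
     * name matches the zip file's base name (e.g. Test.zip containing Test/...),
     * return that prefix ("Test/"); otherwise return "" (extract from the root).
     * Strip the returned prefix from each entry name when extracting.
     */
    public static String redundantRootPrefix(File zipFile) throws java.io.IOException {
        String base = zipFile.getName().replaceFirst("\\.zip$", "");
        String prefix = base + "/";
        try (ZipFile zf = new ZipFile(zipFile)) {
            Enumeration<? extends ZipEntry> entries = zf.entries();
            while (entries.hasMoreElements()) {
                String name = entries.nextElement().getName();
                if (!name.startsWith(prefix)) {
                    return "";   // at least one entry outside Test/, keep names as-is
                }
            }
        }
        return prefix;
    }
}
```

During extraction you would then write each entry to `new File(targetDir, name.substring(prefix.length()))`, turning `Test/A/x.txt` into `Test2/A/x.txt`.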