The problem statement is: you have to list the names of the files in a given directory. You are given a directory structure which has some subdirectories and some files in them.
I did part of the code, but it is not working. Can you please help me with the correct way of doing it?
Here is my code:
public class Test {
public static void main(String[] args) {
RunableExample run = new RunableExample();
Thread th = new Thread(run, "thread1");
String directoryName = "C:\\Users\\GUR35893\\Desktop\\CleanupMTM";
File directory = new File(directoryName);
File[] fList = directory.listFiles();
RunableExample.MyList = new ArrayList<File>();
for (File file : fList) {
RunableExample.MyList.add(file);
}
try {
th.start();
} catch (Exception e) {
}
}
}
public class RunableExample implements Runnable {
public static List<File> MyList;
int count = 0;
File filepath;
public void run() {
try {
while (count < MyList.size()) {
System.out.println(Thread.currentThread().getName() + ">>>>"
+ MyList.size() + " >>>>> " + count);
filepath = MyList.get(count);
if (filepath != null && filepath.isFile()) {
System.out.println(Thread.currentThread().getName() + " >>"
+ filepath.getAbsolutePath());
} else {
synchronized (this) {
if (filepath != null) {
// System.out.println("Else");
RunableExample run3 = new RunableExample();
Thread th3 = new Thread(run3, "thread" + count);
File[] fList = filepath.listFiles();
// System.out.println("Else1");
for (File file : fList) {
MyList.add(file);
}
th3.start();
}
}
}
count++;
}
} catch (Exception e) {
e.printStackTrace();
System.out.println(e);
}
}
}
If you have a directory (including sub-directories) and you want to list all the files:
The simplest yet effective approach would be to iterate through the directory; there are just two options: either it's a file or it's a directory.
If it's a file, simply print its name; don't spawn a new thread for it.
If it's a directory, spawn a new thread and reuse the same code for traversing the files or sub-directories of that directory in the newly spawned thread.
If you could give a sample output then maybe we can help further. But till then, I don't see any use for synchronization in the code.
Implementation of @Himanshu's answer.
import java.io.File;
class Lister extends Thread{
String basepath;
Lister(String basepath){
this.basepath = basepath;
}
@Override
public void run(){
File rootDir = new File(basepath);
for(File f : rootDir.listFiles()){
if(f.isDirectory())
new Lister(f.toString()).start();
else
System.out.println(f);
}
}
}
class Main {
public static void main(String[] args) {
new Lister("/").start();
}
}
This code works, but make sure it doesn't run out of memory (or threads) for huge directory trees. For that you can add extra checks so that you spawn new threads only for the directories you need, as sketched below.
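A minimal sketch of that idea, using a fixed-size thread pool instead of one raw thread per directory (the class name, the pool size of 8, and the root path are placeholders, not part of the original answer):
import java.io.File;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class BoundedLister {
    // A fixed-size pool caps the number of live threads, so huge directory
    // trees cannot exhaust memory or native threads.
    private static final ExecutorService POOL = Executors.newFixedThreadPool(8);

    static void list(File dir) {
        File[] entries = dir.listFiles();
        if (entries == null) return;            // unreadable entry or not a directory
        for (File f : entries) {
            if (f.isDirectory()) {
                POOL.submit(() -> list(f));     // queue the subdirectory as a task
            } else {
                System.out.println(f);
            }
        }
    }

    public static void main(String[] args) {
        list(new File("/"));
        // A complete version would track outstanding tasks and shut the pool
        // down once the whole tree has been visited.
    }
}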
I am creating a rollback feature, and here is what I have and what I want to achieve:
a tmp folder is created in the same location as the data folder;
before doing any operation I copy all the contents from the data folder to the tmp folder (a small amount of data);
on rollback I want to delete the data folder and rename the tmp folder to the data folder.
This is what I tried
String contentPath = "c:\\temp\\data";
String tmpContentPath = "c:\\temp\\data.TMP";
if (Files.exists(Paths.get(tmpContentPath)) && Files.list(Paths.get(tmpContentPath)).count() > 0) {
FileUtils.deleteDirectory(new File(contentPath));
Files.move(Paths.get(tmpContentPath), Paths.get(contentPath), java.nio.file.StandardCopyOption.REPLACE_EXISTING);
}
but this throws a FileAlreadyExistsException even though I deleted the target directory in the same method.
Once the program exits I don't see the c:\temp\data directory, so the directory is actually deleted.
Now if I try StandardCopyOption.ATOMIC_MOVE, it throws a java.nio.file.AccessDeniedException.
What is the best way to move the tmp dir to the data dir in this kind of situation?
Actually, in Java 7 or above you can just use the Files class to move the folder, even when there is a conflict, i.e. the target folder already exists.
private static void moveFolder(Path thePath, Path targetPath) {
if (Files.exists(targetPath)) { // if the target folder exists, delete it first;
deleteFolder(targetPath);
}
try {
Files.move(thePath, targetPath);
} catch (IOException e) {
e.printStackTrace();
}
}
private static void deleteFolder(Path path) {
    try {
        if (Files.isRegularFile(path)) { // delete a regular file directly;
            Files.delete(path);
            return;
        }
        try (DirectoryStream<Path> children = Files.newDirectoryStream(path)) {
            for (Path child : children) {
                deleteFolder(child); // delete all the children folders or files first;
            }
        }
        Files.delete(path); // then delete the folder itself;
    } catch (IOException e) {
        e.printStackTrace();
    }
}
Try This
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

import org.apache.commons.io.FileUtils; // commons-io, already used in the question, for deleting a non-empty folder

public class MoveFolder
{
    public static void main(String[] args) throws IOException
    {
        File sourceFolder = new File("c:\\temp\\data.TMP");
        File destinationFolder = new File("c:\\temp\\data");
        if (destinationFolder.exists())
        {
            FileUtils.deleteDirectory(destinationFolder); // File.delete() only removes empty directories
        }
        copyAllData(sourceFolder, destinationFolder);
    }

    private static void copyAllData(File sourceFolder, File destinationFolder)
            throws IOException
    {
        destinationFolder.mkdir();
        String[] files = sourceFolder.list();
        for (String file : files)
        {
            File srcFile = new File(sourceFolder, file);
            File destFile = new File(destinationFolder, file);
            if (srcFile.isDirectory())
            {
                copyAllData(srcFile, destFile); // call recursively for subdirectories
            }
            else
            {
                Files.copy(srcFile.toPath(), destFile.toPath()); // copy the file itself
            }
        }
    }
}
Figured out the issue. In my code, before doing a rollback I do a backup, and in that method I was using this section to do the copy:
if (Files.exists(Paths.get(contentPath)) && Files.list(Paths.get(contentPath)).count() > 0) {
copyPath(Paths.get(contentPath), Paths.get(tmpContentPath));
}
Changed it to
try (Stream<Path> fileList = Files.list(Paths.get(contentPath))) {
if (Files.exists(Paths.get(contentPath)) && fileList.count() > 0) {
copyPath(Paths.get(contentPath), Paths.get(tmpContentPath));
}
}
to fix the issue. The unclosed stream returned by Files.list() was keeping a handle on the directory open, which is why the later delete and move kept failing.
My code won't compile. I think it has to do with the directory path, because I keep getting an error message. I am trying to print out my sample directory (SampleDir) located on the Desktop. Can someone help me with the directory path? Thank you in advance!
public class WalkDirectory {
public static void main(String[] args) {
File [] files = new File("C:/SampleDir").listFiles();
showFiles(files);
}
private static void showFiles(File[] files) {
for(File file: files) {
if(file.isDirectory()) {
System.out.println("Directory: " + file.getName());
showFiles(file.listFiles()); // files from the existing directory or current directory
}
else {
System.out.println("File: " + file.getName());
}
}
}
Your } characters are misplaced.
The code was edited, and the edited code is missing a } at the end. For the record, in the original version one brace was misplaced and another one (the last) was missing, I believe.
Try that :
import java.io.File;
public class WalkDirectory {
public static void main(String[] args) {
File[] files = new File("C:/SampleDir").listFiles();
showFiles(files);
}
private static void showFiles(File[] files) {
for (File file : files) {
if (file.isDirectory()) {
System.out.println("Directory: " + file.getName());
showFiles(file.listFiles()); // files from the existing directory or current directory
}
else {
System.out.println("File: " + file.getName());
}
}
}
}
EDIT
Exception in thread "main" java.lang.NullPointerException
    at WalkDirectory.showFiles(WalkDirectory.java:16)
    at WalkDirectory.main(WalkDirectory.java:11)
I suppose that the NPE is triggered in the foreach
for (File file : files)
because the files array is null.
You should write something like this to check that the folder exists:
public static void main(String[] args) {
final File dirWithFiles = new File("C:/SampleDir");
// check that the folder exists and is a directory
if (!dirWithFiles.exists()) {
    System.out.println("dir " + dirWithFiles + " does not exist");
return;
}
if (!dirWithFiles.isDirectory()) {
System.out.println("dir " + dirWithFiles + " is not a directory");
return;
}
// end check
File[] files = dirWithFiles.listFiles();
showFiles(files);
}
If either check fails, verify in your filesystem that the input folder used by the application actually exists.
If you are in a Windows environment, I think the problem is the path you declared: "C:/SampleDir". Your SampleDir is on the Desktop, not at the root of C:.
Try something like this:
String path = "C:\\Documents and Settings\\Your User\\Desktop\\SampleDir";
File[] files = new File(path).listFiles();
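If you do not want to hard-code the user name, a small sketch (assuming the Desktop folder sits in the standard location under the user's home directory) would be:
String path = System.getProperty("user.home") + File.separator + "Desktop" + File.separator + "SampleDir";
File[] files = new File(path).listFiles();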
I'm looking for a way to get all the names of directories in a given directory, but not files.
For example, let's say I have a folder called Parent, and inside that I have 3 folders: Child1, Child2, and Child3.
I want to get the names of the folders, but don't care about the contents, or the names of subfolders inside Child1, Child2, etc.
Is there a simple way to do this?
If you are on Java 7, you might want to try using the support provided in the java.nio.file package.
If your directory has many entries, it will be able to start listing them without reading them all into memory first. Read more in the javadoc: http://docs.oracle.com/javase/7/docs/api/java/nio/file/Files.html#newDirectoryStream(java.nio.file.Path,%20java.lang.String)
Here is also that example adapted to your needs:
public static void main(String[] args) {
DirectoryStream.Filter<Path> filter = new DirectoryStream.Filter<Path>() {
@Override
public boolean accept(Path file) throws IOException {
return (Files.isDirectory(file));
}
};
Path dir = FileSystems.getDefault().getPath("c:/");
try (DirectoryStream<Path> stream = Files.newDirectoryStream(dir, filter)) {
for (Path path : stream) {
// Iterate over the paths in the directory and print filenames
System.out.println(path.getFileName());
}
} catch (IOException e) {
e.printStackTrace();
}
}
You can use String[] names = file.list() to list all the entry names,
then loop over them and use the file.isDirectory() method to keep only the subdirectories.
For example:
File file = new File("C:\\Windows");
String[] names = file.list();
for(String name : names)
{
if (new File("C:\\Windows\\" + name).isDirectory())
{
System.out.println(name);
}
}
public static void displayDirectoryContents(File dir) {
try {
File[] files = dir.listFiles();
for (File file : files) {
if (file.isDirectory()) {
System.out.println("Directory Name==>:" + file.getCanonicalPath());
displayDirectoryContents(file);
} else {
System.out.println("file Not Acess===>" + file.getCanonicalPath());
}
}
} catch (IOException e) {
e.printStackTrace();
}
}
Inside your class or method, call it with the starting directory:
File currentDir = new File("/home/akshya/NetBeansProjects/");
displayDirectoryContents(currentDir);
I have this function that prints the names of all the files in a directory recursively. The problem is that my code is very slow because it has to access a remote network device with every iteration.
My plan is to first load all the files from the directory recursively, and then go through them with the regex to filter out the ones I don't want. Is there a better solution?
public static void printFnames(String sDir) {
File[] faFiles = new File(sDir).listFiles();
for (File file : faFiles) {
if (file.getName().matches("^(.*?)")) {
System.out.println(file.getAbsolutePath());
}
if (file.isDirectory()) {
printFnames(file.getAbsolutePath());
}
}
}
This is just a test. Later on I'm not going to use the code like this; instead I'm going to add the path and modification date of every file which matches an advanced regex to an array.
Assuming this is actual production code you'll be writing, I suggest using the solution to this sort of thing that's already been solved: Apache Commons IO, specifically FileUtils.listFiles(). It handles nested directories and filters (based on name, modification time, etc.).
For example, for your regex:
Collection<File> files = FileUtils.listFiles(
dir,
new RegexFileFilter("^(.*?)"),
DirectoryFileFilter.DIRECTORY
);
This will recursively search for files matching the ^(.*?) regex, returning the results as a collection.
It's worth noting that this will be no faster than rolling your own code; it's doing the same thing, and trawling a filesystem in Java is just slow. The difference is that the Apache Commons version will have no bugs in it.
In Java 8, it's a one-liner via Files.find() with an arbitrarily large depth (e.g. 999), checking BasicFileAttributes.isRegularFile():
public static void printFnames(String sDir) throws IOException {
    Files.find(Paths.get(sDir), 999, (p, bfa) -> bfa.isRegularFile()).forEach(System.out::println);
}
To add more filtering, enhance the lambda, for example all jpg files modified in the last 24 hours:
(p, bfa) -> bfa.isRegularFile()
&& p.getFileName().toString().matches(".*\\.jpg")
&& bfa.lastModifiedTime().toMillis() > System.currentMillis() - 86400000
This is a very simple recursive method to get all files from a given root.
It uses the Java 7 NIO Path class.
private List<String> getFileNames(List<String> fileNames, Path dir) {
try(DirectoryStream<Path> stream = Files.newDirectoryStream(dir)) {
for (Path path : stream) {
if(path.toFile().isDirectory()) {
getFileNames(fileNames, path);
} else {
fileNames.add(path.toAbsolutePath().toString());
System.out.println(path.getFileName());
}
}
} catch(IOException e) {
e.printStackTrace();
}
return fileNames;
}
With Java 7, a faster way to walk through a directory tree was introduced with the Paths and Files functionality. They're much faster than the "old" File way.
This would be the code to walk through and check path names with a regular expression:
public final void test() throws IOException, InterruptedException {
final Path rootDir = Paths.get("path to your directory where the walk starts");
// Walk thru mainDir directory
Files.walkFileTree(rootDir, new FileVisitor<Path>() {
// First (minor) speed up. Compile regular expression pattern only one time.
private Pattern pattern = Pattern.compile("^(.*?)");
@Override
public FileVisitResult preVisitDirectory(Path path,
BasicFileAttributes atts) throws IOException {
boolean matches = pattern.matcher(path.toString()).matches();
// TODO: Put here your business logic when matches equals true/false
return (matches)? FileVisitResult.CONTINUE:FileVisitResult.SKIP_SUBTREE;
}
@Override
public FileVisitResult visitFile(Path path, BasicFileAttributes mainAtts)
throws IOException {
boolean matches = pattern.matcher(path.toString()).matches();
// TODO: Put here your business logic when matches equals true/false
return FileVisitResult.CONTINUE;
}
@Override
public FileVisitResult postVisitDirectory(Path path,
IOException exc) throws IOException {
// TODO Auto-generated method stub
return FileVisitResult.CONTINUE;
}
@Override
public FileVisitResult visitFileFailed(Path path, IOException exc)
throws IOException {
exc.printStackTrace();
// If the root directory has failed it makes no sense to continue
return path.equals(rootDir)? FileVisitResult.TERMINATE:FileVisitResult.CONTINUE;
}
});
}
The fast way to get the content of a directory using Java 7 NIO:
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.FileSystems;
import java.nio.file.Path;
...
Path dir = FileSystems.getDefault().getPath(filePath);
DirectoryStream<Path> stream = Files.newDirectoryStream(dir);
for (Path path : stream) {
System.out.println(path.getFileName());
}
stream.close();
Java's interface for reading filesystem folder contents is not very performant (as you've discovered). JDK 7 fixes this with a completely new interface for this sort of thing, which should bring native level performance to these sorts of operations.
The core issue is that Java makes a native system call for every single file. On a low-latency interface, this is not that big of a deal, but on a network with even moderate latency it really adds up. If you profile your algorithm above, you'll find that the bulk of the time is spent in the pesky isDirectory() call; that's because you are incurring a round trip for every single call to isDirectory(). Most modern OSes can provide this sort of information when the list of files/folders is originally requested (as opposed to querying each individual file path for its properties).
If you can't wait for JDK7, one strategy for addressing this latency is to go multi-threaded and use an ExecutorService with a maximum # of threads to perform your recursion. It's not great (you have to deal with locking of your output data structures), but it'll be a heck of a lot faster than doing this single threaded.
In all of your discussions about this sort of thing, I highly recommend that you compare against the best you could do using native code (or even a command line script that does roughly the same thing). Saying that it takes an hour to traverse a network structure doesn't really mean that much. Telling us that you can do it natively in 7 seconds, but it takes an hour in Java, will get people's attention.
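As a rough sketch of that multi-threaded strategy (an illustration, not the poster's code): each directory becomes a task on a bounded ForkJoinPool, and results go into a concurrent queue so no explicit locking is needed. The pool size of 8 and the //server/share path are placeholders:
import java.io.File;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveAction;

class ListTask extends RecursiveAction {
    private final File dir;
    private final Queue<String> results;

    ListTask(File dir, Queue<String> results) {
        this.dir = dir;
        this.results = results;
    }

    @Override
    protected void compute() {
        File[] entries = dir.listFiles();
        if (entries == null) return;                // unreadable directory
        List<ListTask> subTasks = new ArrayList<>();
        for (File f : entries) {
            if (f.isDirectory()) {
                subTasks.add(new ListTask(f, results));
            } else {
                results.add(f.getAbsolutePath());
            }
        }
        invokeAll(subTasks);                        // traverse the subdirectories in parallel
    }

    public static void main(String[] args) {
        Queue<String> results = new ConcurrentLinkedQueue<>();
        new ForkJoinPool(8).invoke(new ListTask(new File("//server/share"), results));
        results.forEach(System.out::println);
    }
}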
This will work just fine and it’s recursive.
File root = new File("ROOT PATH");
getFilesRecursive(root); // start the recursion at the root; listFiles() is only called on directories below
private static void getFilesRecursive(File pFile)
{
for(File files : pFile.listFiles())
{
if(files.isDirectory())
{
getFilesRecursive(files);
}
else
{
// Do your thing
//
// You can either save in HashMap and
// use it as per your requirement
}
}
}
I personally like this version of FileUtils. Here's an example that finds all mp3s or flacs in a directory or any of its subdirectories:
String[] types = {"mp3", "flac"};
Collection<File> files2 = FileUtils.listFiles(/path/to/your/dir, types , true);
This will work fine
public void displayAll(File path){
if(path.isFile()){
System.out.println(path.getName());
}else{
System.out.println(path.getName());
File files[] = path.listFiles();
for(File dirOrFile: files){
displayAll(dirOrFile);
}
}
}
Java 8
public static void main(String[] args) throws IOException {
Path start = Paths.get("C:\\data\\");
try (Stream<Path> stream = Files.walk(start, Integer.MAX_VALUE)) {
List<String> collect = stream
.map(String::valueOf)
.sorted()
.collect(Collectors.toList());
collect.forEach(System.out::println);
}
}
public class GetFilesRecursive {
public static List <String> getFilesRecursively(File dir){
List <String> ls = new ArrayList<String>();
for (File fObj : dir.listFiles()) {
if(fObj.isDirectory()) {
ls.add(String.valueOf(fObj));
ls.addAll(getFilesRecursively(fObj));
} else {
ls.add(String.valueOf(fObj));
}
}
return ls;
}
public static List <String> getListOfFiles(String fullPathDir) {
List <String> ls = new ArrayList<String> ();
File f = new File(fullPathDir);
if (f.exists()) {
if(f.isDirectory()) {
ls.add(String.valueOf(f));
ls.addAll(getFilesRecursively(f));
}
} else {
ls.add(fullPathDir);
}
return ls;
}
public static void main(String[] args) {
List <String> ls = getListOfFiles("/Users/srinivasab/Documents");
for (String file:ls) {
System.out.println(file);
}
System.out.println(ls.size());
}
}
This function will list all the file names and their paths from the given directory and its subdirectories.
public void listFile(String pathname) {
File f = new File(pathname);
File[] listfiles = f.listFiles();
for (int i = 0; i < listfiles.length; i++) {
if (listfiles[i].isDirectory()) {
File[] internalFile = listfiles[i].listFiles();
for (int j = 0; j < internalFile.length; j++) {
System.out.println(internalFile[j]);
if (internalFile[j].isDirectory()) {
String name = internalFile[j].getAbsolutePath();
listFile(name);
}
}
} else {
System.out.println(listfiles[i]);
}
}
}
"it feels like it's stupid to access the filesystem and get the contents for every subdirectory instead of getting everything at once."
Your feeling is wrong. That's how filesystems work. There is no faster way (except when you have to do this repeatedly or for different patterns, you can cache all the file paths in memory, but then you have to deal with cache invalidation i.e. what happens when files are added/removed/renamed while the app runs).
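As a sketch of that caching idea for a single, non-recursive directory (the path is a placeholder): a WatchService can tell you when the cached listing has become stale, and a real cache would also have to register every subdirectory:
import java.io.IOException;
import java.nio.file.*;
import java.util.ArrayList;
import java.util.List;

public class CachedListing {
    public static void main(String[] args) throws IOException, InterruptedException {
        Path dir = Paths.get("/some/dir");
        WatchService watcher = FileSystems.getDefault().newWatchService();
        dir.register(watcher,
                StandardWatchEventKinds.ENTRY_CREATE,
                StandardWatchEventKinds.ENTRY_DELETE);

        List<Path> cache = list(dir);          // initial listing, kept in memory
        while (true) {
            WatchKey key = watcher.take();     // blocks until the directory changes
            key.pollEvents();                  // drain the pending events
            key.reset();
            cache = list(dir);                 // invalidate: rebuild the cached listing
            System.out.println("Directory changed, now " + cache.size() + " entries");
        }
    }

    private static List<Path> list(Path dir) throws IOException {
        List<Path> entries = new ArrayList<>();
        try (DirectoryStream<Path> stream = Files.newDirectoryStream(dir)) {
            stream.forEach(entries::add);
        }
        return entries;
    }
}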
Just so you know, isDirectory() is quite a slow method. I'm finding it quite slow in my file browser. I'll be looking into a library to replace it with native code.
Here is another optimized version:
import java.io.File;
import java.util.ArrayList;
import java.util.List;
public class GetFilesRecursive {
public static List <String> getFilesRecursively(File dir){
List <String> ls = new ArrayList<String>();
if (dir.isDirectory())
for (File fObj : dir.listFiles()) {
if(fObj.isDirectory()) {
ls.add(String.valueOf(fObj));
ls.addAll(getFilesRecursively(fObj));
} else {
ls.add(String.valueOf(fObj));
}
}
else
ls.add(String.valueOf(dir));
return ls;
}
public static void main(String[] args) {
List <String> ls = getFilesRecursively(new File("/Users/srinivasab/Documents"));
for (String file:ls) {
System.out.println(file);
}
System.out.println(ls.size());
}
}
One more example of listing files and directories, using Java 8 filters:
public static void main(String[] args) {
System.out.println("Files!!");
try {
Files.walk(Paths.get("."))
.filter(Files::isRegularFile)
.filter(c ->
c.getFileName().toString().substring(c.getFileName().toString().length()-4).contains(".jpg")
||
c.getFileName().toString().substring(c.getFileName().toString().length()-5).contains(".jpeg")
)
.forEach(System.out::println);
} catch (IOException e) {
System.out.println("No jpeg or jpg files");
}
System.out.println("\nDirectories!!\n");
try {
Files.walk(Paths.get("."))
.filter(Files::isDirectory)
.forEach(System.out::println);
} catch (IOException e) {
System.out.println("No Jpeg files");
}
}
I tested some methods with 60,000 files in 284 folders on Windows 11:
public class App {
public static void main(String[] args) throws Exception {
Path path = Paths.get("E:\\书籍");
// 1.walkFileTree
long start1 = System.currentTimeMillis();
Files.walkFileTree(path, new SimpleFileVisitor<Path>() {
@Override
public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) {
// if(pathMatcher.matches(file))
// files.add(file.toFile());
return FileVisitResult.CONTINUE;
}
@Override
public FileVisitResult preVisitDirectory(Path dir, BasicFileAttributes attrs) {
// System.out.println(dir.getFileName());
return FileVisitResult.CONTINUE;
}
@Override
public FileVisitResult visitFileFailed(Path file, IOException e) {
return FileVisitResult.CONTINUE;
}
});
long end1 = System.currentTimeMillis();
// 2. listFiles (iterative, using a queue)
long start2 = System.currentTimeMillis();
search(path.toFile());
long end2 = System.currentTimeMillis();
// 3. newDirectoryStream
long start3 = System.currentTimeMillis();
getFileNames(path);
long end3 = System.currentTimeMillis();
System.out.println("\r执行耗时:" + (end1 - start1));
System.out.println("\r执行耗时:" + (end2 - start2));
System.out.println("\r执行耗时:" + (end3 - start3));
}
private static void getFileNames(Path dir) {
try(DirectoryStream<Path> stream = Files.newDirectoryStream(dir)) {
for (Path path : stream) {
if(Files.isDirectory(path)) {
getFileNames(path);
}
}
} catch(IOException e) {
e.printStackTrace();
}
}
public static void search(File file) {
Queue<File> q = new LinkedList<>();
q.offer(file);
while (!q.isEmpty()) {
try {
for (File childfile : q.poll().listFiles()) {
// System.out.println(childfile.getName());
if (childfile.isDirectory()) {
q.offer(childfile);
}
}
} catch (Exception e) {
}
}
}
}
Result (milliseconds):
walkFileTree    listFiles    newDirectoryStream
68              451          493
64              464          482
61              478          457
67              477          488
59              474          466
Known performance issues:
From Kevin Day's answer:
If you profile your algorithm above, you'll find that the bulk of the time is spent in the pesky isDirectory() call - that's because you are incurring a round trip for every single call to isDirectory().
listFiles() will create a new File object for every entry.
In Guava you don't have to wait for a Collection to be returned to you, but can actually iterate over the files. It is easy to imagine an IDoSomethingWithThisFile interface in the signature of the function below:
public static void collectFilesInDir(File dir) {
TreeTraverser<File> traverser = Files.fileTreeTraverser();
FluentIterable<File> filesInPostOrder = traverser.preOrderTraversal(dir);
for (File f: filesInPostOrder)
System.out.printf("File: %s\n", f.getPath());
}
TreeTraverser also allows you to choose between various traversal styles.
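For instance, assuming a Guava version that still ships TreeTraverser, the other orders come from the same traverser:
FluentIterable<File> postOrder = traverser.postOrderTraversal(dir);
FluentIterable<File> breadthFirst = traverser.breadthFirstTraversal(dir);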
import java.io.*;
public class MultiFolderReading {
public void checkNoOfFiles (String filename) throws IOException {
File dir = new File(filename);
File files[] = dir.listFiles(); // Files array stores the list of files
for(int i=0; i<files.length; i++)
{
if(files[i].isFile()) // Check whether files[i] is file or directory
{
System.out.println("File::" + files[i].getName());
System.out.println();
}
else if(files[i].isDirectory())
{
System.out.println("Directory::" + files[i].getName());
System.out.println();
checkNoOfFiles(files[i].getAbsolutePath());
}
}
}
public static void main(String[] args) throws IOException {
MultiFolderReading mf = new MultiFolderReading();
String str = "E:\\file";
mf.checkNoOfFiles(str);
}
}
The most efficient way I found when dealing with millions of folders and files is to capture the directory listing through a DOS command, write it to a file, and parse that file.
Once you have parsed the data, you can do analysis and compute statistics.
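A minimal sketch of that approach on Windows (the C:\data path is a placeholder): dir /s /b prints one absolute path per line, so the Java side only has to read lines from the process output:
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.List;

public class DirCommandListing {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Let the shell do the traversal: /s recurses, /b prints bare full paths.
        ProcessBuilder pb = new ProcessBuilder("cmd", "/c", "dir", "/s", "/b", "C:\\data");
        pb.redirectErrorStream(true);
        Process process = pb.start();

        List<String> paths = new ArrayList<>();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(process.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                paths.add(line);               // one file or folder path per line
            }
        }
        process.waitFor();
        System.out.println("Entries found: " + paths.size());
    }
}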