How can I find out when a file was created using Java? I wish to delete files older than a certain time period; currently I am deleting all files in a directory, but this is not ideal:
public void DeleteFiles() {
    File file = new File("D:/Documents/NetBeansProjects/printing~subversion/fileupload/web/resources/pdf/");
    System.out.println("Called deleteFiles");
    DeleteFiles(file);
    File file2 = new File("D:/Documents/NetBeansProjects/printing~subversion/fileupload/Uploaded/");
    DeleteFilesNonPdf(file2);
}

public void DeleteFiles(File file) {
    System.out.println("Now will search folders and delete files,");
    if (file.isDirectory()) {
        for (File f : file.listFiles()) {
            DeleteFiles(f);
        }
    } else {
        file.delete();
    }
}
Above is my current code. I am now trying to add an if statement so that only files older than, say, a week get deleted.
EDIT:
@ViewScoped
@ManagedBean
public class Delete {

    public void DeleteFiles() {
        File file = new File("D:/Documents/NetBeansProjects/printing~subversion/fileupload/web/resources/pdf/");
        System.out.println("Called deleteFiles");
        DeleteFiles(file);
        File file2 = new File("D:/Documents/NetBeansProjects/printing~subversion/fileupload/Uploaded/");
        DeleteFilesNonPdf(file2);
    }

    public void DeleteFiles(File file) {
        System.out.println("Now will search folders and delete files,");
        if (file.isDirectory()) {
            System.out.println("Date Modified : " + file.lastModified());
            for (File f : file.listFiles()) {
                DeleteFiles(f);
            }
        } else {
            file.delete();
        }
    }
}
Adding a loop now.
EDIT
I have noticed while testing the code above that the last modified time prints as:
INFO: Date Modified : 1361635382096
That value is milliseconds since the Unix epoch. How should I write the if statement to say "if it is older than 7 days, delete it" when the time is in that format?
You can use File.lastModified() to get the last modified time of a file/directory.
Can be used like this:
long diff = new Date().getTime() - file.lastModified();
if (diff > x * 24 * 60 * 60 * 1000) {
    file.delete();
}
Which deletes files older than x (an int) days.
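Putting that check into the recursive traversal from the question, a minimal sketch might look like this (the demo directory under the temp dir is a placeholder for the real pdf folder; TimeUnit keeps the millisecond arithmetic in long, avoiding the int-overflow pitfall discussed further down):

```java
import java.io.File;
import java.util.concurrent.TimeUnit;

public class OldFileCleaner {

    // Delete regular files older than maxAgeDays, recursing into directories.
    public static void deleteFilesOlderThan(File file, int maxAgeDays) {
        if (file.isDirectory()) {
            File[] children = file.listFiles();
            if (children != null) { // listFiles() returns null on I/O errors
                for (File child : children) {
                    deleteFilesOlderThan(child, maxAgeDays);
                }
            }
        } else {
            long ageMillis = System.currentTimeMillis() - file.lastModified();
            if (ageMillis > TimeUnit.DAYS.toMillis(maxAgeDays)) { // long arithmetic, no overflow
                file.delete();
            }
        }
    }

    public static void main(String[] args) throws Exception {
        // demo directory under the system temp dir (placeholder for the real pdf folder)
        File dir = new File(System.getProperty("java.io.tmpdir"), "cleaner-demo");
        dir.mkdirs();
        File old = new File(dir, "old.pdf");
        old.createNewFile();
        old.setLastModified(System.currentTimeMillis() - TimeUnit.DAYS.toMillis(8));
        File recent = new File(dir, "recent.pdf");
        recent.createNewFile();

        deleteFilesOlderThan(dir, 7);
        System.out.println("old exists: " + old.exists());
        System.out.println("recent exists: " + recent.exists());
    }
}
```

The demo backdates one file with setLastModified() so the 7-day cutoff is observable in a single run.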
Commons IO has built-in support for filtering files by age with its AgeFileFilter. Your DeleteFiles could just look like this:
import java.io.File;
import org.apache.commons.io.FileUtils;
import org.apache.commons.io.filefilter.AgeFileFilter;
import static org.apache.commons.io.filefilter.TrueFileFilter.TRUE;
// a Date defined somewhere for the cutoff date
Date thresholdDate = <the oldest age you want to keep>;
public void DeleteFiles(File file) {
    Iterator<File> filesToDelete =
            FileUtils.iterateFiles(file, new AgeFileFilter(thresholdDate), TRUE);
    for (File aFile : filesToDelete) {
        aFile.delete();
    }
}
Update: To use the value as given in your edit, define the thresholdDate as:
Date thresholdDate = new Date(1361635382096L);
Using Apache utils is probably the easiest. Here is the simplest solution I could come up with.
public void deleteOldFiles() {
    Date oldestAllowedFileDate = DateUtils.addDays(new Date(), -3); // minus days from current date
    File targetDir = new File("C:\\TEMP\\archive\\");
    Iterator<File> filesToDelete = FileUtils.iterateFiles(targetDir, new AgeFileFilter(oldestAllowedFileDate), null);
    // if deleting subdirs, replace null above with TrueFileFilter.INSTANCE
    while (filesToDelete.hasNext()) {
        FileUtils.deleteQuietly(filesToDelete.next());
        // deleteQuietly avoids an exception if a file is not deleted;
        // otherwise use filesToDelete.next().delete() in a try/catch
    }
}
Example using Java 8's Time API
LocalDate today = LocalDate.now();
LocalDate earlier = today.minusDays(30);

Date threshold = Date.from(earlier.atStartOfDay(ZoneId.systemDefault()).toInstant());
AgeFileFilter filter = new AgeFileFilter(threshold);

File path = new File("...");
// listFiles applies the filter to the directory's children;
// FileFilterUtils.filter(filter, path) would filter the path object itself
File[] oldFolders = path.listFiles((FileFilter) filter);

for (File folder : oldFolders) {
    System.out.println(folder);
}
Using lambdas (Java 8+)
Non recursive option to delete all files in current folder that are older than N days (ignores sub folders):
public static void deleteFilesOlderThanNDays(int days, String dirPath) throws IOException {
    long cutOff = System.currentTimeMillis() - TimeUnit.DAYS.toMillis(days); // long math avoids int overflow
    Files.list(Paths.get(dirPath))
        .filter(path -> {
            try {
                return Files.isRegularFile(path) && Files.getLastModifiedTime(path).to(TimeUnit.MILLISECONDS) < cutOff;
            } catch (IOException ex) {
                // log here and move on
                return false;
            }
        })
        .forEach(path -> {
            try {
                Files.delete(path);
            } catch (IOException ex) {
                // log here and move on
            }
        });
}
Recursive option, that traverses sub-folders and deletes all files that are older than N days:
public static void recursiveDeleteFilesOlderThanNDays(int days, String dirPath) throws IOException {
    long cutOff = System.currentTimeMillis() - TimeUnit.DAYS.toMillis(days); // long math avoids int overflow
    Files.list(Paths.get(dirPath))
        .forEach(path -> {
            if (Files.isDirectory(path)) {
                try {
                    recursiveDeleteFilesOlderThanNDays(days, path.toString());
                } catch (IOException e) {
                    // log here and move on
                }
            } else {
                try {
                    if (Files.getLastModifiedTime(path).to(TimeUnit.MILLISECONDS) < cutOff) {
                        Files.delete(path);
                    }
                } catch (IOException ex) {
                    // log here and move on
                }
            }
        });
}
Here's a Java 8 version using the Time API. It has been tested and used in our project:
public static int deleteFiles(final Path destination,
final Integer daysToKeep) throws IOException {
final Instant retentionFilePeriod = ZonedDateTime.now()
.minusDays(daysToKeep).toInstant();
final AtomicInteger countDeletedFiles = new AtomicInteger();
Files.find(destination, 1,
(path, basicFileAttrs) -> basicFileAttrs.lastModifiedTime()
.toInstant().isBefore(retentionFilePeriod))
.forEach(fileToDelete -> {
try {
if (!Files.isDirectory(fileToDelete)) {
Files.delete(fileToDelete);
countDeletedFiles.incrementAndGet();
}
} catch (IOException e) {
throw new UncheckedIOException(e);
}
});
return countDeletedFiles.get();
}
For a JDK 8 solution using both NIO file streams and JSR-310
long cut = LocalDateTime.now().minusWeeks(1).toEpochSecond(ZoneOffset.UTC);
Path path = Paths.get("/path/to/delete");
Files.list(path)
.filter(n -> {
try {
return Files.getLastModifiedTime(n)
.to(TimeUnit.SECONDS) < cut;
} catch (IOException ex) {
//handle exception
return false;
}
})
.forEach(n -> {
try {
Files.delete(n);
} catch (IOException ex) {
//handle exception
}
});
The annoying thing here is the need to handle exceptions within each lambda. It would have been great for the API to have UncheckedIOException overloads for each IO method. With helpers to do this, one could write:
public static void main(String[] args) throws IOException {
long cut = LocalDateTime.now().minusWeeks(1).toEpochSecond(ZoneOffset.UTC);
Path path = Paths.get("/path/to/delete");
Files.list(path)
.filter(n -> Files2.getLastModifiedTimeUnchecked(n)
.to(TimeUnit.SECONDS) < cut)
.forEach(n -> {
System.out.println(n);
Files2.delete(n, (t, u)
-> System.err.format("Couldn't delete %s%n",
t, u.getMessage())
);
});
}
private static final class Files2 {
public static FileTime getLastModifiedTimeUnchecked(Path path,
LinkOption... options)
throws UncheckedIOException {
try {
return Files.getLastModifiedTime(path, options);
} catch (IOException ex) {
throw new UncheckedIOException(ex);
}
}
public static void delete(Path path, BiConsumer<Path, Exception> e) {
try {
Files.delete(path);
} catch (IOException ex) {
e.accept(path, ex);
}
}
}
JavaSE Canonical Solution.
Delete files older than expirationPeriod days.
private void cleanUpOldFiles(String folderPath, int expirationPeriod) {
    File targetDir = new File(folderPath);
    if (!targetDir.exists()) {
        throw new RuntimeException(String.format("Log files directory '%s' " +
                "does not exist in the environment", folderPath));
    }
    File[] files = targetDir.listFiles();
    for (File file : files) {
        long diff = new Date().getTime() - file.lastModified();
        // Granularity = DAYS
        long desiredLifespan = TimeUnit.DAYS.toMillis(expirationPeriod);
        if (diff > desiredLifespan) {
            file.delete();
        }
    }
}
e.g. to remove all files older than 30 days in folder "/sftp/logs", call:
cleanUpOldFiles("/sftp/logs", 30);
You can get the creation date of the file using NIO, following is the way:
BasicFileAttributes attrs = Files.readAttributes(file, BasicFileAttributes.class);
System.out.println("creationTime: " + attrs.creationTime());
More about it can be found here : http://docs.oracle.com/javase/tutorial/essential/io/fileAttr.html
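Since the question literally asks for the creation time, those attributes can drive the deletion check directly. A hedged sketch (note that on filesystems that don't record a birth time, creationTime() typically falls back to the last-modified time; the demo uses a 0-day cutoff so the effect is visible in one run):

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.BasicFileAttributes;
import java.time.Instant;
import java.time.temporal.ChronoUnit;

public class CreationTimeCleaner {

    // Delete regular files in dir whose creation time is more than maxAgeDays old.
    public static void deleteByCreationTime(Path dir, int maxAgeDays) throws IOException {
        Instant cutoff = Instant.now().minus(maxAgeDays, ChronoUnit.DAYS);
        try (DirectoryStream<Path> stream = Files.newDirectoryStream(dir)) {
            for (Path p : stream) {
                BasicFileAttributes attrs = Files.readAttributes(p, BasicFileAttributes.class);
                if (attrs.isRegularFile() && attrs.creationTime().toInstant().isBefore(cutoff)) {
                    Files.delete(p);
                }
            }
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("ctime-demo");
        Path f = Files.createFile(dir.resolve("fresh.txt"));
        // 0-day cutoff: anything created before "now" is removed, so the demo is observable
        deleteByCreationTime(dir, 0);
        System.out.println(Files.exists(f)); // false: the just-created file was deleted
    }
}
```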
Another approach with Apache commons-io and joda:
private void deleteOldFiles(String dir, int daysToRemainFiles) {
    Collection<File> filesToDelete = FileUtils.listFiles(new File(dir),
            new AgeFileFilter(DateTime.now().withTimeAtStartOfDay().minusDays(daysToRemainFiles).toDate()),
            TrueFileFilter.TRUE); // include sub dirs
    for (File file : filesToDelete) {
        boolean success = FileUtils.deleteQuietly(file);
        if (!success) {
            // log...
        }
    }
}
Using Java NIO Files with lambdas & Commons IO
final long time = System.currentTimeMillis();
// Only show files & directories older than 2 days
final long maxdiff = TimeUnit.DAYS.toMillis(2);
List all found files and directories:
Files.newDirectoryStream(Paths.get("."), p -> (time - p.toFile().lastModified()) < maxdiff)
.forEach(System.out::println);
Or delete found files with FileUtils:
Files.newDirectoryStream(Paths.get("."), p -> (time - p.toFile().lastModified()) < maxdiff)
.forEach(p -> FileUtils.deleteQuietly(p.toFile()));
Here is the code to delete files which have not been modified for six months, and to create a log file of the deletions.
package deleteFiles;

import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Calendar;
import java.util.Date;
import java.util.logging.FileHandler;
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;

public class Delete {

    public static void deleteFiles() {
        int numOfMonths = -6;
        String path = "G:\\Files";
        File file = new File(path);
        FileHandler fh;
        Calendar sixMonthAgo = Calendar.getInstance();
        Calendar currentDate = Calendar.getInstance();
        Logger logger = Logger.getLogger("MyLog");
        sixMonthAgo.add(Calendar.MONTH, numOfMonths);
        File[] files = file.listFiles();
        ArrayList<String> arrlist = new ArrayList<String>();
        try {
            fh = new FileHandler("G:\\Files\\logFile\\MyLogForDeletedFile.log");
            logger.addHandler(fh);
            SimpleFormatter formatter = new SimpleFormatter();
            fh.setFormatter(formatter);
            for (File f : files) {
                if (f.isFile() && f.exists()) {
                    Date lastModDate = new Date(f.lastModified());
                    if (lastModDate.before(sixMonthAgo.getTime())) {
                        arrlist.add(f.getName());
                        f.delete();
                    }
                }
            }
            for (int i = 0; i < arrlist.size(); i++)
                logger.info("deleted files are ===>" + arrlist.get(i));
        } catch (Exception e) {
            e.printStackTrace();
            logger.info("error is-->" + e);
        }
    }

    public static void main(String[] args) {
        deleteFiles();
    }
}
Need to point out a bug in the first solution listed: x * 24 * 60 * 60 * 1000 overflows int when x is 25 or more, because the multiplication is done entirely in int arithmetic before the result is widened. So you need to cast to a long value first:
long diff = new Date().getTime() - file.lastModified();
if (diff > (long) x * 24 * 60 * 60 * 1000) {
    file.delete();
}
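Equivalently, TimeUnit.DAYS.toMillis keeps the whole computation in long and makes the intent explicit. A small sketch (x = 30 is an arbitrary example, just large enough to trigger the overflow):

```java
import java.util.concurrent.TimeUnit;

public class CutoffDemo {
    public static void main(String[] args) {
        int x = 30; // 30 days: 30 * 86_400_000 ms no longer fits in an int
        long wrong = x * 24 * 60 * 60 * 1000;   // multiplication happens in int, overflows, then widens
        long right = TimeUnit.DAYS.toMillis(x); // all arithmetic done in long
        System.out.println(wrong == right);
        System.out.println(right);
    }
}
```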
Perhaps this Java 11 & Spring solution will be useful to someone:
private void removeOldBackupFolders(Path folder, String name) throws IOException {
var current = System.currentTimeMillis();
var difference = TimeUnit.DAYS.toMillis(7);
BiPredicate<Path, BasicFileAttributes> predicate =
(path, attributes) ->
path.getFileName().toString().contains(name)
&& (current - attributes.lastModifiedTime().toMillis()) > difference;
try (var stream = Files.find(folder, 1, predicate)) {
stream.forEach(
path -> {
try {
FileSystemUtils.deleteRecursively(path);
log.warn("Deleted old backup {}", path.getFileName());
} catch (IOException lambdaEx) {
log.error("", lambdaEx);
}
});
}
}
The BiPredicate is used to filter entries (i.e. files and folders, in Java terms) by name and age.
FileSystemUtils.deleteRecursively() is a Spring method that recursively removes files and folders. You can change it to something like NIO.2's Files.walkFileTree() if you don't want to use Spring dependencies.
I've set the maxDepth of Files.find() to 1 based on my use case. You can set it to Integer.MAX_VALUE for unlimited depth, but you then risk irreversibly deleting your dev FS if you are not careful.
Example logs based on var difference = TimeUnit.MINUTES.toMillis(3):
2022-05-20 00:54:15.505 WARN 24680 --- [ single-1] u.t.s.service.impl.BackupServiceImpl : Deleted old backup backup_20052022_1652989557462
2022-05-20 00:54:15.506 WARN 24680 --- [ single-1] u.t.s.service.impl.BackupServiceImpl : Deleted old backup backup_20052022_1652989558474
2022-05-20 00:54:15.507 WARN 24680 --- [ single-1] u.t.s.service.impl.BackupServiceImpl : Deleted old backup backup_20052022_1652989589723
2022-05-20 00:54:15.508 WARN 24680 --- [ single-1] u.t.s.service.impl.BackupServiceImpl : Deleted old backup backup_20052022_1652989674083
Notes:
The stream of Files.find() must be wrapped inside of a try-with-resource (utilizing AutoCloseable) or handled the old-school way inside of a try-finally to close the stream.
A good example of Files.walkFileTree() for copying (can be adapted for deletion): https://stackoverflow.com/a/60621544/3242022
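If you do want to drop the Spring dependency, the Files.walkFileTree() replacement mentioned in the notes above could be sketched like this (a plain recursive delete, children before parents):

```java
import java.io.IOException;
import java.nio.file.FileVisitResult;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.SimpleFileVisitor;
import java.nio.file.attribute.BasicFileAttributes;

public class RecursiveDelete {

    // Delete a file, or a whole directory tree bottom-up (children before parents).
    public static void deleteRecursively(Path root) throws IOException {
        Files.walkFileTree(root, new SimpleFileVisitor<Path>() {
            @Override
            public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException {
                Files.delete(file);
                return FileVisitResult.CONTINUE;
            }

            @Override
            public FileVisitResult postVisitDirectory(Path dir, IOException exc) throws IOException {
                if (exc != null) throw exc;
                Files.delete(dir); // directory is empty by the time we get here
                return FileVisitResult.CONTINUE;
            }
        });
    }

    public static void main(String[] args) throws IOException {
        Path root = Files.createTempDirectory("backup_demo");
        Files.createFile(Files.createDirectory(root.resolve("sub")).resolve("f.txt"));
        deleteRecursively(root);
        System.out.println(Files.exists(root)); // false
    }
}
```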
Using Apache commons-io and joda:
if (FileUtils.isFileOlder(f, DateTime.now().minusDays(30).toDate())) {
    f.delete();
}
I have the following csv file structure:
car     score   description
Opel    30      43
Volvo   500     434
Kia     50      3
Toyota  4       4
Mazda   5000    4
I want to find matching numbers. For example, if all three numbers 50, 500 and 5000 are found in a file, I want to detect that pattern. I tried this:
File filesList[] = directoryPath.listFiles(textFileFilter);
System.out.println("List of the text files in the specified directory:");
for(File file : filesList) {
try {
try (var br = new FileReader(file.getAbsolutePath(), StandardCharsets.UTF_16)){
List<CsvLine> beans = new CsvToBeanBuilder(br)
.withType(CsvLine.class)
.build()
.parse();
Path originalPath = null;
boolean found50 = false;
boolean found500 = false;
boolean found5000 = false;
for (CsvLine item : beans)
{
originalPath = file.toPath();
if (item.getAvgMonthlySearches() != null)
{
if (item.getValue().compareTo(BigDecimal.valueOf(50)) == 0)
{
found50 = true;
}
if (item.getValue().compareTo(BigDecimal.valueOf(500)) == 0)
{
found500 = true;
}
if (item.getValue().compareTo(BigDecimal.valueOf(5000)) == 0)
{
found5000 = true;
}
}
if(found50 == true && found500 == true && found5000 == true){
found50 = false;
found500 = false;
found5000 = false;
// Move here file into new subdirectory when file is invalid
Path copied = Paths.get(file.getParent() + "/invalid_files");
try {
// Use resolve method to keep the "processed" as folder
br.close();
Files.move(originalPath, copied.resolve(originalPath.getFileName()), StandardCopyOption.REPLACE_EXISTING);
break;
} catch (IOException e) {
throw new RuntimeException(e);
}
}
}
if (file.exists())
{
// Move here file into new subdirectory when file processing is finished
Path copied = Paths.get(file.getParent() + "/processed");
try {
// Use resolve method to keep the "processed" as folder
br.close();
Files.move(originalPath, copied.resolve(originalPath.getFileName()), StandardCopyOption.REPLACE_EXISTING);
break;
} catch (IOException e) {
throw new RuntimeException(e);
}
}
}
} catch (Exception e){
e.printStackTrace();
}
Path originalPath = file.toPath();
System.out.println(String.format("\nProcessed file : %s, moving the file to subfolder /processed\n",
originalPath));
}
@Getter
@Setter
public class CsvLine {
    @CsvBindByPosition(position = 2)
    private BigDecimal value;
}
If all three values 50, 500 and 5000 are matched in a csv file's lines, I want to move that file into a separate folder. I tried to run the code, but nothing happens. Do you know where I'm going wrong?
Here is a working example with more structured code, according to your needs.
Exception handling is omitted, since I have no idea about your requirements regarding it, but you can easily adapt this code to your needs.
Also, I have no idea what item.getAvgMonthlySearches() is, since I don't see this field in your csv example. That field and its check are also omitted, but you can add them back.
The code is decomposed to:
Main method with preparations and starting the entire directory processing
method for directory processing
method for file processing
method for line processing
method for moving the file
factory method for creating predicates ("matchers" or "conditions" that you can specify as much as you want, instead of using "found500" or "found5000" etc.)
I used this csv according to your question:
car;score;description
Opel;30;43
Volvo;500;434
Kia;50;3
Toyota;4;4
Mazda;5000;4
So, the model is the following:
@Data
public class CsvLine {
    @CsvBindByPosition(position = 1)
    private BigDecimal value;
}
The main method:
public static void main(String[] args) throws IOException {
    // preparing
    File directory = Path.of("/Users/kerbermeister/csv/").toFile();
    // we want to process only csv files from directory
    FilenameFilter csvFilter = (dir, name) -> name.toLowerCase().endsWith(".csv");
    // specify what conditions the value of every line should meet
    // to be considered a valid line
    List<Predicate<BigDecimal>> fileMatchConditions = List.of(
            ne(new BigDecimal("50")),
            ne(new BigDecimal("500")),
            ne(new BigDecimal("5000"))
    );
    // start directory processing
    processDirectory(directory, csvFilter, fileMatchConditions);
}
The primary process with the rest methods:
public static void processDirectory(File directory, FilenameFilter filter,
List<Predicate<BigDecimal>> fileMatchConditions) throws IOException {
Path processedFolderPath = createDirectory(Path.of(directory.getAbsolutePath() + "/processed"));
Path invalidFilesFolderPath = createDirectory(Path.of(directory.getAbsolutePath() + "/invalid_files"));
File[] files = directory.listFiles(filter);
if (Objects.nonNull(files)) {
for (File file : files) {
if (isFileValid(file, fileMatchConditions)) {
// if file is valid, then move it to the processed directory
moveFile(file, processedFolderPath, StandardCopyOption.REPLACE_EXISTING);
} else {
// if file is not, then move it to the invalid-files directory
moveFile(file, invalidFilesFolderPath, StandardCopyOption.REPLACE_EXISTING);
}
}
}
}
/**
* primary file validation
* parses the file
*/
public static boolean isFileValid(File file, List<Predicate<BigDecimal>> matchConditions) throws IOException {
try (Reader reader = Files.newBufferedReader(file.getAbsoluteFile().toPath())) {
List<CsvLine> lineBeans = new CsvToBeanBuilder<CsvLine>(reader)
.withType(CsvLine.class)
.withSkipLines(1) // skip header
.withSeparator(';') // line separator
.build()
.parse();
// unique violations found within this file
Set<BigDecimal> violations = new HashSet<>();
for (CsvLine line : lineBeans) {
// skip if value in the line is null
if (Objects.isNull(line.getValue())) {
continue;
}
if (!isLineValid(line, matchConditions)) {
violations.add(line.getValue());
}
// if we reached all the violations, then file is not valid
if (violations.size() == matchConditions.size()) {
return false;
}
}
}
return true;
}
/**
* if all predicates return true, then line is valid
*/
public static boolean isLineValid(CsvLine line, List<Predicate<BigDecimal>> conditions) {
return conditions.stream().allMatch(predicate -> predicate.test(line.getValue()));
}
/**
* move the file to the directory specified
*/
public static void moveFile(File file, Path moveTo, StandardCopyOption option) throws IOException {
Files.move(file.toPath(), moveTo.resolve(file.getName()), option);
}
/**
* factory method for Predicate that returns true if compareTo != 0
*/
public static <T extends Comparable<T>> Predicate<T> ne(T target) {
return (value) -> value.compareTo(target) != 0;
}
public static Path createDirectory(Path path) throws IOException {
if (!Files.exists(path) || !Files.isDirectory(path)) {
return Files.createDirectory(path);
}
return path;
}
So, I think you can adapt it to your needs.
I have a folder named Class whose two subdirectories are Lectures and Grades, and within those two subdirectories are txt files. How do I access the subdirectories (and maybe further subdirectories of Lectures and Grades) from the main directory, Class? I know I can use the absolute path in the code, but I would like to start from the starting folder, Class. So far, I have this as my code:
public class Test {
    public static void main(String[] args) {
        File directory = new File("/Users/Desktop/Class");
        File[] folder = directory.listFiles();
        for (File aFile : folder) {
            // Stuck here...
        }
    }
}
Sorry.. I'm new to Java..
You can recursively call the method to read the files in subdirectories:
public static void main(String[] args) {
    File currentDir = new File("/Users/Desktop/Class"); // current directory
    displayDirectoryFiles(currentDir);
}

public static void displayDirectoryFiles(File dir) {
    try {
        File[] files = dir.listFiles();
        for (File file : files) {
            if (file.isDirectory()) {
                System.out.println("directory:" + file.getCanonicalPath());
                displayDirectoryFiles(file); // recurse into the subdirectory
            } else {
                System.out.println("     file:" + file.getCanonicalPath());
            }
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
}
Handle the exception properly; currently it just prints the stack trace.
Have you looked at this answer: Java, List only subdirectories from a directory, not files?
This could also be a solution; there are several ways: http://zetcode.com/articles/javalistdirectory/
// Recursive Java program to print all files
// in a folder (and sub-folders)
import java.io.File;

public class GFG {

    static void RecursivePrint(File[] arr, int index, int level) {
        // terminate condition
        if (index == arr.length)
            return;

        // tabs for internal levels
        for (int i = 0; i < level; i++)
            System.out.print("\t");

        // for files
        if (arr[index].isFile())
            System.out.println(arr[index].getName());

        // for sub-directories
        else if (arr[index].isDirectory()) {
            System.out.println("[" + arr[index].getName() + "]");

            // recursion for sub-directories
            RecursivePrint(arr[index].listFiles(), 0, level + 1);
        }

        // recursion for main directory
        RecursivePrint(arr, ++index, level);
    }

    // Driver Method
    public static void main(String[] args) {
        // Provide full path for directory (change accordingly)
        String maindirpath = "C:\\Users\\Gaurav Miglani\\Desktop\\Test";

        // File object
        File maindir = new File(maindirpath);

        if (maindir.exists() && maindir.isDirectory()) {
            // array for files and sub-directories
            // of directory pointed by maindir
            File arr[] = maindir.listFiles();

            System.out.println("**********************************************");
            System.out.println("Files from main directory : " + maindir);
            System.out.println("**********************************************");

            // Calling recursive method
            RecursivePrint(arr, 0, 0);
        }
    }
}
In addition to Ros5292's answer, this can be achieved using the Apache Commons IO library.
public void iterateFilesAndDirectoriesUsingApacheCommons() {
    Iterator<File> it = FileUtils.iterateFilesAndDirs(new File("/Users/Desktop/"),
            TrueFileFilter.INSTANCE, TrueFileFilter.INSTANCE);
    while (it.hasNext()) {
        System.out.println(it.next().getName());
    }
}
I was listing the files in one folder and then transferring them to another. The problem is the following: when trying to copy them to the other folder, the destination path comes out as E:\Files, which causes some kind of file to be generated instead of the copy landing where it should. I have tried several ways and still can't do it. I leave my code here in case you can help me:
Path algo = Paths.get("E:/Files/");
public void Copy(String origenArchivo, Path algo) {
Path origenPath = Paths.get(origenArchivo);
String s = algo.toAbsolutePath().toString();
System.out.println(s);
Path destinoPath = Paths.get(s);
System.out.println(destinoPath);
String x = destinoPath.toString() + "/";
Path conv = Paths.get(x);
System.out.println(conv);
try {
Files.copy(origenPath, conv, StandardCopyOption.REPLACE_EXISTING);
} catch (IOException ex) {
Logger.getLogger(Metodos.class.getName()).log(Level.SEVERE, null, ex);
}
}
File dir = new File("C:/Users/PC/Desktop/");
public void TravelToFantasy(File dir) {
File listFile[] = dir.listFiles();
if (listFile != null) {
for (int i = 0; i < listFile.length; i++) {
if (listFile[i].isDirectory()) {
Copy(listFile[i]);
} else {
System.out.println(listFile[i].getPath());
System.out.println(destino);
this.Copy(listFile[i].getPath(), algo);
}
}
}
}
I tried appending "/" to the path that Paths.get returns, but it always ends up leaving the path as E:\Files.
Thanks for your help!
You cannot pass a directory to Files.copy.
You don’t need all that conversion to and from strings. Just use Path.resolve instead:
public void copy(String origenArchivo, Path algo)
throws IOException {
Path origenPath = Paths.get(origenArchivo);
Path conv = algo.resolve(origenPath.getFileName());
Files.copy(origenPath, conv, StandardCopyOption.REPLACE_EXISTING);
}
As a side note, the Path/Paths/Files classes are superior to the java.io.File class, because they provide meaningful information if an operation fails. You should not use java.io.File at all:
Path dir = Paths.get("C:/Users/PC/Desktop/");
public void TravelToFantasy(Path dir)
throws IOException {
try (DirectoryStream<Path> listFile = Files.newDirectoryStream(dir)) {
for (Path file : listFile) {
if (Files.isDirectory(file)) {
Copy(file);
} else {
System.out.println(file);
System.out.println(destino);
this.Copy(file, algo);
}
}
}
}
Use java.io.File.separator instead of "/"; this will help your code run on any OS.
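For example (the path segments here are made-up placeholders):

```java
import java.io.File;

public class SeparatorDemo {
    public static void main(String[] args) {
        // Portable: File.separator is "\" on Windows and "/" elsewhere
        File f = new File("reports" + File.separator + "2023" + File.separator + "summary.pdf");
        System.out.println(f.getPath());
        // Even simpler: let the File constructor join the parts for you
        File g = new File(new File("reports", "2023"), "summary.pdf");
        System.out.println(f.getPath().equals(g.getPath()));
    }
}
```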
What is the maximum name length of a temp file in Java? And is the maximum file size dependent on the machine where the temp directory is created, or is it imposed by Java itself?
When should deleteOnExit() be called, and what is the use of this method, given that it only runs when the JVM shuts down? Production servers run 24*7, so files get created continuously, and that becomes a problem for the disk space of the server where the files are created.
To autoclean temp-files older (modified) than XX seconds...
import java.io.File;
import java.io.IOException;
import java.util.HashSet;
public class FileAutoCleaner {
final static FileAutoCleaner singleton = new FileAutoCleaner();
final HashSet<File> bag = new HashSet<File>();
public static FileAutoCleaner getInstance() {
return singleton;
}
// This create the temp file and add to bag for checking
public synchronized File createTempFile(String prefix, String suffix) throws IOException {
File tmp = File.createTempFile(prefix, suffix);
tmp.deleteOnExit();
bag.add(tmp);
return tmp;
}
// Periodically call this function to clean old files
public synchronized void cleanOldFiles(final int secondsOld) {
    long now = (System.currentTimeMillis() / 1000);
    // iterate over a copy so entries can be removed from the bag
    // without a ConcurrentModificationException
    for (File f : new HashSet<File>(bag)) {
        long expired = (f.lastModified() / 1000) + secondsOld;
        if (now >= expired) {
            System.out.println("Deleted file=" + f.getAbsolutePath());
            f.delete();
            bag.remove(f);
        }
    }
}
public static void main(String[] args) throws Exception {
FileAutoCleaner fac = FileAutoCleaner.getInstance();
System.out.println(System.currentTimeMillis() / 1000);
fac.createTempFile("deleteme", "tmp");
for (int i = 0; i < 5; i++) {
System.out.println(System.currentTimeMillis() / 1000);
// delete if older than 2 seconds
fac.cleanOldFiles(2);
Thread.sleep(1000);
}
}
}
What is the maximum name length of the temp file in Java, and does the maximum file size depend on the machine where the temp directory is created, or on something Java-based?
static File generateFile(String prefix, String suffix, File dir) {
    long n = random.nextLong();
    if (n == Long.MIN_VALUE) {
        n = 0; // corner case
    } else {
        n = Math.abs(n);
    }
    return new File(dir, prefix + Long.toString(n) + suffix);
}
So the file name could be any random long, with the given prefix and suffix.
When should deleteOnExit() be called, and what is the use of this method, given that it only runs when the JVM comes down? Production servers run 24*7.
There are some files that need to exist for the life of the application.
For example, when you launch Eclipse you will see a .lock file created to lock the workspace; it gets deleted when Eclipse exits.
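A minimal deleteOnExit() sketch: the file exists for the life of the JVM and is removed during a normal shutdown, which is exactly why long-running 24*7 servers should not rely on it alone and need periodic cleanup like the bag approach above:

```java
import java.io.File;
import java.io.IOException;

public class LockFileDemo {
    public static void main(String[] args) throws IOException {
        File lock = File.createTempFile("workspace", ".lock");
        lock.deleteOnExit(); // registered now, removed during normal JVM shutdown
        System.out.println(lock.exists()); // true for the whole life of the application
        // ... application work happens here, guarded by the lock file ...
        // No explicit delete needed: the JVM removes the file on a clean exit.
    }
}
```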
Maximum file sizes in Java are limited to Long.MAX_VALUE, but this, and the filename length, are really limited by the underlying filesystem, such as EXT4 (Linux) or NTFS (Windows).
String tmpDir = System.getProperty("java.io.tmpdir");
File file = new File(tmpDir + File.separator + fileName + ".tmp");
How can I retrieve size of folder or file in Java?
java.io.File file = new java.io.File("myfile.txt");
file.length();
This returns the length of the file in bytes or 0 if the file does not exist. There is no built-in way to get the size of a folder, you are going to have to walk the directory tree recursively (using the listFiles() method of a file object that represents a directory) and accumulate the directory size for yourself:
public static long folderSize(File directory) {
    long length = 0;
    for (File file : directory.listFiles()) {
        if (file.isFile())
            length += file.length();
        else
            length += folderSize(file);
    }
    return length;
}
WARNING: This method is not sufficiently robust for production use. directory.listFiles() may return null and cause a NullPointerException. It also doesn't consider symlinks and possibly has other failure modes; prefer one of the more robust approaches below.
Using the Java 7 NIO API, calculating the folder size can be done a lot quicker.
Here is a ready to run example that is robust and won't throw an exception. It will log directories it can't enter or had trouble traversing. Symlinks are ignored, and concurrent modification of the directory won't cause more trouble than necessary.
/**
* Attempts to calculate the size of a file or directory.
*
* <p>
* Since the operation is non-atomic, the returned value may be inaccurate.
* However, this method is quick and does its best.
*/
public static long size(Path path) {
final AtomicLong size = new AtomicLong(0);
try {
Files.walkFileTree(path, new SimpleFileVisitor<Path>() {
@Override
public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) {
size.addAndGet(attrs.size());
return FileVisitResult.CONTINUE;
}
@Override
public FileVisitResult visitFileFailed(Path file, IOException exc) {
System.out.println("skipped: " + file + " (" + exc + ")");
// Skip folders that can't be traversed
return FileVisitResult.CONTINUE;
}
@Override
public FileVisitResult postVisitDirectory(Path dir, IOException exc) {
if (exc != null)
System.out.println("had trouble traversing: " + dir + " (" + exc + ")");
// Ignore errors traversing a folder
return FileVisitResult.CONTINUE;
}
});
} catch (IOException e) {
throw new AssertionError("walkFileTree will not throw IOException if the FileVisitor does not");
}
return size.get();
}
You need FileUtils#sizeOfDirectory(File) from commons-io.
Note that you will need to manually check whether the file is a directory as the method throws an exception if a non-directory is passed to it.
WARNING: This method (as of commons-io 2.4) has a bug and may throw IllegalArgumentException if the directory is concurrently modified.
In Java 8:
long size = Files.walk(path).mapToLong( p -> p.toFile().length() ).sum();
It would be nicer to use Files::size in the map step but it throws a checked exception.
UPDATE:
You should also be aware that this can throw an exception if some of the files/folders are not accessible. See this question and another solution using Guava.
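One way to hedge both issues, closing the stream and surviving unreadable entries, is a sketch like this (error handling simplified to "count an unreadable file as 0 bytes"):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class FolderSize {

    // Sum the sizes of regular files under root, counting unreadable entries as 0.
    public static long size(Path root) throws IOException {
        try (var paths = Files.walk(root)) { // close the stream to release directory handles
            return paths.filter(Files::isRegularFile)
                        .mapToLong(p -> {
                            try {
                                return Files.size(p);
                            } catch (IOException e) {
                                return 0L; // inaccessible file: skip rather than blow up
                            }
                        })
                        .sum();
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("size-demo");
        Files.write(dir.resolve("a.bin"), new byte[100]);
        Files.write(dir.resolve("b.bin"), new byte[150]);
        System.out.println(size(dir)); // 250
    }
}
```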
public static long getFolderSize(File dir) {
    long size = 0;
    for (File file : dir.listFiles()) {
        if (file.isFile()) {
            System.out.println(file.getName() + " " + file.length());
            size += file.length();
        } else {
            size += getFolderSize(file);
        }
    }
    return size;
}
For Java 8 this is one right way to do it:
Files.walk(new File("D:/temp").toPath())
        .map(f -> f.toFile())
        .filter(f -> f.isFile())
        .mapToLong(f -> f.length())
        .sum()
It is important to filter out all directories, because the length method isn't guaranteed to be 0 for directories.
At least this code delivers the same size information like Windows Explorer itself does.
Here's a straightforward way to get a general File's size (works for both directories and regular files):
public static long getSize(File file) {
long size;
    if (file.isDirectory()) {
        size = 0;
        File[] children = file.listFiles();
        if (children != null) { // listFiles() returns null on I/O error
            for (File child : children) {
                size += getSize(child);
            }
        }
    } else {
        size = file.length();
    }
return size;
}
Edit: Note that this is probably going to be a time-consuming operation. Don't run it on the UI thread.
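For instance, here is a minimal sketch (my own, using a hypothetical target directory) that moves the computation onto a background thread with an ExecutorService:

```java
import java.io.File;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SizeOffThread {
    // Recursive size, guarded against listFiles() returning null.
    static long sizeOf(File f) {
        if (f.isFile()) return f.length();
        File[] children = f.listFiles();
        long total = 0;
        if (children != null) {
            for (File c : children) total += sizeOf(c);
        }
        return total;
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        // Hypothetical target directory; replace with your own.
        File dir = new File(System.getProperty("java.io.tmpdir"));
        Future<Long> size = pool.submit(() -> sizeOf(dir));
        // get() blocks; UI code would poll or use a completion callback instead.
        System.out.println("bytes: " + size.get());
        pool.shutdown();
    }
}
```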
Also, here (taken from https://stackoverflow.com/a/5599842/1696171) is a nice way to get a user-readable String from the long returned:
public static String getReadableSize(long size) {
if(size <= 0) return "0";
final String[] units = new String[] { "B", "KB", "MB", "GB", "TB" };
int digitGroups = (int) (Math.log10(size)/Math.log10(1024));
return new DecimalFormat("#,##0.#").format(size/Math.pow(1024, digitGroups))
+ " " + units[digitGroups];
}
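Note that DecimalFormat uses the default locale, so the decimal separator varies (e.g. "1,5 KB" in some locales). A locale-pinned variant of the same helper (my adaptation):

```java
import java.text.DecimalFormat;
import java.text.DecimalFormatSymbols;
import java.util.Locale;

public class ReadableSize {
    public static String format(long size) {
        if (size <= 0) return "0";
        final String[] units = new String[] { "B", "KB", "MB", "GB", "TB" };
        int digitGroups = (int) (Math.log10(size) / Math.log10(1024));
        // Pin '.' as the decimal separator regardless of default locale
        DecimalFormat df = new DecimalFormat("#,##0.#",
                DecimalFormatSymbols.getInstance(Locale.US));
        return df.format(size / Math.pow(1024, digitGroups)) + " " + units[digitGroups];
    }

    public static void main(String[] args) {
        System.out.println(format(1536));    // 1.5 KB
        System.out.println(format(1048576)); // 1 MB
    }
}
```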
File.length() (Javadoc).
Note that this doesn't work for directories; per the javadoc, the return value for a directory is unspecified.
For a directory, what do you want? If it's the total size of all files underneath it, you can recursively walk children using File.list() and File.isDirectory() and sum their sizes.
The File object has a length method:
File f = new File("your/file/name");
long size = f.length();
If you want to use Java 8 NIO API, the following program will print the size, in bytes, of the directory it is located in.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;
public class PathSize {
    public static void main(String[] args) {
        Path path = Paths.get(".");
        long size = calculateSize(path);
        System.out.println(size);
    }
    /**
     * Returns the size, in bytes, of the specified <tt>path</tt>. If the given
     * path is a regular file, trivially its size is returned. Else the path is
     * a directory and its contents are recursively explored, returning the
     * total sum of all files within the directory.
     * <p>
     * If an I/O exception occurs, it is suppressed within this method and
     * <tt>0</tt> is returned as the size of the specified <tt>path</tt>.
     *
     * @param path path whose size is to be returned
     * @return size of the specified path
     */
    public static long calculateSize(Path path) {
        try {
            if (Files.isRegularFile(path)) {
                return Files.size(path);
            }
            // Close the stream from Files.list to avoid leaking a file handle
            try (Stream<Path> entries = Files.list(path)) {
                return entries.mapToLong(PathSize::calculateSize).sum();
            }
        } catch (IOException e) {
            return 0L;
        }
    }
}
The calculateSize method is universal for Path objects, so it also works for files.
Note that if a file or directory is inaccessible, in this case the returned size of the path object will be 0.
Works for Android and Java
Works for both folders and files
Checks for null pointers where needed
Ignores symbolic links (aka shortcuts)
Production ready!
Source code:
public long fileSize(File root) {
if(root == null){
return 0;
}
if(root.isFile()){
return root.length();
}
try {
if(isSymlink(root)){
return 0;
}
} catch (IOException e) {
e.printStackTrace();
return 0;
}
long length = 0;
File[] files = root.listFiles();
if(files == null){
return 0;
}
for (File file : files) {
length += fileSize(file);
}
return length;
}
private static boolean isSymlink(File file) throws IOException {
File canon;
if (file.getParent() == null) {
canon = file;
} else {
File canonDir = file.getParentFile().getCanonicalFile();
canon = new File(canonDir, file.getName());
}
return !canon.getCanonicalFile().equals(canon.getAbsoluteFile());
}
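On Java 7+, NIO offers a direct check that can replace the canonical-path trick above (behavior differs slightly: it inspects only the file itself, not links among its parents):

```java
import java.io.File;
import java.nio.file.Files;

public class SymlinkCheck {
    static boolean isSymlink(File file) {
        // Files.isSymbolicLink never throws; it returns false if the file
        // does not exist or the check cannot be performed.
        return Files.isSymbolicLink(file.toPath());
    }

    public static void main(String[] args) {
        System.out.println(isSymlink(new File("."))); // prints false
    }
}
```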
I've tested du -c <folderpath> and it is about 2x faster than nio.Files or recursion
private static long getFolderSize(File folder){
if (folder != null && folder.exists() && folder.canRead()){
try {
Process p = new ProcessBuilder("du","-c",folder.getAbsolutePath()).start();
BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()));
String total = "";
for (String line; null != (line = r.readLine());)
total = line;
r.close();
p.waitFor();
if (total.length() > 0 && total.endsWith("total"))
return Long.parseLong(total.split("\\s+")[0]) * 1024;
} catch (Exception ex) {
ex.printStackTrace();
}
}
return -1;
}
For Windows, using java.io, this recursive function is useful.
public static long folderSize(File directory) {
long length = 0;
if (directory.isFile())
length += directory.length();
else{
for (File file : directory.listFiles()) {
if (file.isFile())
length += file.length();
else
length += folderSize(file);
}
}
return length;
}
This is tested and working properly on my end.
private static long getFolderSize(Path folder) {
    // try-with-resources closes the stream returned by Files.walk
    try (Stream<Path> walk = Files.walk(folder)) {
        return walk.filter(p -> p.toFile().isFile())
                   .mapToLong(p -> p.toFile().length())
                   .sum();
    } catch (IOException e) {
        e.printStackTrace();
        return 0L;
    }
}
public long folderSize(String directory) {
    File curDir = new File(directory);
    long total = 0;
    for (File f : curDir.listFiles()) {
        if (f.isDirectory()) {
            long length = 0;
            for (File child : f.listFiles()) {
                length += child.length();
            }
            System.out.println("Directory: " + f.getName() + " " + length + " bytes");
            total += length;
        } else {
            System.out.println("File: " + f.getName() + " " + f.length() + " bytes");
            total += f.length();
        }
    }
    return total;
}
After a lot of researching and looking into the different solutions proposed here at StackOverflow, I finally decided to write my own. My purpose is to have a no-throw mechanism, because I don't want to crash if the API is unable to fetch the folder size. This method is not suitable for multi-threaded scenarios.
First of all I want to check for valid directories while traversing down the file system tree.
private static boolean isValidDir(File dir) {
    return dir != null && dir.exists() && dir.isDirectory();
}
Second I do not want my recursive call to go into symlinks (softlinks) and include the size in total aggregate.
public static boolean isSymlink(File file) throws IOException {
File canon;
if (file.getParent() == null) {
canon = file;
} else {
canon = new File(file.getParentFile().getCanonicalFile(),
file.getName());
}
return !canon.getCanonicalFile().equals(canon.getAbsoluteFile());
}
Finally, my recursion-based implementation to fetch the size of the specified directory. Notice the null check for dir.listFiles(); according to the javadoc, this method can return null.
public static long getDirSize(File dir){
if (!isValidDir(dir))
return 0L;
File[] files = dir.listFiles();
//Guard for null pointer exception on files
if (files == null){
return 0L;
}else{
long size = 0L;
for(File file : files){
if (file.isFile()){
size += file.length();
}else{
try{
if (!isSymlink(file)) size += getDirSize(file);
}catch (IOException ioe){
//digest exception
}
}
}
return size;
}
}
As icing on the cake, an API to get the total size of a list of files (which might be all the files and folders under a root).
public static long getDirSize(List<File> files){
long size = 0L;
for(File file : files){
if (file.isDirectory()){
size += getDirSize(file);
} else {
size += file.length();
}
}
return size;
}
In Linux, if you want to sort directories by size: du -hs * | sort -h
You can use Apache Commons IO to find the folder size easily.
If you are on maven, please add the following dependency in your pom.xml file.
<!-- https://mvnrepository.com/artifact/commons-io/commons-io -->
<dependency>
<groupId>commons-io</groupId>
<artifactId>commons-io</artifactId>
<version>2.6</version>
</dependency>
If not a fan of Maven, download the following jar and add it to the class path.
https://repo1.maven.org/maven2/commons-io/commons-io/2.6/commons-io-2.6.jar
public long getFolderSize() {
File folder = new File("src/test/resources");
long size = FileUtils.sizeOfDirectory(folder);
return size; // in bytes
}
To get file size via Commons IO,
File file = new File("ADD YOUR PATH TO FILE");
long fileSize = FileUtils.sizeOf(file);
System.out.println(fileSize); // bytes
It is also achievable via Google Guava
For Maven, add the following:
<!-- https://mvnrepository.com/artifact/com.google.guava/guava -->
<dependency>
<groupId>com.google.guava</groupId>
<artifactId>guava</artifactId>
<version>28.1-jre</version>
</dependency>
If not using Maven, add the following to class path
https://repo1.maven.org/maven2/com/google/guava/guava/28.1-jre/guava-28.1-jre.jar
public long getFolderSizeViaGuava() {
    File folder = new File("src/test/resources");
    // fileTreeTraverser() was removed from recent Guava versions;
    // fileTraverser() is its replacement
    Iterable<File> files = Files.fileTraverser()
            .breadthFirst(folder);
    long size = StreamSupport.stream(files.spliterator(), false)
            .filter(f -> f.isFile())
            .mapToLong(File::length).sum();
    return size;
}
To get file size,
File file = new File("PATH TO YOUR FILE");
long s = file.length();
System.out.println(s);
fun getSize(context: Context, uri: Uri?): Float? {
    if (uri == null) return null
    var fileSize: String? = null
    val cursor: Cursor? = context.contentResolver
        .query(uri, null, null, null, null)
    try {
        if (cursor != null && cursor.moveToFirst()) {
            // get file size
            val sizeIndex: Int = cursor.getColumnIndex(OpenableColumns.SIZE)
            if (sizeIndex != -1 && !cursor.isNull(sizeIndex)) {
                fileSize = cursor.getString(sizeIndex)
            }
        }
    } finally {
        cursor?.close()
    }
    // safe call instead of !! avoids an NPE when no size column is present
    return fileSize?.toFloat()?.div(1024 * 1024)
}