Java zip files from streams instantly without using byte[] - java

I want to compress multiple files into a zip file and then download it to the client. I'm dealing with big files. For the moment I'm using this:
#RequestMapping(value = "/download", method = RequestMethod.GET, produces = "application/zip")
public ResponseEntity <StreamingResponseBody> getFile() throws Exception {
File zippedFile = new File("test.zip");
FileOutputStream fos = new FileOutputStream(zippedFile);
ZipOutputStream zos = new ZipOutputStream(fos);
InputStream[] streams = getStreamsFromAzure();
for (InputStream stream: streams) {
addToZipFile(zos, stream);
}
final InputStream fecFile = new FileInputStream(zippedFile);
Long fileLength = zippedFile.length();
StreamingResponseBody stream = outputStream - >
readAndWrite(fecFile, outputStream);
return ResponseEntity.ok()
.header(HttpHeaders.ACCESS_CONTROL_EXPOSE_HEADERS, HttpHeaders.CONTENT_DISPOSITION)
.header(HttpHeaders.CONTENT_DISPOSITION, "attachment;filename=" + "download.zip")
.contentLength(fileLength)
.contentType(MediaType.parseMediaType("application/zip"))
.body(stream);
}
private void addToZipFile(ZipOutputStream zos, InputStream fis) throws IOException {
ZipEntry zipEntry = new ZipEntry(generateFileName());
zos.putNextEntry(zipEntry);
byte[] bytes = new byte[1024];
int length;
while ((length = fis.read(bytes)) >= 0) {
zos.write(bytes, 0, length);
}
zos.closeEntry();
fis.close();
}
This takes a long time: all files are zipped before the download starts, and for large files that can take a while. This is the loop responsible for the delay:
while ((length = fis.read(bytes)) >= 0) {
zos.write(bytes, 0, length);
}
So is there a way to start the download immediately, while the files are being zipped?

Try this instead. Rather than using the ZipOutputStream to wrap a FileOutputStream, writing the zip to a file and then copying it to the client output stream, wrap the client output stream directly with the ZipOutputStream so that zip entries and data go straight to the client as they are written. If you also want to store the zip on the server, have the ZipOutputStream write to a split output stream that writes to both destinations at once.
@RequestMapping(value = "/download", method = RequestMethod.GET, produces = "application/zip")
public ResponseEntity<StreamingResponseBody> getFile() throws Exception {
    InputStream[] streamsToZip = getStreamsFromAzure();

    // You could cache already created zip files, maybe something like this:
    // String[] pathsOfResourcesToZip = getPathsFromAzure();
    // String zipId = getZipId(pathsOfResourcesToZip);
    // if (isZipExist(zipId))
    //     // return that zip file
    // else do the following

    StreamingResponseBody streamResponse = clientOut -> {
        FileOutputStream zipFileOut = new FileOutputStream("test.zip");
        ZipOutputStream zos = new ZipOutputStream(new SplitOutputStream(clientOut, zipFileOut));
        for (InputStream in : streamsToZip) {
            addToZipFile(zos, in);
        }
        zos.close(); // writes the central directory and closes both underlying streams
    };

    return ResponseEntity.ok()
            .header(HttpHeaders.ACCESS_CONTROL_EXPOSE_HEADERS, HttpHeaders.CONTENT_DISPOSITION)
            .header(HttpHeaders.CONTENT_DISPOSITION, "attachment;filename=" + "download.zip")
            .contentType(MediaType.parseMediaType("application/zip"))
            .body(streamResponse);
}

private void addToZipFile(ZipOutputStream zos, InputStream fis) throws IOException {
    ZipEntry zipEntry = new ZipEntry(generateFileName());
    zos.putNextEntry(zipEntry);
    byte[] bytes = new byte[1024];
    int length;
    while ((length = fis.read(bytes)) >= 0) {
        zos.write(bytes, 0, length);
    }
    zos.closeEntry();
    fis.close();
}

public static class SplitOutputStream extends OutputStream {

    private final OutputStream out1;
    private final OutputStream out2;

    public SplitOutputStream(OutputStream out1, OutputStream out2) {
        this.out1 = out1;
        this.out2 = out2;
    }

    @Override public void write(int b) throws IOException {
        out1.write(b);
        out2.write(b);
    }

    @Override public void write(byte[] b) throws IOException {
        out1.write(b);
        out2.write(b);
    }

    @Override public void write(byte[] b, int off, int len) throws IOException {
        out1.write(b, off, len);
        out2.write(b, off, len);
    }

    @Override public void flush() throws IOException {
        out1.flush();
        out2.flush();
    }

    /** Closes all the streams. If there was an IOException this throws the first one. */
    @Override public void close() throws IOException {
        IOException ioException = null;
        for (OutputStream o : new OutputStream[] { out1, out2 }) {
            try {
                o.close();
            } catch (IOException e) {
                if (ioException == null) {
                    ioException = e;
                }
            }
        }
        if (ioException != null) {
            throw ioException;
        }
    }
}
For the first request for a set of resources you won't know the size of the resulting zip file, so you can't send a Content-Length with the response, since you are streaming the file as it is zipped.
But if you expect repeated requests for the same set of resources, you can cache your zip files and simply return them on any subsequent requests. You will also know the length of the cached zip file, so you can send that in the response as well.
If you want to do this, you will need to consistently derive the same identifier for each combination of resources to be zipped, so that you can check whether those resources were already zipped and return the cached file if they were. For example, you could sort the ids (maybe full paths) of the resources to be zipped and concatenate or hash them to create an id for the zip file, as in the sketch below.
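A minimal sketch of such an identifier, assuming the resource paths are available as strings (the class and method names here are illustrative, not from the code above):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.List;
import java.util.stream.Collectors;

public final class ZipIds {
    // Builds a stable id for a set of resource paths by sorting and hashing them,
    // so the same combination of resources always maps to the same cached zip name.
    public static String zipId(List<String> resourcePaths) {
        String joined = resourcePaths.stream()
                .sorted()
                .collect(Collectors.joining("\n"));
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            StringBuilder hex = new StringBuilder();
            for (byte b : md.digest(joined.getBytes(StandardCharsets.UTF_8))) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("SHA-256 not available", e);
        }
    }
}

The cached archive could then be stored as, say, zipId + ".zip" and served with a known Content-Length on subsequent requests.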

Related

Spring boot: Download zipped large files while they are being processed on the server

In my app I'm zipping and then downloading large files. The files are located in Azure, so I read each file from a stream and zip them one after another; the zip file can only be downloaded after all files have been zipped. Here's my code:
#RequestMapping(value = "{analyseId}/download", method = RequestMethod.GET, produces = "application/zip")
public ResponseEntity<Resource> download(#PathVariable List<String> paths) throws IOException {
String zipFileName = "zipFiles.zip";
File zipFile = new File(zipFileName);
FileOutputStream fos = new FileOutputStream(zipFile);
ZipOutputStream zos = new ZipOutputStream(fos);
for (String path : paths) {
InputStream fis = azureDataLakeStoreService.readFile(path);
addToZipFile(path , zos, fis);
}
zos.close();
fos.close();
BufferedInputStream zipFileInputStream = new BufferedInputStream(new FileInputStream(zipFile.getAbsolutePath()));
InputStreamResource resource = new InputStreamResource(zipFileInputStream);
zipFile.delete();
return ResponseEntity.ok()
.header(HttpHeaders.ACCESS_CONTROL_EXPOSE_HEADERS, HttpHeaders.CONTENT_DISPOSITION)
.header(HttpHeaders.CONTENT_DISPOSITION, "attachment;filename=" + zipFileName)
.contentType(MediaType.parseMediaType("application/octet-stream"))
.body(resource);
}
private static void addToZipFile(String path, ZipOutputStream zos, InputStream fis) throws IOException {
ZipEntry zipEntry = new ZipEntry(FilenameUtils.getName(path));
zos.putNextEntry(zipEntry);
byte[] bytes = new byte[1024];
int length;
while ((length = fis.read(bytes)) >= 0) {
zos.write(bytes, 0, length);
}
zos.closeEntry();
fis.close();
}
However, on Azure the request timeout is set to 230 seconds and cannot be changed, and for big files it takes longer than that to load and zip the files on the server, so the connection with the client is lost in the meantime.
So my question: since I'm getting the data from a stream, can these operations be done simultaneously, i.e. read the stream and download it at the same time instead of waiting for the whole file? If there is any other idea, please share it here.
Thanks.
The answer is to not download the file to the server and then send it to the client, but to stream it to the client directly. Here's the code:
@RequestMapping(value = "/download", method = RequestMethod.GET)
public StreamingResponseBody download(@PathVariable String path) throws IOException {
    final InputStream fecFile = azureDataLakeStoreService.readFile(path);
    return (os) -> {
        readAndWrite(fecFile, os);
    };
}

private void readAndWrite(final InputStream is, OutputStream os) throws IOException {
    byte[] data = new byte[2048];
    int read = 0;
    while ((read = is.read(data)) >= 0) {
        os.write(data, 0, read);
    }
    os.flush();
}
I also added this configuration to ApplicationInit:
@Configuration
public static class WebConfig extends WebMvcConfigurerAdapter {

    @Override
    public void configureAsyncSupport(AsyncSupportConfigurer configurer) {
        configurer.setDefaultTimeout(-1);
        configurer.setTaskExecutor(asyncTaskExecutor());
    }

    @Bean
    public AsyncTaskExecutor asyncTaskExecutor() {
        return new SimpleAsyncTaskExecutor("async");
    }
}

Downloading large files via Spring MVC

I have a rest method for downloading files which works. But, it seems that the download doesn't start on the web client until the file is completely copied to the output stream, which can take a while for large files.
#GetMapping(value = "download-single-report")
public void downloadSingleReport(HttpServletResponse response) {
File dlFile = new File("some_path");
try {
response.setContentType("application/pdf");
response.setHeader("Content-disposition", "attachment; filename="+ dlFile.getName());
InputStream inputStream = new FileInputStream(dlFile);
IOUtils.copy(inputStream, response.getOutputStream());
response.flushBuffer();
} catch (FileNotFoundException e) {
// error
} catch (IOException e) {
// error
}
}
Is there a way to "stream" the file such that the download starts as soon as I begin writing to the output stream?
I also have a similar method that takes multiple files and puts them in a zip, adding each zip entry to the zip stream, and the download also only begins after the zip has been created:
ZipEntry zipEntry = new ZipEntry(entryName);
zipOutStream.putNextEntry(zipEntry);
IOUtils.copy(fileStream, zipOutStream);
You can use InputStreamResource to return a stream result. I tested it and it starts copying to the output immediately.
#GetMapping(value = "download-single-report")
public ResponseEntity<Resource> downloadSingleReport() {
File dlFile = new File("some_path");
if (!dlFile.exists()) {
return ResponseEntity.notFound().build();
}
try {
try (InputStream stream = new FileInputStream(dlFile)) {
InputStreamResource streamResource = new InputStreamResource(stream);
return ResponseEntity.ok()
.contentType(MediaType.APPLICATION_PDF)
.header(HttpHeaders.CONTENT_DISPOSITION, "attachment; filename=\"" + dlFile.getName() + "\"")
.body(streamResource);
}
/*
// FileSystemResource alternative
FileSystemResource fileSystemResource = new FileSystemResource(dlFile);
return ResponseEntity.ok()
.contentType(MediaType.APPLICATION_PDF)
.header(HttpHeaders.CONTENT_DISPOSITION, "attachment; filename=\"" + dlFile.getName() + "\"")
.body(fileSystemResource);
*/
} catch (IOException e) {
return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).build();
}
}
The second alternative is a partial download method.
#GetMapping(value = "download-single-report-partial")
public void downloadSingleReportPartial(HttpServletRequest request, HttpServletResponse response) {
File dlFile = new File("some_path");
if (!dlFile.exists()) {
response.setStatus(HttpStatus.NOT_FOUND.value());
return;
}
try {
writeRangeResource(request, response, dlFile);
} catch (Exception ex) {
response.setStatus(HttpStatus.INTERNAL_SERVER_ERROR.value());
}
}
public static void writeRangeResource(HttpServletRequest request, HttpServletResponse response, File file) throws IOException {
String range = request.getHeader("Range");
if (StringUtils.hasLength(range)) {
//http
ResourceRegion region = getResourceRegion(file, range);
long start = region.getPosition();
long end = start + region.getCount() - 1;
long resourceLength = region.getResource().contentLength();
end = Math.min(end, resourceLength - 1);
long rangeLength = end - start + 1;
response.setStatus(206);
response.addHeader("Accept-Ranges", "bytes");
response.addHeader("Content-Range", String.format("bytes %s-%s/%s", start, end, resourceLength));
response.setContentLengthLong(rangeLength);
try (OutputStream outputStream = response.getOutputStream()) {
try (InputStream inputStream = new BufferedInputStream(new FileInputStream(file))) {
StreamUtils.copyRange(inputStream, outputStream, start, end);
}
}
} else {
response.setStatus(200);
response.addHeader("Accept-Ranges", "bytes");
response.setContentLengthLong(file.length());
try (OutputStream outputStream = response.getOutputStream()) {
try (InputStream inputStream = new BufferedInputStream(new FileInputStream(file))) {
StreamUtils.copy(inputStream, outputStream);
}
}
}
}
private static ResourceRegion getResourceRegion(File file, String range) {
List<HttpRange> httpRanges = HttpRange.parseRanges(range);
if (httpRanges.isEmpty()) {
return new ResourceRegion(new FileSystemResource(file), 0, file.length());
}
return httpRanges.get(0).toResourceRegion(new FileSystemResource(file));
}
Spring Framework Resource Response Process
Resource responses are handled by the ResourceHttpMessageConverter class. In its writeContent method, StreamUtils.copy is called.
package org.springframework.http.converter;

public class ResourceHttpMessageConverter extends AbstractHttpMessageConverter<Resource> {
    ..
    protected void writeContent(Resource resource, HttpOutputMessage outputMessage)
            throws IOException, HttpMessageNotWritableException {
        try {
            InputStream in = resource.getInputStream();
            try {
                StreamUtils.copy(in, outputMessage.getBody());
            }
            catch (NullPointerException ex) {
                // ignore, see SPR-13620
            }
            finally {
                try {
                    in.close();
                }
                catch (Throwable ex) {
                    // ignore, see SPR-12999
                }
            }
        }
        catch (FileNotFoundException ex) {
            // ignore, see SPR-12999
        }
    }
}
out.write(buffer, 0, bytesRead); sends data to the output immediately (I have tested this on my local machine). When all the data has been transferred, out.flush(); is called.
package org.springframework.util;

public abstract class StreamUtils {
    ..
    public static int copy(InputStream in, OutputStream out) throws IOException {
        Assert.notNull(in, "No InputStream specified");
        Assert.notNull(out, "No OutputStream specified");
        int byteCount = 0;
        int bytesRead;
        for (byte[] buffer = new byte[4096]; (bytesRead = in.read(buffer)) != -1; byteCount += bytesRead) {
            out.write(buffer, 0, bytesRead);
        }
        out.flush();
        return byteCount;
    }
}
Use
IOUtils.copyLarge(InputStream input, OutputStream output)
Copy bytes from a large (over 2GB) InputStream to an OutputStream.
This method buffers the input internally, so there is no need to use a BufferedInputStream.
The buffer size is given by DEFAULT_BUFFER_SIZE.
or
IOUtils.copyLarge(InputStream input, OutputStream output, byte[] buffer)
Copy bytes from a large (over 2GB) InputStream to an OutputStream.
This method uses the provided buffer, so there is no need to use a BufferedInputStream.
http://commons.apache.org/proper/commons-io/javadocs/api-2.4/org/apache/commons/io/IOUtils.html
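For the question's use case, a rough usage sketch (this assumes Apache Commons IO is on the classpath; dlFile and response come from the original method):

try (InputStream in = new FileInputStream(dlFile);
     OutputStream out = response.getOutputStream()) {
    // copyLarge streams in chunks and handles inputs over 2 GB;
    // the overload taking a byte[] lets you choose the buffer size.
    IOUtils.copyLarge(in, out);
    out.flush();
}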
You can use "StreamingResponseBody" File download would start immediately while the chunks are written to the output stream. Below is the code snippet
@GetMapping(value = "/download-single-report")
public ResponseEntity<StreamingResponseBody> downloadSingleReport(final HttpServletResponse response) {
    final File dlFile = new File("Sample.pdf");
    response.setContentType("application/pdf");
    response.setHeader("Content-Disposition", "attachment;filename=" + dlFile.getName());
    StreamingResponseBody stream = out -> FileCopyUtils.copy(new FileInputStream(dlFile), out);
    return new ResponseEntity<>(stream, HttpStatus.OK);
}
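For the multi-file zip case from the question, the same StreamingResponseBody idea can wrap the response output stream in a ZipOutputStream so the download begins as the first entry is written. A rough sketch, not tested against the original code; getFileStreams() is a hypothetical helper returning entry names mapped to input streams:

@GetMapping(value = "download-zip", produces = "application/zip")
public ResponseEntity<StreamingResponseBody> downloadZip() {
    Map<String, InputStream> files = getFileStreams(); // hypothetical helper
    StreamingResponseBody body = out -> {
        try (ZipOutputStream zos = new ZipOutputStream(out)) {
            for (Map.Entry<String, InputStream> entry : files.entrySet()) {
                zos.putNextEntry(new ZipEntry(entry.getKey()));
                IOUtils.copy(entry.getValue(), zos); // each entry streams straight to the client
                zos.closeEntry();
                entry.getValue().close();
            }
        }
    };
    return ResponseEntity.ok()
            .header(HttpHeaders.CONTENT_DISPOSITION, "attachment; filename=reports.zip")
            .contentType(MediaType.parseMediaType("application/zip"))
            .body(body);
}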

Why are InputStream instances closed when referenced within an ObservableMap?

I have an ObservableMap to which resource files are added.
private ObservableMap<String, InputStream> resourceFilesData;
resourceFilesData = new ObservableMapWrapper<String, InputStream>(
new HashMap<String, InputStream>()
);
And InputStreams are added this way:
resourceFilesData.put(f.getName(), new FileInputStream(f));
And finally, when I want to use the streams, they turn out to be closed!
Why? I can't find the reason.
Maybe there is some way to catch the moment when a stream gets closed (for debugging)?
Here is how the streams are used:
private void pack() throws JAXBException, IOException {
    HashMap<String, InputStream> resources = new HashMap<>();
    byte[] buf = new byte[1024];
    ZipOutputStream zos = new ZipOutputStream(new FileOutputStream("../" + fwData.getFileName() + ".iolfw"));
    File xml = fwData.marshal();
    InputStream xmlStream = new FileInputStream(xml);
    resources.put(xml.getName(), xmlStream);
    resources.putAll(resourceFilesData);
    for (Map.Entry<String, InputStream> data : resources.entrySet()) {
        InputStream input = data.getValue();
        zos.putNextEntry(new ZipEntry(data.getKey()));
        for (int readNum = 0; (readNum = input.read(buf)) != -1; ) {
            zos.write(buf, 0, readNum);
        }
        zos.closeEntry();
        input.close();
    }
    zos.close();
    resources.remove(xmlStream);
    xml.delete();
}
trace:
http://pastebin.com/hE21ECL9
I don't know the reason for that behaviour, but you can try to debug the problem using an inherited class:
class FileInputStreamInh extends FileInputStream {

    public FileInputStreamInh(File file) throws FileNotFoundException {
        super(file);
    }

    @Override
    public void close() throws IOException {
        super.close(); // <-- breakpoint here
    }
}
So, instead of creating a FileInputStream, you should create a FileInputStreamInh.
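Usage then looks the same as before, just with the subclass (using the map from the question):

resourceFilesData.put(f.getName(), new FileInputStreamInh(f));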

ZipOutputStream produces corrupted zip file on Android

I've implemented backup of user data in the app using a zip archive: I copy the database and shared preferences files into the archive and calculate MD5 checksums of the input files to prevent the user from modifying the backup data.
To restore from the archive, I unzip the backup file to a temporary directory, check the checksums, and then copy the preferences/database files into the corresponding folders.
Some of my users are complaining that the app generates corrupted backup files (zip files are indeed corrupted).
Here is the code that compresses all the files into the zip file:
public File backup(String filename) {
    File file = new File(getBackupDirectory(), filename);
    FileOutputStream fileOutputStream = null;
    ZipOutputStream stream = null;
    try {
        String settingsMD5 = null;
        String databaseMD5 = null;
        if (file.exists())
            file.delete();
        fileOutputStream = new FileOutputStream(file);
        stream = new ZipOutputStream(new BufferedOutputStream(fileOutputStream));
        File database = getDatabasePath(databaseFileName);
        File dataDirectory = getFilesDir();
        if (dataDirectory != null) {
            File settings = new File(dataDirectory.getParentFile(), "/shared_prefs/" + PREFERENCES_FILENAME);
            settingsMD5 = zipFile("preferences", stream, settings);
        }
        databaseMD5 = zipFile("database.db", stream, database);
        JSONObject jsonObject = new JSONObject();
        try {
            jsonObject.put(META_DATE, new SimpleDateFormat(DATE_FORMAT, Locale.US).format(new Date()));
            jsonObject.put(META_DATABASE, databaseMD5);
            jsonObject.put(META_SHARED_PREFS, settingsMD5);
        } catch (Exception e) {
            e.printStackTrace();
        }
        InputStream metadata = new ByteArrayInputStream(jsonObject.toString().getBytes("UTF-8"));
        zipInputStream(stream, metadata, new ZipEntry("metadata"));
        stream.finish();
        stream.close();
        stream = null;
        return file;
    } catch (FileNotFoundException e) {
        // handling errors
    } catch (IOException e) {
        // handling errors
    }
    return null;
}
private String zipFile(String name, ZipOutputStream zipStream, File file) throws FileNotFoundException, IOException {
    ZipEntry zipEntry = new ZipEntry(name);
    return zipInputStream(zipStream, new FileInputStream(file), zipEntry);
}

private String zipInputStream(ZipOutputStream zipStream, InputStream fileInputStream, ZipEntry zipEntry) throws IOException {
    InputStream inputStream = new BufferedInputStream(fileInputStream);
    MessageDigest messageDigest = null;
    try {
        messageDigest = MessageDigest.getInstance("MD5");
        if (messageDigest != null)
            inputStream = new DigestInputStream(inputStream, messageDigest);
    } catch (NoSuchAlgorithmException e) {
    }
    zipStream.putNextEntry(zipEntry);
    inputToOutput(inputStream, zipStream);
    zipStream.closeEntry();
    inputStream.close();
    if (messageDigest != null) {
        return getDigestString(messageDigest.digest());
    }
    return null;
}

private String getDigestString(byte[] digest) {
    StringBuffer hexString = new StringBuffer();
    for (int i = 0; i < digest.length; i++) {
        String hex = Integer.toHexString(0xFF & digest[i]);
        if (hex.length() == 1) {
            hex = new StringBuilder("0").append(hex).toString();
        }
        hexString.append(hex);
    }
    return hexString.toString();
}

private void inputToOutput(InputStream inputStream, OutputStream outputStream) throws IOException {
    byte[] buffer = new byte[BUFFER];
    int count = 0;
    while ((count = inputStream.read(buffer, 0, BUFFER)) != -1) {
        outputStream.write(buffer, 0, count);
    }
}
You might consider using the zip4j library. I solved the problem I had (same problem, different direction) by using this lib: some zip files were not decodable with the native Android implementation, but were with zip4j. You might also solve your problem by using zip4j for compression.
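A minimal sketch of the compression side with zip4j (this assumes the zip4j 2.x API and reuses names from the question's code; error handling is omitted):

import net.lingala.zip4j.ZipFile;
import java.io.File;

// Sketch only: add the database and preferences files to a backup archive with zip4j.
ZipFile backup = new ZipFile(new File(getBackupDirectory(), filename));
backup.addFile(getDatabasePath(databaseFileName));
backup.addFile(new File(getFilesDir().getParentFile(), "/shared_prefs/" + PREFERENCES_FILENAME));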
Here's some code to zip a directory into a file using only the Java standard classes. With this you can just call:
ZipUtils.zip(sourceDirectory, targetFile);
ZipUtils.unzip(sourceFile, targetDirectory);
Code:
package com.my.project.utils.zip;

import java.io.IOException;
import java.nio.file.FileVisitResult;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.SimpleFileVisitor;
import java.nio.file.attribute.BasicFileAttributes;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;
import java.util.zip.ZipOutputStream;

public class ZipUtils {

    public static void unzip(Path sourceFile, Path targetPath) throws IOException {
        try (ZipInputStream zipInStream = new ZipInputStream(Files.newInputStream(sourceFile))) {
            Files.createDirectories(targetPath);
            ZipEntry entry = null;
            while ((entry = zipInStream.getNextEntry()) != null) {
                Path entryPath = targetPath.resolve(entry.getName());
                Files.createDirectories(entryPath.getParent());
                Files.copy(zipInStream, entryPath);
                zipInStream.closeEntry();
            }
        }
    }

    public static void zip(Path sourcePath, Path targetFile) throws IOException {
        try (ZipOutputStream zipOutStream = new ZipOutputStream(Files.newOutputStream(targetFile))) {
            if (Files.isDirectory(sourcePath)) {
                zipDirectory(zipOutStream, sourcePath);
            } else {
                createZipEntry(zipOutStream, sourcePath, sourcePath);
            }
        }
    }

    private static void zipDirectory(ZipOutputStream zip, Path source) throws IOException {
        Files.walkFileTree(source, new SimpleFileVisitor<Path>() {
            @Override
            public FileVisitResult preVisitDirectory(Path dir, BasicFileAttributes attrs) throws IOException {
                return FileVisitResult.CONTINUE;
            }

            @Override
            public FileVisitResult visitFile(Path path, BasicFileAttributes attrs) throws IOException {
                createZipEntry(zip, source, path);
                return FileVisitResult.CONTINUE;
            }
        });
    }

    private static void createZipEntry(ZipOutputStream zip, Path sourcePath, Path path) throws IOException {
        ZipEntry entry = new ZipEntry(sourcePath.relativize(path).toString());
        zip.putNextEntry(entry);
        Files.copy(path, zip);
        zip.closeEntry();
    }
}
Files of a certain length will cause that problem. To solve it, call flush() before finish() and close().
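Applied to the backup() method above, the ordering would look like this (a sketch of the suggestion, not a verified fix):

stream.flush();   // push buffered bytes through the BufferedOutputStream
stream.finish();  // write the zip central directory
stream.close();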

How to make a copy of a file in android?

In my app I want to save a copy of a certain file with a different name (which I get from the user).
Do I really need to open the contents of the file and write it to another file?
What is the best way to do so?
To copy a file and save it to your destination path you can use the method below.
public static void copy(File src, File dst) throws IOException {
    InputStream in = new FileInputStream(src);
    try {
        OutputStream out = new FileOutputStream(dst);
        try {
            // Transfer bytes from in to out
            byte[] buf = new byte[1024];
            int len;
            while ((len = in.read(buf)) > 0) {
                out.write(buf, 0, len);
            }
        } finally {
            out.close();
        }
    } finally {
        in.close();
    }
}
On API 19+ you can use Java Automatic Resource Management:
public static void copy(File src, File dst) throws IOException {
    try (InputStream in = new FileInputStream(src)) {
        try (OutputStream out = new FileOutputStream(dst)) {
            // Transfer bytes from in to out
            byte[] buf = new byte[1024];
            int len;
            while ((len = in.read(buf)) > 0) {
                out.write(buf, 0, len);
            }
        }
    }
}
Alternatively, you can use FileChannel to copy a file. It might be faster than the byte copy method when copying a large file. You can't use it if your file is bigger than 2GB though.
public void copy(File src, File dst) throws IOException {
    FileInputStream inStream = new FileInputStream(src);
    FileOutputStream outStream = new FileOutputStream(dst);
    FileChannel inChannel = inStream.getChannel();
    FileChannel outChannel = outStream.getChannel();
    inChannel.transferTo(0, inChannel.size(), outChannel);
    inStream.close();
    outStream.close();
}
A Kotlin extension for it:
fun File.copyTo(file: File) {
    inputStream().use { input ->
        file.outputStream().use { output ->
            input.copyTo(output)
        }
    }
}
This is simple on Android O (API 26), as you can see:
@RequiresApi(api = Build.VERSION_CODES.O)
public static void copy(File origin, File dest) throws IOException {
    Files.copy(origin.toPath(), dest.toPath());
}
These worked nicely for me:
public static void copyFileOrDirectory(String srcDir, String dstDir) {
    try {
        File src = new File(srcDir);
        File dst = new File(dstDir, src.getName());
        if (src.isDirectory()) {
            String[] files = src.list();
            int filesLength = files.length;
            for (int i = 0; i < filesLength; i++) {
                String src1 = new File(src, files[i]).getPath();
                String dst1 = dst.getPath();
                copyFileOrDirectory(src1, dst1);
            }
        } else {
            copyFile(src, dst);
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}

public static void copyFile(File sourceFile, File destFile) throws IOException {
    if (!destFile.getParentFile().exists())
        destFile.getParentFile().mkdirs();
    if (!destFile.exists()) {
        destFile.createNewFile();
    }
    FileChannel source = null;
    FileChannel destination = null;
    try {
        source = new FileInputStream(sourceFile).getChannel();
        destination = new FileOutputStream(destFile).getChannel();
        destination.transferFrom(source, 0, source.size());
    } finally {
        if (source != null) {
            source.close();
        }
        if (destination != null) {
            destination.close();
        }
    }
}
Much simpler now with Kotlin:
File("originalFileDir", "originalFile.name")
.copyTo(File("newFileDir", "newFile.name"), true)
The true or false argument controls whether the destination file is overwritten.
https://kotlinlang.org/api/latest/jvm/stdlib/kotlin.io/java.io.-file/copy-to.html
It might be too late for an answer, but the most convenient way is to use Apache Commons IO's FileUtils:
static void copyFile(File srcFile, File destFile)
e.g. this is what I did:
private String copy(String original, int copyNumber) {
    String copy_path = path + "_copy" + copyNumber;
    try {
        FileUtils.copyFile(new File(path), new File(copy_path));
        return copy_path;
    } catch (IOException e) {
        e.printStackTrace();
    }
    return null;
}
In Kotlin, just:
val fileSrc: File = File("srcPath")
val fileDest: File = File("destPath")
fileSrc.copyTo(fileDest)
Here is a solution that actually closes the input/output streams if an error occurs while copying. This solution utilizes Apache Commons IO's IOUtils methods for both copying and handling the closing of streams.
public void copyFile(File src, File dst) {
    InputStream in = null;
    OutputStream out = null;
    try {
        in = new FileInputStream(src);
        out = new FileOutputStream(dst);
        IOUtils.copy(in, out);
    } catch (IOException ioe) {
        Log.e(LOGTAG, "IOException occurred.", ioe);
    } finally {
        IOUtils.closeQuietly(out);
        IOUtils.closeQuietly(in);
    }
}
In Kotlin, a short way:
// fromPath: path of the file you want to copy
// toPath: the path where you want to save the file
// fileName: name of the file that you want to copy
// newFileName: new name for the copied file (you can keep fileName instead of a new name)
val toPathF = File(toPath)
if (!toPathF.exists()) {
    toPathF.mkdirs()
}
File(fromPath, fileName).copyTo(File(toPath, fileName), replace)
This works for any file, such as images and videos.
Now in Kotlin you can just use
file1.copyTo(file2)
where file1 is the original file and file2 is the new file you want to copy to.
Simple and easy way...!
import android.os.FileUtils;
try (InputStream in = new FileInputStream(sourceFile);
     OutputStream out = new FileOutputStream(destinationFile)) {
    FileUtils.copy(in, out);
} catch (Exception e) {
    Log.d("ReactNative", "Error copying file: " + e.getMessage());
}
