How to unzip a 7zip archive in Android? - java

I have a 7zip archive which contains a few hundred files separated into different directories. The goal is to download it from an FTP server and then extract it on the phone.
My problem is that the 7zip SDK doesn't offer much to work with. I am looking for examples, tutorials and snippets covering the decompression of 7z files.
(Decompression via Intent is only a secondary option)

The LZMA SDK just provides the encoder and decoder for encoding/decoding raw data; a 7z archive is a more complex format for storing multiple files.
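To illustrate the difference, decoding a single raw LZMA stream (not a .7z container) is only a few lines with Commons Compress. This is just a hedged sketch, assuming a plain .lzma file and the org.tukaani:xz dependency that commons-compress needs for LZMA support:
// Sketch: decode one raw .lzma stream; this does NOT open a .7z archive.
try (InputStream in = new LZMACompressorInputStream(new FileInputStream("data.lzma"));
     OutputStream out = new FileOutputStream("data.bin")) {
    byte[] buffer = new byte[1024];
    int length;
    while ((length = in.read(buffer)) > 0) {
        out.write(buffer, 0, length);
    }
}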

I found this page, which provides an alternative that works like a charm. You only have to add compile 'org.apache.commons:commons-compress:1.8'
to your Gradle build script and use the feature you need. For this issue I did the following:
AssetManager am = getAssets();
File f = new File(getFilesDir(), "a7ZipedFile.7z");

// Copy the bundled archive out of assets onto internal storage,
// because SevenZFile needs a seekable file rather than a stream.
try (InputStream inputStream = am.open("a7ZipedFile.7z");
     OutputStream outputStream = new FileOutputStream(f)) {
    byte[] buffer = new byte[1024];
    int length;
    while ((length = inputStream.read(buffer)) > 0) {
        outputStream.write(buffer, 0, length);
    }
} catch (IOException e) {
    e.printStackTrace();
}

// Walk the archive and write each entry to the app's private files directory.
// Note: openFileOutput() rejects path separators, so nested entries need extra handling.
try {
    SevenZFile sevenZFile = new SevenZFile(f);
    SevenZArchiveEntry entry = sevenZFile.getNextEntry();
    while (entry != null) {
        System.out.println(entry.getName());
        FileOutputStream out = openFileOutput(entry.getName(), Context.MODE_PRIVATE);
        byte[] content = new byte[(int) entry.getSize()];
        sevenZFile.read(content, 0, content.length); // may return fewer bytes for very large entries
        out.write(content);
        out.close();
        entry = sevenZFile.getNextEntry();
    }
    sevenZFile.close();
} catch (IOException e) {
    e.printStackTrace();
}
The only drawback is roughly 200 KB for the imported library. Other than that it is really easy to use.
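For the download half of the question (fetching the archive from the FTP server before extracting it), a minimal sketch with Apache Commons Net could look like the following; the host, credentials and paths are placeholders, and any FTP client would do:
// Hedged sketch: download the archive to internal storage with commons-net,
// then feed the resulting File to the SevenZFile code above.
// On Android, run this off the main thread.
FTPClient ftp = new FTPClient();
try {
    ftp.connect("ftp.example.com");          // placeholder host
    ftp.login("user", "password");           // placeholder credentials
    ftp.enterLocalPassiveMode();
    ftp.setFileType(FTP.BINARY_FILE_TYPE);   // a .7z archive is binary data
    File target = new File(getFilesDir(), "a7ZipedFile.7z");
    try (OutputStream out = new FileOutputStream(target)) {
        ftp.retrieveFile("/remote/a7ZipedFile.7z", out);
    }
    ftp.logout();
} catch (IOException e) {
    e.printStackTrace();
} finally {
    if (ftp.isConnected()) {
        try { ftp.disconnect(); } catch (IOException ignored) {}
    }
}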

Related

How to parse file inside a zip without writing to disk - java

I have a password-protected zip file [in the form of base64-encoded data plus the name of the zip file] which contains a single XML file. I wish to parse that XML without writing anything to disk. What is the right way to do this with Zip4j? The following is what I tried.
String docTitle = request.getDocTitle();
byte[] decodedFileData = Base64.getDecoder().decode(request.getBase64Data());
InputStream inputStream = new ByteArrayInputStream(decodedFileData);
LocalFileHeader localFileHeader;
try (ZipInputStream zipInputStream = new ZipInputStream(inputStream, password)) {
    while ((localFileHeader = zipInputStream.getNextEntry()) != null) {
        String fileTitle = localFileHeader.getFileName();
        File extractedFile = new File(fileTitle);
        try (InputStream individualFileInputStream = org.apache.commons.io.FileUtils.openInputStream(extractedFile)) {
            // Call parser
            parser.parse(localFileHeader.getFileName(), individualFileInputStream);
        } catch (IOException e) {
            // Handle IOException
        }
    }
} catch (IOException e) {
    // Handle IOException
}
This throws java.io.FileNotFoundException: File 'xyz.xml' does not exist at the line FileUtils.openInputStream(extractedFile). Can you please suggest the right way to do this?
ZipInputStream exposes the entire content of the zip file. Each call to zipInputStream.getNextEntry() moves the "pointer" to the next entry (file), and you can then read that file's content (via ZipInputStream.read) before moving on to the next entry.
Your case:
byte[] decodedFileData = Base64.getDecoder().decode(request.getBase64Data());
InputStream inputStream = new ByteArrayInputStream(decodedFileData);
try (ZipInputStream zipInputStream = new ZipInputStream(inputStream, password)) {
    LocalFileHeader localFileHeader;
    while ((localFileHeader = zipInputStream.getNextEntry()) != null) {
        // Read the current entry's bytes straight from the zip stream; nothing touches disk.
        byte[] fileContent = IOUtils.toByteArray(zipInputStream);
        parser.parse(localFileHeader.getFileName(), new ByteArrayInputStream(fileContent));
    }
} catch (Exception e) {
    // Handle Exception
}

Create a zip file on S3 from files on S3 in Java

I have a lot of files on S3 that I need to zip and then provide the zip via S3. Currently I stream them into a local zip file and then upload that file again. This takes up a lot of disk space, as each file is around 3-10 MB and I have to zip up to 100,000 files, so a zip can be larger than 1 TB. I would like a solution along these lines:
Create a zip file on S3 from files on S3 using Lambda Node
There it seems the zip is created directly on S3 without taking up local disk space. But I am just not able to translate that solution to Java. I am also finding conflicting information on the Java AWS SDK, saying that they planned on changing the streaming behavior in 2017.
Not sure if this will help, but here's what I've been doing so far (Upload is my local model that holds the S3 information). I just removed logging and such for readability. I think I am not using disk space for the download, since I "pipe" the InputStream directly into the zip. But as I said, I would also like to avoid the local zip file and create it directly on S3. That, however, would probably require the ZipOutputStream to be created with S3 as the target instead of a FileOutputStream. Not sure how that can be done.
public File zipUploadsToNewTemp(List<Upload> uploads) {
    List<String> names = new ArrayList<>();
    byte[] buffer = new byte[1024];
    File tempZipFile;
    try {
        tempZipFile = File.createTempFile(UUID.randomUUID().toString(), ".zip");
    } catch (Exception e) {
        throw new ApiException(e, BaseErrorCode.FILE_ERROR, "Could not create Zip file");
    }
    try (FileOutputStream fileOutputStream = new FileOutputStream(tempZipFile);
         ZipOutputStream zipOutputStream = new ZipOutputStream(fileOutputStream)) {
        for (Upload upload : uploads) {
            InputStream inputStream = getStreamFromS3(upload);
            ZipEntry zipEntry = new ZipEntry(upload.getFileName());
            zipOutputStream.putNextEntry(zipEntry);
            writeStreamToZip(buffer, zipOutputStream, inputStream);
            inputStream.close();
        }
        zipOutputStream.closeEntry();
        zipOutputStream.close();
        return tempZipFile;
    } catch (IOException e) {
        logError(type, e);
        if (tempZipFile.exists()) {
            FileUtils.delete(tempZipFile);
        }
        throw new ApiException(e, BaseErrorCode.IO_ERROR,
            "Error zipping files: " + e.getMessage());
    }
}

// I am not even sure, but I think this takes up memory and not disk space
private InputStream getStreamFromS3(Upload upload) {
    try {
        String filename = upload.getId() + "." + upload.getFileType();
        InputStream inputStream = s3FileService
            .getObject(upload.getBucketName(), filename, upload.getPath());
        return inputStream;
    } catch (ApiException e) {
        throw e;
    } catch (Exception e) {
        logError(type, e);
        throw new ApiException(e, BaseErrorCode.UNKOWN_ERROR,
            "Unkown Error communicating with S3 for file: " + upload.getFileName());
    }
}

private void writeStreamToZip(byte[] buffer, ZipOutputStream zipOutputStream,
        InputStream inputStream) {
    try {
        int len;
        while ((len = inputStream.read(buffer)) > 0) {
            zipOutputStream.write(buffer, 0, len);
        }
    } catch (IOException e) {
        throw new ApiException(e, BaseErrorCode.IO_ERROR, "Could not write stream to zip");
    }
}
And finally the upload source code. The InputStream is created from the temp zip file.
public PutObjectResult upload(InputStream inputStream, String bucketName, String filename, String folder) {
    String uploadKey = StringUtils.isEmpty(folder) ? "" : (folder + "/");
    uploadKey += filename;
    ObjectMetadata metaData = new ObjectMetadata();
    byte[] bytes;
    try {
        bytes = IOUtils.toByteArray(inputStream);
    } catch (IOException e) {
        throw new ApiException(e, BaseErrorCode.IO_ERROR, e.getMessage());
    }
    metaData.setContentLength(bytes.length);
    ByteArrayInputStream byteArrayInputStream = new ByteArrayInputStream(bytes);
    PutObjectRequest putObjectRequest = new PutObjectRequest(bucketPrefix + bucketName, uploadKey, byteArrayInputStream, metaData);
    putObjectRequest.setCannedAcl(CannedAccessControlList.PublicRead);
    try {
        return getS3Client().putObject(putObjectRequest);
    } catch (SdkClientException se) {
        throw s3Exception(se);
    } finally {
        IOUtils.closeQuietly(inputStream);
    }
}
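A side note on this upload method: since the payload here is already a temp file on disk, a hedged alternative is to hand the SDK the File directly, which avoids IOUtils.toByteArray() pulling the whole zip into memory just to set the content length (a problem for multi-GB archives). This sketch reuses the helper names from the code above and is only an assumption about how the surrounding code could be rearranged:
// Hedged variant of upload(): pass the temp zip File itself so the SDK reads it
// from disk and derives the content length, instead of buffering the bytes in memory.
public PutObjectResult upload(File zipFile, String bucketName, String filename, String folder) {
    String uploadKey = StringUtils.isEmpty(folder) ? "" : (folder + "/");
    uploadKey += filename;
    PutObjectRequest putObjectRequest =
            new PutObjectRequest(bucketPrefix + bucketName, uploadKey, zipFile)
                    .withCannedAcl(CannedAccessControlList.PublicRead);
    try {
        return getS3Client().putObject(putObjectRequest);
    } catch (SdkClientException se) {
        throw s3Exception(se);
    }
}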
I just found a similar question to mine, also without an answer:
Upload ZipOutputStream to S3 without saving zip file (large) temporary to disk using AWS S3 Java
You can get an input stream from your S3 object, zip that stream of bytes, and stream it back to S3:
long numBytes; // length of data to send in bytes; somehow you know it before processing the entire stream
PipedOutputStream os = new PipedOutputStream();
PipedInputStream is = new PipedInputStream(os);
ObjectMetadata meta = new ObjectMetadata();
meta.setContentLength(numBytes);
new Thread(() -> {
    /* Write to os here; make sure to close it when you're done */
    try (ZipOutputStream zipOutputStream = new ZipOutputStream(os)) {
        ZipEntry zipEntry = new ZipEntry("myKey");
        zipOutputStream.putNextEntry(zipEntry);
        S3ObjectInputStream objectContent = amazonS3Client.getObject("myBucket", "myKey").getObjectContent();
        byte[] bytes = new byte[1024];
        int length;
        while ((length = objectContent.read(bytes)) >= 0) {
            zipOutputStream.write(bytes, 0, length);
        }
        objectContent.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
}).start();
amazonS3Client.putObject("myBucket", "myKey", is, meta);
is.close(); // always close your streams
I would suggest using an Amazon EC2 instance (as low as 1c/hour, or you could even use a Spot Instance to get it at a lower price). Smaller instance types are lower cost but have limited bandwidth, so play around with the size to get your preferred performance.
Write a script that loops through the files and, for each one:
Download
Zip
Upload
Delete local files
All the zip magic happens on local disk. No need to use streams. Just use the S3 SDK's plain download and upload calls (download_file() and upload_file() in boto3; the AWS SDK for Java equivalents are sketched below).
If the EC2 instance is in the same region as Amazon S3 then there is no Data Transfer charge.
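A rough Java equivalent of that loop, assuming the AWS SDK for Java v1 TransferManager, hypothetical bucket names and an assumed list of keys to bundle, might look like this:
// Sketch of the EC2 approach: everything goes through local disk, no streaming tricks.
// Make sure the instance has enough local disk for the parts plus the final zip.
static void bundleOnEc2(AmazonS3 s3, List<String> keysToBundle) throws Exception {
    TransferManager tm = TransferManagerBuilder.standard().withS3Client(s3).build();
    File localZip = File.createTempFile("bundle", ".zip");
    try (ZipOutputStream zos = new ZipOutputStream(new FileOutputStream(localZip))) {
        for (String key : keysToBundle) {
            File part = File.createTempFile("part", ".tmp");
            tm.download("source-bucket", key, part).waitForCompletion();   // 1. download
            zos.putNextEntry(new ZipEntry(key));                           // 2. zip
            Files.copy(part.toPath(), zos);
            zos.closeEntry();
            part.delete();                                                 // 4. delete local copy
        }
    }
    tm.upload("target-bucket", "bundle.zip", localZip).waitForCompletion(); // 3. upload
    tm.shutdownNow(false);
    localZip.delete();
}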

Creating a zip file containing Text Files

I've been trying to tackle this problem for a day or two and can't seem to figure out precisely how to add text files to a zip file. I was able to figure out how to add these text files to a 7zip archive, which was insanely easy, but a zip file seems much more complicated for some reason. I want to return a zip file for user-facing reasons, by the way.
Here's what I have now:
(I know the code isn't too clean at the moment; I plan to tackle that after getting the bare functionality down.)
private ZipOutputStream addThreadDumpsToZipFile(File file, List<Datapoint<ThreadDump>> allThreadDumps, List<Datapoint<String>> allThreadDumpTextFiles) {
    ZipOutputStream threadDumpsZipFile = null;
    try {
        //create new zip output stream wrapping the target file
        //TODO missing step: create text files containing each thread dump then add to zip
        threadDumpsZipFile = new ZipOutputStream(new FileOutputStream(file));
        FileInputStream fileInputStream = null;
        try {
            //add data to each thread dump entry
            for (int i = 0; i < allThreadDumpTextFiles.size(); i++) {
                //create file for each thread dump
                File threadDumpFile = new File("thread_dump_" + i + ".txt");
                FileUtils.writeStringToFile(threadDumpFile, allThreadDumpTextFiles.get(i).toString());
                //add entry/file to zip file (creates block to add input to)
                ZipEntry threadDumpEntry = new ZipEntry("thread_dump_" + i); //might need to add extension here?
                threadDumpsZipFile.putNextEntry(threadDumpEntry);
                //add the content to this entry
                fileInputStream = new FileInputStream(threadDumpFile);
                byte[] byteBuffer = new byte[(int) threadDumpFile.length()]; //see if this sufficiently returns length of data
                int bytesRead = -1;
                while ((bytesRead = fileInputStream.read(byteBuffer)) != -1) {
                    threadDumpsZipFile.write(byteBuffer, 0, bytesRead);
                }
            }
            threadDumpsZipFile.flush();
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            try {
                fileInputStream.close();
            } catch (Exception e) {
            }
        }
    } catch (FileNotFoundException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    }
    return threadDumpsZipFile;
}
As you can sort of guess, I have a set of Thread Dumps that I want to add to my zip file and return to the user.
Let me know if you guys need any more info!
PS: There might be some bugs in this question; I just realized with some breakpoints that the threadDumpFile.length() call won't really work.
Looking forward to your replies!
Thanks,
Arsa
Here's a crack at it. I think you'll want to keep the file extensions when you make your ZipEntry objects. See if you can implement the below createTextFiles() function; the rest of this works -- I stubbed that method to return a single "test.txt" file with some dummy data to verify.
void zip() {
    try {
        FileOutputStream fos = new FileOutputStream("yourZipFile.zip");
        ZipOutputStream zos = new ZipOutputStream(fos);
        File[] textFiles = createTextFiles(); // should be an easy step
        for (int i = 0; i < textFiles.length; i++) {
            addToZipFile(textFiles[i].getName(), zos);
        }
        zos.close();
        fos.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
}

void addToZipFile(String fileName, ZipOutputStream zos) throws Exception {
    File file = new File(fileName);
    FileInputStream fis = new FileInputStream(file);
    ZipEntry zipEntry = new ZipEntry(fileName);
    zos.putNextEntry(zipEntry);
    byte[] bytes = new byte[1024];
    int length;
    while ((length = fis.read(bytes)) >= 0) {
        zos.write(bytes, 0, length);
    }
    zos.closeEntry();
    fis.close();
}
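If it helps, one possible shape for that createTextFiles() step is sketched below; taking the dump strings as a parameter is an assumption (the zip() stub above calls it with no arguments), and it uses commons-io to write the files:
// Hypothetical createTextFiles(): write each thread-dump string to its own temp .txt file.
File[] createTextFiles(List<String> threadDumps) throws IOException {
    File[] files = new File[threadDumps.size()];
    for (int i = 0; i < threadDumps.size(); i++) {
        File f = File.createTempFile("thread_dump_" + i + "_", ".txt");
        FileUtils.writeStringToFile(f, threadDumps.get(i), StandardCharsets.UTF_8); // commons-io
        files[i] = f;
    }
    return files;
}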

Is it possible to create a File object from InputStream

Is there any way to create a java.io.File object from a java.io.InputStream?
My requirement is reading a file from inside a RAR archive. I am not trying to write a temporary file; I have a file inside a RAR archive which I am trying to read.
You need to create a new file and copy the contents of the InputStream into it:
File file = //...
try (OutputStream outputStream = new FileOutputStream(file)) {
    IOUtils.copy(inputStream, outputStream);
} catch (FileNotFoundException e) {
    // handle exception here
} catch (IOException e) {
    // handle exception here
}
I am using the convenient IOUtils.copy() to avoid copying the streams manually. It also has built-in buffering.
In one line:
FileUtils.copyInputStreamToFile(inputStream, file);
(org.apache.commons.io)
Since Java 7, you can do it in one line even without using any external libraries:
Files.copy(inputStream, outputPath, StandardCopyOption.REPLACE_EXISTING);
See the API docs.
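For example, a hedged usage sketch (getInputStreamSomehow() is only a placeholder for wherever the stream actually comes from):
// Copy an arbitrary InputStream into a temp file via NIO, then expose it as a java.io.File.
Path outputPath = Files.createTempFile("extracted-", ".tmp");
try (InputStream in = getInputStreamSomehow()) {
    Files.copy(in, outputPath, StandardCopyOption.REPLACE_EXISTING);
}
File asFile = outputPath.toFile(); // for APIs that still require a File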
Create a temp file first, then copy into it with IOUtils from org.apache.commons.io.
File tempFile = File.createTempFile(prefix, suffix);
tempFile.deleteOnExit();
FileOutputStream out = new FileOutputStream(tempFile);
IOUtils.copy(in, out);
return tempFile;
Easy Java 9 solution with a try-with-resources block:
public static void copyInputStreamToFile(InputStream input, File file) {
    try (OutputStream output = new FileOutputStream(file)) {
        input.transferTo(output);
    } catch (IOException ioException) {
        ioException.printStackTrace();
    }
}
java.io.InputStream#transferTo is available since Java 9.
If you do not want to use other libraries, here is a simple function to copy data from an InputStream to an OutputStream.
public static void copyStream(InputStream in, OutputStream out) throws IOException {
    byte[] buffer = new byte[1024];
    int read;
    while ((read = in.read(buffer)) != -1) {
        out.write(buffer, 0, read);
    }
}
Now you can easily write an InputStream into a file using a FileOutputStream:
FileOutputStream out = new FileOutputStream(outFile);
copyStream(inputStream, out);
out.close();
If you are using Java 7 or higher, you can use try-with-resources to properly close the FileOutputStream. The following code uses IOUtils.copy() from commons-io.
public void copyToFile(InputStream inputStream, File file) throws IOException {
    try (OutputStream outputStream = new FileOutputStream(file)) {
        IOUtils.copy(inputStream, outputStream);
    }
}

Uploading binary files using dropbox java api

Could you point me to code or a URL where I can find some examples of how to use the Dropbox Java API to upload binary files such as .doc files, JPGs and video files?
Current examples on the web only cover uploading a text file. When I try to read files using a Java InputStream, convert them to a byte array and pass them into the Dropbox file-upload functions, the files get corrupted. The same issue occurs with downloading files. Thanks in advance.
Regards,
Waruna.
EDIT--
Code Sample
FileInputStream fis = new FileInputStream(file);
ByteArrayOutputStream bos = new ByteArrayOutputStream();
byte[] buf = new byte[1024];
for (int readNum; (readNum = fis.read(buf)) != -1; ) {
    bos.write(buf, 0, readNum);
    System.out.println("read " + readNum + " bytes,");
}
ByteArrayInputStream inputStream2 = new ByteArrayInputStream(bos.toByteArray());
Entry newEntry = mDBApi.putFile("/uploads/" + file.getName(), inputStream2, file.toString().length(), null, null);
System.out.println("Done. \nRevision of file: " + newEntry.rev + " " + newEntry.mimeType);
return newEntry.rev;
The third argument of DropboxAPI.putFile() should be the number of bytes to read from the input stream; you are passing the length of the filename string.
Instead of
Entry newEntry = mDBApi.putFile("/uploads/"+file.getName(), inputStream2,
file.toString().length(), null, null);
Use
Entry newEntry = mDBApi.putFile("/uploads/"+file.getName(), inputStream2,
bos.size(), null, null);
I don't think you need to convert to a byte array; simply using a FileInputStream is enough for any file, text as well as binary. The following code works; I just tested it with a JPG.
DropboxAPI<?> client = new DropboxAPI<WebAuthSession>(session);
FileInputStream inputStream = null;
try {
    File file = new File("some_pic.jpg");
    inputStream = new FileInputStream(file);
    DropboxAPI.Entry newEntry = client.putFile("/testing.jpg", inputStream,
            file.length(), null, null);
    System.out.println("The uploaded file's rev is: " + newEntry.rev);
} catch (DropboxUnlinkedException e) {
    // User has unlinked, ask them to link again here.
    System.out.println("User has unlinked.");
} catch (DropboxException e) {
    System.out.println("Something went wrong while uploading.");
} catch (FileNotFoundException e) {
    System.out.println("File not found.");
} finally {
    if (inputStream != null) {
        try {
            inputStream.close();
        } catch (IOException e) {}
    }
}
