I'm trying to make a hex-dumping application, and for that I need to read the file's bytes. I'm using Apache Commons IO 2.8.0 to do the hex dumping. This is the code I'm using:
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
    try {
        byte[] bytes = Files.readAllBytes(Paths.get(getPackageManager()
                .getApplicationInfo("com.pixel.gun3d", 0)
                .nativeLibraryDir.concat("/libil2cpp.so")));
        Log.e("MainActivity", dumpHex(bytes));
    } catch (PackageManager.NameNotFoundException | IOException e) {
        e.printStackTrace();
    }
}

private String dumpHex(byte[] bytes) {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    try {
        HexDump.dump(bytes, 0, out, 0);
    } catch (IOException e) {
        e.printStackTrace();
    }
    return new String(out.toByteArray());
}
And the error I get is this: java.lang.OutOfMemoryError: Failed to allocate a 155189264 byte allocation with 25165824 free bytes and 94MB until OOM, max allowed footprint 328602112, growth limit 402653184
I looked it up, and none of the suggestions I tried, such as adding android:largeHeap="true" and android:hardwareAccelerated="false" to the manifest, worked. Any help is appreciated <3
When reading a file, it is recommended to read it in small chunks to avoid this kind of error.
Reading a whole file's bytes into a byte array has limitations; for example, if the file size is over Integer.MAX_VALUE you will get an OutOfMemoryError.
Try something similar to this:
byte[] buffer = new byte[1024];
FileInputStream fis = new FileInputStream(file);
int readBytesNum = fis.read(buffer);
while (readBytesNum > 0) {
    // do what you need with buffer[0..readBytesNum)
    readBytesNum = fis.read(buffer);
}
This means you would need to change the implementation of HexDump to handle the file in partitions.
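To illustrate the idea, here is a sketch that streams the file 16 bytes at a time and formats each chunk itself. A hand-rolled formatter is used instead of Commons IO's HexDump so the example is self-contained; the class and method names are my own, and the format only loosely imitates HexDump's output.

```java
import java.io.IOException;
import java.io.InputStream;

public class ChunkedHexDump {

    // Format one chunk of up to 16 bytes as a classic hex-dump line:
    // offset, hex bytes, then a printable-ASCII column.
    static String hexLine(long offset, byte[] buf, int len) {
        StringBuilder sb = new StringBuilder();
        sb.append(String.format("%08X ", offset));
        for (int i = 0; i < 16; i++) {
            sb.append(i < len ? String.format("%02X ", buf[i]) : "   ");
        }
        for (int i = 0; i < len; i++) {
            char c = (char) (buf[i] & 0xFF);
            sb.append(c >= 32 && c < 127 ? c : '.');
        }
        return sb.toString();
    }

    // Stream the input 16 bytes at a time instead of loading it all;
    // only one small buffer is ever allocated. (read() may return fewer
    // than 16 bytes mid-stream for some stream types, which would
    // misalign the columns; fine for a sketch, not for production.)
    static void dump(InputStream in, Appendable out) throws IOException {
        byte[] buf = new byte[16];
        long offset = 0;
        int n;
        while ((n = in.read(buf)) > 0) {
            out.append(hexLine(offset, buf, n)).append('\n');
            offset += n;
        }
    }
}
```

The same chunking structure works with Commons IO if you prefer: call HexDump.dump on each chunk and pass the running offset as its second argument so the printed offsets stay correct.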
So I've tried to implement this function (https://stackoverflow.com/a/1718140/13592426) for downloading a file in my app, but I didn't succeed, and I want to find out why. Here's the code:
public class Receiver extends BroadcastReceiver {

    public static void downloadFile(String url, File outputFile) {
        try {
            URL u = new URL(url);
            URLConnection conn = u.openConnection();
            int contentLength = conn.getContentLength();
            DataInputStream stream = new DataInputStream(u.openStream());
            byte[] buffer = new byte[contentLength];
            stream.readFully(buffer);
            stream.close();
            DataOutputStream fos = new DataOutputStream(new FileOutputStream(outputFile));
            fos.write(buffer);
            fos.flush();
            fos.close();
        } catch (FileNotFoundException e) {
            Log.i("FileNotFoundException", "file not found"); // swallow a 404
        } catch (IOException e) {
            Log.i("IOException", "io exc"); // swallow a 404
        }
    }

    Handler handler;

    @Override
    public void onReceive(Context context, Intent intent) {
        final String link = "https://images.app.goo.gl/zjcreNXUrrihcWnD6";
        String path = Environment.getExternalStorageDirectory().toString() + "/Downloads";
        final File empty_file = new File(path);
        new Thread(new Runnable() {
            @Override
            public void run() {
                downloadFile(link, empty_file);
            }
        }).start();
    }
}
And I get these errors:
2020-07-07 14:09:23.142 26313-26371/com.example.downloader E/AndroidRuntime: FATAL EXCEPTION: Thread-2
Process: com.example.downloader, PID: 26313
java.lang.NegativeArraySizeException: -1
at com.example.downloader.Receiver.downloadFile(Receiver.java:39)
at com.example.downloader.Receiver$1.run(Receiver.java:66)
at java.lang.Thread.run(Thread.java:919)
Line 39 is:
byte[] buffer = new byte[contentLength];
Maybe the main problem is in the wrong usage of Thread.
Honestly, I'm new to Android and am struggling to solve this issue; maybe you can recommend some good material or tutorials related to Threads/URLs in Android (I have searched a lot, but it's still difficult). And of course I'll appreciate direct suggestions on what I'm doing wrong.
An HTTP server can send a response with a content length of -1 if it doesn't know ahead of time how large the response will be.
Instead of allocating a buffer that is the size of the complete file, set it to a reasonable size (for example 8K bytes) and use a loop to stream the file; e.g.
byte[] buffer = new byte[8192];
int count;
while ((count = in.read(buffer)) > 0) {
    out.write(buffer, 0, count);
}
This approach has a couple of advantages. It works when the content length is -1, and it protects you against OOME problems if the content length is a very large number. (It still makes sense to check the content length, though, to avoid filling the device's file system.)
I note that your current version has other problems.
You are managing the streams in a way that could lead to file descriptor leaks. I recommend that you learn about ... and use ... the Java 7 try-with-resources syntax.
This is dubious:
Log.i("IOException", "io exc"); // swallow a 404
It is not safe to assume that all IOExceptions caught in that catch block will be due to 404 responses. The comment (at least!) is inaccurate.
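Putting the pieces together (the streaming copy loop plus try-with-resources), a corrected downloadFile might look like the sketch below. The 8K buffer size and the copy helper are my choices, not part of the original code, and real error handling should replace the rethrow.

```java
import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.URL;
import java.net.URLConnection;

public class Downloader {

    // Copy a stream to an output with a fixed-size buffer; this works even
    // when the server reports a content length of -1, and never allocates
    // more than 8K at a time.
    static long copy(InputStream in, OutputStream out) throws IOException {
        byte[] buffer = new byte[8192];
        long total = 0;
        int count;
        while ((count = in.read(buffer)) > 0) {
            out.write(buffer, 0, count);
            total += count;
        }
        return total;
    }

    static void downloadFile(String url, File outputFile) throws IOException {
        URLConnection conn = new URL(url).openConnection();
        // try-with-resources closes both streams even if copy() throws,
        // so no file descriptors leak.
        try (InputStream in = conn.getInputStream();
             OutputStream out = new BufferedOutputStream(new FileOutputStream(outputFile))) {
            copy(in, out);
        }
    }
}
```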
I have a problem when the user uploads large files (> 1 GB) (I'm using the flow.js library): it creates hundreds of thousands of small chunked files (e.g. 100 KB each) inside a temporary directory, but fails to merge them into a single file due to an OutOfMemoryError. This does not happen when the file is under 1 GB. I know it sounds tedious, and you'd probably suggest I increase the Xmx of my container, but I want another angle besides that.
Here is my code:
private void mergeFile(String identifier, int totalFile, String outputFile) throws AppException {
    File[] fileDatas = new File[totalFile]; // we know the number of parts here and create an array of that size
    byte fileContents[] = null;
    int totalFileSize = 0;
    int filePartUploadSize = 0;
    int tempFileSize = 0;
    // I'm creating the array of files and summing their lengths
    for (int i = 0; i < totalFile; i++) {
        fileDatas[i] = new File(identifier + "." + (i + 1)); // identifier is the name of the file
        totalFileSize += fileDatas[i].length();
    }
    try {
        fileContents = new byte[totalFileSize];
        InputStream inStream;
        for (int j = 0; j < totalFile; j++) {
            inStream = new BufferedInputStream(new FileInputStream(fileDatas[j]));
            filePartUploadSize = (int) fileDatas[j].length();
            inStream.read(fileContents, tempFileSize, filePartUploadSize);
            tempFileSize += filePartUploadSize;
            inStream.close();
        }
    } catch (FileNotFoundException ex) {
        throw new AppException(AppExceptionCode.FILE_NOT_FOUND);
    } catch (IOException ex) {
        throw new AppException(AppExceptionCode.ERROR_ON_MERGE_FILE);
    } finally {
        write(fileContents, outputFile);
        for (int l = 0; l < totalFile; l++) {
            fileDatas[l].delete();
        }
    }
}
Please show me the inefficiency of this method. Once again: only large files cannot be merged using this method; smaller ones (< 1 GB) are no problem at all.
I'd appreciate it if you don't suggest increasing the heap memory, but instead show me the fundamental error of this method... thanks...
It's unnecessary to allocate the entire file size in memory by declaring a byte array of the entire size. Building the concatenated file in memory is totally unnecessary in general.
Just open an output stream for your target file, and then, for each file that you are combining to make it, read each one as an input stream and write its bytes to the output stream, closing each one as you finish. Then when you're done with them all, close the output file. Total memory use will be a few thousand bytes for the buffer.
Also, don't do I/O operations in finally block (except closing and stuff).
Here is a rough example you can play with.
ArrayList<File> files = new ArrayList<>(); // put your files here
File output = new File("yourfilename");
BufferedOutputStream boss = null;
try {
    boss = new BufferedOutputStream(new FileOutputStream(output));
    for (File file : files) {
        BufferedInputStream bis = null;
        try {
            bis = new BufferedInputStream(new FileInputStream(file));
            int data;
            // stop before writing the -1 end-of-stream marker into the output
            while ((data = bis.read()) >= 0) {
                boss.write(data);
            }
        } catch (Exception e) {
            // do error handling stuff, log it maybe?
        } finally {
            try {
                bis.close(); // do this in a try catch just in case
            } catch (Exception e) {
                // handle this
            }
        }
    }
} catch (Exception e) {
    // handle this
} finally {
    try {
        boss.close();
    } catch (Exception e) {
        // handle this
    }
}
... show me the fundamental error of this method
The implementation flaw is that you are creating a byte array (fileContents) whose size is the total file size. If the total file size is too big, that will cause an OOME. Inevitably.
Solution: don't do that! Instead, "stream" the file by reading from the "chunk" files and writing to the final file using a modest-sized buffer.
There are other problems with your code too. For instance, it could leak file descriptors because you are not ensuring that inStream is closed under all circumstances. Read up on the try-with-resources construct.
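To make that concrete, here is a sketch of a streamed mergeFile with roughly the asker's signature. The AppException wrapping is omitted for brevity and the 8K buffer size is arbitrary; try-with-resources guarantees every chunk stream is closed.

```java
import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class ChunkMerger {

    // Merge chunk files identifier.1 .. identifier.N into outputFile,
    // streaming through a small fixed buffer instead of one huge array.
    static void mergeFile(String identifier, int totalFile, String outputFile) throws IOException {
        byte[] buffer = new byte[8192];
        try (OutputStream out = new BufferedOutputStream(new FileOutputStream(outputFile))) {
            for (int i = 1; i <= totalFile; i++) {
                File part = new File(identifier + "." + i);
                try (InputStream in = new BufferedInputStream(new FileInputStream(part))) {
                    int count;
                    while ((count = in.read(buffer)) > 0) {
                        out.write(buffer, 0, count);
                    }
                }
                part.delete(); // remove the chunk once it has been merged
            }
        }
    }
}
```

Memory use is constant regardless of the total size, so a 1 GB upload is no different from a 1 MB one.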
I made a small program to download data and write it to a file.
Here is the code:
public void run() {
    byte[] bytes = new byte[1024];
    int bytes_read;
    URLConnection urlc = null;
    RandomAccessFile raf = null;
    InputStream i = null;
    try {
        raf = new RandomAccessFile("file1", "rw");
    } catch (Exception e) {
        e.printStackTrace();
        return;
    }
    try {
        urlc = new URL(link).openConnection();
        i = urlc.getInputStream();
    } catch (Exception e) {
        e.printStackTrace();
        return;
    }
    while (canDownload()) {
        try {
            bytes_read = i.read(bytes);
        } catch (Exception e) {
            e.printStackTrace();
            return;
        }
        if (bytes_read != -1) {
            try {
                raf.write(bytes, 0, bytes_read);
            } catch (Exception e) {
                e.printStackTrace();
                return;
            }
        } else {
            try {
                i.close();
                raf.close();
                return;
            } catch (Exception e) {
                e.printStackTrace();
                return;
            }
        }
    }
}
The problem is that when I download big files, a few bytes are missing at the end of the file.
I tried changing the byte array size to 2K, and the problem was solved. But when I downloaded a bigger file (500 MB), a few bytes were missing again.
I said "OK, let's try with 4K size", and I changed the byte array size to 4K. It worked!
Nice, but then I downloaded a 4 GB file, and bytes were missing at the end again!
I said "Cool, let's try with 8K size", and I changed the byte array size to 8K. It worked.
My first question is: why does this happen? (When I change the buffer size, the file doesn't get corrupted.)
OK, in theory, the file-corruption problem can be solved by changing the byte array size to bigger values.
But there's another problem: how can I measure the download speed (at one-second intervals) with big byte array sizes?
For example: let's say my download speed is 2 KB/s and the byte array size is 4K.
My second question is: how can I measure the speed (at one-second intervals) if the thread has to wait for the byte array to be full? My answer would be: change the byte array size to a smaller value. But then the file gets corrupted xD.
After trying to solve the problem by myself, I spent 2 days searching the internet for a solution. And nothing.
Please, can you guys answer my two questions? Thanks =D
Edit
Code for canDownload():
synchronized private boolean canDownload() {
    return can_download;
}
My advice is to use a proven library such as Apache Commons IO instead of trying to roll your own code. For your particular problem, take a look at the copyURLToFile(URL, File) method.
I would:
Change the RandomAccessFile to a FileOutputStream.
Get rid of canDownload(), whatever it's for, and set a read timeout on the connection instead.
Simplify the copy loop to this:
while ((bytes_read = i.read(bytes)) > 0) {
    out.write(bytes, 0, bytes_read);
}
out.close();
i.close();
with all the exception handling outside this loop.
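On the asker's second question (measuring speed at one-second intervals), which the answers above leave open: with the loop written this way, the buffer size and the measurement interval are independent. A sketch of one approach; the AtomicLong counter, the helper name, and the 8K buffer are my choices, not from the original answer.

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.concurrent.atomic.AtomicLong;

public class MeteredCopy {

    // Running total of bytes copied, safe to read from another thread.
    static final AtomicLong transferred = new AtomicLong();

    // Same fixed-buffer copy loop, but it publishes progress as it goes,
    // so measurement never requires shrinking the buffer.
    static long copy(InputStream in, OutputStream out) throws IOException {
        byte[] bytes = new byte[8192];
        int bytes_read;
        long total = 0;
        while ((bytes_read = in.read(bytes)) > 0) {
            out.write(bytes, 0, bytes_read);
            total += bytes_read;
            transferred.addAndGet(bytes_read);
        }
        return total;
    }

    // A separate sampler thread can then do, once per second:
    //   long now = transferred.get();
    //   System.out.println((now - last) / 1024 + " KB/s");
    //   last = now;
}
```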
I think you will find the problem is that you closed the underlying InputStream while the RandomAccessFile still had data in its write buffers. This will be why you are occasionally missing the last few bytes of data.
The race condition is between the JVM flushing the final write, and your call to i.close().
Removing the i.close() should fix the problem; it isn't necessary as the raf.close() closes the underlying stream anyway, but this way you give the RAF a chance to flush any outstanding buffers before it does so.
I am trying to archive a list of files in zip format and then download it for the user on the fly...
I am facing an out-of-memory issue when downloading a zip of 1 GB size.
Please help me figure out how to resolve this without increasing the JVM heap size. I would like to flush the stream periodically...
I am trying to flush periodically, but this is not working for me.
Please find my code attached below:
try {
    ServletOutputStream out = response.getOutputStream();
    ZipOutputStream zip = new ZipOutputStream(out);
    response.setContentType("application/octet-stream");
    response.addHeader("Content-Disposition",
            "attachment; filename=\"ResultFiles.zip\"");
    // adding multiple files to zip
    ZipUtility.addFileToZip("c:\\a", "print1.txt", zip);
    ZipUtility.addFileToZip("c:\\a", "print2.txt", zip);
    ZipUtility.addFileToZip("c:\\a", "print3.txt", zip);
    ZipUtility.addFileToZip("c:\\a", "print4.txt", zip);
    zip.flush();
    zip.close();
    out.close();
} catch (ZipException ex) {
    System.out.println("zip exception");
} catch (Exception ex) {
    System.out.println("exception");
    ex.printStackTrace();
}
public class ZipUtility {
    static public void addFileToZip(String path, String srcFile,
            ZipOutputStream zip) throws Exception {
        File file = new File(path + "\\" + srcFile);
        boolean exists = file.exists();
        if (exists) {
            long fileSize = file.length();
            int buffersize = (int) fileSize;
            byte[] buf = new byte[buffersize];
            int len;
            FileInputStream fin = new FileInputStream(path + "\\" + srcFile);
            zip.putNextEntry(new ZipEntry(srcFile));
            int bytesread = 0, bytesBuffered = 0;
            while ((bytesread = fin.read(buf)) > -1) {
                zip.write(buf, 0, bytesread);
                bytesBuffered += bytesread;
                if (bytesBuffered > 1024 * 1024) { // flush after 1 MB
                    bytesBuffered = 0;
                    zip.flush();
                }
            }
            zip.closeEntry();
            zip.flush();
            fin.close();
        }
    }
}
You want to use chunked encoding to send a file that large; otherwise the servlet container will try to figure out the size of the data before sending it, so it can set the Content-Length header. Since you are compressing files, you don't know the size of the data you're sending, and chunked encoding lets you send the response in smaller pieces. Don't set the content length of the stream. You might try using curl or something to inspect the HTTP headers in the response you're getting from the server; if it isn't chunked, then you'll want to figure that out. Research how to force your servlet container to use chunked encoding. You might have to add this to the response header to make it send the response chunked:
response.setHeader("Transfer-Encoding", "chunked");
The other option would be to compress the files into a temporary file with File.createTempFile(), and then send the contents of that. If you compress to a temp file first, you know how big the file is and can set the content length for the servlet.
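A minimal sketch of that second option, using only java.util.zip; the servlet side is indicated in comments since it depends on your container, and the fixed 8K buffer (rather than a file-sized one) is deliberate.

```java
import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class ZipToTemp {

    // Compress the inputs into a temp file; afterwards tmp.length() is
    // known, so the servlet can call response.setContentLength((int) len)
    // and then stream the temp file to response.getOutputStream().
    static File zipToTempFile(File... inputs) throws IOException {
        File tmp = File.createTempFile("ResultFiles", ".zip");
        byte[] buf = new byte[8192]; // fixed-size buffer, never the file size
        try (ZipOutputStream zip = new ZipOutputStream(
                new BufferedOutputStream(new FileOutputStream(tmp)))) {
            for (File f : inputs) {
                zip.putNextEntry(new ZipEntry(f.getName()));
                try (InputStream in = new BufferedInputStream(new FileInputStream(f))) {
                    int n;
                    while ((n = in.read(buf)) > 0) {
                        zip.write(buf, 0, n);
                    }
                }
                zip.closeEntry();
            }
        }
        return tmp; // remember to delete it after the response is sent
    }
}
```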
I guess you are digging in the wrong direction. Try replacing the servlet output stream with a file stream and see if the issue is still there. I suspect your web container tries to collect the whole servlet output to calculate Content-Length before sending the HTTP headers.
Another thing: you are performing your close inside your try/catch block. This leaves the chance for the streams to stay open on your files if you have an exception, as well as not giving the stream the chance to flush to disk.
Always make sure your close is in a finally block (at least until you can get Java 7 with its try-with-resources block).
// build the byte buffer for transferring the data from the file
// to the zip.
final int BUFFER = 2048;
byte[] data = new byte[BUFFER];
File zipFile = new File("C:\\myZip.zip");
BufferedInputStream in = null;
ZipOutputStream zipOut = null;
try {
    // create the out stream to send the file to and zip it.
    // we want it buffered as that is more efficient.
    FileOutputStream destination = new FileOutputStream(zipFile);
    zipOut = new ZipOutputStream(new BufferedOutputStream(destination));
    zipOut.setMethod(ZipOutputStream.DEFLATED);
    // create the input stream (buffered) to read in the file so we
    // can write it to the zip.
    in = new BufferedInputStream(new FileInputStream(fileToZip), BUFFER);
    // now "add" the file to the zip (in object speak only).
    ZipEntry zipEntry = new ZipEntry(fileName);
    zipOut.putNextEntry(zipEntry);
    // now actually read from the file and write the file to the zip.
    int count;
    while ((count = in.read(data, 0, BUFFER)) != -1) {
        zipOut.write(data, 0, count);
    }
} catch (FileNotFoundException e) {
    throw e;
} catch (IOException e) {
    throw e;
} finally {
    // whether we succeed or not, close the streams.
    if (in != null) {
        try {
            in.close();
        } catch (IOException e) {
            // note and do nothing.
            e.printStackTrace();
        }
    }
    if (zipOut != null) {
        try {
            zipOut.close();
        } catch (IOException e) {
            // note and do nothing.
            e.printStackTrace();
        }
    }
}
Now if you need to loop, you can just loop around the part that adds more files to the zip. Perhaps pass in an array of files and loop over it. This code worked for me for zipping up a file.
Don't size your buf based on the file size; use a fixed-size buffer.
Basically I have this code to decompress a string stored in a file:
public static String decompressRawText(File inFile) {
    InputStream in = null;
    InputStreamReader isr = null;
    StringBuilder sb = new StringBuilder(STRING_SIZE);
    try {
        in = new FileInputStream(inFile);
        in = new BufferedInputStream(in, BUFFER_SIZE);
        in = new GZIPInputStream(in, BUFFER_SIZE);
        isr = new InputStreamReader(in);
        int length = 0;
        while ((length = isr.read(cbuf)) != -1) {
            sb.append(cbuf, 0, length);
        }
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        try {
            in.close();
        } catch (Exception e1) {
            e1.printStackTrace();
        }
    }
    return sb.toString();
}
Since physical I/O is quite time-consuming, and since my compressed versions of the files are all quite small (around 2K, from 2M of text), is it possible for me to still do the above, but on a file that is already mapped to memory, possibly using Java NIO? Thanks
It won't make any difference, or at least not much. Mapped files were about 20% faster in I/O the last time I looked, and you still have to actually do the I/O: mapping just saves some data copying. I would look at increasing BUFFER_SIZE to at least 32K. The same goes for the size of cbuf, which should be a local variable in this method, not a member variable, so the method will be thread-safe. It might be worth not compressing files under a certain size threshold, say 10K.
Also you should be closing isr here, not in.
It might be worth trying putting another BufferedInputStream on top of the GZIPInputStream, as well as the one underneath it. Get it to do more at once.