This question already has answers here:
Java - Read file and split into multiple files
(11 answers)
Closed 4 years ago.
How can I split a file in two (file1 and file2) such that file1 contains the first 10 kB of the file and file2 contains the rest of the data?
I am using AIDE on android.
There is no "system call" to split a file. You need to open a file, read it and copy the contents to the corresponding output files (which you need to create).
Synopsis:
Open the input file as a FileInputStream
Make a byte[] buffer of around 4 kB
Open the two output files as two FileOutputStreams
Read from input into buffer and write buffer to first OutputStream
Do this until exactly 10 kB have been read and written
Read from input into buffer and write buffer to second OutputStream
Do this until there are no more bytes from the input stream
Close all three streams
Of course, you will need to be careful to make sure that you copy exactly the correct number of bytes. See InputStream.read(buf, offset, length) for details. Also test the special case where the input file is less than 10 kB long.
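The steps above can be sketched roughly like this (the class and method names are my own, not from any library):

```java
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class FileSplitter {
    public static void split(String input, String out1, String out2) throws IOException {
        final int firstPartSize = 10 * 1024; // 10 kB goes to file1
        byte[] buffer = new byte[4096];
        try (InputStream in = new FileInputStream(input);
             OutputStream first = new FileOutputStream(out1);
             OutputStream second = new FileOutputStream(out2)) {
            int remaining = firstPartSize;
            int n;
            // Copy exactly 10 kB (or less, if the input is shorter) to the first file.
            // Never ask for more than `remaining` bytes, so we cannot overshoot.
            while (remaining > 0
                    && (n = in.read(buffer, 0, Math.min(buffer.length, remaining))) != -1) {
                first.write(buffer, 0, n);
                remaining -= n;
            }
            // Copy whatever is left to the second file.
            while ((n = in.read(buffer)) != -1) {
                second.write(buffer, 0, n);
            }
        }
    }
}
```

The try-with-resources statement takes care of closing all three streams, including when an exception is thrown mid-copy.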
This question already has answers here:
convert zip byte[] to unzip byte[]
(4 answers)
Closed 5 years ago.
Could someone help me with a code snippet for converting a zipped byte[] to an unzipped byte[] in memory, without writing to an intermediate file?
I looked into the Stack Overflow question "convert zip byte[] to unzip byte[]" but was not able to get it working in Java.
Thanks
Somu
Here are the tools you need for that:
ByteArrayInputStream - allows you to wrap an array of bytes as a stream.
ZipInputStream - reads the zipped stream of bytes and presents them as unzipped ones.
ByteArrayOutputStream - a stream that writes into internal byte buffer.
(If using Java 9 or later) InputStream#transferTo - copies from an input stream to an output stream. (If not) Copy it manually with a read/write loop.
ByteArrayOutputStream#toByteArray - extract buffer from the output stream.
Wire them all together and you are done.
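Wired together, the pieces might look like this (a sketch, assuming the byte[] holds a zip archive with at least one entry; the class and method names are mine):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;

public class InMemoryUnzip {
    // Unzips the first entry of a zip archive held entirely in memory.
    public static byte[] unzip(byte[] zipped) throws IOException {
        try (ZipInputStream zin = new ZipInputStream(new ByteArrayInputStream(zipped))) {
            ZipEntry entry = zin.getNextEntry();
            if (entry == null) {
                throw new IOException("Archive contains no entries");
            }
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buffer = new byte[4096];
            int n;
            // Manual copy loop; on Java 9+ this can be replaced by zin.transferTo(out).
            while ((n = zin.read(buffer)) != -1) {
                out.write(buffer, 0, n);
            }
            return out.toByteArray();
        }
    }
}
```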
I am seeing something unusual in my zip files.
I have two .txt files, both zipped through java.util.zip (ZipOutputStream, ZipEntry, ...) in my application and then returned in the response as downloadable zip files through the browser.
One file's data is a database blob and the other a StringBuffer. My blob txt file is 10 MB and my StringBuffer txt file is 15 MB, but when these are zipped, the blob txt zip file is larger than the StringBuffer txt zip file, even though it contains the smaller txt file.
Any reason why this might be happening?
The StringBuffer and (as of Java 5) StringBuilder classes store just the buffer for the character data plus the current length (without the additional offset and hash code fields of a String), but that buffer could be larger than the actual number of characters placed in it; a Java char takes up two bytes, even if you're using them to store boring old ASCII values that would fit into a single byte.
Your BLOB (binary large object) probably contains data that isn't text and isn't as compressible as text. For example, it could contain an image.
If you don't already know what the blob contains, you can use a hexdump program to look at it.
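A quick way to see the effect is to compress a highly repetitive buffer and a random one and compare the sizes (a sketch; GZIPOutputStream uses the same DEFLATE algorithm as ZipOutputStream, and the random buffer stands in for incompressible binary data such as a JPEG):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.Arrays;
import java.util.Random;
import java.util.zip.GZIPOutputStream;

public class CompressibilityDemo {
    // Returns the size in bytes of `data` after DEFLATE compression.
    public static int compressedSize(byte[] data) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(out)) {
            gz.write(data);
        }
        return out.size();
    }

    public static void main(String[] args) throws IOException {
        byte[] text = new byte[1_000_000];
        Arrays.fill(text, (byte) 'a');        // extremely repetitive "text"
        byte[] binary = new byte[1_000_000];
        new Random(42).nextBytes(binary);     // random bytes: already incompressible

        // The repetitive buffer shrinks dramatically; the random one stays
        // roughly the same size (or grows slightly from the gzip header).
        System.out.println("text:   " + compressedSize(text));
        System.out.println("binary: " + compressedSize(binary));
    }
}
```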
I am using an OBJ Loader library that I found on the 'net and I want to look into speeding it up.
It works by reading an .OBJ file line by line (it's basically a text file with lines of numbers).
I have a 12 MB OBJ file that equates to approx 400,000 lines. Suffice to say, it takes forever to read it line by line.
Is there a way to speed it up? It uses a BufferedReader to read the file (which is stored in my assets folder)
Here is the link to the library: click me
Just an idea: you could first get the size of the file using the File class, after getting the file:
File file = new File(sdcard, "sample.txt"); // sdcard is the directory containing the file
long size = file.length(); // size in bytes
The size returned is in bytes. Divide the file size into a manageable number of chunks (e.g. 5, 10, 20), with a byte range specified and saved for each chunk. Create a byte array of the same size as each chunk, then "assign" each chunk to a separate worker thread, which reads its chunk into its corresponding array using the read(buffer, offset, length) method, i.e. read "length" bytes into the array "buffer" beginning at array index "offset". Convert the bytes into characters, then concatenate all arrays together to get the complete file contents. Insert checks on the chunk sizes so that no thread overlaps another's boundaries. Again, this is just an idea; hopefully it will work when actually implemented.
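A rough sketch of that idea (the names are my own; note that splitting at arbitrary byte offsets can cut a multi-byte character in half, so this only works cleanly for single-byte encodings such as ASCII, which is typical for OBJ files):

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.charset.StandardCharsets;

public class ChunkedReader {
    // Reads a file in N chunks, one thread per chunk, then concatenates the results.
    public static String readInChunks(File file, int chunks) throws Exception {
        long size = file.length();
        long chunkSize = (size + chunks - 1) / chunks; // round up so the last chunk covers the tail
        byte[][] parts = new byte[chunks][];
        Thread[] workers = new Thread[chunks];
        for (int i = 0; i < chunks; i++) {
            final int idx = i;
            final long offset = idx * chunkSize;
            // Clamp so the final chunk stops at end-of-file and chunks never overlap.
            final int length = (int) Math.max(0, Math.min(chunkSize, size - offset));
            workers[i] = new Thread(() -> {
                // Each thread opens its own handle and reads only its own byte range.
                try (RandomAccessFile raf = new RandomAccessFile(file, "r")) {
                    byte[] buf = new byte[length];
                    raf.seek(offset);
                    raf.readFully(buf);
                    parts[idx] = buf;
                } catch (IOException e) {
                    throw new RuntimeException(e);
                }
            });
            workers[i].start();
        }
        for (Thread t : workers) t.join();
        // Concatenate the chunks in order and convert bytes to characters.
        StringBuilder sb = new StringBuilder();
        for (byte[] part : parts) sb.append(new String(part, StandardCharsets.UTF_8));
        return sb.toString();
    }
}
```

Whether this is actually faster than a single BufferedReader depends heavily on the storage medium; on flash storage the parsing, not the reading, is often the bottleneck.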
This question already has answers here:
Read a file line by line in reverse order
(10 answers)
Closed 9 years ago.
I'm trying to implement a log-structured file system as an operating systems assignment. In it, the most recent data is placed at the end of the file. That's why I want to read a text file line by line in reverse order. Is that possible?
Check out ReverseLineInputStream:
https://code.google.com/p/lt2-sander-marco/source/browse/src/leertaak2/ReverseLineInputStream.java?spec=svn15&r=15
It refers to the SO question posted at How to read file from end to start (in reverse order) in Java?
in = new BufferedReader(new InputStreamReader(new ReverseLineInputStream(file)));
while (true) {
    String line = in.readLine();
    if (line == null) {
        break;
    }
    System.out.println("X:" + line);
}
(Thanks, @Mark O'Donohue)
If it's line by line, you could read all the lines into an ArrayList and then traverse it backwards using a reverse for loop such as for (int i = list.size() - 1; i >= 0; i--).
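A minimal sketch of that approach (the class and method names are mine; note it holds the whole file in memory, so it suits small to medium files):

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class ReverseReader {
    // Reads all lines of a text file and returns them in reverse order.
    public static List<String> readReversed(String path) throws IOException {
        List<String> lines = new ArrayList<>();
        try (BufferedReader reader = new BufferedReader(new FileReader(path))) {
            String line;
            while ((line = reader.readLine()) != null) {
                lines.add(line);
            }
        }
        List<String> reversed = new ArrayList<>(lines.size());
        for (int i = lines.size() - 1; i >= 0; i--) { // the reverse for loop from above
            reversed.add(lines.get(i));
        }
        return reversed;
    }
}
```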
Yes, but it requires reading through the entire file first. If the file is too long to hold in memory, you can split it into several smaller files, read each one into memory in turn (starting with the last), and write its lines out to a new file from last line to first. When the operation is done, you can delete all the temporary files you created.
I've been reading about RandomAccessFile and understand that it's possible to truncate the end of a file by calling setLength with a length shorter than the file. I'm trying to copy just the "end" of the file to a new file, truncating the beginning.
So for example: I want to delete the first 1300 bytes of a file and copy the rest of the file into a new file.
Is there any way of doing this?
Cheers
Have you considered using RandomAccessFile's seek method to seek to byte 1300, then reading the remainder of the file from that offset and using another RandomAccessFile (or a different output stream) to write the bytes you read into the new file?
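Something along those lines might look like this (a sketch; the class and method names are mine):

```java
import java.io.IOException;
import java.io.RandomAccessFile;

public class TailCopier {
    // Copies everything after the first `skip` bytes of `src` into `dst`.
    public static void copyTail(String src, String dst, long skip) throws IOException {
        try (RandomAccessFile in = new RandomAccessFile(src, "r");
             RandomAccessFile out = new RandomAccessFile(dst, "rw")) {
            in.seek(skip);              // jump past the first `skip` bytes (e.g. 1300)
            byte[] buffer = new byte[4096];
            int n;
            while ((n = in.read(buffer)) != -1) {
                out.write(buffer, 0, n);
            }
            // Truncate in case dst already existed and was longer than what we wrote.
            out.setLength(out.getFilePointer());
        }
    }
}
```

If the new file should then replace the original, delete the original and rename the new file over it.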