Reading 20 uncompressed Parquet files with a total size of 3.2GB takes more than 12GB of RAM when reading them "concurrently".
"Concurrently" here means that I need to read the second file before closing the first one, not multithreading.
The data is time series, so my program needs to read all the files up to some point in time and then proceed.
I expect Arrow to use roughly the memory of a single batch multiplied by the number of files, but in reality the memory used is much more than the size of the entire files.
The files were created with the pandas default configuration (using pyarrow), and reading them in Java gives the correct values.
When reading each file to the end and then closing it before opening the next, the amount of RAM used is fine.
I have tried switching between the netty and unsafe memory jars, but they give the same results.
-Darrow.memory.debug.allocator=true did not produce any error.
To limit the amount of direct memory (the excess memory is outside of the JVM), I tried replacing NativeMemoryPool.getDefault() with
NativeMemoryPool.createListenable(DirectReservationListener.instance()) or NativeMemoryPool.createListenable(.. some custom listener ..),
but the result is this exception:
Exception in thread "main" java.lang.RuntimeException: JNIEnv was not attached to current thread
at org.apache.arrow.dataset.jni.JniWrapper.nextRecordBatch(Native Method)
at org.apache.arrow.dataset.jni.NativeScanner$NativeReader.loadNextBatch(NativeScanner.java:134)
at ParquetExample.main(ParquetExample.java:47)
Using -XX:MaxDirectMemorySize=1g and -Xmx4g had no effect either.
The runtime uses the environment variable:
_JAVA_OPTIONS="--add-opens=java.base/java.nio=ALL-UNNAMED"
on JDK 17.0.2 with Arrow 9.0.0.
The code is reduced to this simple example, taken from the official documentation:
import org.apache.arrow.dataset.file.FileFormat;
import org.apache.arrow.dataset.file.FileSystemDatasetFactory;
import org.apache.arrow.dataset.jni.NativeMemoryPool;
import org.apache.arrow.dataset.scanner.ScanOptions;
import org.apache.arrow.dataset.scanner.Scanner;
import org.apache.arrow.dataset.source.Dataset;
import org.apache.arrow.dataset.source.DatasetFactory;
import org.apache.arrow.memory.BufferAllocator;
import org.apache.arrow.memory.RootAllocator;
import org.apache.arrow.vector.VectorSchemaRoot;
import org.apache.arrow.vector.ipc.ArrowReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
public class ParquetExample {
static BufferAllocator allocator = new RootAllocator(128 * 1024 * 1024); // limit does not affect problem
public static ArrowReader read_parquet_file(Path filePath, NativeMemoryPool nativeMemoryPool) {
String uri = "file:" + filePath;
ScanOptions options = new ScanOptions(/*batchSize*/ 64 * 1024 * 1024);
try (
DatasetFactory datasetFactory = new FileSystemDatasetFactory(
allocator, nativeMemoryPool, FileFormat.PARQUET, uri);
Dataset dataset = datasetFactory.finish()
) {
Scanner scanner = dataset.newScan(options);
return scanner.scan().iterator().next().execute();
} catch (Exception e) {
throw new RuntimeException(e);
}
}
public static void main(String[] args) throws IOException {
List<VectorSchemaRoot> schemaRoots = new ArrayList<>();
for (Path filePath : [...] ) { // 20 files, total uncompressed size 3.2GB
ArrowReader arrowReader = read_parquet_file(filePath,
NativeMemoryPool.getDefault());
if (arrowReader.loadNextBatch()) { // single batch read
schemaRoots.add(arrowReader.getVectorSchemaRoot());
}
}
}
}
The question is: why does Arrow use so much memory in such a straightforward example, and why does replacing the NativeMemoryPool result in a crash?
Thanks
Here is my code:
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SeekableByteChannel;
import java.nio.file.Files;
import java.nio.file.InvalidPathException;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
public class ExplicitChannelRead {
public static void main(String[] args) {
int count;
Path filePath = null;
// First, obtain a path to a file.
try {
filePath = Paths.get("test1.txt");
}
catch(InvalidPathException e) {
System.out.println("Path error: "+e);
return;
}
// Next, obtain a channel to that file within a try-with-resources block.
try(SeekableByteChannel fChan =
Files.newByteChannel(filePath, StandardOpenOption.CREATE_NEW)) {
// Allocate a buffer.
ByteBuffer mBuf = ByteBuffer.allocate(128);
while((count=fChan.read(mBuf)) != -1) {
//Rewind the buffer so that it can be read.
mBuf.rewind();
for(int i=0; i<count; i++) System.out.print((char)mBuf.get());
}
System.out.println();
} catch (IOException e) {
e.printStackTrace();
// System.out.println("I/O error: "+e);
}
}
}
On running the above code I get this exception:
java.nio.file.NoSuchFileException: test1.txt
at java.base/sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:85)
at java.base/sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:103)
at java.base/sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:108)
at java.base/sun.nio.fs.WindowsFileSystemProvider.newByteChannel(WindowsFileSystemProvider.java:235)
at java.base/java.nio.file.Files.newByteChannel(Files.java:375)
at java.base/java.nio.file.Files.newByteChannel(Files.java:426)
at fileNIO.ExplicitChannelRead.main(ExplicitChannelRead.java:31)
I don't understand why the test1.txt file is not being created: it doesn't currently exist and I am using the StandardOpenOption.CREATE_NEW option.
When I use StandardOpenOption.WRITE along with StandardOpenOption.CREATE_NEW, I see the file test1.txt being created, and then I get this exception:
Exception in thread "main" java.nio.channels.NonReadableChannelException
I understand the cause of this exception: I have opened the file in write mode while the code performs a read operation on it.
It seems to me that a new file can't be created when the file is opened in read mode.
I have reproduced what you are seeing (on Linux with Java 17).
As I noted in the comments, the behavior seems to contradict what the javadocs say should happen, but here is what I discovered:
With READ, or with neither READ nor WRITE, a NoSuchFileException is thrown.
With WRITE (and no READ), the file is created but then NonReadableChannelException is thrown.
With both READ and WRITE, it works. At least ... it did for me.
I guess this sort of makes sense. You need READ to read the file and WRITE to create it. And the javadocs state that READ is the default if you don't specify READ, WRITE or APPEND.
But creating an empty file1 with CREATE_NEW and then immediately trying to read it is a use-case that borders on pointless, so it is not entirely surprising that they didn't (clearly) document how to achieve this.
1 - As a comment noted, CREATE_NEW is specified to fail if the file already exists. If you want "create it if it doesn't exist", then you should use CREATE instead.
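For completeness, here is a minimal sketch of the combination that worked for me: CREATE_NEW together with both READ and WRITE (the file name is the placeholder from the question):
import java.nio.ByteBuffer;
import java.nio.channels.SeekableByteChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class CreateNewReadWrite {
    public static void main(String[] args) throws Exception {
        Path filePath = Paths.get("test1.txt"); // placeholder name from the question
        // CREATE_NEW fails if the file already exists; READ and WRITE together
        // allow the channel to be created and then read from.
        try (SeekableByteChannel fChan = Files.newByteChannel(filePath,
                StandardOpenOption.CREATE_NEW,
                StandardOpenOption.READ,
                StandardOpenOption.WRITE)) {
            ByteBuffer mBuf = ByteBuffer.allocate(128);
            int count = fChan.read(mBuf); // returns -1: the freshly created file is empty
            System.out.println("bytes read: " + count);
        }
    }
}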
I am observing an interesting performance degradation when using File.createNewFile() or File.createTempFile(). The following code creates 48 threads, each of which writes about 128MB of data to a different file. If I run the code as is, it takes about 60 seconds on my particular machine. If I run it exactly as is, except that I comment out the f.createNewFile() call, then it takes around 5 seconds.
import java.util.*;
import java.util.concurrent.*;
import java.io.File;
import java.io.BufferedOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;
public final class TestFile implements Runnable {
public void run() {
byte[] b = new byte[128205100];
Arrays.fill(b, (byte)10);
try {
File f = new File("/tmp/test", UUID.randomUUID().toString());
// If I comment the following f.createNewFile() then the code takes
// 5 seconds rather than 60 to execute.
f.createNewFile();
FileOutputStream fOutputStream = new FileOutputStream(f);
BufferedOutputStream fBufStream = new BufferedOutputStream(fOutputStream, 32768);
fBufStream.write(b);
fBufStream.close();
} catch (IOException e) {
System.err.println("Caught IOException: " + e.getMessage());
}
}
public static void main(String[] args) {
final ExecutorService executorPool = Executors.newFixedThreadPool(48);
for (int counter=0; counter < 48; counter++) {
executorPool.execute(new TestFile());
}
try {
executorPool.shutdown();
executorPool.awaitTermination(120, TimeUnit.SECONDS);
} catch (InterruptedException e) {
System.err.println("Caught InterruptedException: " + e.getMessage());
}
}
}
Using jstack, I can see that when running the code above, all the threads end up spending most of their time in close0(). Unfortunately this method is native :-/ Any idea where I can find the source for it?
"Thread-47" #68 prio=5 os_prio=0 tid=0x00007f21001de800 nid=0x4eb4 runnable [0x00007f209edec000]
java.lang.Thread.State: RUNNABLE
at java.io.FileOutputStream.close0(Native Method)
at java.io.FileOutputStream.access$000(FileOutputStream.java:53)
at java.io.FileOutputStream$1.close(FileOutputStream.java:356)
at java.io.FileDescriptor.closeAll(FileDescriptor.java:212)
- locked <0x00000005908ad628> (a java.io.FileDescriptor)
at java.io.FileOutputStream.close(FileOutputStream.java:354)
at java.io.FilterOutputStream.close(FilterOutputStream.java:159)
at TestFile.run(TestFile.java:19)
at java.lang.Thread.run(Thread.java:745)
My guess is that someone somewhere (inside the native close0?) is issuing a sync, but I am not finding it. I have tested this on a few machines, and on some of them I don't see the degradation, so this is possibly configuration- or environment-dependent.
I am running on Ubuntu using Java 8.
Any help would be greatly appreciated. Thanks!
It's very simple. File.createNewFile() searches for a file by that name and either creates a new file if it doesn't exist or fails; you are correctly ignoring the result, as it doesn't matter in the least whether it succeeded or not. new FileOutputStream() searches for any existing file by the same name, deletes it, and creates a new file.
It is therefore evident that File.createNewFile() is a complete waste of time when it is followed by new FileOutputStream(), as it forces the operating system to:
Search for the file.
Create it if it doesn't exist, or fail.
Search for the file.
Delete it if it exists.
Create it.
Clearly (1) and (2) are a waste of time, and force (4) to happen when it may not have needed to.
Solution: don't call File.createNewFile() before new FileOutputStream(...). Or new FileWriter(...) for that matter, or new PrintStream/PrintWriter(...) either. There is nothing to be gained, and time and space to be wasted.
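To illustrate, here is a sketch of the run() body from the question with the createNewFile() call removed and the stream managed by try-with-resources (imports as in the original class; path and buffer size are the ones from the question):
public void run() {
    byte[] b = new byte[128205100];
    Arrays.fill(b, (byte) 10);
    File f = new File("/tmp/test", UUID.randomUUID().toString());
    // new FileOutputStream(f) creates (or truncates) the file itself,
    // so no separate createNewFile() call is needed.
    try (BufferedOutputStream fBufStream =
                 new BufferedOutputStream(new FileOutputStream(f), 32768)) {
        fBufStream.write(b);
    } catch (IOException e) {
        System.err.println("Caught IOException: " + e.getMessage());
    }
}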
I've got wrapper for BufferedReader that reads in files one after the other to create an uninterrupted stream across multiple files:
import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.FileReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.Reader;
import java.util.ArrayList;
import java.util.zip.GZIPInputStream;
/**
* reads in a whole bunch of files such that when one ends it moves to the
* next file.
*
* @author isaak
*
*/
class LogFileStream implements FileStreamInterface{
private ArrayList<String> fileNames;
private BufferedReader br;
private boolean done = false;
/**
*
* @param files an array list of files to read from, order matters.
* @throws IOException
*/
public LogFileStream(ArrayList<String> files) throws IOException {
fileNames = new ArrayList<String>();
for (int i = 0; i < files.size(); i++) {
fileNames.add(files.get(i));
}
setFile();
}
/**
* advances the file that this class is reading from.
*
* @throws IOException
*/
private void setFile() throws IOException {
if (fileNames.size() == 0) {
this.done = true;
return;
}
if (br != null) {
br.close();
}
//if the file is a .gz file do a little extra work.
//otherwise read it in with a standard file Reader
//in either case, set the buffer size to 128kb
if (fileNames.get(0).endsWith(".gz")) {
InputStream fileStream = new FileInputStream(fileNames.get(0));
InputStream gzipStream = new GZIPInputStream(fileStream);
// TODO this probably needs to be modified to work well on any
// platform, UTF-8 is standard for debian/novastar though.
Reader decoder = new InputStreamReader(gzipStream, "UTF-8");
// note that the buffer size is set to 128kb instead of the standard
// 8kb.
br = new BufferedReader(decoder, 131072);
fileNames.remove(0);
} else {
FileReader filereader = new FileReader(fileNames.get(0));
br = new BufferedReader(filereader, 131072);
fileNames.remove(0);
}
}
/**
* returns true if there are more lines available to read.
* @return true if there are more lines available to read.
*/
public boolean hasMore() {
return !done;
}
/**
* Gets the next line from the correct file.
* @return the next line from the files, if there isn't one it returns null
* @throws IOException
*/
public String nextLine() throws IOException {
if (done == true) {
return null;
}
String line = br.readLine();
if (line == null) {
setFile();
return nextLine();
}
return line;
}
}
If I construct this object over a large list of files (300MB worth) and then call nextLine() over and over again in a while loop, printing the result, performance continually degrades until there is no more RAM to use. This happens even if I'm reading files that are ~500kb and using a virtual machine that has 32MB of memory.
I want this code to be able to run on positively massive data-sets (hundreds of gigabytes worth of files) and it is a component of a program that needs to run with 32MB or less of memory.
The files that are used are mostly labeled CSV files, hence the use of Gzip to compress them on disk. This reader needs to handle gzip and uncompressed files.
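For reference, a minimal sketch of the driver loop described above (the file names are placeholders):
import java.io.IOException;
import java.util.ArrayList;

public class LogFileStreamDemo {
    public static void main(String[] args) throws IOException {
        ArrayList<String> files = new ArrayList<>();
        files.add("logs/part-000.csv.gz"); // hypothetical paths
        files.add("logs/part-001.csv");
        LogFileStream stream = new LogFileStream(files);
        while (stream.hasMore()) {
            String line = stream.nextLine();
            if (line != null) {
                System.out.println(line);
            }
        }
    }
}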
Correct me if I'm wrong, but once a file has been read through and its lines have been spat out, the data from that file, the objects related to that file, and everything else should be eligible for garbage collection?
With Java 8, GZIP support has moved from Java code to native zlib usage.
Non-closed GZIP streams leak native memory (I really do mean "native", not "heap", memory) and it is far from easy to diagnose. Depending on how the application uses such streams, the operating system may reach its memory limit quite fast.
The symptom is that the operating system's process memory usage is not consistent with the JVM memory usage reported by Native Memory Tracking: https://docs.oracle.com/javase/8/docs/technotes/guides/vm/nmt-8.html
You will find the full story at http://www.evanjones.ca/java-native-leak-bug.html
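In practice this means every GZIPInputStream (and its underlying native zlib state) should be closed deterministically rather than left for the GC. A minimal sketch, assuming a gzipped file path as a placeholder:
import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;

public class GzipReadExample {
    public static void main(String[] args) throws IOException {
        String path = "data.csv.gz"; // placeholder path
        // try-with-resources guarantees close() runs, releasing the native
        // zlib buffers held by GZIPInputStream even if an exception is thrown.
        try (BufferedReader br = new BufferedReader(
                new InputStreamReader(
                        new GZIPInputStream(new FileInputStream(path)),
                        StandardCharsets.UTF_8))) {
            String line;
            while ((line = br.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}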
The last call to setFile won't close your BufferedReader, so you are leaking resources.
Indeed, in nextLine you read the first file until the end. When the end is reached you call setFile and check whether there are more files to process. However, if there are no more files, you return immediately without closing the last BufferedReader used.
Furthermore, if you don't process all the files, you will have a resource still in use.
There is at least one leak in your code: the method setFile() does not close the last BufferedReader, because the if (fileNames.size() == 0) check comes before the if (br != null) check.
However, this could lead to the described effect only if LogFileStream is instantiated multiple times.
It would also be better to use a LinkedList instead of an ArrayList, as fileNames.remove(0) is more 'expensive' on an ArrayList than on a LinkedList. You could instantiate it using the following single line in the constructor: fileNames = new LinkedList<>(files);
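A sketch of setFile() with both suggestions applied: close the previous reader before the early return, and treat fileNames as a LinkedList (the rest of the class is assumed unchanged):
private void setFile() throws IOException {
    // Close the previous reader first, so the last file's reader is not leaked
    // even when there are no more files to open.
    if (br != null) {
        br.close();
        br = null;
    }
    if (fileNames.isEmpty()) {
        this.done = true;
        return;
    }
    String name = fileNames.remove(0); // O(1) when fileNames is a LinkedList
    if (name.endsWith(".gz")) {
        InputStream gzipStream = new GZIPInputStream(new FileInputStream(name));
        br = new BufferedReader(new InputStreamReader(gzipStream, "UTF-8"), 131072);
    } else {
        br = new BufferedReader(new FileReader(name), 131072);
    }
}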
Every once in a while, you could flush() or close() the BufferedReader. This will clear the reader's contents, so maybe every time you use the setFile() method, flush the reader. Then, just before every call like br = new BufferedReader(decoder, 131072), close() the BufferedReader
The GC starts to work after you close your connection/reader. If you are using Java 7 or above, you may want to consider using the try-with-resources statement, which is a better way to deal with IO operations: https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html
I have tried all the following:
import java.io.BufferedInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.URLConnection;
import java.nio.file.Files;
public class mimeDicom {
public static void main(String[] argvs) throws IOException{
String path = "Image003.dcm";
String[] mime = new String[3];
File file = new File(path);
mime[0] = Files.probeContentType(file.toPath());
mime[1] = URLConnection.guessContentTypeFromName(file.getName());
InputStream is = new BufferedInputStream(new FileInputStream(file));
mime[2] = URLConnection.guessContentTypeFromStream(is);
for(String m: mime)
System.out.println("mime: " + m);
}
}
But the results are still mime: null for each of the methods tried above, and I really want to know whether the file is a DICOM file, as sometimes they don't have an extension or have a different one.
How can I tell from the path whether the file is a DICOM file?
Note: this is not a duplicate of How to accurately determine mime data from a file? because the excellent list of magic numbers there doesn't cover DICOM files, and Apache Tika returns application/octet-stream, which doesn't really identify it as an image and isn't useful, since NIfTI files (among others) get exactly the same MIME type from Tika.
To determine whether a file is DICOM, your best bet is to parse the file yourself and see if it contains the magic bytes "DICM" at file offset 128.
The first 128 bytes are usually 0 but may contain anything.
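A minimal sketch of that check (the path is the one from the question; a file shorter than 132 bytes is treated as non-DICOM):
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.charset.StandardCharsets;

public class DicomCheck {
    // Returns true if the file has the "DICM" marker at offset 128.
    static boolean isDicom(String path) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(path, "r")) {
            if (raf.length() < 132) {
                return false; // too short to contain the 128-byte preamble + "DICM"
            }
            byte[] magic = new byte[4];
            raf.seek(128);       // skip the 128-byte preamble
            raf.readFully(magic);
            return "DICM".equals(new String(magic, StandardCharsets.US_ASCII));
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(isDicom("Image003.dcm"));
    }
}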
I am using the following standalone class to calculate the size of zipped files before zipping.
I am using compression level 0, but I am still getting a difference of a few bytes.
Can you please help me out here so that I can get the exact size?
Quick help will be appreciated.
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.zip.CRC32;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;
import java.util.zip.ZipOutputStream;
import org.apache.commons.io.FilenameUtils;
public class zipcode {
/**
* #param args
*/
public static void main(String[] args) {
// TODO Auto-generated method stub
try {
CRC32 crc = new CRC32();
byte[] b = new byte[1024];
File file = new File("/Users/Lab/Desktop/ABC.xlsx");
FileInputStream in = new FileInputStream(file);
crc.reset();
// out put file
ZipOutputStream out = new ZipOutputStream(new FileOutputStream("/Users/Lab/Desktop/ABC.zip"));
// name the file inside the zip file
ZipEntry entry = new ZipEntry("ABC.xlsx");
entry.setMethod(ZipEntry.DEFLATED);
entry.setCompressedSize(file.length());
entry.setSize(file.length());
entry.setCrc(crc.getValue());
out.setMethod(ZipOutputStream.DEFLATED);
out.setLevel(0);
//entry.setCompressedSize(in.available());
//entry.setSize(in.available());
//entry.setCrc(crc.getValue());
out.putNextEntry(entry);
// buffer size
int count;
while ((count = in.read(b)) > 0) {
System.out.println();
out.write(b, 0, count);
}
out.close();
in.close();
} catch (FileNotFoundException e) {
// TODO Auto-generated catch block
e.printStackTrace();
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}
Firstly, I'm not convinced by the explanation of why you need to do this. There is something wrong with your system design or implementation if you need to know the file size before you start uploading.
Having said that, the solution is basically to create the ZIP file on the server side so that you know its size before you start uploading it to the client:
Write the ZIP file to a temporary file and upload from that.
Write the ZIP file to a buffer in memory and upload from that.
If you don't have either the file space or the memory space on the server side, then:
Create "sink" outputStream that simply counts the bytes that are written to calculate the nominal file size.
Create / write the ZIP file to the sink, and capture the file size.
Open your connection for uploading.
Send the metadata including the file size.
Create / write the ZIP a second time, writing to the socket stream ... or whatever.
These 3 approaches will all allow you to create and send a compressed ZIP, if that is going to help.
If you insist on trying to do this on-the-fly in one pass, then you are going to need to read the ZIP file spec in forensic detail ... and do some messy arithmetic. Helping you is probably beyond the scope of a SO question.
I had to do this myself to write zip results straight to AWS S3, which requires a file size. Unfortunately, I found no way to compute the size of a compressed file without performing the computation on each block of data.
One method is to zip everything twice. The first time, you throw away the data but add up the number of bytes:
long getSize(List<InputStream> files) throws IOException {
final AtomicLong counter = new AtomicLong(0L);
final OutputStream countingStream = new OutputStream() {
@Override
public void write(int b) throws IOException {
counter.incrementAndGet();
}
};
ZipOutputStream zoutcounter = new ZipOutputStream(countingStream);
// Loop through files or input streams here and do compression
// ...
zoutcounter.close();
return counter.get();
}
The alternative is to do the above, creating an entry for each file but not writing any actual data (don't call write()), so that you can compute the total size of just the zip entry headers. This will only work if you turn off compression like this:
entry.setMethod(ZipEntry.STORED);
The size of the zip entries plus the size of each uncompressed file should give you an accurate final size, but only with compression turned off. You don't have to set the CRC values or any of those other fields when computing the zip file size as those entries always have the same size in the final entry header. It's only the name, comment and extra fields on the ZipEntry that vary in size. The other entries like the file size, CRC, etc. take up the same space in the final zip file whether or not they were set.
There is one more solution you can try: guess the size conservatively and add a safety margin, then compress aggressively. Pad the rest of the file until it equals your estimated size; Zip ignores padding. If you implement an output stream that wraps your actual output stream but implements close() as a no-op, you can pass it as the output stream for your ZipOutputStream. After you close the ZipOutputStream instance, write padding to the actual output stream until it reaches your estimated number of bytes, then close it for real. The file will be larger than it could be, but you save the computation of the accurate file size, and the result still benefits from at least some compression.
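A minimal sketch of the non-closing wrapper described above (the class name is illustrative, not from any library):
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Wraps the real output stream but turns close() into flush(), so that
// closing the ZipOutputStream does not close the underlying stream and
// padding can still be written to it afterwards.
class NonClosingOutputStream extends FilterOutputStream {
    NonClosingOutputStream(OutputStream out) {
        super(out);
    }

    @Override
    public void write(byte[] b, int off, int len) throws IOException {
        out.write(b, off, len); // avoid FilterOutputStream's byte-at-a-time copy
    }

    @Override
    public void close() throws IOException {
        flush(); // deliberately do not close the wrapped stream
    }
}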