There are several high-quality frameworks that hide the complexity of NIO-based network programming (Mina, Netty, Grizzly, etc.). Are there similar frameworks that simplify NIO-based file-system programming?
For example, as a learning exercise, I would like to implement a disk backed Map based on this (awesome!) article: http://www.javaworld.com/javaworld/jw-01-1999/jw-01-step.html.
No (but...)
But that is because Java's NIO FileChannel and MappedByteBuffer are not nearly as complex or difficult to understand and use as the networking and Selector machinery in java.nio.
Here is an example of creating a disk-backed map (known as a 'mapped byte buffer' in NIO-land) that would be appropriate for your exercise:
File file = new File("/Users/stu/mybigfile.bin");
FileChannel fc = new RandomAccessFile(file, "rw").getChannel();
MappedByteBuffer buf = fc.map(FileChannel.MapMode.READ_WRITE, 0, file.length());
You can access the buffer like any other Buffer. Data moves magically and quickly between disk and memory, all managed by Java and the underlying OS's virtual memory management system. You do have a degree of control over this, though, e.g. MappedByteBuffer's .force() ('Forces any changes made to this buffer's content to be written to the storage device containing the mapped file.') and .load() ('Loads this buffer's content into physical memory.'). I've never needed these personally.
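A minimal sketch of the round trip (the class and method names here are made up for illustration): write through the mapped buffer, force the change to disk, and read it back through a fresh mapping.

```java
import java.io.File;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MappedRoundTrip {
    // Writes an int at the start of the file through a mapped buffer,
    // forces the change to disk, and reads it back through a second mapping.
    public static int roundTrip(File file, int value) throws Exception {
        try (RandomAccessFile raf = new RandomAccessFile(file, "rw");
             FileChannel fc = raf.getChannel()) {
            // Mapping READ_WRITE past the current size extends the file.
            MappedByteBuffer buf = fc.map(FileChannel.MapMode.READ_WRITE, 0, 4096);
            buf.putInt(0, value);
            buf.force(); // flush the dirty pages to the storage device
            return fc.map(FileChannel.MapMode.READ_ONLY, 0, 4096).getInt(0);
        }
    }

    public static void main(String[] args) throws Exception {
        File file = File.createTempFile("mapped", ".bin");
        file.deleteOnExit();
        System.out.println(roundTrip(file, 42)); // prints 42
    }
}
```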
To add to @Stu's answer: it's worth noting that socket connections do not have all their data available at once; instead, a server may need to support many slow connections (especially connections which are open but on which no data has been sent yet).
However, for files, all the data is available at once, and you typically only need to open a few files at a time to get maximum performance (often one at a time is fine). If you are loading data from multiple drives (rare), from multiple servers (very rare), or over multiple network interfaces (even rarer), you might find that accessing a few files at a time improves performance. Even then the complexity is not high, and you can just create a thread for each file you are loading.
The only occasion where files are complicated is reading log files. This is complicated because the file can grow as you read it: you can reach the end of the file and later find more data. Log files can also be rotated, meaning the file you had open is no longer the file you want. Even so, this is not very difficult to deal with, and it is a fairly rare requirement.
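A minimal sketch of that growing-file case (the class and method names are mine): remember how far you have read, and on each poll read only the bytes appended since then.

```java
import java.io.File;
import java.io.FileWriter;
import java.io.RandomAccessFile;
import java.nio.charset.StandardCharsets;

public class LogTail {
    // Reads everything appended after position `from`; returns "" if nothing new.
    public static String readAppended(File file, long from) throws Exception {
        try (RandomAccessFile raf = new RandomAccessFile(file, "r")) {
            long len = raf.length();
            if (len <= from) return ""; // nothing new (or the file was rotated/truncated)
            raf.seek(from);
            byte[] buf = new byte[(int) (len - from)];
            raf.readFully(buf);
            return new String(buf, StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) throws Exception {
        File log = File.createTempFile("app", ".log");
        log.deleteOnExit();
        try (FileWriter w = new FileWriter(log, true)) { w.write("first\n"); }
        long pos = log.length();                   // we are caught up here
        try (FileWriter w = new FileWriter(log, true)) { w.write("second\n"); }
        System.out.print(readAppended(log, pos));  // prints "second"
    }
}
```

Detecting rotation is then just noticing that the file's length is smaller than the position you remembered (or that its inode/identity changed) and starting over from zero.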
Related
I am working on using Java to read (potentially) large amounts of data from (potentially) large files; the scenario is uncompressed imagery from a file format like HEIF. Larger than 2 GB is likely. Writing is a future need, but this question is scoped to reading.
The HEIF format (which is derived from the ISO Base Media File Format, ISO/IEC 14496-12) consists of variable-sized "boxes": you read the length and kind of box, then do some parsing appropriate to that box. In my design, I'll parse out the small-ish boxes and keep references to the bulk-storage (mdat) offsets, so I can pull the data out for rendering / processing as requested.
I'm considering two options: multiple MappedByteBuffers (since each is 2 GB limited), or a single MemorySegment (from a memory-mapped file). It's not clear to me which is likely to be more efficient. The MappedByteBuffer has all the nice ByteBuffer API, but I need to manage multiple entities. The MemorySegment will be a single entity, but it looks like I'll need to create slice views to get anything I can read from (e.g. a byte array or ByteBuffer), which looks like a different version of the same problem. A secondary benefit of the MemorySegment is that it may lead to a nicer design when I need to use some other non-Java API (like feeding the imagery into a hardware encoder for compression). I also have the skeleton of the MemorySegment version implemented and reading (just with some gross assumptions that I can turn it into a single ByteBuffer).
Are there emerging patterns for efficient reading from a MemorySegment? Failing that, is there something I'm missing in the MemorySegment API?
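For reference, a sketch of the single-MemorySegment approach (this assumes the finalized java.lang.foreign API in Java 22+; the temp-file setup and names are only for illustration): map the whole file once, then take cheap asSlice views and convert only the slice you are about to parse into a ByteBuffer.

```java
import java.lang.foreign.Arena;
import java.lang.foreign.MemorySegment;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class SegmentRead {
    // Maps the whole file, slices [offset, offset + len), and reads its first byte.
    // asSlice is O(1) and copies nothing; only the slice you touch is paged in.
    public static byte firstByteOfSlice(Path path, long offset, long len) throws Exception {
        try (Arena arena = Arena.ofConfined();
             FileChannel ch = FileChannel.open(path, StandardOpenOption.READ)) {
            MemorySegment whole = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size(), arena);
            ByteBuffer view = whole.asSlice(offset, len).asByteBuffer();
            return view.get(0);
        }
    }

    public static void main(String[] args) throws Exception {
        Path p = Files.createTempFile("seg", ".bin");
        Files.write(p, new byte[] {10, 20, 30, 40});
        System.out.println(firstByteOfSlice(p, 2, 2)); // prints 30
    }
}
```

Note that asByteBuffer() only works on slices under 2 GB, so for an mdat larger than that you would slice per read anyway, which is arguably the same bookkeeping as multiple MappedByteBuffers, just centralized in one 64-bit-addressable object.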
What options are there for processing large files quickly, multiple times?
I have a single file (min 1.5 GB, but can be upwards of 10-15 GB) that needs to be read multiple times - on the order of hundreds to thousands of times. The server has a large amount of RAM (64+ GB) and plenty of processors (24+).
The file will be sequential, read-only. Files are encrypted (sensitive data) on disk. I also use MessagePack to deserialize them into objects during the read process.
I cannot store the objects created from the file in memory - too large an expansion (a 1.5 GB file turns into a 35 GB in-memory object array). The file also can't be stored as a single byte array (limited by Java's maximum array length of 2^31-1).
My initial thought is to use a memory mapped file, but that has its own set of limitations.
The idea is to get the file off the disk and into memory for processing.
The large volume of data is for a machine learning algorithm, that requires multiple reads. During the calculation of each file pass, there's a considerable amount of heap usage by the algorithm itself, which is unavoidable, hence the requirement to read it multiple times.
The problem you have here is that Java cannot mmap() the way the system call of the same name does; the syscall can map up to 2^64 bytes, while FileChannel#map() cannot map more than 2^31-1 bytes (a MappedByteBuffer is limited to Integer.MAX_VALUE).
However, what you can do is wrap a FileChannel in a class and create several "map ranges" covering the whole file.
I have done "nearly" such a thing, except more complicated: largetext. More complicated because I had to handle the decoding process to boot, and the text being loaded must end up in memory, unlike in your case, where you read bytes. Less complicated because I had a defined JDK interface to implement and you don't.
You can however use nearly the same technique using Guava and a RangeMap<Long, MappedByteBuffer>.
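If you'd rather not pull in Guava, the same idea can be sketched with a plain list of fixed-size regions (ChunkedFile and its constructor are names I've made up; the region size is tiny here only so the example is easy to follow - in real use it would be close to Integer.MAX_VALUE):

```java
import java.io.File;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.util.ArrayList;
import java.util.List;

public class ChunkedFile {
    private final List<MappedByteBuffer> regions = new ArrayList<>();
    private final long regionSize;

    // Maps the file as consecutive READ_ONLY regions of at most regionSize bytes.
    // Mappings stay valid after the channel is closed.
    public ChunkedFile(File file, long regionSize) throws Exception {
        this.regionSize = regionSize;
        try (RandomAccessFile raf = new RandomAccessFile(file, "r");
             FileChannel fc = raf.getChannel()) {
            for (long pos = 0; pos < fc.size(); pos += regionSize) {
                long size = Math.min(regionSize, fc.size() - pos);
                regions.add(fc.map(FileChannel.MapMode.READ_ONLY, pos, size));
            }
        }
    }

    // Translates a 64-bit position into (region, offset) and reads one byte.
    public byte get(long pos) {
        return regions.get((int) (pos / regionSize)).get((int) (pos % regionSize));
    }

    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("chunked", ".bin");
        f.deleteOnExit();
        Files.write(f.toPath(), new byte[] {0, 1, 2, 3, 4, 5, 6, 7, 8, 9});
        ChunkedFile cf = new ChunkedFile(f, 4); // three regions: 4 + 4 + 2 bytes
        System.out.println(cf.get(7)); // prints 7
    }
}
```

A RangeMap buys you the same position-to-region lookup when the regions are not uniformly sized.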
In the project above I implement CharSequence; I suggest that you implement a LargeByteMapping interface instead, from which you can read whatever parts you want; or, well, whatever suits you. Your main problem will be to define that interface. I suspect what CharSequence does is not what you want.
Meh, I may even have a go at it some day; largetext is quite an exciting project, and this looks like the same kind of thing, except less complicated, ultimately!
One could even imagine a LargeByteMapping implementation where a factory creates mappings with only a small part of the file in memory and the rest on disk; such an implementation would also use the principle of locality: the most recently queried part of the file would be kept in memory for faster access.
See also here.
EDIT I feel some more explanation is needed here... A MappedByteBuffer will NOT EAT HEAP SPACE!!
It will eat address space only; it is nearly the equivalent of a ByteBuffer.allocateDirect(), except it is backed by a file.
And a very important distinction needs to be made here: all of the text above assumes that you are reading bytes, not characters!
Figure out how to structure the data. Get a good book about NoSQL and find the appropriate database (wide-column, graph, etc.) for your scenario. That's what I'd do. You'd not only have sophisticated query methods for your data, but you could also manipulate it using distributed map-reduce implementations doing whatever you want. Maybe that's what you want (you even dropped the big-data bomb).
How about creating a "dictionary" as the bridge between your program and the target file? Your program calls the dictionary, and the dictionary refers you to the big fat file.
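A minimal sketch of that idea (the record layout and all names here are invented): scan the big file once, remember each record's offset and length in a map, then seek straight to a record on demand.

```java
import java.io.File;
import java.io.RandomAccessFile;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

public class OffsetDictionary {
    // offset and length of each record, keyed by record id
    private final Map<String, long[]> index = new HashMap<>();
    private final File file;

    public OffsetDictionary(File file) { this.file = file; }

    public void put(String key, long offset, long length) {
        index.put(key, new long[] {offset, length});
    }

    // Seeks directly to the record instead of scanning the whole file.
    public String read(String key) throws Exception {
        long[] entry = index.get(key);
        try (RandomAccessFile raf = new RandomAccessFile(file, "r")) {
            raf.seek(entry[0]);
            byte[] buf = new byte[(int) entry[1]];
            raf.readFully(buf);
            return new String(buf, StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("records", ".bin");
        f.deleteOnExit();
        java.nio.file.Files.write(f.toPath(), "helloworld".getBytes(StandardCharsets.UTF_8));
        OffsetDictionary dict = new OffsetDictionary(f);
        dict.put("greeting", 0, 5);
        dict.put("place", 5, 5);
        System.out.println(dict.read("place")); // prints "world"
    }
}
```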
I'm writing an application that needs to quickly deserialize millions of messages from a single file.
What the application does is essentially get one message from the file, do some work, and then throw away the message. Each message is composed of ~100 fields (not all of them are always parsed, but I need them all because the user of the application can decide which fields to work on).
At the moment the application consists of a loop that just executes a readDelimitedFrom() call on each iteration.
Is there a way to optimize the problem to better fit this case (splitting into multiple files, etc.)? In addition, due to the number of messages and the size of each message, I currently need to gzip the file (and it is fairly effective in reducing the size, since the field values are quite repetitive); this, though, reduces performance.
If CPU time is your bottleneck (which is unlikely if you are loading directly from HDD with cold cache, but could be the case in other scenarios), then here are some ways you can improve throughput:
If possible, use C++ rather than Java, and reuse the same message object for each iteration of the loop. This reduces the amount of time spent on memory management, as the same memory will be reused each time.
Instead of using readDelimitedFrom(), construct a single CodedInputStream and use it to read multiple messages like so:
// Do this once:
CodedInputStream cis = CodedInputStream.newInstance(input);
// Then read each message like so:
int limit = cis.pushLimit(cis.readRawVarint32());
builder.mergeFrom(cis);
cis.popLimit(limit);
cis.resetSizeCounter();
(A similar approach works in C++.)
Use Snappy or LZ4 compression rather than gzip. These algorithms still get reasonable compression ratios but are optimized for speed. (LZ4 is probably better, though Snappy was developed by Google with Protobufs in mind, so you might want to test both on your data set.)
Consider using Cap'n Proto rather than Protocol Buffers. (EDIT: There is capnproto-java, and also implementations in many other languages.) In the languages it supports, it has been shown to be quite a bit faster. (Disclosure: I am the author of Cap'n Proto. I am also the author of Protocol Buffers v2, which is the version Google released open source.)
I expect that the majority of your CPU time is spent in garbage collection. I would look at replacing the default garbage collector with one better suited to your use case of short-lived objects.
If you do decide to write this in C++, use an Arena to create the first message before parsing: https://developers.google.com/protocol-buffers/docs/reference/arenas
I'm toying around with creating a pure Java audio mixing library, preferably one that can be used with Android; not entirely practical, but definitely an interesting thing to have. I'm sure it's been done already, but just for my own learning experience I am trying to do this with wav files, since there are usually no compression models to work around.
Given the nature of java.io, it defines many InputStream type of classes. Each implements operations that are primarily for reading data from some underlying resource. What you do with data afterward, dump it or aggregate it in your own address space, etc, is up to you. I want this to be purely Java, e.g. works on anything (no JNI necessary), optimized for low memory configurations, and simple to extend.
I understand the nature of the RIFF format and how to assemble the PCM sample data, but I'm at a loss for the best way of managing the memory required for inflating the files into memory. Using a FileInputStream, only so much of the data is read at a time, based on the underlying file system and how the read operations are invoked. FileInputStream doesn't furnish a method of indexing where in the file you are so that retrieving streams for mixing later is not possible. My goal would be to inflate the RIFF document into Java objects that allow for reading and writing of the appropriate regions of the underlying chunk.
If I allocate space for the entire thing, e.g. all the PCM sample data, that's around 50 MB per average song. On a typical smartphone or tablet, how likely is it that this will affect overall performance? Would I be better off coming up with my own InputStream type that keeps track of where the chunks are in the InputStream? For files this will result in lots of blocking when fetching PCM samples, but it will still cut down on the overall memory footprint on the system.
I'm not sure I understand all of your question, but I'll answer what I can. Feel free to clarify in the comments, and I'll edit.
Don't keep all the file data in memory for a DAW-type app, or for any file/video player that expects to play large files. This might work on some devices, depending on the memory model, but you are asking for trouble.
Instead, read the required section of the file as needed (i.e. on demand). It's actually a bit more complex than that, because you don't want to read the file in the audio playback thread (you don't want audio playback, which is low-latency, to depend on file I/O, which is high-latency). To get around that, you may have to buffer some of the file in advance. (It depends on whether you are using a callback or blocking model.)
Using FileInputStream works fine; you'll just have to keep track of where everything is in the file yourself (this involves converting milliseconds or whatever into samples, then into bytes, and taking into account the size of the header[1]). A slightly better option is RandomAccessFile, because it allows you to jump around.
My slides from a talk on programming audio software might help, especially if you are confused by callback vs. blocking: http://blog.bjornroche.com/2011/11/slides-from-fundamentals-of-audio.html
[1] or, more correctly, knowing the offset of the audio data in the file.
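The bookkeeping in [1] boils down to one line of arithmetic. A sketch (assuming a canonical 44-byte WAV header; real files should be parsed for the actual data-chunk offset, and the names here are mine):

```java
public class SampleOffset {
    // Byte offset of frame `sampleIndex` within the file, given where the
    // PCM data starts and the frame size (channels * bytes per sample).
    public static long offsetOf(long dataOffset, long sampleIndex,
                                int channels, int bytesPerSample) {
        return dataOffset + sampleIndex * channels * bytesPerSample;
    }

    public static void main(String[] args) {
        // 16-bit stereo: one frame = 2 channels * 2 bytes = 4 bytes, so
        // frame 1000 starts at 44 + 1000 * 4 = 4044 with a canonical header.
        System.out.println(offsetOf(44, 1000, 2, 2)); // prints 4044
    }
}
```

With a RandomAccessFile you then just call raf.seek(offsetOf(...)) and read the frames you need.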
Alright. So I have a very large amount of binary data (let's say, 10GB) distributed over a bunch of files (let's say, 5000) of varying lengths.
I am writing a Java application to process this data, and I wish to institute a good design for the data access. Typically what will happen is such:
One way or another, all the data will be read during the course of processing.
Each file is (typically) read sequentially, requiring only a few kilobytes at a time. However, it is often necessary to have, say, the first few kilobytes of each file simultaneously, or the middle few kilobytes of each file simultaneously, etc.
There are times when the application will want random access to a byte or two here and there.
Currently I am using the RandomAccessFile class to read into byte buffers (and ByteBuffers). My ultimate goal is to encapsulate the data access into some class such that it is fast and I never have to worry about it again. The basic functionality is that I will be asking it to read frames of data from specified files, and I wish to minimize the I/O operations given the considerations above.
Examples for typical access:
Give me the first 10 kilobytes of all my files!
Give me byte 0 through 999 of file F, then give me byte 1 through 1000, then give me 2 through 1001, etc, etc, ...
Give me a megabyte of data from file F starting at such and such byte!
Any suggestions for a good design?
Use Java NIO and MappedByteBuffers, and treat your files as a list of byte arrays. Then, let the OS worry about the details of caching, read, flushing etc.
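A sketch of that design (all names invented; the files here are tiny stand-ins): map each file once up front, then hand out slices for whatever frame is requested, with no explicit read calls at all.

```java
import java.io.File;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.util.ArrayList;
import java.util.List;

public class FrameReader {
    private final List<MappedByteBuffer> maps = new ArrayList<>();

    public FrameReader(List<File> files) throws Exception {
        for (File f : files) {
            try (RandomAccessFile raf = new RandomAccessFile(f, "r");
                 FileChannel fc = raf.getChannel()) {
                maps.add(fc.map(FileChannel.MapMode.READ_ONLY, 0, fc.size()));
            }
        }
    }

    // "Give me bytes [start, start + length) of file i" - the OS pages the
    // data in on first access and keeps it cached thereafter.
    public ByteBuffer frame(int i, int start, int length) {
        return maps.get(i).slice(start, length); // ByteBuffer.slice(int, int) needs Java 13+
    }

    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("frames", ".bin");
        f.deleteOnExit();
        Files.write(f.toPath(), new byte[] {1, 2, 3, 4, 5, 6});
        FrameReader reader = new FrameReader(List.of(f));
        ByteBuffer frame = reader.frame(0, 2, 3);
        System.out.println(frame.get(0)); // prints 3
    }
}
```

Requests like "the first 10 KB of every file" or a sliding one-byte window then become loops over cheap slice calls rather than I/O operations.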
@Will
Pretty good results. A quick comparison of reading a large binary file:
Test 1 - Basic sequential read with RandomAccessFile.
2656 ms
Test 2 - Basic sequential read with buffering.
47 ms
Test 3 - Basic sequential read with MappedByteBuffers and further frame buffering optimization.
16 ms
Wow. You are basically implementing a database from scratch. Is there any possibility of importing the data into an actual RDBMS and just using SQL?
If you do it yourself you will eventually want to implement some sort of caching mechanism, so the data you need comes out of RAM if it is there, and you are reading and writing the files in a lower layer.
Of course, this also entails a lot of complex transactional logic to make sure your data stays consistent.
I was going to suggest that you follow up on Eric's database idea and learn how databases manage their buffers—effectively implementing their own virtual memory management.
But as I thought about it more, I concluded that most operating systems already do a better job of implementing file-system caching than you are likely to manage without low-level access in Java.
There is one lesson from database buffer management that you might consider, though. Databases use an understanding of the query plan to optimize the management strategy.
In a relational database, it's often best to evict the most-recently-used block from the cache. For example, a "young" block holding a child record in a join won't be looked at again, while the block containing its parent record is still in use even though it's "older".
Operating system file caches, on the other hand, are optimized to reuse recently used data (and reading ahead of the most recently used data). If your application doesn't fit that pattern, it may be worth managing the cache yourself.
You may want to take a look at an open source, simple object database called jdbm - it has a lot of this kind of thing developed, including ACID capabilities.
I've done a number of contributions to the project, and it would be worth a review of the source code if nothing else to see how we solved many of the same problems you might be working on.
Now, if your data files are not under your control (i.e. you are parsing text files generated by someone else, etc...) then the page-structured type of storage that jdbm uses may not be appropriate for you - but if all of these files are files that you are creating and working with, it may be worth a look.
@Eric
But my queries are going to be much, much simpler than anything I can do with SQL. And wouldn't a database access be much more expensive than a binary data read?
This is to answer the part about minimizing I/O traffic. On the Java side, all you can really do is wrap your input streams in a BufferedInputStream. Aside from that, your operating system will handle other optimizations, like keeping recently read data in the page cache and doing read-ahead on files to speed up sequential reads. There's no point in doing additional buffering in Java (although you'll still need a byte buffer to return the data to the client).
I had someone recommend hadoop (http://hadoop.apache.org) to me just the other day. It looks like it could be pretty nice, and might have some marketplace traction.
I would step back and ask yourself why you are using files as your system of record, and what gains that gives you over using a database. A database certainly gives you the ability to structure your data. Given the SQL standard, it might be more maintainable in the long run.
On the other hand, your file data may not be structured so easily within the constraints of a database. The largest search company in the world :) doesn't use a database for their business processing. See here and here.