I'm toying around with creating a pure Java audio mixing library, preferably one that can be used with Android, not entirely practical but definitely an interesting thing to have. I'm sure it's been done already, but just for my own learning experience I am trying to do this with wav files since there are usually no compression models to work around.
Given the nature of java.io, it defines many InputStream-type classes, each implementing operations primarily for reading data from some underlying resource. What you do with the data afterward, dump it or aggregate it in your own address space, etc., is up to you. I want this to be purely Java, i.e. it works on anything (no JNI necessary), is optimized for low-memory configurations, and is simple to extend.
I understand the nature of the RIFF format and how to assemble the PCM sample data, but I'm at a loss for the best way to manage the memory required when inflating the files into memory. Using a FileInputStream, only so much of the data is read at a time, depending on the underlying file system and how the read operations are invoked. FileInputStream doesn't furnish a way to index where in the file you are, so retrieving streams for mixing later is not possible. My goal would be to inflate the RIFF document into Java objects that allow reading and writing of the appropriate regions of the underlying chunk.
If I allocate space for the entire thing, e.g. all PCM sample data, that's roughly 50 MB for an average song. On a typical smartphone or tablet, how likely is it that this will affect overall performance? Would I be better off coming up with my own InputStream type that keeps track of where the chunks are in the InputStream? For files this will result in lots of blocking when fetching PCM samples, but it will still cut down on the overall memory footprint on the system.
I'm not sure I understand all of your question, but I'll answer what I can. Feel free to clarify in the comments, and I'll edit.
Don't keep all the file data in memory for a DAW-type app, or for any audio/video player that expects to play large files. This might work on some devices depending on the memory model, but you are asking for trouble.
Instead, read the required section of the file as needed (i.e. on demand). It's actually a bit more complex than that, because you don't want to read the file in the audio playback thread (you don't want audio playback, which is low-latency, to depend on file I/O, which is high-latency). To get around that, you may have to buffer some of the file in advance. (It depends on whether you are using a callback or a blocking model.)
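For the blocking model, a rough sketch of the read-ahead idea (file name, chunk size, and queue depth are all made up for illustration, and the 44-byte header skip assumes the canonical WAV layout): an I/O thread fills a bounded queue of PCM chunks, and the playback thread only ever takes from that queue.

import java.io.FileInputStream;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Bounded queue of PCM chunks: the I/O thread stays a few chunks ahead of playback.
BlockingQueue<byte[]> pcmChunks = new ArrayBlockingQueue<>(8);

Thread reader = new Thread(() -> {
    try (FileInputStream in = new FileInputStream("song.wav")) {
        in.skip(44); // assumes the canonical 44-byte header; real code should parse the RIFF chunks
        byte[] chunk = new byte[4096];
        int n;
        while ((n = in.read(chunk)) > 0) {
            pcmChunks.put(java.util.Arrays.copyOf(chunk, n)); // blocks when full, bounding memory use
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
});
reader.start();

// Playback loop (blocking model): take() waits only on the in-memory queue, never on the disk.
// byte[] next = pcmChunks.take();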
Using FileInputStream works fine; you'll just have to keep track of where everything is in the file yourself (this involves converting milliseconds or whatever to samples to bytes and taking into account the size of the header[1]). A slightly better option is RandomAccessFile, because it allows you to jump around. A rough sketch of that bookkeeping appears at the end of this answer.
My slides from a talk on programming audio software might help, especially if you are confused by callback vs. blocking: http://blog.bjornroche.com/2011/11/slides-from-fundamentals-of-audio.html
[1] or, more correctly, knowing the offset of the audio data in the file.
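Here is the sketch mentioned above, assuming 16-bit stereo PCM at 44.1 kHz and a 44-byte data offset (both values are illustrative; real code should read them from the fmt and data chunks):

import java.io.RandomAccessFile;

long dataOffset = 44;        // offset of the PCM data in the file (assumed; parse the RIFF chunks to find it)
int sampleRate = 44100;      // from the fmt chunk
int bytesPerFrame = 2 * 2;   // 16-bit samples * 2 channels

long positionMs = 30_000;    // desired start position in milliseconds
long frameIndex = positionMs * sampleRate / 1000;
long byteOffset = dataOffset + frameIndex * bytesPerFrame;

RandomAccessFile raf = new RandomAccessFile("song.wav", "r");
raf.seek(byteOffset);        // jump straight to the PCM frames for that time
byte[] frames = new byte[4096];
int read = raf.read(frames);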
Related
What options are there for processing large files quickly, multiple times?
I have a single file (min 1.5 GB, but can be upwards of 10-15 GB) that needs to be read multiple times - on the order of hundreds to thousands of times. The server has a large amount of RAM (64+ GB) and plenty of processors (24+).
The file will be read sequentially and is read-only. Files are encrypted (sensitive data) on disk. I also use MessagePack to deserialize them into objects during the read process.
I cannot store the objects created from the file in memory - the expansion is too large (a 1.5 GB file turns into a 35 GB in-memory object array). The file can't be stored as a single byte array either (limited by Java's maximum array length of 2^31-1).
My initial thought is to use a memory mapped file, but that has its own set of limitations.
The idea is to get the file off the disk and into memory for processing.
The large volume of data is for a machine learning algorithm, that requires multiple reads. During the calculation of each file pass, there's a considerable amount of heap usage by the algorithm itself, which is unavoidable, hence the requirement to read it multiple times.
The problem you have here is that you cannot mmap() the way the system call of the same name does; the syscall can map regions far larger (limited only by the address space), while FileChannel#map() cannot map more than 2^31-1 bytes, since a MappedByteBuffer is indexed by an int.
However, what you can do is wrap a FileChannel into a class and create several "map ranges" covering the whole file.
I have done "nearly" such a thing except more complicated: largetext. More complicated because I have to do the decoding process to boot, and the text which is loaded must be so into memory, unlike you who reads bytes. Less complicated because I have a define JDK interface to implement and you don't.
You can, however, use nearly the same technique, with Guava and a RangeMap<Long, MappedByteBuffer>.
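A minimal sketch of that idea (the MappedRanges class name and window size are my own; error handling is omitted): map the file in fixed-size windows and register each MappedByteBuffer under its byte range.

import com.google.common.collect.Range;
import com.google.common.collect.RangeMap;
import com.google.common.collect.TreeRangeMap;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.util.Map;

class MappedRanges {
    private static final long WINDOW = 1L << 30; // 1 GiB per mapping, safely under the 2^31-1 limit

    private final RangeMap<Long, MappedByteBuffer> ranges = TreeRangeMap.create();

    MappedRanges(String path) throws Exception {
        try (RandomAccessFile raf = new RandomAccessFile(path, "r");
             FileChannel channel = raf.getChannel()) {
            long size = channel.size();
            for (long start = 0; start < size; start += WINDOW) {
                long length = Math.min(WINDOW, size - start);
                ranges.put(Range.closedOpen(start, start + length),
                           channel.map(FileChannel.MapMode.READ_ONLY, start, length));
            }
        }
    }

    byte get(long offset) {
        Map.Entry<Range<Long>, MappedByteBuffer> entry = ranges.getEntry(offset);
        return entry.getValue().get((int) (offset - entry.getKey().lowerEndpoint()));
    }
}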
In the project above I implement CharSequence; I suggest you implement a LargeByteMapping interface instead, from which you can read whatever parts you want; or, well, whatever suits you. Your main problem will be to define that interface. I suspect what CharSequence does is not what you want.
Meh, I may even have a go at it some day; largetext is quite an exciting project, and this looks like the same kind of thing, except less complicated, ultimately!
One could even imagine a LargeByteMapping implementation where a factory creates mappings with only a small part held in memory and the rest left in the file; such an implementation would also exploit the principle of locality: the most recently queried part of the file would be kept in memory for faster access.
See also here.
EDIT I feel some more explanation is needed here... A MappedByteBuffer will NOT EAT HEAP SPACE!!
It will eat address space only; it is nearly the equivalent of a ByteBuffer.allocateDirect(), except it is backed by a file.
And a very important distinction needs to be made here; all of the text above supposes that you are reading bytes, not characters!
Figure out how to structure the data. Get a good book about NoSQL and find the appropriate database (wide-column, graph, etc.) for your scenario. That's what I'd do. You'd not only have sophisticated query methods on your data, you could also mangle it using distributed map-reduce implementations doing whatever you want. Maybe that's what you want (you even dropped the big-data bomb).
How about creating "a dictionary" as the bridge between your program and the target file? Your program will call the dictionary, and the dictionary will then refer you to the right spot in the big fat file.
I'm not sure if I'm asking this question right, but I want to make some sort of lyrics player using subtitle files. As I also want to make it compatible with larger files (say 10,000 lines), it's not a good idea to load the whole file before you play it. That might cost a lot of time and store unnecessary amounts of data in RAM. That's why I want to load it the way online videos do, for example (they keep a few minutes in RAM and discard what has already been played, all while playing). I believe this is called buffering.
My question is: are there any pre-made I/O classes in Java that allow this sort of thing? I know a lot of classes with "buffer" in their name, but I have little to no idea what they do or how they differ from the classes without "buffer" in their name.
I need to parse (and transform and write) a large binary file (larger than memory) in Java. I also need to do so as efficiently as possible in a single thread. And, finally, the format being read is very structured, so it would be good to have some kind of parser library (so that the code is close to the complex specification).
The amount of lookahead needed for parsing should be small, if that matters.
So my questions are:
How important is NIO vs. IO for a single-threaded, high-volume application?
Are there any good parser libraries for binary data?
How well do parsers support streaming transformations (I want to be able to stream the data being parsed to some output during parsing - I don't want to have to construct an entire parse tree in memory before writing things out)?
On the NIO front my suspicion is that NIO isn't going to help much, as I am likely disk-limited (and since it's a single thread, there's no loss in simply blocking). Also, I suspect IO-based parsers are more common.
Let me try to explain if and how Preon addresses all of the concerns you mention:
I need to parse (and transform and write) a large binary file (larger than memory) in Java.
That's exactly why Preon was created. You want to be able to process the entire file without loading it into memory entirely. Still, the programming model gives you a pointer to a data structure that appears to be entirely in memory. However, Preon will try to load data as lazily as it can.
To explain what that means, imagine that somewhere in your data structure you have a collection of things encoded in a binary representation with a constant size; say every element is encoded in 20 bytes. Then Preon will, first of all, not load that collection into memory at all, and if you're grabbing data beyond that collection, it will never touch that region of your encoded representation. However, if you pick the 300th element of that collection, it will (instead of decoding all elements up to the 300th) calculate the offset for that element and jump there immediately.
From the outside, it is as though you have a reference to a list that is fully populated. From the inside, it only goes out to grab an element of the list if you ask for it. (And forget about it immediately afterward, unless you instruct Preon to do things differently.)
I also need to do so as efficiently as possible in a single thread.
I'm not sure what you mean by efficiently. It could mean efficiently in terms of memory consumption, or efficiently in terms of disk IO, or perhaps you mean it should be really fast. I think it's fair to say that Preon aims to strike a balance between an easy programming model, memory use and a number of other concerns. If you really need to traverse all data in a sequential way, then perhaps there are ways that are more efficient in terms of computational resources, but I think that would come at the cost of "ease of programming".
And, finally, the format being read is very structured, so it would be good to have some kind of parser library (so that the code is close to the complex specification).
The way I implemented support for Java bytecode was to just read the bytecode specification and then map all of the structures mentioned there directly to Java classes with annotations. I think Preon comes pretty close to what you're looking for.
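For illustration, a hedged sketch of that mapping style (the Record layout is invented, and the package and annotation names are as I recall them from Preon's documentation; they may differ between versions):

import org.codehaus.preon.Codec;
import org.codehaus.preon.Codecs;
import org.codehaus.preon.annotation.BoundList;
import org.codehaus.preon.annotation.BoundNumber;

// Each field mirrors a structure from the format specification.
class Record {
    @BoundNumber(size = "16") int width;
    @BoundNumber(size = "16") int height;
    @BoundList(size = "width * height") byte[] pixels; // sized by earlier fields, decoded lazily
}

Codec<Record> codec = Codecs.create(Record.class);
Record record = Codecs.decode(codec, new java.io.File("data.bin"));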
You might also want to check out preon-emitter, since it allows you to generate annotated hexdumps (such as in this example of the hexdump of a Java class file) of your data, a capability that I haven't seen in any other library. (Hint: make sure you hover with your mouse over the hex numbers.)
The same goes for the documentation it generates. The aim has always been to make sure it creates documentation that could be posted to Wikipedia, just like that. It may not be perfect yet, but I'm not unhappy with what it's currently capable of doing. (For an example: this is the documentation generated for Java's class file specification.)
The amount of lookahead needed for parsing should be small, if that matters.
Okay, that's good. In fact, that's even vital for Preon. Preon doesn't support lookahead. It does support looking back, though. (That is, sometimes part of the encoding mechanism is driven by data that was read before. Preon allows you to declare dependencies that point back to data read before.)
Are there any good parser libraries for binary data?
Preon! ;-)
How well do parsers support streaming transformations (I want to be able to stream the data being parsed to some output during parsing - I don't want to have to construct an entire parse tree in memory before writing things out)?
As I outlined above, Preon does not construct the entire data structure in memory before you can start processing it. So, in that sense, you're good. However, there is nothing in Preon supporting transformations as first-class citizens, and its support for encoding is limited.
On the NIO front my suspicion is that NIO isn't going to help much, as I am likely disk-limited (and since it's a single thread, there's no loss in simply blocking). Also, I suspect IO-based parsers are more common.
Preon uses NIO, but only its support for memory-mapped files.
On NIO vs. IO you are right; going with IO should be the right choice - less complexity, stream-oriented, etc.
For a binary parsing library - check out Preon.
Using a memory-mapped file, you can read through it without worrying about your memory, and it's fast.
I think you are correct re NIO vs. IO, unless you have little-endian data, as NIO can read little-endian natively.
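A small sketch of both points (the file name and record layout are made up, and the whole file is assumed to fit in a single mapping, i.e. under 2 GB): map the file, switch the buffer to little-endian, and read values directly.

import java.io.RandomAccessFile;
import java.nio.ByteOrder;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

try (RandomAccessFile raf = new RandomAccessFile("data.bin", "r");
     FileChannel channel = raf.getChannel()) {
    MappedByteBuffer buf = channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
    buf.order(ByteOrder.LITTLE_ENDIAN); // read little-endian natively, no manual byte swapping
    while (buf.remaining() >= 8) {
        int id = buf.getInt();      // assumed record layout: a 4-byte id followed by a 4-byte value
        int value = buf.getInt();
        // process(id, value);
    }
}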
I am not aware of any fast binary parsers; generally you want to call NIO or IO directly.
Memory-mapped files can help with writing from a single thread, as you don't have to flush as you write. (But they can be more cumbersome to use.)
You can stream the data however you like; I don't foresee any problems.
There are several high-quality frameworks that hide the complexity of NIO-based network programming (mina, netty, grizzly, etc.). Are there similar frameworks that simplify NIO-based file-system programming?
For example, as a learning exercise, I would like to implement a disk backed Map based on this (awesome!) article: http://www.javaworld.com/javaworld/jw-01-1999/jw-01-step.html.
No (but...)
But that is because Java's NIO FileChannel and MappedByteBuffer are not nearly as complex or difficult to understand and use as the networking and Selector stuff in java.nio.
Here is an example of creating a disk-backed map (known as a 'mapped byte buffer' in NIO-land) that would be appropriate for your exercise:
File file = new File("/Users/stu/mybigfile.bin");
// A READ_WRITE mapping needs a writable channel; FileInputStream's channel is read-only,
// so use RandomAccessFile in "rw" mode (or FileChannel.open with READ and WRITE).
FileChannel fc = new RandomAccessFile(file, "rw").getChannel();
MappedByteBuffer buf = fc.map(FileChannel.MapMode.READ_WRITE, 0, file.length());
You can access the buffer like any other Buffer. Data moves magically and quickly between disk and memory, all managed by Java and the underlying OS's virtual memory management system. You do have a degree of control of this, though. E.g.: MappedByteBuffer's .force() ('Forces any changes made to this buffer's content to be written to the storage device containing the mapped file.') and .load() ('Loads this buffer's content into physical memory.') I've never needed these personally.
To add to #Stu's comment: it is worth noting that socket connections do not have all their data available at once; instead, you may need to support many slow connections (especially connections which are open but haven't sent any data yet).
For files, however, all the data is available at once, and you typically only need to open a few files at a time to get maximum performance (often one at a time is fine). If you are loading data from multiple drives (rare), from multiple servers (very rare), or over multiple network interfaces (even rarer), you might find that accessing a few files at a time improves performance. Even then the complexity is not high, and you can just create a thread for each file you are loading.
The only occasion where files are complicated is reading log files. This is complicated because the file can grow as you read it: you can reach the end of the file and later find more data. Also, log files can be rotated, meaning the file you had open is no longer the file you want. Even so, this is not very difficult to deal with, and it is a fairly rare requirement.
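A rough sketch of following a growing file (the path and the one-second poll interval are arbitrary; rotation is handled only crudely, by starting over when the file shrinks):

import java.io.RandomAccessFile;

// Follow a file that may grow while we read it.
static void tail(String path) throws Exception {
    long position = 0;
    while (true) {
        try (RandomAccessFile raf = new RandomAccessFile(path, "r")) {
            long length = raf.length();
            if (length < position) {
                position = 0; // file was truncated or rotated: start over
            }
            if (length > position) {
                raf.seek(position);
                String line;
                while ((line = raf.readLine()) != null) {
                    System.out.println(line); // or hand the line to whatever consumes the log
                }
                position = raf.getFilePointer();
            }
        }
        Thread.sleep(1000); // poll for new data once a second
    }
}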
Alright. So I have a very large amount of binary data (let's say, 10GB) distributed over a bunch of files (let's say, 5000) of varying lengths.
I am writing a Java application to process this data, and I wish to institute a good design for the data access. Typically what will happen is this:
One way or another, all the data will be read during the course of processing.
Each file is (typically) read sequentially, requiring only a few kilobytes at a time. However, it is often necessary to have, say, the first few kilobytes of each file simultaneously, or the middle few kilobytes of each file simultaneously, etc.
There are times when the application will want random access to a byte or two here and there.
Currently I am using the RandomAccessFile class to read into byte buffers (and ByteBuffers). My ultimate goal is to encapsulate the data access into some class such that it is fast and I never have to worry about it again. The basic functionality is that I will be asking it to read frames of data from specified files, and I wish to minimize the I/O operations given the considerations above.
Examples for typical access:
Give me the first 10 kilobytes of all my files!
Give me byte 0 through 999 of file F, then give me byte 1 through 1000, then give me 2 through 1001, etc, etc, ...
Give me a megabyte of data from file F starting at such and such byte!
Any suggestions for a good design?
Use Java NIO and MappedByteBuffers, and treat your files as a list of byte arrays. Then let the OS worry about the details of caching, reading, flushing, etc.
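A minimal sketch of that approach (file names are placeholders, no error handling, and each individual file is assumed to fit in one mapping, i.e. under 2 GB): one MappedByteBuffer per file, collected in a list, with the OS paging data in and out as it is touched.

import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

List<String> filePaths = Arrays.asList("part-0001.bin", "part-0002.bin"); // ...all ~5000 files
List<MappedByteBuffer> buffers = new ArrayList<>();
for (String path : filePaths) {
    try (RandomAccessFile raf = new RandomAccessFile(path, "r");
         FileChannel channel = raf.getChannel()) {
        buffers.add(channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size()));
    }
}

// "Give me the first 10 kilobytes of all my files!"
for (MappedByteBuffer buf : buffers) {
    byte[] head = new byte[Math.min(10 * 1024, buf.capacity())];
    buf.duplicate().get(head); // duplicate() keeps the shared buffer's position untouched
}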
#Will
Pretty good results. Quick comparison, reading a large binary file:
Test 1 - Basic sequential read with RandomAccessFile: 2656 ms
Test 2 - Basic sequential read with buffering: 47 ms
Test 3 - Basic sequential read with MappedByteBuffers and further frame-buffering optimization: 16 ms
Wow. You are basically implementing a database from scratch. Is there any possibility of importing the data into an actual RDBMS and just using SQL?
If you do it yourself you will eventually want to implement some sort of caching mechanism, so the data you need comes out of RAM if it is there, and you are reading and writing the files in a lower layer.
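A bare-bones sketch of such a cache (block granularity and capacity are invented for illustration), using LinkedHashMap's access-order mode to evict the least recently used block:

import java.util.LinkedHashMap;
import java.util.Map;

// Cache of file blocks keyed by something like "fileName:blockIndex"; the least recently accessed entry is evicted first.
class BlockCache extends LinkedHashMap<String, byte[]> {
    private final int maxBlocks;

    BlockCache(int maxBlocks) {
        super(16, 0.75f, true); // accessOrder = true turns this into an LRU map
        this.maxBlocks = maxBlocks;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<String, byte[]> eldest) {
        return size() > maxBlocks;
    }
}

// Usage: on a miss, read the block from disk and put() it; hits come straight from RAM.
// BlockCache cache = new BlockCache(1024); // e.g. 1024 blocks of 64 KB, roughly 64 MB of cache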
Of course, this also entails a lot of complex transactional logic to make sure your data stays consistent.
I was going to suggest that you follow up on Eric's database idea and learn how databases manage their buffers—effectively implementing their own virtual memory management.
But as I thought about it more, I concluded that most operating systems already do a better job of implementing file-system caching than you are likely to manage without low-level access in Java.
There is one lesson from database buffer management that you might consider, though. Databases use an understanding of the query plan to optimize the management strategy.
In a relational database, it's often best to evict the most-recently-used block from the cache. For example, a "young" block holding a child record in a join won't be looked at again, while the block containing its parent record is still in use even though it's "older".
Operating system file caches, on the other hand, are optimized to reuse recently used data (and reading ahead of the most recently used data). If your application doesn't fit that pattern, it may be worth managing the cache yourself.
You may want to take a look at an open source, simple object database called jdbm - it has a lot of this kind of thing developed, including ACID capabilities.
I've done a number of contributions to the project, and it would be worth a review of the source code if nothing else to see how we solved many of the same problems you might be working on.
Now, if your data files are not under your control (i.e. you are parsing text files generated by someone else, etc...) then the page-structured type of storage that jdbm uses may not be appropriate for you - but if all of these files are files that you are creating and working with, it may be worth a look.
#Eric
But my queries are going to be much, much simpler than anything I can do with SQL. And wouldn't a database access be much more expensive than a binary data read?
This is to answer the part about minimizing I/O traffic. On the Java side, all you can really do is wrap your input streams in BufferedInputStreams. Aside from that, your operating system will handle other optimizations, like keeping recently read data in the page cache and doing read-ahead on files to speed up sequential reads. There's no point in doing additional buffering in Java (although you'll still need a byte buffer to return the data to the client).
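A tiny sketch of that wrapping plus the hand-back to the caller (file name and buffer sizes are arbitrary):

import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.InputStream;

try (InputStream in = new BufferedInputStream(new FileInputStream("data.bin"), 64 * 1024)) {
    byte[] frame = new byte[8 * 1024];  // the byte buffer handed back to the client
    int read = in.read(frame);          // sequential reads also benefit from the OS's read-ahead
    // hand frame (and read) to the caller
}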
I had someone recommend hadoop (http://hadoop.apache.org) to me just the other day. It looks like it could be pretty nice, and might have some marketplace traction.
I would step back and ask yourself why you are using files as your system of record, and what gains that gives you over using a database. A database certainly gives you the ability to structure your data. Given the SQL standard, it might be more maintainable in the long run.
On the other hand, your file data may not be structured so easily within the constraints of a database. The largest search company in the world :) doesn't use a database for their business processing. See here and here.