Reading a file using a buffer - java

I'm not sure if I'm asking this question right, but I want to make some sort of lyrics player using subtitle files. Since I also want it to work with larger files (say 10,000 lines), it's not a good idea to load the whole file before playing it: that could cost a lot of time and keep an unnecessary amount of data in RAM. That's why I want to load it the way online videos do, for example (they keep a few minutes in RAM and discard what has already been played, all while playing). I believe this is called buffering.
My question is: are there any ready-made I/O classes in Java that allow this sort of thing? I know a lot of classes with "buffer" in their name, but I have little to no idea what they do or how they differ from the classes without "buffer" in their name.
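For reference, the usual starting point is BufferedReader: it wraps another Reader and keeps only a small internal buffer (8 KB of characters by default), so reading line by line never pulls the whole file into memory. A minimal sketch, with a made-up file name:

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    public class LyricsDemo {
        public static void main(String[] args) throws IOException {
            // The reader refills its internal buffer from disk as needed;
            // only one line (plus that buffer) is ever held in memory.
            try (BufferedReader reader = new BufferedReader(new FileReader("lyrics.srt"))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    System.out.println(line); // queue/display the subtitle line here
                }
            }
        }
    }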

Related

How to handle large XML file in Java, around 5 GB

My application needs to use data from an XML file that is up to 5 GB in size. I load the data from the XML into Image classes. The Image class has many attributes, like path, name, MD5 hash, and other information like that.
The 5 GB file holds around 50 million image entries. When I parse the XML, the data is loaded into the app, the same number of Image objects is created, and I perform different operations and calculations on them.
My problem is that when I parse such a huge file, my memory gets eaten up. I guess all the data is being loaded into RAM. Due to the complexity of the code, I'm unable to provide the whole thing. Is there an efficient way to handle such a huge number of objects? I have researched all night without success. Can someone point me in the right direction?
Thanks
You need some sort of pipeline to pass the data on to its actual destination without ever storing it all in memory at once.
I don't know how your code does the parsing, but you don't need to store all of the data in memory.
Here is a very good answer on implementing a reader for large XML files.
If you're using SAX, but you are eating up memory, then you are doing something wrong, and there is no way we can tell you what you are doing wrong without seeing your code.
I suggest using JVisualVM to get a heap dump and see what objects are using up the memory, and then investigating the part of your application that creates those objects.
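As a rough illustration of that streaming approach, a minimal SAX sketch; the element and attribute names are invented, since the question doesn't show the XML layout:

    import java.io.File;

    import javax.xml.parsers.SAXParser;
    import javax.xml.parsers.SAXParserFactory;

    import org.xml.sax.Attributes;
    import org.xml.sax.helpers.DefaultHandler;

    public class ImageSaxDemo {
        public static void main(String[] args) throws Exception {
            SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
            // The handler is invoked element by element as the parser streams
            // through the file, so no document tree is ever built in memory.
            parser.parse(new File("images.xml"), new DefaultHandler() {
                @Override
                public void startElement(String uri, String localName,
                                         String qName, Attributes attrs) {
                    if ("image".equals(qName)) {              // hypothetical element
                        String path = attrs.getValue("path"); // hypothetical attributes
                        String md5 = attrs.getValue("md5");
                        process(path, md5); // handle one record, then let it be GC'd
                    }
                }
            });
        }

        static void process(String path, String md5) { /* per-image work goes here */ }
    }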

Processing a large (GB) file, quickly and multiple times (Java)

What options are there for processing large files quickly, multiple times?
I have a single file (min 1.5 GB, but can be upwards of 10-15 GB) that needs to be read multiple times - on the order of hundreds to thousands of times. The server has a large amount of RAM (64+ GB) and plenty of processors (24+).
The file will be sequential, read-only. Files are encrypted (sensitive data) on disk. I also use MessagePack to deserialize them into objects during the read process.
I cannot store the objects created from the file in memory - too large an expansion (a 1.5 GB file turns into a 35 GB in-memory object array). The file can't be stored as a single byte array either (Java array lengths are limited to 2^31-1).
My initial thought is to use a memory mapped file, but that has its own set of limitations.
The idea is to get the file off the disk and into memory for processing.
The large volume of data is for a machine learning algorithm that requires multiple reads. During the calculation of each file pass, there's a considerable amount of heap usage by the algorithm itself, which is unavoidable, hence the requirement to read it multiple times.
The problem you have here is that you cannot mmap() the way the system call of the same name does; the syscall can map up to 2^64 bytes, while FileChannel#map() cannot map more than 2^31-1 bytes (the int limit) in a single MappedByteBuffer.
However, what you can do is wrap a FileChannel into a class and create several "map ranges" covering the whole file.
I have done "nearly" such a thing, except more complicated: largetext. More complicated because I also have to do the decoding, and the decoded text must be held in memory, unlike in your case, where you read bytes. Less complicated because I had a defined JDK interface to implement and you don't.
You can however use nearly the same technique using Guava and a RangeMap<Long, MappedByteBuffer>.
I implement CharSequence in this project above; I suggest that you implement a LargeByteMapping interface instead, from which you can read whatever parts you want; or, well, whatever suits you. Your main problem will be to define that interface. I suspect what CharSequence does is not what you want.
Meh, I may even have a go at it some day, largetext is quite exciting a project and this looks like the same kind of thing; except less complicated, ultimately!
One could even imagine a LargeByteMapping implementation where a factory would create such mappings with only a small part of that into memory and the rest written to a file; and such an implementation would also use the principle of locality: the latest queried part of the file into memory would be kept into memory for faster access.
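A minimal sketch of the multi-region idea without Guava: a plain array stands in for the RangeMap<Long, MappedByteBuffer>, and the chunk size is an arbitrary choice of mine:

    import java.io.IOException;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;

    // Covers a file larger than 2^31-1 bytes with a series of fixed-size mappings.
    public class LargeByteMapping {
        private static final long CHUNK = 1L << 30; // 1 GiB per region (arbitrary)
        private final MappedByteBuffer[] regions;

        public LargeByteMapping(Path file) throws IOException {
            try (FileChannel channel = FileChannel.open(file, StandardOpenOption.READ)) {
                long size = channel.size();
                regions = new MappedByteBuffer[(int) ((size + CHUNK - 1) / CHUNK)];
                for (int i = 0; i < regions.length; i++) {
                    long pos = i * CHUNK;
                    // Each region is small enough for a single map() call.
                    regions[i] = channel.map(FileChannel.MapMode.READ_ONLY,
                                             pos, Math.min(CHUNK, size - pos));
                }
            } // mappings stay valid after the channel is closed
        }

        // Absolute read: pick the region, then the offset within it.
        public byte getByte(long offset) {
            return regions[(int) (offset / CHUNK)].get((int) (offset % CHUNK));
        }
    }

Multi-byte reads that straddle a region boundary would still need special handling; that is the main work hidden in defining the interface.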
See also here.
EDIT I feel some more explanation is needed here... A MappedByteBuffer will NOT EAT HEAP SPACE!!
It will eat address space only; it is nearly the equivalent of a ByteBuffer.allocateDirect(), except it is backed by a file.
And a very important distinction needs to be made here; all of the text above supposes that you are reading bytes, not characters!
Figure out how to structure the data. Get a good book about NoSQL and find the appropriate database (wide-column, graph, etc.) for your scenario. That's what I'd do. You'd not only have sophisticated query methods for your data, you could also mangle it with distributed map-reduce implementations that do whatever you want. Maybe that's what you want (you even dropped the big-data bomb).
How about creating "a dictionary" as the bridge between your program and the target file? Your program calls the dictionary, and the dictionary refers you to the big fat file.

Inserting to and searching a large amount of data in Java

I am writing a program in Java which tracks data about baseball cards. I am trying to decide how to store the data persistently. I have been leaning towards storing the data in an XML file, but I am unfamiliar with XML APIs. (I have read some online tutorials and started experimenting with the classes in the javax.xml hierarchy.)
The software has two major use cases: the user will be able to add cards and search for cards.
When the user adds a card, I would like to immediately commit the data to persistent storage. Does the standard API allow me to insert data in a random-access way (even appending might be okay)?
When the user searches for cards (for example, by a player's name), I would like to load a list from the storage without necessarily loading the whole file.
My biggest concern is that I need to store data for a large number of unique cards (in the neighborhood of thousands, possibly more). I don't want to store a list of all the cards in memory while the program is open. I haven't run any tests, but I believe that I could easily hit memory constraints.
XML might not be the best solution. However, I want to make it as simple as possible to install, so I am trying to avoid a full-blown database with JDBC or any third-party libraries.
So I guess I'm asking if I'm heading in the right direction and if so, where can I look to learn more about using XML in the way I want. If not, does anyone have suggestions about what other types of storage I could use to accomplish this task?
While I would certainly not discourage the use of XML, it does have some drawbacks in your context.
"Does the standard API allow me to insert data in a random-access way"
Yes, in memory. You will have to save the entire model back to file though.
"When the user searches for cards (for example, by a player's name), I would like to load a list from the storage without necessarily loading the whole file"
Unless you're expecting multiple users to be reading/writing the file, I'd probably pull the entire file/model into memory at load and keep it there until you want to save (doing periodic writes in the background is still a good idea).
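A sketch of the periodic background save, assuming a hypothetical saveModel() that serializes the in-memory model back to its file:

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class AutoSave {
        public static void main(String[] args) {
            ScheduledExecutorService saver = Executors.newSingleThreadScheduledExecutor();
            // Flush the in-memory model to disk once a minute, off the UI thread.
            saver.scheduleWithFixedDelay(AutoSave::saveModel, 1, 1, TimeUnit.MINUTES);
        }

        static void saveModel() { /* hypothetical: write the card model back out here */ }
    }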
I don't want to store a list of all the cards in memory while the program is open. I haven't run any tests, but I believe that I could easily hit memory constraints
That would be my concern too. However, you could use a SAX parser to read the file into a custom model. This would reduce the memory overhead (DOM parsers can be a little greedy with memory).
"However, I want to make it as simple as possible to install, so I am trying to avoid a full-blown database with JDBC"
I'd do some more research in this area. I (personally) use H2 and HSQLDB a lot for storing large amounts of data. These are small, embeddable database systems that don't require any additional installation (just a JAR file linked to the program) or special servers/services.
They make it really easy to build complex searches across the datastore that you would otherwise need to create yourself.
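A minimal sketch with H2 (the JDBC URL and schema are made up; the only "installation" is the single H2 JAR on the classpath):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class CardStoreDemo {
        public static void main(String[] args) throws Exception {
            // "jdbc:h2:./cards" keeps the database in a local file; no server needed.
            try (Connection conn = DriverManager.getConnection("jdbc:h2:./cards")) {
                conn.createStatement().execute(
                    "CREATE TABLE IF NOT EXISTS card (player VARCHAR, print_year INT)");

                // Auto-commit is on by default, so the insert hits disk immediately.
                try (PreparedStatement ins = conn.prepareStatement(
                        "INSERT INTO card (player, print_year) VALUES (?, ?)")) {
                    ins.setString(1, "Honus Wagner");
                    ins.setInt(2, 1909);
                    ins.executeUpdate();
                }

                // Search by name without ever loading the whole data set into memory.
                try (PreparedStatement query = conn.prepareStatement(
                        "SELECT player, print_year FROM card WHERE player = ?")) {
                    query.setString(1, "Honus Wagner");
                    try (ResultSet rs = query.executeQuery()) {
                        while (rs.next()) {
                            System.out.println(rs.getString(1) + " " + rs.getInt(2));
                        }
                    }
                }
            }
        }
    }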
If you were to use XML, I would probably do one of three things
1 - If you're going to maintain the XML document in memory, I'd get familiar with XPath (simple tutorial & Java's API) for searching; see the sketch after this list.
2 - I'd create a "model" of the data, using objects to represent the various nodes and reading it in using SAX. Writing may be a little more tricky.
3 - Use a simple SQL DB (and object model) - it will simplify the overall process (IMHO)
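For option 1, a minimal sketch using the javax.xml.xpath API (the file name and document structure here are invented):

    import java.io.File;

    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.xpath.XPath;
    import javax.xml.xpath.XPathConstants;
    import javax.xml.xpath.XPathFactory;

    import org.w3c.dom.Document;
    import org.w3c.dom.NodeList;

    public class CardSearchDemo {
        public static void main(String[] args) throws Exception {
            // DOM: the whole document lives in memory, then XPath queries it.
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().parse(new File("cards.xml"));
            XPath xpath = XPathFactory.newInstance().newXPath();
            // Hypothetical layout: <cards><card player="...">...</card></cards>
            NodeList hits = (NodeList) xpath.evaluate(
                    "/cards/card[@player='Honus Wagner']", doc, XPathConstants.NODESET);
            System.out.println("Matches: " + hits.getLength());
        }
    }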
Additional
As if I hadn't dumped enough on you ;)
If you really want to use XML (and again, I wouldn't discourage you from it), you might consider having a look at an XML-database-style solution
Apache Xindice (apparently retired)
Or you could have a look at what some other people think:
Use XML as database in Java
Java: XML into a Database, whats the simplest way?
For example ;)

Safely wiping data from Android device: Gutmann or others?

I've been reading a few articles on the Gutmann method of securely wiping data. I understood that the method was designed for hard disks. I want to write my own tiny app that securely wipes data (there are a few on Google Play, I know) from either phone memory or the SD card.
My questions are
Question 1: Gutmann or others?
As for the above observation, is the Gutmann algorithm both effective and efficient? I believe it is indeed effective, because it rewrites the data so many times that a technology like flash memory has no way to remember data from 35 writes ago. I don't know whether it's efficient, though: would fewer random writes achieve the same result?
Question 2: do I really overwrite sectors?
A question that came to my mind is the following: if I overwrite a file in Java, does the Linux kernel write the new data over the old sectors, or does it allocate new sectors on the physical media while deallocating the old ones? You know, this makes all the difference...
Re #2, the link you cited is not relevant. new FileOutputStream() doesn't overwrite the file at all, in the sense you mean. It creates a new one, or appends to an existing one. It is therefore most unlikely to reuse the same disk blocks. However, new RandomAccessFile() in "rw" mode does indeed overwrite the file, and you would reasonably expect it to reuse the same disk blocks, although it is possible to imagine a filesystem that didn't.
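A sketch of one overwrite pass along those lines, using RandomAccessFile in "rw" mode; as said above, whether the same blocks are actually reused is up to the filesystem, and flash wear-levelling makes it unlikely:

    import java.io.IOException;
    import java.io.RandomAccessFile;
    import java.security.SecureRandom;

    public class WipePass {
        public static void wipe(String path) throws IOException {
            SecureRandom random = new SecureRandom();
            byte[] junk = new byte[8192];
            try (RandomAccessFile file = new RandomAccessFile(path, "rw")) {
                long remaining = file.length();
                file.seek(0); // overwrite in place instead of truncating
                while (remaining > 0) {
                    random.nextBytes(junk);
                    int n = (int) Math.min(junk.length, remaining);
                    file.write(junk, 0, n);
                    remaining -= n;
                }
                file.getFD().sync(); // force the data out of the OS cache
            }
        }
    }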

How should I manage memory in mobile audio mixing software?

I'm toying around with creating a pure Java audio mixing library, preferably one that can be used with Android, not entirely practical but definitely an interesting thing to have. I'm sure it's been done already, but just for my own learning experience I am trying to do this with wav files since there are usually no compression models to work around.
Given the nature of java.io, it defines many InputStream-type classes. Each implements operations that are primarily for reading data from some underlying resource. What you do with the data afterward (dump it, aggregate it in your own address space, etc.) is up to you. I want this to be purely Java, i.e. works on anything (no JNI necessary), optimized for low-memory configurations, and simple to extend.
I understand the nature of the RIFF format and how to assemble the PCM sample data, but I'm at a loss for the best way of managing the memory required for inflating the files into memory. With a FileInputStream, only so much of the data is read at a time, depending on the underlying file system and how the read operations are invoked. FileInputStream doesn't furnish a way of indexing where in the file you are, so retrieving streams for mixing later is not possible. My goal would be to inflate the RIFF document into Java objects that allow reading and writing of the appropriate regions of the underlying chunk.
If I allocate space for the entire thing, e.g. all the PCM sample data, that's around 50 MB per average song. On a typical smartphone or tablet, how likely is it that this will affect overall performance? Would I be better off coming up with my own InputStream type that keeps track of where the chunks are in the stream? For files this will mean lots of blocking when fetching PCM samples, but it will still cut down on the overall memory footprint on the system.
I'm not sure I understand all of your question, but I'll answer what I can. Feel free to clarify in the comments, and I'll edit.
Don't keep all the file data in memory for a DAW-type app, or for any file/video player that expects to play large files. This might work on some devices depending on the memory model, but you are asking for trouble.
Instead, read the required section of the file as needed (i.e. on demand). It's actually a bit more complex than that, because you don't want to read the file in the audio playback thread (you don't want audio playback, which is low-latency, to depend on file I/O, which is high-latency). To get around that, you may have to buffer some of the file in advance. (It depends on whether you are using a callback or blocking model.)
Using FileInputStream works fine; you'll just have to keep track of where everything is in the file yourself (this involves converting milliseconds or whatever to samples to bytes and taking into account the size of the header[1]). A slightly better option is RandomAccessFile because it allows you to jump around.
My slides from a talk on programming audio software might help, especially if you are confused by callback vs blocking: http://blog.bjornroche.com/2011/11/slides-from-fundamentals-of-audio.html
[1] or, more correctly, knowing the offset of the audio data in the file.
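A minimal sketch of that milliseconds-to-samples-to-bytes arithmetic using RandomAccessFile; the 44-byte data offset assumes a canonical PCM WAV header, which, per [1], you should really determine by parsing the chunks:

    import java.io.IOException;
    import java.io.RandomAccessFile;

    public class PcmSeekDemo {
        // Assumed format: 44.1 kHz, 16-bit, stereo, canonical 44-byte header.
        static final int SAMPLE_RATE = 44100;
        static final int BYTES_PER_FRAME = 2 * 2;  // 16-bit samples x 2 channels
        static final long DATA_OFFSET = 44;        // see footnote [1] above

        // Reads 'frames' sample frames starting at the given playback time.
        public static byte[] readAt(RandomAccessFile wav, long millis, int frames)
                throws IOException {
            long frameIndex = millis * SAMPLE_RATE / 1000;        // ms -> frames
            wav.seek(DATA_OFFSET + frameIndex * BYTES_PER_FRAME); // frames -> bytes
            byte[] buf = new byte[frames * BYTES_PER_FRAME];
            wav.readFully(buf);
            return buf;
        }
    }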
