H5 file reading very slow with Java

I have a Java program that uses the HDF5 (H5) libraries to read a dataset from an HDF5 file with the following properties:
The file's size is 769 MB.
The code that reads the dataset is the following (very simple):
// Open file using the default properties.
fileId = H5.H5Fopen(filepath, HDF5Constants.H5F_ACC_RDONLY, HDF5Constants.H5P_DEFAULT);

// Open dataset using the default properties.
if (fileId >= 0) {
    datasetId = H5.H5Dopen(fileId, "/data/0_u0/20050103", HDF5Constants.H5P_DEFAULT);
}
if (datasetId >= 0) {
    dataSpaceId = H5.H5Dget_space(datasetId);
}

// Get the dimensions of the dataset.
int ndims = -1;
if (dataSpaceId >= 0)
    ndims = H5.H5Sget_simple_extent_ndims(dataSpaceId);

if (ndims > 0) {
    long[] dims = new long[ndims];
    H5.H5Sget_simple_extent_dims(dataSpaceId, dims, null);
    H5.H5Sclose(dataSpaceId);

    int dimX = (int) dims[0];
    int dimY = (int) dims[1];
    Double[][] dsetData = new Double[dimX][dimY];

    H5.H5Dread(datasetId, HDF5Constants.H5T_NATIVE_DOUBLE,
            HDF5Constants.H5S_ALL, HDF5Constants.H5S_ALL,
            HDF5Constants.H5P_DEFAULT, dsetData);
}
And it takes forever (more than 15 minutes; I stopped it after that).
What I don't understand is that I have roughly the same code in Python, and there it takes only a few seconds.
When I debug the Java program and pause it in the middle of execution, it is inside the byteToDouble() function of the H5 library. It is a lot of doubles, but that shouldn't take this much time, right?
Thanks for your help!

I think the issue is that you're reading the data into a 2D Double[][] array. When you do this, the HDF5 implementation is very slow (the bottleneck is probably in HDFArray.arrayify). Try reading the data into a 1D double[] instead.
Also, you are using boxed Double; it would be better to use primitive double.
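Here is a minimal sketch of that change, assuming the dataset really is a 2D block of doubles; dimX, dimY, and datasetId are the variables from the question.
// Read the whole dataset into one flat primitive array instead of Double[][].
// This avoids per-element boxing and the slow HDFArray conversion.
double[] flatData = new double[dimX * dimY];
if (datasetId >= 0) {
    H5.H5Dread(datasetId, HDF5Constants.H5T_NATIVE_DOUBLE,
            HDF5Constants.H5S_ALL, HDF5Constants.H5S_ALL,
            HDF5Constants.H5P_DEFAULT, flatData);
}
// Element (i, j) is flatData[i * dimY + j]; build a double[][] from it
// afterwards only if you really need the 2D view.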

Related

Java data structure for providing random <String><Float> pair based on a large data set at run-time

Is there a smart way to create a 'JSON-like' structure of String-Float pairs? A 'key' is not needed, as data will be grabbed randomly, although an incremented key from 0 to n might aid random retrieval of associated data. Due to the size of the data set (10k pairs of values), I need this to be saved out to an external file.
The reason is how my data will be compiled. To save someone entering data into an array manually, the data will be Excel-based, saved out to CSV, and parsed by a temporary Java program into a file format (for example JSON) that can be added to my project's resources folder. I can then retrieve data from this set without my application having to load a huge array into memory on startup. I can quite easily parse the CSV to fill up an array (or similar) at run-time, but I fear that on a mobile device the memory overhead would be significant.
I have reviewed the answers to Suitable Java data structure for parsing large data file and Data structure options for efficiently storing sets of integer pairs on disk? and have not been able to draw a definitive conclusion.
I have tried saving to a .JSON file, but I'm not sure whether I can request a random entry, and it seems quite cumbersome for holding such a simple structure. Is a TreeMap or Hashtable where I should be focusing my search?
To provide some context: my application will run on Android and needs to reference a definition (an approx. 500-character String) and a conversion factor (a Float). I need to retrieve a random data entry. The user may only make 2 or 3 requests during a session, so I see no point in loading a 10k-element array into memory. Query: will modern Android phones easily munch through this kind of lookup, and is it only an issue if I'm parsing millions of entries at run-time?
I am open to using SQLite to hold my data if it provides the functionality required. Please note that the data set must be derived from a file format that is easy to export from Excel (CSV, TXT, etc.).
Any advice you can give me would be much appreciated.
Here's one possible design that requires a minimal memory footprint while providing fast access:
Start with a data file of comma-separated or tab-separated values so you have line breaks between your data pairs.
Keep an array of long values corresponding to the indexes of the lines in the data file. When you know where the lines are, you can use InputStream.skip() to advance to the desired line. This leverages the fact that skip() is typically quite a bit faster than read() for InputStreams.
You would have some setup code that would run at initialization time to index the lines.
An enhancement would be to only index every nth line so that the array is smaller. So if n is 100 and you're accessing line 1003, you take the 10th index to skip to line 1000, then read past two more lines to get to line 1003. This allows you to tune the size of the array to use less memory.
I thought this was an interesting problem, so I put together some code to test my idea. It uses a sample 4 MB CSV file that I downloaded from a big-data website; the file has about 36K lines of data, and most of the lines are longer than 100 characters.
Here's code snippet for the setup phase:
long start = SystemClock.elapsedRealtime();
int lineCount = 0;
try (InputStream in = getResources().openRawResource(R.raw.fl_insurance_sample)) {
    int index = 0;
    int charCount = 0;
    int cIn;
    while ((cIn = in.read()) != -1) {
        charCount++;
        char ch = (char) cIn; // this was for debugging
        if (ch == '\n' || ch == '\r') {
            lineCount++;
            if (lineCount % MULTIPLE == 0) {
                index = lineCount / MULTIPLE;
                if (index == mLines.length) {
                    mLines = Arrays.copyOf(mLines, mLines.length + 100);
                }
                mLines[index] = charCount;
            }
        }
    }
    mLines = Arrays.copyOf(mLines, index + 1);
} catch (IOException e) {
    Log.e(TAG, "error reading raw resource", e);
}
long elapsed = SystemClock.elapsedRealtime() - start;
I discovered my data file was actually separated by carriage returns rather than line feeds. It must have been created on an Apple computer. Hence the test for '\r' as well as '\n'.
Here's a snippet from the code to access the line:
long start = SystemClock.elapsedRealtime();
int ch;
int line = Integer.parseInt(editText.getText().toString().trim());
if (line < 1 || line >= mLines.length) {
    mTextView.setText("invalid line: " + line + 1);
}
line--;
int index = (line / MULTIPLE);
in.skip(mLines[index]);
int rem = line % MULTIPLE;
while (rem > 0) {
    ch = in.read();
    if (ch == -1) {
        return; // readLine will fail
    } else if (ch == '\n' || ch == '\r') {
        rem--;
    }
}
BufferedReader reader = new BufferedReader(new InputStreamReader(in));
String text = reader.readLine();
long elapsed = SystemClock.elapsedRealtime() - start;
My test program used an EditText so that I could input the line number.
So to give you some idea of performance, the first phase averaged around 1600ms to read through the entire file. I used a MULTIPLE value of 10. Accessing the last record in the file averaged about 30ms.
To get down to 30ms access with only a 29312-byte memory footprint is pretty good, I think.
You can see the sample project on GitHub.

Generate Random Data for Cassandra DB

I have a big data project for school that requires us to build and query an 8-node Cassandra system. The system must contain at least seven terabytes of data, and I have to generate all of this data myself. There is no requirement that the data be "relevant" to the assignment, i.e. each column can just be a random int. That said, it is required that each value be random or based on a random sequence.
So, I wrote a simple Java program to generate random ints. I can generate ~200 MB of random test data in ~120 seconds. Unless my math is off, I think I'm in a pickle:
There are 35,000 units of 200 MB in 7 terabytes.
35,000 * 120 = 4,200,000 seconds
4,200,000 / 3600 ≈ 1167 hours
1167 / 24 ≈ 49 days
So it appears it will take about 49 days to generate all the test data needed. Obviously, this is impractical. I'm looking for suggestions that will increase the rate at which I can generate data.
I've considered / am considering:
setting the replication factor to 8 to reduce the amount of data that needs to be generated, and also running the data generation program on all 8 nodes.
Edit: here is how I'm generating the data.
private void initializeCols() {
    cols = new ArrayList<Generator>();
    cols.add(new IntGenerator(400));
}

public ArrayList<String> generatePage() {
    ArrayList<String> page = new ArrayList<String>();
    String line = "";
    for (int i = 0; i < PAGE_SIZE; i++) {
        line = "";
        for (Generator column : cols) {
            line += column.gen();
        }
        page.add(line);
    }
    return page;
}
Originally I was generating more test-specific data, like phone numbers etc., but then I decided to just generate random ints in order to shave some time off (not much savings). Here is the IntGenerator class.
public IntGenerator(int series) {
    this.series = series;
}

public String gen() {
    String output = "";
    for (int i = 0; i < series; i++) {
        output += Integer.toString(randomInt(1, 1000));
        output += SEPERATOR;
    }
    return output;
}
Use cassandra-stress 2.1, and this tool to generate your YAML.
You'll have random data in C* in minutes, no coding!
As you are performing a lot of concatenation in loops, I highly recommend you check out StringBuilder. It will dramatically increase the speed of your loops. For example,
public String gen() {
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < series; i++) {
        sb.append(Integer.toString(randomInt(1, 1000)));
        sb.append(SEPERATOR);
    }
    return sb.toString();
}
You should do the same in your generatePage method as well.
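For completeness, here is a sketch of generatePage with that change applied; PAGE_SIZE, cols, and Generator are the names from the question.
public ArrayList<String> generatePage() {
    ArrayList<String> page = new ArrayList<String>();
    for (int i = 0; i < PAGE_SIZE; i++) {
        // Build each line in a single StringBuilder instead of
        // repeated String concatenation.
        StringBuilder line = new StringBuilder();
        for (Generator column : cols) {
            line.append(column.gen());
        }
        page.add(line.toString());
    }
    return page;
}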
For higher volume as well as more data realism, you can use third-party test data tools. One of them (RowGen) creates flat files you can load into DataStax; see:
Creating Test Data for Cassandra DataStax

How to parse a huge file line by line, serialize & deserialize a huge object efficiently?

I have a file of around 4-5 GB (nearly a billion lines). From every line of the file, I have to parse an array of integers plus one additional integer and update my custom data structure. My class to hold such information looks like:
class Holder {
    private int[][] arr = new int[1000000000][5]; // assuming that max array size is 5
    private int[] meta = new int[1000000000];
}
A sample line from the file looks like
(1_23_4_55) 99
Every index in the arr & meta corresponds to the line number in the file. From the above line, I extract the array of integers first and then the meta information. In that case,
--pseudo_code--
arr[line_num] = new int[]{1, 23, 4, 55}
meta[line_num]=99
Right now, I am using a BufferedReader object and its readLine method to read each line, and I use character-level operations to parse the integer array and the meta information from each line and populate the Holder instance. But it takes almost half an hour to complete this entire operation.
I used both Java Serialization and Externalizable (writing the meta and arr) to serialize and deserialize this HUGE Holder instance. With both of them, the time to serialize is almost half an hour, and the time to deserialize is also almost half an hour.
I would appreciate your suggestions on dealing with this kind of problem, and would definitely love to hear your part of the story, if any.
P.S. Main memory is not a problem; I have almost 50 GB of RAM on my machine. I have also increased the BufferedReader size to 40 MB (of course, I could increase this up to 100 MB, considering that disk access takes approx. 100 MB/sec). Even cores and CPU are not a problem.
EDIT I
The code that I am using to do this task is provided below (after anonymizing a few details):
public class BigFileParser {

    private int parsePositiveInt(final String s) {
        int num = 0;
        int sign = -1;
        final int len = s.length();
        final char ch = s.charAt(0);
        if (ch == '-')
            sign = 1;
        else
            num = '0' - ch;
        int i = 1;
        while (i < len)
            num = num * 10 + '0' - s.charAt(i++);
        return sign * num;
    }

    private void loadBigFile() {
        long startTime = System.nanoTime();
        Holder holder = new Holder();
        String line;
        try {
            Reader fReader = new FileReader("/path/to/BIG/file");
            // 40 MB buffer size
            BufferedReader bufferedReader = new BufferedReader(fReader, 40960);
            String tempTerm;
            int i, meta, ascii, len;
            boolean consumeNextInteger;
            // GNU Trove primitive int array list
            TIntArrayList arr;
            char c;
            while ((line = bufferedReader.readLine()) != null) {
                consumeNextInteger = true;
                tempTerm = "";
                arr = new TIntArrayList(5);
                for (i = 0, len = line.length(); i < len; i++) {
                    c = line.charAt(i);
                    ascii = c - 0;
                    // 95 is the ascii value of _ char
                    if (consumeNextInteger && ascii == 95) {
                        arr.add(parsePositiveInt(tempTerm));
                        tempTerm = "";
                    } else if (ascii >= 48 && ascii <= 57) { // '0' - '9'
                        tempTerm += c;
                    } else if (ascii == 9) { // '\t'
                        arr.add(parsePositiveInt(tempTerm));
                        consumeNextInteger = false;
                        tempTerm = "";
                    }
                }
                meta = parsePositiveInt(tempTerm);
                holder.update(arr, meta);
            }
            bufferedReader.close();
            long endTime = System.nanoTime();
            System.out.println("#time -> " + (endTime - startTime) * 1.0
                    / 1000000000 + " seconds");
        } catch (IOException exp) {
            exp.printStackTrace();
        }
    }
}
public class Holder {
    private static final int SIZE = 500000000;
    private TIntArrayList[] arrs;
    private TIntArrayList metas;
    private int idx;

    public Holder() {
        arrs = new TIntArrayList[SIZE];
        metas = new TIntArrayList(SIZE);
        idx = 0;
    }

    public void update(TIntArrayList arr, int meta) {
        arrs[idx] = arr;
        metas.add(meta);
        idx++;
    }
}
It sounds like the time taken for file I/O is the main limiting factor, given that serialization (binary format) and your own custom format take about the same time.
Therefore, the best thing you can do is reduce the size of the file. If your numbers are generally small, you could get a huge boost from Google protocol buffers, which generally encode small integers in one or two bytes.
Or, if you know that all your numbers are in the 0-255 range, you could use a byte[] rather than an int[] and cut the size (and hence the load time) to a quarter of what it is now, assuming you go back to serialization or just write to a ByteChannel.
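For illustration, here is a minimal sketch of the byte[] route under that assumption (every value fits in 0..255, packed into one flat byte[]); the file path is a placeholder and how you pack your 5-wide rows into the array is up to you.
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

class ByteDump {
    // Write the packed bytes straight through a FileChannel:
    // one quarter of the size of the equivalent int[] data.
    static void write(byte[] packed, String path) throws IOException {
        try (FileChannel ch = FileChannel.open(Paths.get(path),
                StandardOpenOption.CREATE, StandardOpenOption.WRITE,
                StandardOpenOption.TRUNCATE_EXISTING)) {
            ch.write(ByteBuffer.wrap(packed));
        }
    }

    // Load the whole file back in one pass.
    static byte[] read(String path) throws IOException {
        try (FileChannel ch = FileChannel.open(Paths.get(path),
                StandardOpenOption.READ)) {
            ByteBuffer buf = ByteBuffer.allocate((int) ch.size());
            while (buf.hasRemaining() && ch.read(buf) != -1) { }
            return buf.array();
        }
    }
}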
It simply can't take that long. You're working with some 6e9 ints, which means 24 GB. Writing 24 GB to the disk takes some time, but nothing like half an hour.
I'd put all the data in a single one-dimensional array and access it via methods like int getArr(int row, int col) which transform row and col into a single index. Depending on how the array is accessed (usually row-wise or usually column-wise), this index would be computed as N * row + col or N * col + row to maximize locality. I'd also store meta in the same array.
Writing out a single huge int[] should be pretty fast, surely not half an hour.
Because of the data volume, the above doesn't work as-is, since you can't have an array with 6e9 entries. But you can use a couple of big arrays instead, and all of the above still applies (compute a long index from row and col and split it into two ints for accessing the 2D array).
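Here is a minimal sketch of that layout; the chunk size and the row width of 5 are illustrative assumptions, not anything prescribed by your data.
class FlatHolder {
    private static final int COLS = 5;          // row width from the question
    private static final int CHUNK = 1 << 27;   // ints per backing array (~512 MB each)
    private final int[][] chunks;               // several big arrays instead of one 6e9-entry array

    FlatHolder(long rows) {
        long cells = rows * COLS;
        chunks = new int[(int) ((cells + CHUNK - 1) / CHUNK)][];
        for (int i = 0; i < chunks.length; i++) {
            long remaining = cells - (long) i * CHUNK;
            chunks[i] = new int[(int) Math.min(CHUNK, remaining)];
        }
    }

    // Row-major layout: best locality when a row's values are read together.
    int get(long row, int col) {
        long idx = row * COLS + col;
        return chunks[(int) (idx / CHUNK)][(int) (idx % CHUNK)];
    }

    void set(long row, int col, int value) {
        long idx = row * COLS + col;
        chunks[(int) (idx / CHUNK)][(int) (idx % CHUNK)] = value;
    }
}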
Make sure you aren't swapping. Swapping is the most probable reason for the slow speed I can think of.
There are several alternative Java file I/O libraries. This article is a little old, but it gives an overview that's still generally valid. He's reading about 300 MB per second with a 6-year-old Mac, so for 4 GB you have under 15 seconds of read time. Of course, my experience is that Mac I/O channels are very good; YMMV if you have a cheap PC.
Note that there is no advantage to a buffer size above 4K or so. In fact, you're more likely to cause thrashing with a big buffer, so don't do that.
The implication is that parsing the characters into the data you need is the bottleneck.
I have found in other applications that reading into a block of bytes and writing C-like code to extract what I need is faster than built-in Java mechanisms like split and regular expressions.
If that still isn't fast enough, you'd have to fall back to a native C extension.
If you randomly pause it, you will probably see that the bulk of the time goes into parsing the integers and/or all the new-ing, as in new int[]{1, 23, 4, 55}. You should be able to allocate the memory once and stick numbers into it at better than I/O speed if you code it carefully.
But there's another way: why is the file in ASCII?
If it were in binary, you could just slurp it up.
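For illustration only, here is roughly what slurping a binary file of 32-bit ints could look like with a memory-mapped FileChannel; the path, byte order, and single destination array are placeholder assumptions (for the full 6e9-int data set the destination would itself need to be chunked as in the earlier sketch).
import java.io.IOException;
import java.nio.ByteOrder;
import java.nio.IntBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

class BinarySlurp {
    // Map the file in 1 GB windows (a single mapping is limited to 2 GB)
    // and bulk-copy the ints into dest.
    static void load(String path, int[] dest) throws IOException {
        try (FileChannel ch = FileChannel.open(Paths.get(path), StandardOpenOption.READ)) {
            long pos = 0, size = ch.size();
            int offset = 0;
            while (pos < size) {
                long len = Math.min(1L << 30, size - pos); // multiple of 4 bytes
                IntBuffer ints = ch.map(FileChannel.MapMode.READ_ONLY, pos, len)
                                   .order(ByteOrder.BIG_ENDIAN)
                                   .asIntBuffer();
                int n = ints.remaining();
                ints.get(dest, offset, n); // bulk copy, no per-value parsing
                offset += n;
                pos += len;
            }
        }
    }
}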

Extracting Values used for Normalization in Weka Multilayer Perceptron

I have a machine learning scheme in which I am using the Java classes from Weka to implement machine learning in a MATLAB script. I then upload the classifier's model to a database, since I need to perform the classification on a different machine in a different language (Objective-C). The evaluation of the network was fairly straightforward to program, but I need the values that Weka used to normalize the data set before training, so I can use them in the evaluation of the network later. Does anyone know how to get the normalization factors that Weka uses when training a Multilayer Perceptron network? I would prefer the answer to be in Java.
After some digging through the Weka source code and documentation, this is what I've come up with. Even though there is a filter in Weka called Normalize, the Multilayer Perceptron doesn't use it; instead, it uses a bit of internal code that looks like this:
m_attributeRanges = new double[inst.numAttributes()];
m_attributeBases = new double[inst.numAttributes()];
for (int noa = 0; noa < inst.numAttributes(); noa++) {
    min = Double.POSITIVE_INFINITY;
    max = Double.NEGATIVE_INFINITY;
    for (int i = 0; i < inst.numInstances(); i++) {
        if (!inst.instance(i).isMissing(noa)) {
            value = inst.instance(i).value(noa);
            if (value < min) {
                min = value;
            }
            if (value > max) {
                max = value;
            }
        }
    }
    m_attributeRanges[noa] = (max - min) / 2;
    m_attributeBases[noa] = (max + min) / 2;
    if (noa != inst.classIndex() && m_normalizeAttributes) {
        for (int i = 0; i < inst.numInstances(); i++) {
            if (m_attributeRanges[noa] != 0) {
                inst.instance(i).setValue(noa, (inst.instance(i).value(noa)
                        - m_attributeBases[noa]) / m_attributeRanges[noa]);
            } else {
                inst.instance(i).setValue(noa, inst.instance(i).value(noa)
                        - m_attributeBases[noa]);
            }
        }
    }
}
So the only values I should need to transmit to the other system in order to evaluate the network are the per-attribute min and max. Luckily for me, there turned out to be a method on the weka.filters.unsupervised.attribute.Normalize filter that returns double arrays of the mins and maxes for a processed dataset. All I had to do then was tell the Multilayer Perceptron not to normalize my data automatically, and to process the data separately with the filter so I could extract the mins and maxes to send to the database along with the weights and everything else.
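Roughly, that extraction step could look like the sketch below. It assumes a reasonably recent Weka in which the Normalize filter exposes getMinArray()/getMaxArray() and MultilayerPerceptron has setNormalizeAttributes(); the file path is a placeholder, so check these calls against your Weka version.
import weka.classifiers.functions.MultilayerPerceptron;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;
import weka.filters.Filter;
import weka.filters.unsupervised.attribute.Normalize;

public class ExtractNormalization {
    public static void main(String[] args) throws Exception {
        Instances data = DataSource.read("train.arff"); // placeholder path
        data.setClassIndex(data.numAttributes() - 1);

        // Normalize the data ourselves so the scaling factors are visible.
        Normalize norm = new Normalize();
        norm.setInputFormat(data);
        Instances scaled = Filter.useFilter(data, norm);

        double[] mins = norm.getMinArray(); // per-attribute minimum
        double[] maxs = norm.getMaxArray(); // per-attribute maximum
        // ... store mins/maxs alongside the network weights ...

        // Tell the network not to normalize again.
        MultilayerPerceptron mlp = new MultilayerPerceptron();
        mlp.setNormalizeAttributes(false);
        mlp.buildClassifier(scaled);
    }
}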

Java FileInputStream reading without .read()

I ran into a weird problem, and I was wondering if anyone has an idea what the cause could be. I'm reading in a file (a small exe of 472 KB) with FileInputStream. I plan to send the file through an RMI connection, and I had the idea of showing the upload percentage based on how much I have already sent compared to the overall length of the file.
First I tried it out locally, and I couldn't get it to work. Here is an example of what I was doing.
FileInputStream fileData = new FileInputStream(file);
reads = new ArrayList<Integer>();
buffers = new ArrayList<byte[]>();
int i = 0;
while ((read = fileData.read(buffer)) > 0) {
    System.out.println("Run : " + (i + 1));
    outstreamA.write(buffer, 0, read);
    reads.add(read);
    buffers.add(buffer);
    outstreamB.write(this.buffers.get(i), 0, this.reads.get(i));
    i = i + 1;
}
These two FileOutputStreams create two files (the same file, just with different names), and that works fine. However, when I'm not using fileData.read() but any other for/while loop, it just doesn't work. It creates the exact same file (the length is exactly the same), but Windows cannot run the exe; I get an error message:
"The version of this file is not compatible with the version of Windows you're running...".
This is how I tried it:
//for (int i = 0; i < buffers.size(); ++i) {
i = 0;
//while ((read = fileData2.read(buffer)) > 0) {
while (i < size) {
    System.out.println("Run#2 : " + (i + 1));
    outstreamC.write(this.buffers.get(i), 0, this.reads.get(i));
    i = i + 1;
}
fileData2 is the same as fileData. If I work with fileData2.read(buffer), outstreamC creates a working file as well.
It doesn't matter whether I run the for loop up to the list's size or up to "size", which equals the number of times I entered the first while loop. Something is missing, and I cannot figure it out.
The weird thing is that outstreamB creates a working file, yet outstreamC does not, even though they work with the exact same items.
Originally I was planning to pass "read" and "buffer" over the RMI connection each time I entered the first while loop, and put everything together on the other side after all the parts arrived, but now my plan is kind of dead. Does anyone have an idea how I could solve this, or achieve something similar so I can send files through RMI?
Best regards,
Mihaly
Your code can never work. You are reading into the same buffer repeatedly and adding that same buffer object to the list, so the list ends up containing several references to one buffer holding the last data you read. You would need to allocate a new buffer (or copy the bytes) every time around the loop.
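A minimal sketch of the fix, reusing the variables from the question: copy the bytes that were actually read before adding them to the list (requires java.util.Arrays).
while ((read = fileData.read(buffer)) > 0) {
    outstreamA.write(buffer, 0, read);
    reads.add(read);
    // Store a copy of just the bytes read in this iteration,
    // not the shared buffer reference.
    buffers.add(Arrays.copyOf(buffer, read));
}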
