Weka's PCA is taking too long to run - java

I am trying to use Weka for feature selection with the PCA algorithm.
My original feature space contains ~9000 attributes across 2700 samples.
I tried to reduce the dimensionality of the data using the following code:
AttributeSelection selector = new AttributeSelection();
PrincipalComponents pca = new PrincipalComponents();
Ranker ranker = new Ranker();
selector.setEvaluator(pca);
selector.setSearch(ranker);
Instances instances = SamplesManager.asWekaInstances(trainSet);
try {
    selector.SelectAttributes(instances);
    return SamplesManager.asSamplesList(selector.reduceDimensionality(instances));
} catch (Exception e) {
    ...
}
However, it did not finish running within 12 hours; it is stuck in the call to selector.SelectAttributes(instances).
My questions are:
Is such a long computation time expected for Weka's PCA, or am I using PCA wrongly?
If the long run time is expected:
How can I tune the PCA algorithm to run much faster? Can you suggest an alternative (plus example code showing how to use it)?
If it is not:
What am I doing wrong? How should I invoke PCA using Weka to get my reduced-dimensionality data?
Update: the comments confirm my suspicion that it is taking much more time than expected.
I'd like to know how I can run PCA in Java, using Weka or an alternative library.
Added a bounty for this one.

After digging into the WEKA code, the bottleneck is creating the covariance matrix and then calculating the eigenvectors of that matrix. Even switching to a sparse matrix implementation (I used COLT's SparseDoubleMatrix2D) did not help.
The solution I came up with was to first reduce the dimensionality using a fast method (I used an information gain ranker plus filtering based on document frequency), and then run PCA on the reduced feature set to reduce it further.
The code is more complex, but it essentially comes down to this:
Ranker ranker = new Ranker();
InfoGainAttributeEval ig = new InfoGainAttributeEval();
Instances instances = SamplesManager.asWekaInstances(trainSet);
ig.buildEvaluator(instances);
// rank all attributes by information gain, keep the top FIRST_SIZE_REDUCTION
int[] firstAttributes = ranker.search(ig, instances);
int[] candidates = Arrays.copyOfRange(firstAttributes, 0, FIRST_SIZE_REDUCTION);
instances = reduceDimensions(instances, candidates);
// run PCA only on the pre-reduced feature set
PrincipalComponents pca = new PrincipalComponents();
pca.setVarianceCovered(var);
ranker = new Ranker();
ranker.setNumToSelect(numFeatures);
AttributeSelection selection = new AttributeSelection();
selection.setEvaluator(pca);
selection.setSearch(ranker);
selection.SelectAttributes(instances);
instances = selection.reduceDimensionality(instances);
However, this method scored worse than plain information gain with a ranker when I cross-validated for estimated accuracy.

It looks like you're using the default configuration for the PCA which, judging by the long runtime, is likely doing far more work than your purposes require.
Take a look at the options for PrincipalComponents.
I'm not sure if -D means it will normalize the data for you or if you have to do it yourself. You want your data to be normalized (centered about the mean) though, so I would do this manually first.
-R sets the amount of variance you want accounted for. The default is 0.95. The correlation in your data might not be good, so try setting it lower, to something like 0.8.
-A sets the maximum number of attributes to include. I presume the default is all of them. Again, you should try setting it to something lower.
I suggest first starting out with very lax settings (e.g. -R 0.1 and -A 2), then working your way up to acceptable results.
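Translated into the Java API, that might look like the following sketch (setOptions works because PrincipalComponents implements Weka's OptionHandler; the flags mirror the ones discussed above):
import weka.attributeSelection.PrincipalComponents;
import weka.core.Utils;

PrincipalComponents pca = new PrincipalComponents();
// very lax starting point: cover only 10% of the variance (-R 0.1)
// and cap the attribute count at 2 (-A 2); tighten from here
pca.setOptions(Utils.splitOptions("-R 0.1 -A 2"));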

For the construction of your covariance matrix, you can use the following formula, which is also what MATLAB uses; it is faster than the Apache library. Here X is your m x n data matrix (m --> #databaseFaces) and mu is the 1 x n row vector of its column means:
C = (X'X - m * mu' * mu) / (m - 1)
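In plain Java, that formula comes down to accumulating X'X once and then correcting by the column means (a sketch assuming rows are samples and columns are features):
// Sample covariance C = (X'X - m * mu' * mu) / (m - 1)
static double[][] covariance(double[][] x) {
    int m = x.length;        // number of samples (rows)
    int n = x[0].length;     // number of features (columns)
    double[] mu = new double[n];
    for (double[] row : x)
        for (int j = 0; j < n; j++)
            mu[j] += row[j];
    for (int j = 0; j < n; j++)
        mu[j] /= m;
    double[][] c = new double[n][n];
    for (double[] row : x)
        for (int i = 0; i < n; i++)
            for (int j = i; j < n; j++)
                c[i][j] += row[i] * row[j];   // accumulate X'X (upper triangle)
    for (int i = 0; i < n; i++)
        for (int j = i; j < n; j++) {
            c[i][j] = (c[i][j] - m * mu[i] * mu[j]) / (m - 1);
            c[j][i] = c[i][j];                // covariance is symmetric
        }
    return c;
}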


Incorrect class prediction using Weka

I am using the WEKA API weka-stable-3.8.1.
I have been trying to use the J48 decision tree (Weka's C4.5 implementation).
My data has around 22 features and a nominal class with 2 possible values: yes or no.
While evaluating with the following code:
Classifier model = (Classifier) weka.core.SerializationHelper.read(trainedModelDestination);
Evaluation evaluation = new Evaluation(trainingInstances);
evaluation.evaluateModel(model, testingInstances);
System.out.println("Number of correct predictions : "+evaluation.correct());
I get all predictions correct.
But when I try these test cases individually using:
for (Instance i : testingInstances) {
    double predictedClassLabel = model.classifyInstance(i);
    System.out.println("predictedClassLabel : " + predictedClassLabel);
}
I always get the same output, i.e. 0.0.
Why is this happening?
If the provided snippet is indeed from your code, you seem to always be classifying the first test instance: testingInstances.firstInstance().
Instead, you may want to loop over the test instances and classify each one:
for (Instance i : testingInstances) {
    double predictedClassLabel = model.classifyInstance(i);
    System.out.println("predictedClassLabel : " + predictedClassLabel);
}
Should have updated much sooner.
Here's how I fixed this:
During the training phase, the model learns from your training set, and while learning it also encounters categorical/nominal features.
Most algorithms require numerical values to work, so to deal with this the algorithm maps each nominal value to a specific numerical value.
Since the algorithm learned this mapping during the training phase, the Instances object holds this information. During the testing phase you have to use the same Instances structure that was created during the training phase. Otherwise, the classifier will not correctly map your nominal values to the values it expects.
Note: this kind of encoding gives biased training results in non-tree-based models, and something like one-hot encoding should be used in such cases.
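A minimal sketch of the fix (the attribute name "feature1" and value "yes" are hypothetical; the key call is Instance.setDataset, which attaches the training header so nominal values resolve to the same indices they had during training):
import weka.core.DenseInstance;
import weka.core.Instance;

// trainingInstances: the same Instances header used to train the model
Instance test = new DenseInstance(trainingInstances.numAttributes());
test.setDataset(trainingInstances);   // reuse the training header
test.setValue(trainingInstances.attribute("feature1"), "yes");   // hypothetical nominal attribute
double predictedClassLabel = model.classifyInstance(test);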

How to fit two arrays of points to a sine function?

I'm working with ImageJ. I have two arrays of points (i.e. it[], cmx[]) and I want to fit them to a sine function. I've been working with CurveFitter but I don't understand it very well. I am also having issues with UserFunction.
Is there an easier approach to this? If you have examples I would appreciate it.
The following Groovy script is an example of running curve fitting on three data points:
import ij.measure.CurveFitter;
xData = [0,1,2];
yData = [3.1, 5.1, 6.9];
cv = new CurveFitter((double[]) xData.toArray(), (double[]) yData.toArray());
cv.doFit(CurveFitter.STRAIGHT_LINE);
println (cv.getResultString());
I'm not sure if CurveFitter allows fitting to trigonometric functions; there doesn't seem to be such an option among the available fitting types. You might try a high-degree polynomial fit instead.
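For example, a polynomial fit in Java with the same class (made-up data; POLY4 is one of CurveFitter's built-in fit types):
import ij.measure.CurveFitter;

double[] xData = {0, 1, 2, 3, 4, 5};
double[] yData = {0.0, 0.9, 0.2, -0.8, -0.3, 0.8};   // roughly sine-shaped sample
CurveFitter cv = new CurveFitter(xData, yData);
cv.doFit(CurveFitter.POLY4);   // 4th-degree polynomial
System.out.println(cv.getResultString());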
You can also ask on the ImageJ forum or mailing list regarding the implementation details of the CurveFitter class.

Solving a nonlinear system in Java (using optim toolbox)

I have a system of nonlinear dynamics which I wish to solve to optimality. I know how to do this in MATLAB, but I wish to implement it in Java, and for some reason I'm lost as to how to do it there.
What I have is following:
z(t) which returns states in a dynamic system.
z(t) = [state1(t),...,state10(t)]
The rate of change of this dynamic system is given by:
z'(t) = f(z(t),u(t),d(t)) = [dstate1(t)/dt,...,dstate10(t)/dt]
where u(t) and d(t) is some external variables that I know the value of.
In addition I have a function, let's denote it g(t), which is defined from a state variable:
g(t) = state4(t)/c1
where c1 is some constant.
Now I wish to solve the following unconstrained nonlinear system numerically:
g(t) - c2 = 0
f(z(t),u(t),0)= 0
where c2 is some constant. The above system can be seen as a simple f'(x) = 0 problem consisting of 11 equations and 11 unknowns, and if I were to solve this in MATLAB I would do the following:
[output] = fsolve(@myDerivatives, someInitialGuess);
I am aware that Java doesn't come with any built-in solvers, so as I see it there are two options for solving the above problem:
Option 1: Do it myself. I could use a numerical method such as Gauss-Newton or similar to solve this system of nonlinear equations. However, I will start with a Java toolbox first, and then move to a numerical method afterwards.
Option 2: Solvers (e.g. Commons Math optim). This is the solution I would like to look into. I have been looking into this toolbox, but I have failed to find an exact example of how to actually use the MultivariateFunction evaluator and the numerical optimizer. Do any of you have any experience doing so?
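For concreteness, this is roughly what I imagine Option 2 looking like with Apache Commons Math 3, turning the root-finding problem into a least-squares minimization (a sketch only; residuals() and someInitialGuess are placeholders for my 11 equations and starting point, and I am not sure this is the intended usage):
import org.apache.commons.math3.analysis.MultivariateFunction;
import org.apache.commons.math3.optim.InitialGuess;
import org.apache.commons.math3.optim.MaxEval;
import org.apache.commons.math3.optim.PointValuePair;
import org.apache.commons.math3.optim.nonlinear.scalar.GoalType;
import org.apache.commons.math3.optim.nonlinear.scalar.ObjectiveFunction;
import org.apache.commons.math3.optim.nonlinear.scalar.noderiv.NelderMeadSimplex;
import org.apache.commons.math3.optim.nonlinear.scalar.noderiv.SimplexOptimizer;

// A root of the 11 equations is a point where the sum of squared residuals is 0.
MultivariateFunction residualNorm = point -> {
    double[] r = residuals(point);   // placeholder for g(t) - c2 and f(z, u, 0)
    double sum = 0;
    for (double ri : r) sum += ri * ri;
    return sum;
};
SimplexOptimizer optimizer = new SimplexOptimizer(1e-10, 1e-12);
PointValuePair solution = optimizer.optimize(
        new MaxEval(100000),
        new ObjectiveFunction(residualNorm),
        GoalType.MINIMIZE,
        new InitialGuess(someInitialGuess),          // same role as in fsolve
        new NelderMeadSimplex(someInitialGuess.length));
double[] z = solution.getPoint();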
Please let me know if you have any ideas or suggestions for solving this problem.
Thanks!
Please compare the following with what your original problem looks like:
A global optimization problem
minimize f(y)
is solved by looking for solutions of the derivatives system
0=grad f(y) or 0=df/dy (partial derivatives)
(the gradient is the column vector containing all partial derivatives), that is, you are computing the "flat" or horizontal points of f(y).
For optimization under constraints
minimize f(y,u) such that g(y,u)=0
one builds the Lagrangian functional
L(y,p,u) = f(y,u)+p*g(y,u) (scalar product)
and then compute the flat points of that system, that is
g(y,u)=0, dL/dy(y,p,u)=0, dL/du(y,p,u)=0
After that, as in the global optimization case, you have to determine the type of each flat point: maximum, minimum, or saddle point.
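As a tiny concrete example: minimize f(y,u) = y^2 + u^2 such that g(y,u) = y + u - 1 = 0. The Lagrangian is L(y,p,u) = y^2 + u^2 + p*(y + u - 1), and the flat-point system
dL/dy = 2y + p = 0, dL/du = 2u + p = 0, g(y,u) = y + u - 1 = 0
gives y = u = 1/2, p = -1, which is the constrained minimum.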
Optimal control problems have the structure (one of several equivalent variants)
minimize integral(0,T) f(t,y(t),u(t)) dt
such that y'(t)=g(t,y(t),u(t)), y(0)=y0 and h(T,y(T))=0
To solve it, one considers the Hamiltonian
H(t,y,p,u)=f(t,y,u)-p*g(t,y,u)
and obtains the transformed problem
y' = -dH/dp = g, (partial derivatives, gradient)
p' = dH/dy,
with boundary conditions
y(0)=y0, p(T)= something with dh/dy(T,y(T))
u(t) realizes the minimum in v -> H(t,y(t),p(t),v)

Fastest way to access a table of data in Java

Basically I am in the midst of a friendly code optimisation battle (to get the fastest program), and I am trying to find a way of accessing a dictionary of hard-coded data that is faster than a multidimensional array.
E.g. to get the value for x:
int x = array[v1][v2][v3];
I have read that nested switch statements in place of an array may possibly be faster. Or is there a way I can access memory more directly, similar to pointers in C? Any ideas appreciated!
My 'competitor' is using a truth table, and the idea is to find something faster!
Many Thanks
Sam
If the array is regular in shape (i.e. MxNxK for some fixed M, N and K), you could try flattening it to achieve better locality of reference:
int[] array = new int[M*N*K];
...
int x = array[v1*N*K + v2*K + v3];
Also, if the entire array doesn't fit in the CPU cache, you might want to examine the patterns in which the array is accessed, to perhaps re-order the indices or change your code to make better use of the caches.
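For instance (same M, N and K as above), iterating with the last index innermost walks the flattened array with stride 1, which the cache and hardware prefetcher handle far better than a strided scan:
long sum = 0;
for (int v1 = 0; v1 < M; v1++)
    for (int v2 = 0; v2 < N; v2++)
        for (int v3 = 0; v3 < K; v3++)
            sum += array[v1*N*K + v2*K + v3];   // stride-1 access in the inner loop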

Java-ML (LibSVM): How can I get the class probabilities?

We are using Java-ML (LibSVM) in order to execute the SVM algorithm on a multi-class problem:
Classifier clas = new LibSVM();
clas.buildClassifier(data);
Dataset dataForClassification = FileHandler.loadDataset(new File(.), 0, ",");
/* Counters for correct and wrong predictions. */
int correct = 0, wrong = 0;
/* Classify all instances and check with the correct class values */
for (Instance inst : dataForClassification) {
    Object predictedClassValue = clas.classify(inst);
    Map<Object, Double> map = clas.classDistribution(inst);
    Object realClassValue = inst.classValue();
    if (predictedClassValue.equals(realClassValue))
        correct++;
    else
        wrong++;
}
The classDistribution() call returns a standard basis vector (meaning all values are 0 except one, which equals 1).
java-ml - http://java-ml.sourceforge.net/
Despite the other answers, it is possible to output probability estimates for SVMs, and LibSVM does do this. However, I'm fairly sure you can't use this feature from Java-ML. The file LibSVM.java only ever refers to the function svm_predict_values and never svm_predict_probabilities. It probably wouldn't be too hard to add this functionality into Java-ML if you felt you really needed it.
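If you really need the probabilities, one route is to drop down to the underlying libsvm Java API yourself (a sketch; problem and instanceNodes are placeholders for your data in libsvm's svm_problem/svm_node[] form):
import libsvm.*;

svm_parameter param = new svm_parameter();
param.svm_type = svm_parameter.C_SVC;
param.kernel_type = svm_parameter.RBF;
param.probability = 1;      // train with probability estimates (like -b 1)
param.C = 1;
param.gamma = 0.5;
param.cache_size = 100;
param.eps = 1e-3;
svm_model model = svm.svm_train(problem, param);   // problem: your svm_problem

double[] probs = new double[svm.svm_get_nr_class(model)];
double predicted = svm.svm_predict_probability(model, instanceNodes, probs);
// probs[i] now holds the estimated probability of class i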
AFAIK, LibSVM is a deterministic classifier, meaning that the only distributions you will see are concentrated on a single class, i.e. a standard basis vector. This is different from a probabilistic classifier such as Naive Bayes, which may give values other than 0.0 and 1.0.
