How to fit two arrays of points to a sine function? - java

I'm working with ImageJ. I have two arrays of points (i.e. it[], cmx[]) and I want to fit them to a sine function. I've been working with CurveFitter but I don't understand it very well, and I'm also having issues with UserFunction.
Is there an easier approach to this? If you have examples I would appreciate it.

The following Groovy script is an example of running curve fitting on three data points:
import ij.measure.CurveFitter;

// Three sample data points.
xData = [0, 1, 2];
yData = [3.1, 5.1, 6.9];
// CurveFitter expects double[] arguments, so coerce the Groovy lists.
cv = new CurveFitter(xData as double[], yData as double[]);
cv.doFit(CurveFitter.STRAIGHT_LINE);
println(cv.getResultString());
I'm not sure whether CurveFitter can fit trigonometric functions; there doesn't seem to be such an option among the built-in fit types. You might try a high-degree polynomial fit instead.
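That said, some ImageJ versions also let you fit a custom equation string via CurveFitter.doCustomFit; I haven't verified this against every release, so treat this Java sketch as an assumption to check against your installed API:

import ij.measure.CurveFitter;

public class SineFitSketch {
    public static void main(String[] args) {
        // Hypothetical sample data roughly following a sine wave.
        double[] x = {0, 1, 2, 3, 4, 5};
        double[] y = {0.1, 1.0, 0.1, -0.9, -0.1, 1.1};
        CurveFitter cf = new CurveFitter(x, y);
        // Assumed signature: doCustomFit(String equation, double[] initialParams, boolean showSettings);
        // a..d are the parameters to be fitted.
        cf.doCustomFit("y = a*sin(b*x + c) + d", null, false);
        System.out.println(cf.getResultString());
    }
}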
You can also ask on the ImageJ forum or mailing list regarding the implementation details of the CurveFitter class.

Related

Java library for Fourier transform like in Mathcad

In mathcad there are two functions: cfft and icfft.
I need the same in my java code. It should work for 1d and 2d arrays.
Does anyone know of any Java libraries for this? I tried to use the Apache Commons Math FastFourierTransformer class, but it requires the length of the data set to be a power of 2.
Check whether JTransforms suits your needs. It is a well-known, very fast implementation.
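For example, a minimal sketch (depending on the JTransforms version, the package is org.jtransforms.fft or edu.emory.mathcs.jtransforms.fft):

import org.jtransforms.fft.DoubleFFT_1D;

public class FftSketch {
    public static void main(String[] args) {
        int n = 5; // note: not a power of 2
        double[] a = new double[2 * n]; // interleaved [re0, im0, re1, im1, ...]
        for (int i = 0; i < n; i++) {
            a[2 * i] = i + 1.0; // real part; imaginary parts stay 0
        }
        DoubleFFT_1D fft = new DoubleFFT_1D(n);
        fft.complexForward(a);       // analogous to Mathcad's cfft
        fft.complexInverse(a, true); // analogous to icfft (true = scale by 1/n)
        System.out.println(java.util.Arrays.toString(a));
    }
}

There is also a DoubleFFT_2D class for the 2d case.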

Three-variable interpolation in java similar to matlab interpn() or interp3()

1. Consider six different vectors float[] xRef, yRef, zRef and float[] xTest, yTest, zTest, representing position grids. For each set of Ref and Test vectors there is a corresponding vector, dataRef and dataTest, holding the data for the respective mesh.
2. My goal is to interpolate the test data, which lie on the grid represented by the Test vectors, onto the Ref vectors. Currently I have Matlab code of the form
[ yMsh, xMsh, zMsh ] = meshgrid ( yRef, xRef, zRef );
finalTestMesh = interp3 ( yTest, xTest, zTest, origTestMesh, yMsh, xMsh, zMsh );
3. My questions: Are there any suitable Java APIs available? If not, I am asking for suggestions for a solution. So far my own attempts either fail and/or are too slow.
I have decided to give Michael Thomas Flanagan's Java Scientific Library a chance. The TriCubicSpline class page is at http://www.ee.ucl.ac.uk/~mflanaga/java/TriCubicSpline.html
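Going by the class documentation on that page, usage looks roughly like this (a sketch; the constructor and interpolate signatures should be checked against the downloaded library):

import flanagan.interpolation.TriCubicSpline;

public class TriCubicSketch {
    public static void main(String[] args) {
        // Grid coordinates must be monotonically increasing.
        double[] x1 = {0, 1, 2};
        double[] x2 = {0, 1, 2};
        double[] x3 = {0, 1, 2};
        // Data on the grid: y[i][j][k] is the value at (x1[i], x2[j], x3[k]).
        double[][][] y = new double[3][3][3];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                for (int k = 0; k < 3; k++)
                    y[i][j][k] = i + j + k; // placeholder data
        TriCubicSpline spline = new TriCubicSpline(x1, x2, x3, y);
        System.out.println(spline.interpolate(0.5, 1.5, 0.25));
    }
}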

Convex optimization, java

I'm looking for a Java library to solve this problem:
We know X is sparse (most of its entries are zero), so X can be recovered by solving this:
variable X;
minimize(norm(X,1)+norm(A*X - Y,2));
This is MATLAB code; matrix A and vector Y are known, and I want the best X.
I saw JOptimizer, but I couldn't use it (it doesn't have good documentation or examples).
What you need is a reasonably good LP Solver.
Possible Java LP Solver Options
Apache Commons (Math) Simplex Solver (a toy sketch follows this list).
See this blog post.
If you have access to CPLEX (not-free), its Java API would work great.
Also, you can look into SuanShu, a Java numerical and statistical library.
lpSolve has a Java wrapper which can do the job.
Finally, JOptimizer is indeed a good option. Not sure if you looked at this example.
Hope at least one of those helps.
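To make the Apache Commons Math option concrete: the L1 term can be encoded as an LP by introducing auxiliary variables t_i >= |x_i| (note the 2-norm term in your objective is not linear, so a pure LP only covers the equality-constrained variant min norm(X,1) s.t. A*X = Y). A toy sketch with commons-math3, solving min |x1|+|x2| s.t. x1 + 2*x2 = 3:

import java.util.ArrayList;
import java.util.List;
import org.apache.commons.math3.optim.MaxIter;
import org.apache.commons.math3.optim.PointValuePair;
import org.apache.commons.math3.optim.linear.LinearConstraint;
import org.apache.commons.math3.optim.linear.LinearConstraintSet;
import org.apache.commons.math3.optim.linear.LinearObjectiveFunction;
import org.apache.commons.math3.optim.linear.Relationship;
import org.apache.commons.math3.optim.linear.SimplexSolver;
import org.apache.commons.math3.optim.nonlinear.scalar.GoalType;

public class L1MinSketch {
    public static void main(String[] args) {
        // Decision vector is (x1, x2, t1, t2); minimize t1 + t2.
        LinearObjectiveFunction f = new LinearObjectiveFunction(new double[] {0, 0, 1, 1}, 0);
        List<LinearConstraint> cons = new ArrayList<>();
        // Equality constraint: x1 + 2*x2 = 3.
        cons.add(new LinearConstraint(new double[] {1, 2, 0, 0}, Relationship.EQ, 3));
        // t_i >= x_i and t_i >= -x_i together make t_i an upper bound on |x_i|.
        cons.add(new LinearConstraint(new double[] {-1, 0, 1, 0}, Relationship.GEQ, 0));
        cons.add(new LinearConstraint(new double[] {1, 0, 1, 0}, Relationship.GEQ, 0));
        cons.add(new LinearConstraint(new double[] {0, -1, 0, 1}, Relationship.GEQ, 0));
        cons.add(new LinearConstraint(new double[] {0, 1, 0, 1}, Relationship.GEQ, 0));
        PointValuePair sol = new SimplexSolver().optimize(
                new MaxIter(100), f, new LinearConstraintSet(cons), GoalType.MINIMIZE);
        System.out.println("x1=" + sol.getPoint()[0] + ", x2=" + sol.getPoint()[1]);
    }
}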
As far as I can tell, you're trying to solve a binary integer program for feasibility:
Ax = b, x in {0,1}.
I'm not completely sure, but it seems that you might be interested in the optimization problem
min 1'*x
s.t. Ax = b, x in {0,1}
where 1 is a vector of 1's of the same dimension as x.
The feasibility problem may be in practice much easier than the optimization problem - it all depends on a particular A and b.
If you can get a license of either CPLEX or Gurobi (if you're an academic), these are excellent integer programming solvers with good Java API's. If you don't have access to these, lpsolve may be a good option.
As far as I can tell, JOptimizer will not solve your problem since your variables are integers (although I have never used JOptimizer).
To solve convex optimization problems in Java you can use the following library: https://github.com/erikerlandson/gibbous

Formula manipulation algorithm

I want to make a program that, when given a formula, can manipulate it to make any value (or, in the case of simultaneous formulas, a common value) the subject of the formula.
For example if given:
a + b = c
d + b = c
The program should therefore say:
b = c - a, d = c - b etc.
I'm not sure whether Java can do this automatically when I give the original formula as input. I am not really interested in solving the equations and getting the value of each variable; I am just interested in returning a manipulated formula.
Please let me know whether I need to write an algorithm for this, and if so, how I would go about doing it. Also, if you have any helpful links, please post them.
Take a look at JavaCC. It's a little daunting at first but it's the right tool for something like this. Plus there are already examples of what you are trying to achieve.
Not sure what exactly you are after, but this problem in its general form is hard. Very hard.
In fact, given a set of "formulas" (axioms) and deduction rules (mathematical equivalence operations), we cannot decide whether a given formula is correct or not. This problem is undecidable.
This issue was first addressed by Hilbert as the Entscheidungsproblem.
I read a book called Fluid Concepts and Creative Analogies by Douglas Hofstadter that talked about this sort of algebraic manipulation: automatically rewriting equations in other ways and combining them with other equations, in an unbounded (yet rule-restricted) number of ways. It was an attempt to prove as-yet-unproven theorems by brute force.
http://en.wikipedia.org/wiki/Fluid_Concepts_and_Creative_Analogies
Douglas Hofstadter's Numbo program attempts to do what you want. He doesn't give you the source, only describes how it works in detail.
It sounds like you want a program to do what high-school students do when they solve algebraic problems: move from a position where you know something, modify it, and combine it with other equations to derive something previously unknown. It takes a strong artificial intelligence to do this. The part of your brain that does this is the neocortex, which does science, and its operating principle is not yet understood.
If you want something that will do what college students do when they manipulate equations in calculus, you'll have to build a fairly strong artificial intelligence.
http://en.wikipedia.org/wiki/Neocortex
When we can do whole-brain emulation of a human neocortex, I will post the answer here.
Yes, you need to write an algorithm to do this kind of computer algebra. At least:
a parser to interpret the input
an algebra model relating the parsed operands ('a', 'b', ...) and operators ('+', '=')
rules implementing each manipulation you wish to support (a minimal sketch of these pieces follows this list)
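To make those three pieces concrete, here is a minimal sketch (all class and method names are invented for illustration, not taken from any library) of an equation tree plus a single rewrite rule; a real system would add a parser and many more rules:

// Hypothetical algebra model: expressions as records forming a tree.
interface Expr {}
record Var(String name) implements Expr {}
record Add(Expr left, Expr right) implements Expr {}
record Sub(Expr left, Expr right) implements Expr {}
record Equation(Expr lhs, Expr rhs) {}

public class Rearrange {
    // Rewrite rule: given (v + rest) = rhs, produce v = rhs - rest.
    static Equation isolate(Equation eq, String var) {
        if (eq.lhs() instanceof Add add) {
            if (add.left() instanceof Var v && v.name().equals(var)) {
                return new Equation(v, new Sub(eq.rhs(), add.right()));
            }
            if (add.right() instanceof Var v && v.name().equals(var)) {
                return new Equation(v, new Sub(eq.rhs(), add.left()));
            }
        }
        throw new UnsupportedOperationException("no applicable rule");
    }

    public static void main(String[] args) {
        // a + b = c  ->  b = c - a
        Equation eq = new Equation(new Add(new Var("a"), new Var("b")), new Var("c"));
        // Prints Equation[lhs=Var[name=b], rhs=Sub[left=Var[name=c], right=Var[name=a]]].
        System.out.println(isolate(eq, "b"));
    }
}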

Weka's PCA is taking too long to run

I am trying to use Weka for feature selection using PCA algorithm.
My original feature space contains ~9000 attributes, in 2700 samples.
I tried to reduce dimensionality of the data using the following code:
// Rank all attributes by their PCA eigenvalues.
AttributeSelection selector = new AttributeSelection();
PrincipalComponents pca = new PrincipalComponents();
Ranker ranker = new Ranker();
selector.setEvaluator(pca);
selector.setSearch(ranker);
Instances instances = SamplesManager.asWekaInstances(trainSet);
try {
    selector.SelectAttributes(instances);
    return SamplesManager.asSamplesList(selector.reduceDimensionality(instances));
} catch (Exception e) {
    ...
}
However, it did not finish running within 12 hours; it is stuck in the method selector.SelectAttributes(instances).
My questions are:
Is such a long computation time expected for Weka's PCA, or am I using PCA wrongly?
If the long run time is expected:
How can I tune the PCA algorithm to run much faster? Can you suggest an alternative (with example code showing how to use it)?
If it is not:
What am I doing wrong? How should I invoke PCA using Weka to get the reduced dimensionality?
Update: The comments confirm my suspicion that it is taking much more time than expected.
I'd like to know: how can I run PCA in Java, using Weka or an alternative library?
Added a bounty for this one.
After digging into the WEKA code, the bottleneck is creating the covariance matrix and then calculating its eigenvectors. Even trying to switch to a sparse matrix implementation (I used COLT's SparseDoubleMatrix2D) did not help.
The solution I came up with was to first reduce the dimensionality using a fast method (I used the information gain ranker and filtering based on document frequency), and then use PCA on the reduced feature set to reduce it further.
The code is more complex, but it essentially comes down to this:
// Stage 1: cheap filter - rank attributes by information gain and keep the top ones.
Ranker ranker = new Ranker();
InfoGainAttributeEval ig = new InfoGainAttributeEval();
Instances instances = SamplesManager.asWekaInstances(trainSet);
ig.buildEvaluator(instances);
int[] firstAttributes = ranker.search(ig, instances);
int[] candidates = Arrays.copyOfRange(firstAttributes, 0, FIRST_SIZE_REDUCTION);
instances = reduceDimensions(instances, candidates); // own helper: keep only the candidate attributes

// Stage 2: PCA on the reduced attribute set.
PrincipalComponents pca = new PrincipalComponents();
pca.setVarianceCovered(var);
ranker = new Ranker();
ranker.setNumToSelect(numFeatures);
AttributeSelection selection = new AttributeSelection();
selection.setEvaluator(pca);
selection.setSearch(ranker);
selection.SelectAttributes(instances);
instances = selection.reduceDimensionality(instances);
However, this method scored worse than using greedy information gain with a ranker when I cross-validated for estimated accuracy.
It looks like you're using the default configuration for the PCA which, judging by the long runtime, is likely doing far more work than your purposes require.
Take a look at the options for PrincipalComponents.
I'm not sure whether -D means it will normalize the data for you or whether you have to do it yourself. You want your data to be normalized (centered about the mean) though, so I would do this manually first.
-R sets the amount of variance you want accounted for. Default is 0.95. The correlation in your data might not be good so try setting it lower to something like 0.8.
-A sets the maximum number of attributes to include. I presume the default is all of them. Again, you should try setting it to something lower.
I suggest first starting out with very lax settings (e.g. -R=0.1 and -A=2) then working your way up to acceptable results.
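If you drive this from code rather than the command line, -R corresponds to a setter on weka.attributeSelection.PrincipalComponents, and you can also pass the flags directly (a sketch; double-check the setter names against your Weka version):

import weka.attributeSelection.PrincipalComponents;

public class PcaConfigSketch {
    public static void main(String[] args) throws Exception {
        PrincipalComponents pca = new PrincipalComponents();
        pca.setVarianceCovered(0.8); // equivalent of -R 0.8
        // Or set everything from the option string instead:
        pca.setOptions(new String[] {"-R", "0.8"});
        System.out.println(String.join(" ", pca.getOptions()));
    }
}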
For the construction of your covariance matrix, you can use the sample covariance formula, which is also what MATLAB uses; it is faster than the Apache library:
C = (1/(m-1)) * Xc' * Xc
whereby X is an m x n matrix (m --> #databaseFaces) and Xc is X with the column means subtracted.
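In Java, that direct computation looks roughly like this (a sketch of the formula above, not tied to any particular library):

public class CovarianceSketch {
    // C = (1/(m-1)) * Xc' * Xc, where Xc is X with its column means removed.
    static double[][] covariance(double[][] x) {
        int m = x.length, n = x[0].length;
        double[] mean = new double[n];
        for (double[] row : x)
            for (int j = 0; j < n; j++)
                mean[j] += row[j] / m;
        double[][] c = new double[n][n];
        for (double[] row : x)
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    c[i][j] += (row[i] - mean[i]) * (row[j] - mean[j]) / (m - 1);
        return c;
    }

    public static void main(String[] args) {
        double[][] x = {{1, 2}, {3, 4}, {5, 8}}; // 3 samples, 2 variables
        System.out.println(java.util.Arrays.deepToString(covariance(x)));
    }
}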
