Fastest linear algebra library in terms of Cholesky factorization - java

I'd really appreciate it if any of you could point me towards the most optimized and computationally quick linear algebra library for Cholesky factorization.
So far I've been using the Apache Commons Math library, but perhaps there are more robust and better-optimized options already available.
For instance, would PColt, EJML, or ojAlgo be better choices? My most urgent concern is mainly one: I need to iteratively calculate (generally within a for loop of 2048 iterations) the lower triangular Cholesky factor for up to three different matrices; the largest size the matrices will reach is about 2000x2000.

Cholesky factorisation is quite a simple algorithm. Here's the (unoptimised) C# code that I use. C# and Java are quite similar, so it should be an easy job for you to convert it to Java and make whatever improvements you deem necessary.
public class CholeskyDecomposition {
    public static double[,] Do(double[,] input) {
        int size = input.GetLength(0);
        if (input.GetLength(1) != size)
            throw new Exception("Input matrix must be square");

        double[] p = new double[size];
        double[,] result = new double[size, size];
        Array.Copy(input, result, input.Length);

        for (int i = 0; i < size; i++) {
            for (int j = i; j < size; j++) {
                double sum = result[i, j];
                for (int k = i - 1; k >= 0; k--)
                    sum -= result[i, k] * result[j, k];
                if (i == j) {
                    if (sum < 0.0)
                        throw new Exception("Matrix is not positive definite");
                    p[i] = System.Math.Sqrt(sum);
                } else
                    result[j, i] = sum / p[i];
            }
        }
        for (int r = 0; r < size; r++) {
            result[r, r] = p[r];
            for (int c = r + 1; c < size; c++)
                result[r, c] = 0;
        }
        return result;
    }
}
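For convenience, here is a direct (still unoptimised) Java conversion of the routine above; treat it as a sketch rather than something tuned for the 2000x2000 matrices in the question.

public class CholeskyDecomposition {
    public static double[][] decompose(double[][] input) {
        int size = input.length;
        if (input[0].length != size)
            throw new IllegalArgumentException("Input matrix must be square");

        double[] p = new double[size];
        double[][] result = new double[size][];
        for (int i = 0; i < size; i++)
            result[i] = input[i].clone(); // copy the input, as Array.Copy does in the C# version

        for (int i = 0; i < size; i++) {
            for (int j = i; j < size; j++) {
                double sum = result[i][j];
                for (int k = i - 1; k >= 0; k--)
                    sum -= result[i][k] * result[j][k];
                if (i == j) {
                    if (sum < 0.0)
                        throw new IllegalArgumentException("Matrix is not positive definite");
                    p[i] = Math.sqrt(sum);
                } else {
                    result[j][i] = sum / p[i];
                }
            }
        }
        for (int r = 0; r < size; r++) {
            result[r][r] = p[r];          // diagonal comes from p
            for (int c = r + 1; c < size; c++)
                result[r][c] = 0;         // zero the upper triangle
        }
        return result;
    }
}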

Have a look at the Java Matrix Benchmark. The "Inver Symm" case tests inverting a matrix using the Cholesky decomposition. If you get the source code for the benchmark, there is also a pure Cholesky decomposition test that you can turn on.
Here's another comparison of various matrix decompositions between ojAlgo and JAMA
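In the meantime, for a baseline to compare against: with Apache Commons Math (the library you are already using), the lower triangular factor takes a couple of lines. A minimal sketch, assuming the commons-math3 linear package:

import org.apache.commons.math3.linear.*;

// Lower-triangular Cholesky factor L such that L * L^T = A.
double[][] data = { { 4, 2 }, { 2, 3 } };
RealMatrix a = MatrixUtils.createRealMatrix(data);
RealMatrix lower = new CholeskyDecomposition(a).getL();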

Related

How can I get an array of frequencies from audio?

At the moment I can only get an array of bytes from the audio file, but I need the frequency of the sound. How can I get it? I'm trying the FFT, but after that I get very big numbers, and they aren't frequencies. Of course, I can't multiply by i, because this is Java.
private static double[] fft(byte[] bytes) {
    double[] fft = new double[bytes.length];
    for (int k = 0; k < bytes.length; k++) {
        for (int n = 0; n < bytes.length; n++) {
            fft[k] += bytes[n] * Math.pow(Math.E, -2 * Math.PI * k * n / bytes.length);
        }
    }
    return fft;
}
I would suggest using a complex type for your FFT calculations; it will make everything simpler (or at least more readable) and doesn't add a lot of overhead.
I am not a Java person, and it doesn't seem like the JDK has a built-in complex type, but implementations like this exist:
https://introcs.cs.princeton.edu/java/97data/Complex.java.html
Your FFT could then be something like this (a bit unoptimized pseudocode!):
private static Complex[] fft(byte[] bytes) {
    Complex[] fft = new Complex[bytes.length];
    for (int k = 0; k < bytes.length; k++) {
        for (int n = 0; n < bytes.length; n++) {
            Complex temp = new Complex(0, -2 * Math.PI * k * n / bytes.length);
            fft[k] += bytes[n] * Complex.exp(temp);
        }
    }
    return fft;
}
You can get the magnitudes with something like
Complex.abs(fft[k])
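If you would rather avoid a complex class entirely, here is a self-contained sketch that tracks the real and imaginary parts with plain doubles and returns the magnitudes directly. Note it is the direct O(n^2) DFT, not a fast FFT:

private static double[] dftMagnitudes(byte[] bytes) {
    int n = bytes.length;
    double[] magnitudes = new double[n];
    for (int k = 0; k < n; k++) {
        double re = 0, im = 0;
        for (int t = 0; t < n; t++) {
            double angle = -2 * Math.PI * k * t / n;
            re += bytes[t] * Math.cos(angle); // real part of e^(i*angle)
            im += bytes[t] * Math.sin(angle); // imaginary part
        }
        magnitudes[k] = Math.hypot(re, im);   // |X[k]|
    }
    return magnitudes;
}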
I would also look at your outer loop (k); this is the size of your FFT, and it will currently be the length of the input. This may or may not be what you want, so I would suggest looking at signal windowing.

Naive matrix multiplication improvement

My CS teacher asked us to "add a small change" to this code to make it run with time complexity of N^3 - N^2 instead of the normal N^3. I cannot for the life of me figure it out and I was wondering if anyone happened to know. I don't think he is talking about Strassen's method.
From when I looked at it, maybe it could take advantage of the fact that he only cares about a square (diagonal) matrix.
void multiply(int n, int A[][], int B[][], int C[][]) {
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            C[i][j] = 0;
            for (int k = 0; k < n; k++) {
                C[i][j] += A[i][k] * B[k][j];
            }
        }
    }
}
You cannot achieve matrix multiplication in O(N^2). However, you can improve the complexity from O(N^3). In linear algebra, there are algorithms like the Strassen algorithm, which reduces the time complexity to O(N^2.8074) by reducing the number of multiplications required for each 2x2 sub-matrix from 8 to 7.
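To illustrate, these are Strassen's seven products for a single 2x2 multiply; a sketch for plain integer 2x2 matrices (real implementations apply the same formulas recursively to n/2 x n/2 sub-matrices):

// Strassen's seven products: 7 multiplications instead of 8.
static int[][] strassen2x2(int[][] A, int[][] B) {
    int m1 = (A[0][0] + A[1][1]) * (B[0][0] + B[1][1]);
    int m2 = (A[1][0] + A[1][1]) * B[0][0];
    int m3 = A[0][0] * (B[0][1] - B[1][1]);
    int m4 = A[1][1] * (B[1][0] - B[0][0]);
    int m5 = (A[0][0] + A[0][1]) * B[1][1];
    int m6 = (A[1][0] - A[0][0]) * (B[0][0] + B[0][1]);
    int m7 = (A[0][1] - A[1][1]) * (B[1][0] + B[1][1]);
    return new int[][] {
        { m1 + m4 - m5 + m7, m3 + m5 },
        { m2 + m4, m1 - m2 + m3 + m6 }
    };
}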
An improved version of the Coppersmith–Winograd algorithm is the fastest known matrix multiplication algorithm, with a time complexity of O(N^2.3729).

Finding all possible subset sum which is equal to 0 out of set of positive and negative numbers? [duplicate]

You have an array containing a set of positive and negative numbers; print all subsets whose sum is equal to 0.
I can think of an approach where I make all power sets of the given array and check if their sum is 0, but that does not look like an optimized solution to me.
After reading about similar problems on the net, it looks like it can be solved with dynamic programming, like the program below that finds whether a combination exists to make a sum of 11 (just an example):
public boolean subsetSum(int input[], int total) {
    boolean T[][] = new boolean[input.length + 1][total + 1];
    for (int i = 0; i <= input.length; i++) {
        T[i][0] = true;
    }
    for (int i = 1; i <= input.length; i++) {
        for (int j = 1; j <= total; j++) {
            if (j - input[i - 1] >= 0) {
                T[i][j] = T[i - 1][j] || T[i - 1][j - input[i - 1]];
            } else {
                T[i][j] = T[i - 1][j];
            }
        }
    }
    return T[input.length][total];
}

public static void main(String args[]) {
    TestDynamic ss = new TestDynamic();
    int arr1[] = {2, 3, 7, 8};
    System.out.print(ss.subsetSum(arr1, 11));
}
But I am not sure how to extend the above program to
1) include negative numbers
2) find the combinations of elements which make the sum zero (the above program just finds whether it is possible to make the given sum, but does not find which sets of numbers make it zero).
Here is a full implementation in JavaScript. You can run it with node.js.
function target_sum(a, k, x)
{
    if (k == a.length) return [];
    var s = target_sum(a, k + 1, x);        // solutions that do not use a[k]
    if (a[k] == x) s.push([a[k]]);          // a[k] on its own hits the target
    var t = target_sum(a, k + 1, x - a[k]); // solutions that use a[k] plus later elements
    for (var i = 0; i < t.length; ++i) {
        t[i].unshift(a[k]);                 // a[k] is part of the solution
        s.push(t[i]);                       // merge t[] into s[]
    }
    return s;
}

var s = target_sum([1,4,5,2,7,8,-3,-5,-6,9,3,-7,-1,5,6], 0, 0);
for (var i = 0; i < s.length; ++i)
    console.log(s[i].join(","));
Note that this is an exponential algorithm. Don't use it on large arrays.
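Since your code is in Java, here is a direct translation of the same recursion, as a sketch using java.util lists:

import java.util.ArrayList;
import java.util.List;

// Returns every subset of a[k..] that sums to x.
static List<List<Integer>> targetSum(int[] a, int k, int x) {
    List<List<Integer>> s = new ArrayList<>();
    if (k == a.length) return s;
    s.addAll(targetSum(a, k + 1, x));            // solutions that do not use a[k]
    if (a[k] == x) {
        List<Integer> single = new ArrayList<>();
        single.add(a[k]);                        // a[k] on its own hits the target
        s.add(single);
    }
    for (List<Integer> t : targetSum(a, k + 1, x - a[k])) {
        t.add(0, a[k]);                          // a[k] is part of the solution
        s.add(t);
    }
    return s;
}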
Erwin Rooijakkers also pointed in the right direction. In particular, this post gives another algorithm. I could be wrong about the following, but I believe that algorithm trades speed for space: it avoids staging arrays in the call stack, but it has to do more recursions to achieve that.
EDIT: about the algorithm you mentioned: it is not exponential, but, if I am right, it only works for positive numbers. Its time complexity is also proportional to the target sum, which may not be ideal depending on the input.
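As for extending the tabulation itself to negative numbers, one standard trick (my sketch, not from the linked post) is to offset every reachable sum by the most negative total, so the table indices stay non-negative:

// Feasibility table over all sums in [sum of negatives, sum of positives].
// Like the original table, the empty subset makes target 0 trivially true.
static boolean subsetSumWithNegatives(int[] input, int target) {
    int negative = 0, positive = 0;
    for (int v : input) {
        if (v < 0) negative += v; else positive += v;
    }
    if (target < negative || target > positive) return false;
    int offset = -negative;                // shift so the smallest sum maps to index 0
    int width = positive - negative + 1;   // one slot per reachable sum
    boolean[][] T = new boolean[input.length + 1][width];
    T[0][offset] = true;                   // empty subset reaches sum 0
    for (int i = 1; i <= input.length; i++) {
        for (int j = 0; j < width; j++) {
            int prev = j - input[i - 1];   // index the sum came from if we take input[i-1]
            T[i][j] = T[i - 1][j] || (prev >= 0 && prev < width && T[i - 1][prev]);
        }
    }
    return T[input.length][target + offset];
}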

Generating all subsets using Gosper's Hack (Bankers sequence)

I have a method that generates all subsets of an array; what I want to try to implement is the same sort of method, but doing it using binary. Gosper's Hack seems to be the best idea, but I have no idea how to implement it. The code below works to generate all subsets. The subsets can be unknown (http://imgur.com/KXflVjq shows an output after a couple of seconds of running). Thanks for any advice.
int m = prop.length;
long list = 1L << m;
for (long i = 1; i < list; i++) {
    final List<Long> sub = new ArrayList<>();
    for (long j = 0; j < m; j++) {
        if ((i & (1L << j)) > 0) {
            sub.add(j);
        }
    }
    Collections.sort(sub);
    System.out.println(sub);
}
EDIT: As I had not worded the question correctly, what I need as output is:
2 1 0
0 0 1 = 0
0 1 0 = 1
etc.
First, I'd like to note that it's not clear what exactly it is that you're trying to achieve; please consider clarifying the question. I'll assume that you'd like to generate all k-subsets of an n-set. The problem can be easily reduced to that of generating all k-subsets of {1,2,...,n} (i.e. it suffices to compute all k-subsets of indices).
An algorithm for generating k-subsets of an n-set
A while back I wrote this implementation of a method (which I rediscovered a few years ago) for generating all k-subsets of an n-set. Hope it helps. The algorithm essentially visits all binary sequences of length n containing exactly k ones in a clever way (without going through all 2^n sequences); see the accompanying note describing the algorithm, which contains a detailed description, pseudocode, and a small step-by-step example.
I think the time complexity is of the order O(k {n choose k}). I do not yet have a formal proof for this. (It is obvious that any algorithm will have to take Omega({n choose k}) time.)
The code in C:
#include <stdlib.h>
#include <stdio.h>

void subs(int n, int k);

int main(int argc, char **argv)
{
    if (argc != 3) return 1;
    int n, k;
    n = atoi(argv[1]); k = atoi(argv[2]);
    subs(n, k);
    return 0;
}

void subs(int n, int k)
{
    int *p = (int *)malloc(sizeof(int) * k);
    int i, j, r;
    for (i = 0; i < k; ++i) p[i] = i; // initialize our ``set''
    // the algorithm
    while (1)
    {   // visit the current k-subset
        for (i = 0; i < k; ++i)
            printf("%d ", p[i] + 1);
        printf("\n");
        if (p[0] == n - k) break; // if this is the last k-subset, we are done
        for (i = k - 1; i >= 0 && p[i] + k - i == n; --i); // find the right element
        r = p[i]; ++p[i]; j = 2; // exchange them
        for (++i; i < k; ++i, ++j) p[i] = r + j; // move them
    }
    free(p);
}
References
If this is not efficient enough, I highly recommend Knuth's Volume 4 of The Art of Computer Programming, where he deals with the problem extensively. It's probably the best reference out there (and fairly recent!).
You might even be able to find a draft of the fascicle, TAOCP Volume 4 Fascicle 3, Generating All Combinations and Partitions (2005), vi+150pp. ISBN 0-201-85394-9, on Knuth's homepage (see his news for 2011 or so).
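Since the question specifically mentions Gosper's hack: it enumerates, in increasing order, every n-bit mask with exactly k bits set. A minimal sketch in Java (assuming 1 <= k <= n < 63):

// Visit every k-element subset of {0,...,n-1}, encoded as a bitmask.
static void gospersHack(int n, int k) {
    long set = (1L << k) - 1;   // smallest mask with k bits set
    long limit = 1L << n;
    while (set < limit) {
        System.out.println(Long.toBinaryString(set)); // visit current subset
        long u = set & -set;                // lowest set bit
        long v = set + u;                   // carry ripples through the low block of ones
        set = v | (((v ^ set) / u) >> 2);   // move the leftover ones back to the bottom
    }
}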

Neural Network Backpropagation does not compute weights correctly

Currently, I am having problems with the Backpropagation algorithm.
I am trying to implement it and use it to recognize the direction of faces (left, right, down, straight).
Basically, I have N images, read the pixels, and change their values (0 to 255) to values from 0.0 to 1.0. All images are 32*30.
I have an input layer of 960 neurons, a hidden layer of 3 neurons and an output layer of 4 neurons. For example, the output <0.1,0.9,0.1,0.1> means that the person looks to the right.
I followed the pseudocode. However, it doesn't work right: it does not compute the correct weights, and consequently it can't handle the training and test examples.
Here are parts of the code:
// main function - it runs the algorithm
private void runBackpropagationAlgorithm() {
    for (int i = 0; i < 900; ++i) {
        for (ImageUnit iu : images) {
            double[] error = calcOutputError(iu.getRatioMatrix(), iu.getClassification());
            changeHiddenUnitsOutWeights(error);
            error = calcHiddenError(error);
            changeHiddenUnitsInWeights(error, iu.getRatioMatrix());
        }
    }
}

// it creates the neural network
private void createNeuroneNetwork() {
    Random generator = new Random();
    for (int i = 0; i < inHiddenUnitsWeights.length; ++i) {
        for (int j = 0; j < hiddenUnits; ++j) {
            inHiddenUnitsWeights[i][j] = generator.nextDouble();
        }
    }
    for (int i = 0; i < hiddenUnits; ++i) {
        for (int j = 0; j < 4; ++j) {
            outHddenUnitsWeights[i][j] = generator.nextDouble();
        }
    }
}

// Calculates the error in the network. It runs through the whole network.
private double[] calcOutputError(double[][] input, double[] expectedOutput) {
    int currentEdge = 0;
    Arrays.fill(hiddenUnitNodeValue, 0.0);
    for (int i = 0; i < input.length; ++i) {
        for (int j = 0; j < input[0].length; ++j) {
            for (int k = 0; k < hiddenUnits; ++k) {
                hiddenUnitNodeValue[k] += input[i][j] * inHiddenUnitsWeights[currentEdge][k];
            }
            ++currentEdge;
        }
    }
    double[] out = new double[4];
    for (int j = 0; j < 4; ++j) {
        for (int i = 0; i < hiddenUnits; ++i) {
            out[j] += outHddenUnitsWeights[i][j] * hiddenUnitNodeValue[i];
        }
    }
    double[] error = new double[4];
    Arrays.fill(error, 4);
    for (int i = 0; i < 4; ++i) {
        error[i] = ((expectedOutput[i] - out[i]) * (1.0 - out[i]) * out[i]);
        //System.out.println((expectedOutput[i] - out[i]) + " " + expectedOutput[i] + " " + out[i]);
    }
    return error;
}

// Changes the weights of the outgoing edges of the hidden neurons
private void changeHiddenUnitsOutWeights(double[] error) {
    for (int i = 0; i < hiddenUnits; ++i) {
        for (int j = 0; j < 4; ++j) {
            outHddenUnitsWeights[i][j] += learningRate * error[j] * hiddenUnitNodeValue[i];
        }
    }
}

// goes back to the hidden units to calculate their error.
private double[] calcHiddenError(double[] outputError) {
    double[] error = new double[hiddenUnits];
    for (int i = 0; i < hiddenUnits; ++i) {
        double currentHiddenUnitErrorSum = 0.0;
        for (int j = 0; j < 4; ++j) {
            currentHiddenUnitErrorSum += outputError[j] * outHddenUnitsWeights[i][j];
        }
        error[i] = hiddenUnitNodeValue[i] * (1.0 - hiddenUnitNodeValue[i]) * currentHiddenUnitErrorSum;
    }
    return error;
}

// changes the weights of the incoming edges to the hidden neurons. input is the matrix of ratios
private void changeHiddenUnitsInWeights(double[] error, double[][] input) {
    int currentEdge = 0;
    for (int i = 0; i < input.length; ++i) {
        for (int j = 0; j < input[0].length; ++j) {
            for (int k = 0; k < hiddenUnits; ++k) {
                inHiddenUnitsWeights[currentEdge][k] += learningRate * error[k] * input[i][j];
            }
            ++currentEdge;
        }
    }
}
As the algorithm runs, it computes bigger and bigger weights, which finally approach infinity (NaN values). I checked the code; alas, I didn't manage to solve my problem.
I would be deeply grateful to anyone who would try to help me.
I didn't check all of your code. I just want to give you some general advice. I don't know if your goal is (1) to learn the direction of faces or (2) to implement your own neural network.
In case (1) you should consider one of those libraries. They just work and give you much more flexible configuration options. For example, standard backpropagation is one of the worst optimization algorithms for neural networks. The convergence depends on the learning rate. I can't see which value you chose in your implementation, but it could be too high. There are other optimization algorithms that don't require a learning rate or adapt it during training. In addition, 3 neurons in the hidden layer is most likely not enough. Most of the neural networks that have been used for images have hundreds and sometimes even thousands of hidden units. I would suggest you first try to solve your problem with a fully developed library. If it does work, try implementing your own ANN or be happy. :)
In case (2) you should first try to solve a simpler problem: take a very simple artificial data set, then take a standard benchmark, and then try it with your data. A good way to verify that your backpropagation implementation works is a comparison with a numerical differentiation method.
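For example, a central-difference check for a single weight could look like this (a sketch; lossAsFunctionOfWeight is a hypothetical closure that sets the weight, runs the forward pass, and returns the error):

import java.util.function.DoubleUnaryOperator;

// Numerical gradient via central differences; compare it against the value
// your backpropagation computes for the same weight.
static double numericalGradient(DoubleUnaryOperator lossAsFunctionOfWeight, double w) {
    double eps = 1e-5;
    return (lossAsFunctionOfWeight.applyAsDouble(w + eps)
          - lossAsFunctionOfWeight.applyAsDouble(w - eps)) / (2 * eps);
}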
Your code is missing the transfer functions. It sounds like you want the logistic function with a softmax output. You need to include the following in calcOutputError
// Logistic transfer function for hidden layer.
for (int k = 0; k < hiddenUnits; ++k) {
    hiddenUnitNodeValue[k] = logistic(hiddenUnitNodeValue[k]);
}
and
// Softmax transfer function for output layer.
double sum = 0;
for (int j = 0; j < 4; ++j) {
    out[j] = logistic(out[j]);
    sum += out[j];
}
for (int j = 0; j < 4; ++j) {
    out[j] = out[j] / sum;
}
where the logistic function is
public double logistic(double x) {
    return 1 / (1 + Math.exp(-x));
}
Note that the softmax transfer function gives you outputs that sum to 1, so they can be interpreted as probabilities.
Also, your calculation of the error gradient for the output layer is incorrect. It should simply be
for (int i = 0; i < 4; ++i) {
    error[i] = (expectedOutput[i] - out[i]);
}
I haven't tested your code, but I am almost certain that you start out with too large weights.
Most introductions to the subject leave it at "init the weights with random values", leaving out that the algorithm actually diverges (goes to Inf) for some starting values.
Try using smaller starting values, for example between -1/5 and 1/5, and shrink them down.
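For instance, the initialization in createNeuroneNetwork() could draw from [-0.2, 0.2) instead of [0, 1) (a one-line sketch):

// Uniform in [-0.2, 0.2) instead of [0, 1).
inHiddenUnitsWeights[i][j] = (generator.nextDouble() - 0.5) * 0.4;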
Additionally, write a dedicated method for matrix multiplication; you have (only) used that four times, and it will be much easier to see if there is a problem there.
I had a similar problem with a neural network processing grayscale images. You have 960 input values ranging between 0 and 255. Even with small initial weights, you can end up having inputs to your neurons with a very large magnitude and the backpropagation algorithm gets stuck.
Try dividing each pixel value by 255 before passing it into the neural network. That's what worked for me. Just starting with extremely small initial weights wasn't enough, I believe due to the floating-point precision issue brought up in the comments.
As suggested in another answer, a good way to test your algorithm is to see if your network can learn a simple function like XOR.
And for what it's worth, 3 neurons in the hidden layer was plenty for my purpose (identifying the gender of a facial image)
I wrote an entirely new neural-network library and it works. It is clear that in my previous attempt I missed the idea of using transfer functions and their derivatives. Thank you, all!
