Cholesky decomposition generating NaNs in Java

I'm not sure whether this is a maths.se or an SO question, but I'm going with SO as I think it's related to my Java implementation.
I'm following a textbook on Gaussian Processes (R&W) and implementing some examples in Java. One common step for several examples is to generate a Cholesky decomposition of a covariance matrix. In my attempt I can get successful results for matrices up to a limited size (33x33). However, for anything larger a NaN appears in the diagonal (at 32,32) and so all subsequent values in the matrix are likewise NaNs.
The code is shown below, and the source of the NaN is indicated in the cholesky method. Essentially the covariance element a[32][32] is 1.0, but the value of sum is a little over this (1.0000001423291431), so the argument of the square root is negative and Math.sqrt returns NaN. So my questions are:
1. Is this an expected result from linear algebra, or, e.g., an artefact of my implementation?
2. How is this problem best avoided in practice?
Note that I'm not looking for recommendations of libraries to use. This is simply for my own understanding.
Apologies for the length, but I've tried to provide a complete MWE:
import static org.junit.Assert.assertFalse;
import org.junit.Test;
public class CholeskyTest {
@Test
public void testCovCholesky() {
final int n = 34; // Test passes for n<34
final double[] xData = getSpread(-5, 5, n);
double[][] cov = covarianceSE(xData);
double[][] lower = cholesky(cov);
for(int i=0; i<n; ++i) {
for(int j=0; j<n; ++j) {
assertFalse("NaN at " + i + "," + j, Double.isNaN(lower[i][j]));
}
}
}
/**
* Generate n evenly space values from min to max inclusive
*/
private static double[] getSpread(final double min, final double max, final int n) {
final double[] values = new double[n];
final double delta = (max - min)/(n - 1);
for(int i=0; i<n; ++i) {
values[i] = min + i*delta;
}
return values;
}
/**
* Calculate the covariance matrix for the given observations using
* the squared exponential (SE) covariance function.
*/
private static double[][] covarianceSE (double[] v) {
final int m = v.length;
double[][] k = new double[m][];
for(int i=0; i<m; ++i) {
double vi = v[i];
double row[] = new double[m];
for(int j=0; j<m; ++j) {
double dist = vi - v[j];
row[j] = Math.exp(-0.5*dist*dist);
}
k[i] = row;
}
return k;
}
/**
* Calculate lower triangular matrix L such that LL^T = A
* Using Cholesky decomposition from
* https://rosettacode.org/wiki/Cholesky_decomposition#Java
*/
private static double[][] cholesky(double[][] a) {
final int m = a.length;
double[][] l = new double[m][m];
for(int i = 0; i< m;i++){
for(int k = 0; k < (i+1); k++){
double sum = 0;
for(int j = 0; j < k; j++){
sum += l[i][j] * l[k][j];
}
l[i][k] = (i == k) ? Math.sqrt(a[i][i] - sum) : // Source of NaN at 32,32
(1.0 / l[k][k] * (a[i][k] - sum));
}
}
return l;
}
}

Hmm, I think I've found an answer to my own question, from the same textbook I was following. From R&W p.201:
In practice it may be necessary to add a small multiple of the
identity matrix $\epsilon I$ to the covariance matrix for numerical
reasons. This is because the eigenvalues of the matrix K can decay
very rapidly [...] and without this stabilization the Cholesky
decomposition fails. The effect on the generated samples is to add
additional independent noise of variance $\epsilon$.
So the following change seems to be sufficient:
private static double[][] cholesky(double[][] a) {
final int m = a.length;
double epsilon = 0.000001; // Small extra noise value
double[][] l = new double[m][m];
for(int i = 0; i< m;i++){
for(int k = 0; k < (i+1); k++){
double sum = 0;
for(int j = 0; j < k; j++){
sum += l[i][j] * l[k][j];
}
l[i][k] = (i == k) ? Math.sqrt(a[i][i]+epsilon - sum) : // Add noise to diagonal values
(1.0 / l[k][k] * (a[i][k] - sum));
}
}
return l;
}
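A further refinement, which is standard practice rather than anything from R&W: instead of always adding a fixed epsilon, retry the decomposition with a growing jitter, so only as much noise as the matrix actually needs is added. A minimal sketch (method name and thresholds are my own):
private static double[][] choleskyWithJitter(double[][] a) {
    final int m = a.length;
    // Start with no jitter; on failure retry with a ten-fold larger epsilon.
    for (double eps = 0.0; eps <= 1e-2; eps = (eps == 0.0) ? 1e-12 : eps * 10) {
        double[][] l = new double[m][m];
        boolean ok = true;
        outer:
        for (int i = 0; i < m; i++) {
            for (int k = 0; k <= i; k++) {
                double sum = 0;
                for (int j = 0; j < k; j++) sum += l[i][j] * l[k][j];
                if (i == k) {
                    double d = a[i][i] + eps - sum;
                    if (d <= 0) { ok = false; break outer; } // would produce NaN; needs more jitter
                    l[i][k] = Math.sqrt(d);
                } else {
                    l[i][k] = (a[i][k] - sum) / l[k][k];
                }
            }
        }
        if (ok) return l;
    }
    throw new IllegalStateException("Matrix is not positive definite, even with jitter");
}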

I just finished writing my own version of a Cholesky decomposition routine in C++ and JavaScript. Instead of computing L, it computes U, but I would be curious to test it with the matrix that causes the NaN error. Would you be able to post the matrix here, or contact me (info in profile)?

Related

How to use NNLS for non-negative multiple linear regression?

I am trying to solve a non-negative multiple linear regression problem in Java.
And I found a solver class org.apache.spark.mllib.optimization.NNLS written in Scala.
However, I don't know how to use this.
What makes me confused is that the interface of the following method seems strange.
I thought that A is an MxN matrix and b is an M-vector, so the arguments ata and atb should be an NxN matrix and an N-vector, respectively.
However, the actual type of ata is double[].
public static double[] solve(double[] ata, double[] atb, NNLS.Workspace ws)
I searched for example code but couldn't find any.
Can anyone give me sample code?
The library is written in Scala, but I want Java code if possible.
DISCLAIMER: I've never used NNLS and have no idea about non-negative multiple linear regression.
You could look at Spark 2.1.1's NNLS, which does what you want, but that is not the way to go since the latest Spark 2.2.1 has marked it private[spark]:
private[spark] object NNLS {
More importantly, as of Spark 2.0, the org.apache.spark.mllib package (incl. org.apache.spark.mllib.optimization, which NNLS belongs to) is in maintenance mode:
The MLlib RDD-based API is now in maintenance mode.
As of Spark 2.0, the RDD-based APIs in the spark.mllib package have entered maintenance mode. The primary Machine Learning API for Spark is now the DataFrame-based API in the spark.ml package.
In other words, you should stay away from the package and NNLS in particular.
What are the alternatives then?
You could look at the tests of NNLS, i.e. NNLSSuite, where you can find some answers.
However, the actual type of ata is double[].
That's a matrix, so its elements are doubles again, just flattened into a one-dimensional array. As a matter of fact, ata is passed directly to BLAS's dgemv (here and here), which is described in the LAPACK docs:
DGEMV performs one of the matrix-vector operations
y := alpha*A*x + beta*y, or y := alpha*A**T*x + beta*y,
where alpha and beta are scalars, x and y are vectors and A is an
m by n matrix.
That should give you enough answers.
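In practice that just means packing the NxN normal matrix into a flat array; the test code further down does exactly this with XtX[n * i + j]. A tiny helper (hypothetical name, my own sketch) might look like:
static double[] flatten(double[][] m) {
    int n = m.length;
    double[] flat = new double[n * n];
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            flat[n * i + j] = m[i][j]; // row-major; for a symmetric ata this equals column-major
    return flat;
}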
Another question is: what is the recommended way in Spark MLlib for NNLS-like computations?
It looks like Spark MLlib's ALS algorithm uses NNLS under the covers (which may not be that surprising for machine learning practitioners).
That part of the code is used when ALS is configured to train a model with the nonnegative parameter turned on, i.e. true (it is disabled by default):
nonnegative Param for whether to apply nonnegativity constraints.
Default: false
whether to use nonnegative constraint for least squares
I would recommend reviewing that part of Spark MLlib to get deeper into the uses of NNLS for solving non-negative linear regression problem.
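For completeness, a minimal sketch of turning the constraint on with the DataFrame-based API (setter names as in spark.ml's ALS; the surrounding session setup is omitted):
import org.apache.spark.ml.recommendation.ALS;

ALS als = new ALS()
    .setRank(10)
    .setMaxIter(10)
    .setNonnegative(true); // routes the internal least-squares solves through NNLS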
I wrote some test code.
Though I got some warnings like "Failed to load implementation from: com.github.fommil.netlib.NativeSystemBLAS", it works well for simple cases; however, beta often becomes 0 when m is very large (about 3000).
package test;
import org.apache.spark.mllib.optimization.NNLS;
public class NNLSTest {
public static void main(String[] args) {
int n = 6, m = 300;
ExampleInMatLabDoc();
AllPositiveBetaNoiseInY(n, m);
SomeNegativesInBeta(n, m);
NoCorrelation(n, m);
}
private static void test(double[][] X, double[] y, double[] b) {
int m = X.length; int n = X[0].length;
double[] Xty = new double[n];
for (int i = 0; i < n; i++) {
Xty[i] = 0.0;
for (int j = 0; j < m; j++) Xty[i] += X[j][i] * y[j];
}
double[] XtX = new double[n * n];
for (int i = 0; i < n; i++) {
for (int j = 0; j < n; j++) {
XtX[n * i + j] = 0.0;
for (int k = 0; k < m; k++) XtX[n * i + j] += X[k][i] * X[k][j];
}
}
double[] beta = NNLS.solve(XtX, Xty, NNLS.createWorkspace(n));
System.out.println("\ntrue beta\tbeta");
for (int i = 0; i < beta.length; i++) System.out.println(b[i] + "\t" + beta[i]);
}
private static void ExampleInMatLabDoc() {
// https://jp.mathworks.com/help/matlab/ref/lsqnonneg.html
double[] y = new double[] { 0.8587, 0.1781, 0.0747, 0.8405 };
double[][] x = new double[4][];
x[0] = new double[] { 0.0372, 0.2869 };
x[1] = new double[] { 0.6861, 0.7071 };
x[2] = new double[] { 0.6233, 0.6245 };
x[3] = new double[] { 0.6344, 0.6170 };
double[] b = new double[] { 0.0, 0.6929 };
test(x, y, b);
}
private static void AllPositiveBetaNoiseInY(int n, int m) {
double[] b = new double[n];
for (int i = 0; i < n; i++) b[i] = Math.random() * 100.0; // random value in [0:100]
double[] y = new double[m];
double[][] x = new double[m][];
for (int i = 0; i < m; i++) {
x[i] = new double[n];
x[i][0] = 1.0;
y[i] = b[0];
for (int j = 1; j < n; j++) {
x[i][j] = (2.0 * Math.random() - 1.0) * 100.0; // random value in [-100:100]
y[i] += x[i][j] * b[j];
}
y[i] *= 1.0 + (2.0 * Math.random() - 1.0) * 0.1; // add noise
}
test(x, y, b);
}
private static void SomeNegativesInBeta(int n, int m) {
double[] b = new double[n];
for (int i = 0; i < n; i++) b[i] = (2.0 * Math.random() - 1.0) * 100.0; // random value in [-100:100]
double[] y = new double[m];
double[][] x = new double[m][];
for (int i = 0; i < m; i++) {
x[i] = new double[n];
x[i][0] = 1.0;
y[i] = b[0];
for (int j = 1; j < n; j++) {
x[i][j] = (2.0 * Math.random() - 1.0) * 100.0; // random value in [-100:100]
y[i] += x[i][j] * b[j];
}
}
test(x, y, b);
}
private static void NoCorrelation(int n, int m) {
double[] y = new double[m];
double[][] x = new double[m][];
for (int i = 0; i < m; i++) {
x[i] = new double[n];
x[i][0] = 1.0;
for (int j = 1; j < n; j++)
x[i][j] = (2.0 * Math.random() - 1.0) * 100.0; // random value in [-100:100]
y[i] = (2.0 * Math.random() - 1.0) * 100.0;
}
double[] b = new double[n];
for (int i = 0; i < n; i++) b[i] = 0;
test(x, y, b);
}
}

Java: two dimensional array method - calling it in the main method and its complexity

I haven't yet fully understood what exactly to write in the main method when I have a method that works on two-dimensional double arrays. I want to know what the output of the code is when A = {{4.00, 3.00}, {2.00, 1.00}} and B = {{-0.500, 1.500}, {1.000, -2.0000}}. If we assume that throwing the exception has constant complexity O(1), is it right that the complexity of the following method in Big-O is O(1 + aRows * bColumns + aRows * bColumns * aColumns + 1)? Or is it just O(aRows * bColumns * aColumns)?
public class Exercise {
public static void main(String[] args){
}
public static double[][] m (double[][] A, double [][] B){
int aRows = A.length;
int aColumns = A[0].length;
int bRows = B.length;
int bColumns = B[0].length;
if (aColumns != bRows){
throw new IllegalArgumentException("A: Rows: " + aColumns + " did not match B: Columns " + bRows + ".");
}
double[][] C = new double[aRows][bColumns];
for (int i = 0; i < 2; i++){
for (int j = 0; j < 2; j++){
C[i][j] = 0.00000;
}
}
for (int i = 0; i < aRows; i++ ){
for (int j = 0; j < bColumns; j++){
for (int k = 0; k < aColumns; k++){
C[i][j] += A[i][k] * B[k][j];
}
}
}
return C;
}
}
The syntax for 2D array literals is a bit cumbersome in Java, but it is
double[][] a = new double[][]{ new double[]{1.0, 2.0}, new double[]{0.3, 0.4} };
(in a declaration this can be shortened to double[][] a = {{1.0, 2.0}, {0.3, 0.4}};).
As for Big-O notation, you generally look at the slowest loop, or the one that gets slower the fastest, and count its iterations. There is nothing wrong with writing the O notation as a sum of the iteration counts of each consecutive loop, though you may want to pay attention to the number of instructions executed inside each loop. Here the triple loop dominates, so O(1 + aRows * bColumns + aRows * bColumns * aColumns + 1) simplifies to O(aRows * bColumns * aColumns). However, you may want to treat the case where the exception is thrown as a separate case in the O notation, since it skips both loops.
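For the concrete A and B in the question, a minimal main method (my own sketch) could look like this. Note that B happens to be the inverse of A, so the product is the 2x2 identity matrix:
public static void main(String[] args) {
    double[][] a = {{4.00, 3.00}, {2.00, 1.00}};
    double[][] b = {{-0.500, 1.500}, {1.000, -2.0000}};
    double[][] c = m(a, b);
    for (double[] row : c) {
        System.out.println(java.util.Arrays.toString(row)); // prints [1.0, 0.0] then [0.0, 1.0]
    }
}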

Getting the math right for a Hidden Markov Model in Java

In an effort to learn and use hidden Markov models, I am writing my own code to implement them. I am using this wiki article to help with my work. I do not wish to resort to pre-written libraries, because I have found I can achieve a better understanding if I write it myself. And no, this isn't a school assignment! :)
Unfortunately, my highest level of education consists of high school computer science and statistics. I have no background in machine learning besides casually poking around with ANN libraries and TensorFlow. I am therefore having a bit of trouble translating mathematical equations into code. Specifically, I'm worried my implementations of the alpha and beta functions aren't functionally correct. If anyone can assist in describing where I messed up and how to correct my mistakes to have a functioning HMM implementation, it'd be greatly appreciated.
Here are my class-wide globals:
public int n; //number of states
public int t; //number of observations
public int time; //iteration holder
public double[][] emitprob; //Emission parameter
public double[][] stprob; //State transition parameter
public ArrayList<String> states, observations, x, y;
My constructor:
public Model(ArrayList<String> sts, ArrayList<String> obs)
{
//the most important algorithm we need right now is
//unsupervised learning through BM. Supervised is
//pretty easy.
//need hashtable of count objects... Aya...
//perhaps a learner...?
states = sts;
observations = obs;
n = states.size();
t = observations.size();
x = new ArrayList();
y = new ArrayList();
time = 0;
stprob = new double[n][n];
emitprob = new double[n][t];
stprob = newDistro(n,n);
emitprob = newDistro(n,t);
}
The newDistro method creates a new random matrix, normalized so that all its entries sum to 1:
public double[][] newDistro(int x, int y)
{
Random r = new Random(System.currentTimeMillis());
double[][] returnme = new double[x][y];
double sum = 0;
for(int i = 0; i < x; i++)
{
for(int j = 0; j < y; j++)
{
returnme[i][j] = Math.abs(r.nextInt());
sum += returnme[i][j];
}
}
for(int i = 0; i < x; i++)
{
for(int j = 0; j < y; j++)
{
returnme[i][j] /= sum;
}
}
return returnme;
}
My viterbi algorithm implementation:
public ArrayList<String> viterbi(ArrayList<String> obs)
{
//K means states
//T means observations
//T arrays should be constructed as K * T (N * T)
ArrayList<String> path = new ArrayList();
String firstObservation = obs.get(0);
int firstObsIndex = observations.indexOf(firstObservation);
double[] pi = new double[n]; //initial probs of first obs for each st
int ts = obs.size();
double[][] t1 = new double[n][ts];
double[][] t2 = new double[n][ts];
int[] y = new int[obs.size()];
for(int i = 0; i < obs.size(); i++)
{
y[i] = observations.indexOf(obs.get(i));
}
for(int i = 0; i < n; i++)
{
pi[i] = emitprob[i][firstObsIndex];
}
for(int i = 0; i < n; i++)
{
t1[i][0] = pi[i] * emitprob[i][y[0]];
t2[i][0] = 0;
}
for(int i = 1; i < ts; i++)
{
for(int j = 0; j < n; j++)
{
double maxValue = 0;
int maxIndex = 0;
//first we compute the max value
for(int q = 0; q < n; q++)
{
double value = t1[q][i-1] * stprob[q][j];
if(value > maxValue)
{
maxValue = value; //the max
maxIndex = q; //the argmax
}
}
t1[j][i] = emitprob[j][y[i]] * maxValue;
t2[j][i] = maxIndex;
}
}
int[] z = new int[ts];
int maxIndex = 0;
double maxValue = 0.0d;
for(int k = 0; k < n; k++)
{
double myValue = t1[k][ts-1];
if(myValue > maxValue)
{
myValue = maxValue;
maxIndex = k;
}
}
path.add(states.get(maxIndex));
for(int i = ts-1; i >= 2; i--)
{
z[i-1] = (int)t2[z[i]][i];
path.add(states.get(z[i-1]));
}
System.out.println(path.size());
for(String s: path)
{
System.out.println(s);
}
return path;
}
My forward algorithm, which takes place of the alpha function as described later:
public double forward(ArrayList<String> obs)
{
double result = 0;
int length = obs.size()-1;
for(int i = 0; i < n; i++)
{
result += alpha(i, length, obs);
}
return result;
}
The remaining functions are for implementing the Baum-Welch Algorithm.
The alpha function is the one I'm most afraid I'm getting wrong here. I had trouble understanding which "direction" it needs to iterate over the sequence: do I start from the last element (size-1) or the first (at index zero)?
public double alpha(int j, int t, ArrayList<String> obs)
{
double sum = 0;
if(t == 0)
{
return stprob[0][j];
}
else
{
String lastObs = obs.get(t);
int obsIndex = observations.indexOf(lastObs);
for(int i = 0; i < n; i++)
{
sum += alpha(i, t-1, obs) * stprob[i][j] * emitprob[j][obsIndex];
}
}
return sum;
}
I'm having similar "correctness" issues with my beta function:
public double beta(int i, int t, ArrayList<String> obs)
{
double result = 0;
int obsSize = obs.size()-1;
if(t == obsSize)
{
return 1;
}
else
{
String lastObs = obs.get(t+1);
int obsIndex = observations.indexOf(lastObs);
for(int j = 0; j < n; j++)
{
result += beta(j, t+1, obs) * stprob[i][j] * emitprob[j][obsIndex];
}
}
return result;
}
I'm more confident in my gamma function; however, since it explicitly uses alpha and beta, I'm worried it'll be "off" somehow.
public double gamma(int i, int t, ArrayList<String> obs)
{
double top = alpha(i, t, obs) * beta(i, t, obs);
double bottom = 0;
for(int j = 0; j < n; j++)
{
bottom += alpha(j, t, obs) * beta(j, t, obs);
}
return top / bottom;
}
Same for my "squiggle" function - I do apologize for naming; Not sure of the actual name for the symbol.
public double squiggle(int i, int j, int t, ArrayList<String> obs)
{
String lastObs = obs.get(t+1);
int obsIndex = observations.indexOf(lastObs);
double top = alpha(i, t, obs) * stprob[i][j] * beta(j, t+1, obs) * emitprob[j][obsIndex];
double bottom = 0;
double innerSum = 0;
double outterSum = 0;
for(i = 0; i < n; i++)
{
for(j = 0; j < n; j++)
{
innerSum += alpha(i, t, obs) * stprob[i][j] * beta(j, t+1, obs) * emitprob[j][obsIndex];
}
outterSum += innerSum;
}
return top / bottom;
}
Lastly, to update my state transition and emission probability arrays, I have implemented these functions as aStar and bStar.
public double aStar(int i, int j, ArrayList<String> obs)
{
double squiggleSum = 0;
double gammaSum = 0;
int T = obs.size()-1;
for(int t = 0; t < T; t++)
{
squiggleSum += squiggle(i, j, t, obs);
gammaSum += gamma(i, t, obs);
}
return squiggleSum / gammaSum;
}
public double bStar(int i, String v, ArrayList<String> obs)
{
double top = 0;
double bottom = 0;
for(int t = 0; t < obs.size()-1; t++)
{
if(obs.get(t).equals(v))
{
top += gamma(i, t, obs);
}
bottom += gamma(i, t, obs);
}
return top / bottom;
}
In my understanding, since the b* function includes an indicator function that returns either 1 or 0, I think implementing it as an "if" statement, only adding the gamma term when the string equals the observation, is the same as what is described, since the indicator would render the call to gamma 0 anyway; this also saves a little computation time. Is this correct?
In summation, I want to get my math right, to ensure a successful (albeit simple) HMM implementation. As for the Baum-Welch algorithm, I am having trouble understanding how to implement the complete function. Would it be as simple as running aStar over all states (as an n * n for loop) and bStar for all observations, inside a loop with a convergence check? Also, what would be a best-practice function for checking for convergence without overfitting?
Please let me know of everything I need to do in order to get this right.
Thank you heavily for any help you can give me!
To avoid underflow, you should use scaling factors in the forward and backward algorithms. To get the correct result, replace the recursion with nested for loops, stepping forward through time in the forward method.
The backward method is similar, but steps backward through time.
You invoke both from the Baum-Welch routine.
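A minimal sketch of such an iterative, scaled forward pass (my own; not a drop-in for the asker's class, but it follows the question's array layout: stprob[i][j] = P(state j | state i), emitprob[i][o] = P(obs o | state i)). Each alpha[t] is stored divided by its scale factor, which avoids underflow, and the log-likelihood is recovered from the scale factors:
static double logForward(double[] pi, double[][] stprob, double[][] emitprob, int[] obs) {
    int n = pi.length, T = obs.length;
    double[][] alpha = new double[T][n];
    double[] scale = new double[T];
    for (int i = 0; i < n; i++) {                 // t = 0: initialization
        alpha[0][i] = pi[i] * emitprob[i][obs[0]];
        scale[0] += alpha[0][i];
    }
    for (int i = 0; i < n; i++) alpha[0][i] /= scale[0];
    for (int t = 1; t < T; t++) {                 // iterate FORWARD through time
        for (int j = 0; j < n; j++) {
            double sum = 0;
            for (int i = 0; i < n; i++) sum += alpha[t - 1][i] * stprob[i][j];
            alpha[t][j] = sum * emitprob[j][obs[t]];
            scale[t] += alpha[t][j];
        }
        for (int j = 0; j < n; j++) alpha[t][j] /= scale[t];
    }
    double logLik = 0;                            // log P(obs | model)
    for (int t = 0; t < T; t++) logLik += Math.log(scale[t]);
    return logLik;
}
The backward pass mirrors this, starting at t = T-1 and reusing the same scale factors.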

Calculating the exponential of a square matrix

I'm trying to write a method that calculates the exponential of a square matrix. In this instance, the matrix is a square array of value:
[1 0]
[0 10]
and the method should return a value of:
[e 0]
[0 e^10]
However, when I run my code, I get a range of values depending on which bits I've rearranged, none particularly close to the expected value.
The way the method works is to utilise the power series for the matrix, so basically for a matrix A, n steps and an identity matrix I:
exp(A) = I + A + A^2/2! + A^3/3! + ... + A^n/n!
The code follows here. The method where I'm having the issue is the method exponential(Matrix A, int nSteps). The methods involved are enclosed, and the Matrix objects take the arguments (int m, int n) to create an array of size double[m][n].
public static Matrix multiply(Matrix m1, Matrix m2){
if(m1.getN()!=m2.getM()) return null;
Matrix res = new Matrix(m1.getM(), m2.getN());
for(int i = 0; i < m1.getM(); i++){
for(int j = 0; j < m2.getN(); j++){
res.getArray()[i][j] = 0;
for(int k = 0; k < m1.getN(); k++){
res.getArray()[i][j] = res.getArray()[i][j] + m1.getArray()[i][k]*m2.getArray()[k][j];
}
}
}
return res;
}
public static Matrix identityMatrix(int M){
Matrix id = new Matrix(M, M);
for(int i = 0; i < id.getM(); i++){
for(int j = 0; j < id.getN(); j++){
if(i==j) id.getArray()[i][j] = 1;
else id.getArray()[i][j] = 0;
}
}
return id;
}
public static Matrix addMatrix(Matrix m1, Matrix m2){
Matrix m3 = new Matrix(m1.getM(), m2.getN());
for(int i = 0; i < m3.getM(); i++){
for(int j = 0; j < m3.getN(); j++){
m3.getArray()[i][j] = m1.getArray()[i][j] + m2.getArray()[i][j];
}
}
return m3;
}
public static Matrix scaleMatrix(Matrix m, double scale){
Matrix res = new Matrix(m.getM(), m.getN());
for(int i = 0; i < res.getM(); i++){
for(int j = 0; j < res.getN(); j++){
res.getArray()[i][j] = m.getArray()[i][j]*scale;
}
}
return res;
}
public static Matrix exponential(Matrix A, int nSteps){
Matrix runtot = identityMatrix(A.getM());
Matrix sum = identityMatrix(A.getM());
double factorial = 1.0;
for(int i = 1; i <= nSteps; i++){
sum = Matrix.multiply(Matrix.scaleMatrix(sum, factorial), A);
runtot = Matrix.addMatrix(runtot, sum);
factorial /= (double)i;
}
return runtot;
}
So my question is, how should I modify my code, so that I can input a matrix and a number of timesteps to calculate the exponential of said matrix after said timesteps?
My way to go would be to keep two accumulators:
the sum, which is your approximation of exp(A)
the nth term of the series, M_n = A^n/n!
Note that there is a nice recurrence: M_{n+1} = M_n * A / (n+1).
Which yields:
public static Matrix exponential(Matrix A, int nSteps){
Matrix seriesTerm = identityMatrix(A.getM());
Matrix sum = identityMatrix(A.getM());
for(int i = 1; i <= nSteps; i++){
seriesTerm = Matrix.scaleMatrix(Matrix.multiply(seriesTerm,A),1.0/i);
sum = Matrix.addMatrix(seriesTerm, sum);
}
return sum;
}
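As a sanity check on the diagonal example, here is a self-contained variant of the same series on plain double[][] arrays (helper names are my own, not the question's Matrix class); with nSteps = 50 the diagonal converges to e and e^10:
public class MatrixExpCheck {
    static double[][] multiply(double[][] a, double[][] b) {
        int n = a.length, m = b[0].length, inner = b.length;
        double[][] c = new double[n][m];
        for (int i = 0; i < n; i++)
            for (int k = 0; k < inner; k++)
                for (int j = 0; j < m; j++)
                    c[i][j] += a[i][k] * b[k][j];
        return c;
    }

    static double[][] exponential(double[][] a, int nSteps) {
        int n = a.length;
        double[][] term = new double[n][n]; // current series term A^k/k!
        double[][] sum = new double[n][n];  // running total, starts at I
        for (int i = 0; i < n; i++) { term[i][i] = 1.0; sum[i][i] = 1.0; }
        for (int k = 1; k <= nSteps; k++) {
            term = multiply(term, a);
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++) {
                    term[i][j] /= k;            // M_k = M_{k-1} * A / k
                    sum[i][j] += term[i][j];
                }
        }
        return sum;
    }

    public static void main(String[] args) {
        double[][] a = {{1, 0}, {0, 10}};
        double[][] e = exponential(a, 50);
        System.out.println(e[0][0] + " " + e[1][1]); // ~2.71828 and ~22026.4658
    }
}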
I totally understand the sort of thrill that implementing such algorithms can give you. But if this is not a hobby project, I concur that you should use a library for this kind of stuff. Making such computations precise and efficient is really not a trivial matter, and a huge wheel to reinvent.

Matrix multiplication - single-dimension * multi-dimensional

I need to multiply two matrices. I understand pretty well how matrices work; however, in Java I am finding this a bit complex, so I researched a bit and found this:
public static int[][] multiply(int a[][], int b[][]) {
int aRows = a.length,
aColumns = a[0].length,
bRows = b.length,
bColumns = b[0].length;
int[][] resultant = new int[aRows][bColumns];
for(int i = 0; i < aRows; i++) { // aRow
for(int j = 0; j < bColumns; j++) { // bColumn
for(int k = 0; k < aColumns; k++) { // aColumn
resultant[i][j] += a[i][k] * b[k][j];
}
}
}
return resultant;
}
This code works fine. However, the problem is that I need to multiply a single-dimension matrix (1*5) by a multidimensional matrix (5*4), so the result will be a (1*4) matrix, and later on in the same program multiply a (1*4) matrix by a (4*3) matrix, resulting in (1*3).
And I need to store the single-dimension matrix in a normal array (double[]), not a multidimensional one!
I altered this code to the following, but it still doesn't produce the correct results.
public static double[] multiplyMatrices(double[] A, double[][] B) {
int xA = A.length;
int yB = B[0].length;
double[] C = new double[yB];
for (int i = 0; i < yB; i++) { // bColumn
for (int j = 0; j < xA; j++) { // aColumn
C[i] += A[j] * B[j][i];
}
}
return C;
}
Thanks in advance for any tips you may give :)
You can use Apache Commons Math's RealMatrix to make it easier. Since createRealMatrix expects a 2D array, wrap the 1*5 vector as a one-row matrix:
RealMatrix result = MatrixUtils.createRealMatrix(new double[][]{a}).multiply(MatrixUtils.createRealMatrix(b));
double[] array = result.getRow(0);
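Put together, a minimal runnable sketch (Apache Commons Math 3; the sample values for b are my own):
import org.apache.commons.math3.linear.MatrixUtils;
import org.apache.commons.math3.linear.RealMatrix;

public class VectorTimesMatrix {
    public static void main(String[] args) {
        double[] a = {1, 2, 3, 4, 5};      // the 1*5 vector, kept as a plain double[]
        double[][] b = new double[5][4];   // the 5*4 matrix
        for (int i = 0; i < 5; i++)
            for (int j = 0; j < 4; j++)
                b[i][j] = i + j;           // arbitrary sample values
        RealMatrix result = MatrixUtils.createRealMatrix(new double[][]{a})
                .multiply(MatrixUtils.createRealMatrix(b));
        double[] c = result.getRow(0);     // the 1*4 result as double[]
        System.out.println(java.util.Arrays.toString(c));
    }
}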
