My CS teacher asked us to "add a small change" to this code to make it run with time complexity of N^3 - N^2 instead of the normal N^3. I cannot for the life of me figure it out, and I was wondering if anyone happened to know. I don't think he is talking about Strassen's method.
From when I looked at it, maybe it could take advantage of the fact that he only cares about a square (diagonal) matrix.
void multiply(int n, int[][] A, int[][] B, int[][] C)
{
    for (int i = 0; i < n; i++)
    {
        for (int j = 0; j < n; j++)
        {
            C[i][j] = 0;
            for (int k = 0; k < n; k++)
            {
                C[i][j] += A[i][k] * B[k][j];
            }
        }
    }
}
You cannot achieve matrix multiplication in O(N^2). However, you can improve the complexity from O(N^3). In linear algebra, there are algorithms like the Strassen algorithm, which reduces the time complexity to O(N^2.8074) by reducing the number of multiplications required for each 2x2 sub-matrix from 8 to 7.
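To make the 8-to-7 step concrete, these are the standard Strassen formulas for a 2x2 block partition (the well-known textbook formulation, not the "small change" the teacher is asking for):

M1 = (A11 + A22) * (B11 + B22)
M2 = (A21 + A22) * B11
M3 = A11 * (B12 - B22)
M4 = A22 * (B21 - B11)
M5 = (A11 + A12) * B22
M6 = (A21 - A11) * (B11 + B12)
M7 = (A12 - A22) * (B21 + B22)

C11 = M1 + M4 - M5 + M7
C12 = M3 + M5
C21 = M2 + M4
C22 = M1 - M2 + M3 + M6

Applied recursively to n/2-sized blocks, seven multiplications instead of eight give the O(N^log2(7)) = O(N^2.8074) bound.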
An improved version of the Coppersmith–Winograd algorithm is the fastest known matrix multiplication algorithm, with the best time complexity of O(N^2.3729).
I have started learning Big O notation and analysing time complexities, and I tried messing around with some code to try and understand its time complexity. Here's one of the examples I had, but I can't seem to figure out the time complexity of the sum method. I reckon it is O(n^3) but my friend says it should be O(n^2). Any insights as to what the right answer is?
public static double sum(double[] array)
{
    double sum = 0.0;
    for (int k = 0; k < size(array); k++)  // size() is re-evaluated on every iteration
        sum = sum + get(array, k);
    return sum;
}

public static double get(double[] array, int k)
{
    for (int x = 0; x <= k; x++)           // was x < k, which meant x == k was never reached
        if (x == k) return array[k];
    return -1;
}

public static int size(double[] array)
{
    int size = 0;
    for (int k = 0; k < array.length; k++)
        size++;
    return size;
}
I guess your friend is right: the time complexity is O(n^2).
On each iteration of your loop you call size(), which is O(n), and get(array, k), which is O(k), so on average about O(n/2).
So each iteration costs O(1.5 * n) = O(n), and over n iterations the total is O(n^2).
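To make that concrete, the total work is roughly the sum over k = 0..n-1 of (n + k) = n^2 + n(n-1)/2, which is about 1.5 * n^2, i.e. O(n^2).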
This is the hourglass problem, which can be found on the Hackerrank website.
Here is a link to the problem: Hourglass
Here is the code that I wrote for the Hourglass problem:
import java.util.Scanner;

public class Solution
{
    public static int hourglass(int[][] a, int n)
    {
        int max = -999;
        for (int i = 0; i < n; i++)
        {
            for (int j = 0; j < n; j++)
            {
                int sum = 0;
                boolean flag = false;
                if ((i + 2) < n && (j + 2) < n)
                {
                    sum += a[i][j] + a[i][j+1] + a[i][j+2]
                         + a[i+1][j+1]
                         + a[i+2][j] + a[i+2][j+1] + a[i+2][j+2];
                    flag = true;
                }
                if (sum > max && flag)
                    max = sum;
            }
        }
        return max;
    }

    public static void main(String[] args)
    {
        Scanner scanner = new Scanner(System.in);
        int n = 6;
        int[][] a = new int[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                a[i][j] = scanner.nextInt();
        int maxSum = hourglass(a, n);
        System.out.println(maxSum);
    }
}
My Question
Now, the above code compiled and ran successfully and even passed all the test cases. However, my code takes O(n^2) time (here the size of the matrix is 6, but if the size were n, then it would take O(n^2) time to finish).
It takes O(n^2) time to create the array, and that I am not concerned about. What I am interested in is optimising the hourglass() method, where it takes O(n^2) time to calculate the sums of the hourglasses.
So, is there any way to implement the above problem with further optimisation?
Is it possible to solve the problem in O(n) time?
In fact, I tried to solve the problem in O(n) time by removing the inner loop in the hourglass() method but it did not seem to work.
P.S. I do not need working code; all I need is some pointers to possible improvements (if any), or at most an algorithm.
Thanks in advance!
Technically, your solution is already O(n). You're defining "n" as though it were one side of the 2D array, but if you regard n as the number of cells on your board, then n is rows * cols. Redefined this way, you can't beat O(n) on this problem, because every cell has to be looked at at least once.
You can, however, optimize some. You're essentially laying a 3x3 tile onto a 6x6 board. If a placement is defined by the top-left corner of your 3x3 tile, then you're currently trying all 36 placements. If you think about it, many of those placements would leave your tile hanging off the edge of the board. You really only need to consider the first 4x4 positions rather than all 6x6 positions. It's still an O(n) solution, but it would cut 36 iterations down to 16, as in the sketch below.
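A minimal sketch of that idea (the same hourglass sum as the question's code, with the loop bounds tightened so the bounds check and the flag variable disappear):

public static int hourglass(int[][] a, int n)
{
    int max = Integer.MIN_VALUE;
    // Only visit top-left corners where a full 3x3 hourglass fits: (n-2) x (n-2) positions.
    for (int i = 0; i + 2 < n; i++)
    {
        for (int j = 0; j + 2 < n; j++)
        {
            int sum = a[i][j] + a[i][j+1] + a[i][j+2]
                    + a[i+1][j+1]
                    + a[i+2][j] + a[i+2][j+1] + a[i+2][j+2];
            max = Math.max(max, sum);
        }
    }
    return max;
}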
I have a method that generates all subsets of an array; what I want to try and implement is the same sort of method, but doing it using binary. Gosper's Hack seems to be the best idea, but I have no idea how to implement it (see the sketch after the EDIT below). The code below works to generate all subsets. The subsets can be unknown (http://imgur.com/KXflVjq); this shows an output after a couple of seconds of running. Thanks for any advice.
int m = prop.length;
long list = 1L << m;                    // 2^m bit patterns; long avoids overflow for m >= 31
for (long i = 1; i < list; i++) {
    final List<Long> sub = new ArrayList<>();
    for (long j = 0; j < m; j++) {
        if ((i & (1L << j)) != 0) {     // is bit j of i set?
            sub.add(j);
        }
    }
    Collections.sort(sub);              // indices are already ascending, but kept from the original
    System.out.println(sub);
}
EDIT: As I had not worded the question correctly, what I need as output is:
2 1 0
0 0 1 = 0
0 1 0 = 1
etc.
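For reference, since the question mentions Gosper's hack: here is a minimal sketch of it in Java, assuming the goal is to enumerate all subsets of a fixed size k as bit masks (the class and method names here are illustrative, not from the question):

public class Gosper {
    // Print all k-element subsets of {0, ..., n-1} as bit masks, smallest first.
    // Assumes 0 < k <= n <= 62.
    static void subsets(int n, int k) {
        long limit = 1L << n;
        long c = (1L << k) - 1;                    // smallest mask with k bits set
        while (c < limit) {
            System.out.println(Long.toBinaryString(c));
            long lo = c & -c;                      // lowest set bit of c
            long carry = c + lo;                   // turns the lowest block of 1s into one higher 1
            c = carry | (((c ^ carry) / lo) >> 2); // append the leftover 1s at the bottom
        }
    }

    public static void main(String[] args) {
        subsets(4, 2);                             // prints 11, 101, 110, 1001, 1010, 1100
    }
}

Each step advances the mask to the next k-bit pattern in increasing numeric order, so all C(n, k) subsets are visited without touching the other 2^n patterns.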
First, I'd like to note that it's not clear what exactly it is that you're trying to achieve; please consider clarifying the question. I'll assume that you'd like to generate all k-subsets of an n-set. The problem can be easily reduced to that of generating all k-subsets of {1,2,...,n} (i.e. it suffices to compute all k-subsets of indices).
An algorithm for generating k-subsets of an n-set
A while back I wrote this implementation of a method (which I rediscovered a few years ago) for generating all k-subsets of an n-set. Hope it helps. The algorithm essentially visits all binary sequences of length n containing exactly k ones in a clever way (without going through all 2^n sequences); see the accompanying note describing the algorithm, which contains a detailed description, pseudocode, and a small step-by-step example.
I think the time complexity is of the order O(k {n choose k}). I do not yet have a formal proof for this. (It is obvious that any algorithm will have to take Omega({n choose k}) time.)
The code in C:
#include <stdlib.h>
#include <stdio.h>
void subs(int n, int k);
int main(int argc, char **argv)
{
if(argc != 3) return 1;
int n, k;
n = atoi(argv[1]); k = atoi(argv[2]);
subs(n, k);
return 0;
}
void subs(int n, int k)
{
int *p = (int *)malloc(sizeof(int)*k);
int i, j, r;
for(i = 0; i < k; ++i) p[i] = i; // initialize our "set"
// the algorithm
while(1)
{ // visit the current k-subset
for(i = 0; i < k; ++i)
printf("%d ", p[i]+1);
printf("\n");
if(p[0] == n-k) break; // if this is the last k-subset, we are done
for(i = k-1; i >= 0 && p[i]+k-i == n; --i); // find the right element
r = p[i]; ++p[i]; j = 2; // exchange them
for(++i; i < k; ++i, ++j) p[i] = r+j; // move them
}
free(p);
}
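For example, ./subs 4 2 should print the six 2-subsets of {1, 2, 3, 4} in this order: 1 2, 1 3, 1 4, 2 3, 2 4, 3 4.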
References
If this is not efficient enough, I highly recommend Knuth's Volume 4 of The Art of Computer Programming, where he deals with the problem extensively. It's probably the best reference out there (and fairly recent!).
You might even be able to find a draft of the fascicle, TAOCP Volume 4 Fascicle 3, Generating All Combinations and Partitions (2005), vi+150pp. ISBN 0-201-85394-9, on Knuth's homepage (see his news for 2011 or so).
1.
for(i = 0; i < 3; i++){
for(j = 0; j < 10; j++){
print i+j;
}
}
I would assume Big O would be 30, since the most the loops would run is 3 * 10 = 30 times.
2.
for(i = 0; i < n; i++){
for(j = 0; j < m; j++){
print i+j;
}
}
Would Big O be n*m?
3.
for(i = 0; i < n; i++){
for(j = 0; j < m; j++){
for(int k = 1; k < 1000; k *= 2){
print i+j+k;
}
}
}
My guess: n * m * log base 2 of (1000), so the Big O is in n*log(n) time?
4.
for(i = 0; i < n - 10; i++){
for(j = 0; j < m/2; j++){
print i+j;
}
}
5.
for(i = 0; i < n; i++){
print i;
}
//n and m are some integers
for(j = 1; j < m; j *= 2){
print j;
}
Can someone give me a hand with these if you know Big O? I am looking at them and am at a loss. I hope I am posting this in the right location; I find these problems difficult. I appreciate any help.
I think it's important just to point out that Big O notation is all about functions that, beyond some point and up to an arbitrary constant factor, are upper bounds on the running time.
O(1)
This is because both loop bounds are constants, so the whole thing runs in a constant amount of time. We would refer to this as O(1) instead of O(30) because the upper-bound function is the constant 1, scaled by an arbitrary constant >= 30.
O(n*m)
Simply because we have to loop through m iterations n times.
O(n*m)
This is the same as the previous one, only we're throwing in another loop in the middle. Now you can notice that this inner loop, similar to the first problem, runs a constant number of times. Therefore, you don't even need to spend time figuring out how often it loops, since it will always be constant: it is O(1), and the whole thing is O(n*m*1), which we can simply call O(n*m).
O(n*m)
For the outer loop, don't get caught up on the "- 10" and realize that we can just say that loop runs in O(n). We can ignore the "- 10" for the same reason we ignored the exact values in the first problem: constants don't really matter. The same principle applies to the m/2, because you can think of m as just being scaled by a constant factor of 1/2. So we can call this O(n*m).
T(n) = O(n) + O(lg m) => O(n + lg m)
So there are two components we have to look at here: the first loop and the second loop. The first loop is clearly O(n), so that's no problem. Now the second loop is a little tricky. Basically, you can notice that the iterator j is growing exponentially (by powers of 2), so the number of iterations grows logarithmically in m. So this function runs in O(n + lg m).
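As a quick check: with m = 16 the second loop sees j = 1, 2, 4, 8, i.e. four iterations, and log_2(16) = 4.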
Any constant factor can be ignored. O(30) is equal to O(1), which is what one would typically say for 1).
2) Just so.
3) in O(n*m*log_2(1000)), log_2(1000) is constant, so it's O(n*m).
4) O(n-10) is same as O(n). O(m/2) is same as O(m). Thus, O(n*m) again.
5) The first loop is trivially O(n); the second loop is O(log_2(m)). Together: O(n + log_2(m)).
I'd really appreciate it if any of you could point me towards the most optimized and computationally quick linear algebra library in terms of Cholesky factorization.
So far I've been using the Apache Commons Math library, but perhaps there are more robust and better-enhanced options already available.
For instance, would PColt, EJML or ojAlgo be better choices? The most urgent concern is mainly one: I need to iteratively calculate (within a 2048-element for loop, generally) the lower triangular Cholesky factor for up to three different matrices; the largest size the matrices will reach is about 2000x2000.
Cholesky factorisation is quite a simple algorithm. Here's the (unoptimised) C# code that I use. C# and Java are quite similar, so it should be an easy job for you to convert it to Java and make whatever improvements you deem necessary.
public class CholeskyDecomposition {

    public static double[,] Do(double[,] input) {
        int size = input.GetLength(0);
        if (input.GetLength(1) != size)
            throw new Exception("Input matrix must be square");

        double[] p = new double[size];          // diagonal of the factor
        double[,] result = new double[size, size];
        Array.Copy(input, result, input.Length);

        for (int i = 0; i < size; i++) {
            for (int j = i; j < size; j++) {
                double sum = result[i, j];
                for (int k = i - 1; k >= 0; k--)
                    sum -= result[i, k] * result[j, k];
                if (i == j) {
                    if (sum <= 0.0)             // <= rather than < also guards the division below
                        throw new Exception("Matrix is not positive definite");
                    p[i] = System.Math.Sqrt(sum);
                } else {
                    result[j, i] = sum / p[i];
                }
            }
        }

        // Copy the diagonal back in and zero the upper triangle.
        for (int r = 0; r < size; r++) {
            result[r, r] = p[r];
            for (int c = r + 1; c < size; c++)
                result[r, c] = 0;
        }
        return result;
    }
}
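As a quick sanity check (my example, not part of the original answer): calling CholeskyDecomposition.Do on the 2x2 matrix {{4, 2}, {2, 3}} should return the lower triangular factor {{2, 0}, {1, sqrt(2)}}, since that factor times its own transpose reproduces the input.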
Have a look at the Java Matrix Benchmark. The "Inver Symm" case tests inverting a matrix using the Cholesky decomposition. If you get the source code for the benchmark, there is also a pure Cholesky decomposition test that you can turn on.
Here's another comparison of various matrix decompositions between ojAlgo and JAMA