Find all triplets in array in O(n^2) - java

I am trying to print all triplets in an array; unlike 3SUM or anything similar, they don't have to satisfy any condition. I just want to print them all.
My simple solution:
for (int i = 0; i < arr.length - 2; i++) {
    for (int j = i + 1; j < arr.length - 1; j++) {
        for (int k = j + 1; k < arr.length; k++) {
            System.out.println(arr[i] + " " + arr[j] + " " + arr[k]);
        }
    }
}
runs in O(n^3), with the number of triplets being n*(n-1)*(n-2)/3!, i.e. n choose 3.
So my question is, can this be done any faster than O(n^3)?

n choose 3, which is the number of distinct triplets, is Θ(n^3).
So no, theoretically it is impossible to do better than that, since merely printing them all already takes Θ(n^3) operations.

You can't print ~n^3 triplets in fewer than ~n^3 operations.
If computation time is the problem, you can split the printing across n threads; in theory that reduces the wall-clock time by a factor of n.
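As a rough illustration of that threading suggestion (the class name, pool size, and per-task buffering below are my own choices, not something given in this thread), each task can own one value of i; the total work is still on the order of n^3, only the wall-clock time can shrink:
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ParallelTriplets {
    public static void main(String[] args) throws InterruptedException {
        int[] arr = {1, 2, 3, 4, 5, 6};
        ExecutorService pool = Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

        // Each task owns one value of i, so every triplet is printed exactly once.
        for (int i = 0; i < arr.length - 2; i++) {
            final int fi = i;
            pool.submit(() -> {
                StringBuilder sb = new StringBuilder();
                for (int j = fi + 1; j < arr.length - 1; j++) {
                    for (int k = j + 1; k < arr.length; k++) {
                        sb.append(arr[fi]).append(' ').append(arr[j]).append(' ')
                          .append(arr[k]).append('\n');
                    }
                }
                System.out.print(sb); // one write per task, so lines don't interleave
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}
In practice the printing itself goes through a single shared stream, so the observed speedup is usually much smaller than the theoretical factor of n.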

Related

How do you calculate T(n) in an equation from a code fragment?

I'm having trouble converting code fragments into equations that give T(n) for the fragment. An example code fragment is this:
a = b + c;
d = a + e;
This specific question asks to determine T(n). How would I go about doing that? Another example of code given is as follows:
sum = 0;
for (i = 0; i < 3; i++)
    for (j = 0; j < n; j++)
        sum++;
I've attempted to follow instructions and examples from other similar questions, but I'm getting stuck on how exactly the equation is found. Here's an example and result that I know of so far:
Code:
sum = 0;
i = 1;
while (i <= N)
{
    sum = sum + 1;
    i++;
}
With the result being this:
T(N) = 2 + (Σ_{i=1}^{N} 1) * 2
and that ends up simplifying to 2 + 2N
I'm unsure of how this result is found though.
Here is how you can approach this problem:
sum = 0;                    // 1 - O(1)
for (i = 0; i < 3; i++)     // This will run 3 times - O(3)
    for (j = 0; j < n; j++) // This will run n times (O(n)), meaning the sum will be increased n times
        sum++;
Then you can write T(n) = 1 + 3*(1 + n + 2n) = 4 + 9n, which grows linearly in n. This means that the time complexity for this code is O(n).
Generally, for a simple for loop like the ones here, you can determine the complexity by looking at the loop condition: for j < n the complexity is O(n), and for i < 3 it is O(3), i.e. constant.
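If you want to sanity-check a hand-derived T(n), one option (my own illustration, not part of the original answer) is to instrument the fragment with a counter for the indicative operation:
public class OperationCount {
    public static void main(String[] args) {
        int n = 100;  // hypothetical input size, just for the demonstration
        long ops = 0; // counts the indicative operations (assignments to sum)

        int sum = 0; ops++;   // runs once
        for (int i = 0; i < 3; i++) {
            for (int j = 0; j < n; j++) {
                sum++; ops++; // runs 3 * n times
            }
        }
        // Expect ops == 1 + 3 * n; doubling n roughly doubles the count, i.e. O(n).
        System.out.println("n = " + n + ", counted operations = " + ops);
    }
}
Doubling n should roughly double the counted operations, which is exactly what linear growth means.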

Run time complexity for Two for loops in One for loop

I am writing a Java method that finds the stability indexes of an array. My algorithm works fine, but I am unsure of its run time complexity. I believe it is O(n), since the first loop is O(n) and the two inner loops are O(2n), but I'm not sure.
int[] arr = {0, -3, 5, -4, -2, 3, 1, 0};
for (int num = 0; num < arr.length; num++) {
    int sumLeft = 0;
    int sumRight = 0;
    for (int i = 0; i < num; i++) {
        sumLeft = sumLeft + arr[i];
    }
    for (int i = num + 1; i < arr.length; i++) {
        sumRight = sumRight + arr[i];
    }
    if (sumLeft == sumRight) {
        System.out.println(num);
    }
}
Output:
0
3
7
We can do two things:
We can give you an answer, and that answer is O(N^2).
We can explain how to work it out for yourself.
The way to work it out is to count the operations.
When I say "count", I don't mean that literally. What I actually mean is that you need to work out an algebraic formula for the number of times some indicative operation is performed.
So in your example, I would identify these two statements as the most important ones:
sumLeft= sumLeft + arr[i];
sumRight= sumRight + arr[i];
(Why did I pick those statements? Intuition / experience! The pedantic way to do this is to count all operations. But with experience, you can pick the important ones ... and the rest don't matter.)
So now for the formulae:
In one iteration of the outer loop, the first statement is executed for i from 0 to num - 1; i.e. num times.
In one iteration of the outer loop, the second statement is executed for i from num + 1 to array.length - 1; i.e. array.length - num - 1 times.
So, in one iteration of the outer loop, the two statements are executed num + array.length - num - 1 times which reduces to array.length - 1 times.
But the outer loop runs array.length times. So the two statements are executed array.length x (array.length - 1) times.
Finally, by the definition of Big Oh, array.length x (array.length - 1) is in the complexity class O(N^2) where N is the array size.
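If you want to confirm that algebra empirically, a small counter over the two indicative statements (added here purely for illustration) lands exactly on array.length * (array.length - 1):
public class StabilityCount {
    public static void main(String[] args) {
        int[] arr = {0, -3, 5, -4, -2, 3, 1, 0};
        long additions = 0; // counts executions of the two indicative statements

        for (int num = 0; num < arr.length; num++) {
            int sumLeft = 0;
            int sumRight = 0;
            for (int i = 0; i < num; i++) {
                sumLeft = sumLeft + arr[i];
                additions++;
            }
            for (int i = num + 1; i < arr.length; i++) {
                sumRight = sumRight + arr[i];
                additions++;
            }
            if (sumLeft == sumRight) {
                System.out.println(num);
            }
        }
        // For N = arr.length = 8 this reports 56 additions, i.e. N * (N - 1).
        System.out.println("additions = " + additions
                + ", N*(N-1) = " + arr.length * (arr.length - 1));
    }
}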
It is O(n^2):
for (int num = 0; num < arr.length; num++) {
    int sumLeft = 0;
    int sumRight = 0;
    for (int i = 0; i < num; i++) { // <------------- 1 ~ n
        // https://en.wikipedia.org/wiki/1_%2B_2_%2B_3_%2B_4_%2B_%E2%8B%AF
        sumLeft = sumLeft + arr[i];
    }
    for (int i = num + 1; i < arr.length; i++) {
        sumRight = sumRight + arr[i];
    }
    if (sumLeft == sumRight) {
        System.out.println(num);
    }
}

Worst Case Big O with Java Algorithms

1.
for (i = 0; i < 3; i++) {
    for (j = 0; j < 10; j++) {
        print i+j;
    }
}
I would assume the Big O would be 30, since the loops run at most 3*10 times.
2.
for (i = 0; i < n; i++) {
    for (j = 0; j < m; j++) {
        print i+j;
    }
}
Would the Big O be n*m?
3.
for (i = 0; i < n; i++) {
    for (j = 0; j < m; j++) {
        for (int k = 1; k < 1000; k *= 2) {
            print i+j+k;
        }
    }
}
I get n * m * log base 2 of 1000, so is the Big O n log(n) time?
4.
for (i = 0; i < n - 10; i++) {
    for (j = 0; j < m/2; j++) {
        print i+j;
    }
}
5.
for (i = 0; i < n; i++) {
    print i;
}
// n and m are some integers
for (j = 1; j < m; j *= 2) {
    print j;
}
Can someone who knows Big O give me a hand with these? I am looking at them and am at a loss. I hope I am posting this in the right location; I find these problems difficult. I appreciate any help.
I think it's important just to point out that Big O notation is all about functions that, scaled by an arbitrary constant, act as upper bounds from some point onward.
O(1)
This is because each loop runs a constant number of times. We refer to this as O(1) rather than O(30) because the upper-bound function is the constant 1, scaled by an arbitrary constant >= 30.
O(n*m)
Simply because we have to loop through m iterations n times.
O(n*m)
This is the same as the previous one, only we're adding another, innermost loop. Now you can notice that this loop, just like the loops in the first problem, takes only constant time: k doubles until it reaches 1000, so it runs about log2(1000) ≈ 10 times regardless of n and m. Therefore you don't even need to work out exactly how often it loops; it is O(1), the whole thing is O(n*m*1), and we can simply call that O(n*m).
O(n*m)
For the outer loop, don't get caught up on the "- 10": we can just say that loop runs in O(n). We can ignore the "- 10" for the same reason we ignored the exact values in the first problem; constants don't really matter. The same principle applies to the m/2, because you can think of m as merely being scaled by a constant factor of 1/2. So we can just call this O(n*m).
T(n) = O(n) + O(lg m) => O(n + lg m)
So there are two components we have to look at here: the first loop and the second loop. The first loop is clearly O(n), so that's no problem. The second loop is a little trickier: the iterator j grows exponentially (it doubles each time, i.e. powers of 2), so the number of iterations grows only logarithmically in m. So this function runs in O(n + lg m).
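A quick way to convince yourself of the logarithmic part is to count the iterations of a doubling loop directly; the value of m below is just an arbitrary example of mine:
public class DoublingLoop {
    public static void main(String[] args) {
        int m = 1_000_000; // arbitrary example value
        int iterations = 0;
        for (int j = 1; j < m; j *= 2) {
            iterations++;
        }
        // iterations == ceil(log2(m)) == 20 here; doubling m adds only one more iteration.
        System.out.println("m = " + m + ", iterations = " + iterations);
    }
}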
Any constant factor can be ignored. O(30) is equal to O(1), which is what one would typically say for 1).
2) Just so: O(n*m).
3) In O(n*m*log_2(1000)), the log_2(1000) factor is constant, so it's O(n*m).
4) O(n-10) is the same as O(n), and O(m/2) is the same as O(m). Thus, O(n*m) again.
5) The first loop is trivially O(n).
6) The second loop in 5) is O(log_2(m)), so together they run in O(n + log_2(m)).

What's the big-O of this code?

Could somebody please help me find the big-O of this code? I've already tried to calculate it but I just don't understand how.
int n = //user input
for (int i = 0; i < n; i++) {
    for (int j = 1; j < n; j = j * 2) {
        System.out.println(i * j);
    }
}
You should ask yourself how many iterations are in the outer loop and how many iterations are in the inner loop. Then you multiply the two.
The outer loop is simple - i grows from 0 to n-1 in increments of 1, so the total number of iterations is n, which is O(n).
In the inner loop j grows from 1 to n-1, but j is multiplied by 2 in each iteration. If n=2^k, for some integer k, there would be k iterations, and log n = k. Therefore there are O(log n) iterations in the inner loop.
Multiplying the two, you get O(n) * O(log n) = O(n log n).
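To see that multiplication concretely, you can count how many times the print statement would run for a sample n; the sketch below is illustrative (the value of n is my own choice, and a power of two keeps the arithmetic exact):
public class NestedLoopCount {
    public static void main(String[] args) {
        int n = 1024; // sample input
        long prints = 0;
        for (int i = 0; i < n; i++) {
            for (int j = 1; j < n; j = j * 2) {
                prints++; // stands in for the System.out.println(i * j)
            }
        }
        // Inner loop runs log2(n) = 10 times per outer iteration, so prints == n * log2(n) = 10240.
        System.out.println("prints = " + prints);
    }
}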
O(n Log n). Why?
for (int i = 0; i < n; i++) {           // n
    for (int j = 1; j < n; j = j * 2) { // log n (multiplying by 2)
        System.out.println(i * j);
    }
}
Using Sigma notation, you may do the following: Σ_{i=0}^{n-1} ⌈log_2 n⌉ = n⌈log_2 n⌉, which is O(n log n).

Finding all possible combinations of elements of six arrays

I have 6 arrays, each has 8 elements. I'd like to write a method that reveals all the possible combinations of all the elements of all arrays like:
firstArray firstElement, secondArray firstElement,.... sixthArray firstElement
firstArray secondElement, secondArray firstElement,.... sixthArray firstElement
....etc...
firstArray lastElement, secondArray lastElement,.... sixthArray lastElement
How can I do this in the most efficient way, the most performance-friendly way?
for (int i = 0; i < A.length; i++) {
    for (int j = 0; j < B.length; j++) {
        for (int h = 0; h < C.length; h++) {
            for (int k = 0; k < D.length; k++) {
                for (int l = 0; l < E.length; l++) {
                    for (int n = 0; n < F.length; n++) {
                        System.out.println(A[i] + " "
                                + B[j] + " "
                                + C[h] + " "
                                + D[k] + " "
                                + E[l] + " "
                                + F[n]);
                    }
                }
            }
        }
    }
}
Simplest Code would be:
for (first array a) {
    for (second array b) {
        for (third array c) {
            for (fourth array d) {
                for (fifth array e) {
                    for (sixth array f) {
                        System.out.println(a[], b[], c[], d[], e[], f[]);
                    }
                }
            }
        }
    }
}
This is not good performance-wise, as it will take (elements per array)^(number of arrays) iterations, i.e. 8^6 = 262,144 print operations here.
This is fast becoming an SO FAQ, but for the life of me I can't find the right question that this is a duplicate of, so here's the FPA (frequently provided answer).
Generate all the 6-digit base-8 numbers from 000000 to 777777 in turn. Each number specifies one of the sets you are looking for: the first digit identifies the element of the first array, the second digit the element of the second array, etc.
That should be enough to get you started; any 'help' I provided in Java would be laughed at. Whether this is better than the answer you already have (or indeed, materially different from it), I leave you and others to judge.
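For what it's worth, here is a minimal Java sketch of that base-8 counting idea, with placeholder data standing in for the question's six arrays of eight elements; it visits the combinations in the same order as the nested loops:
public class CartesianProduct {
    public static void main(String[] args) {
        // Placeholder data: six arrays of eight elements each, as in the question.
        String[][] arrays = new String[6][8];
        for (int a = 0; a < 6; a++) {
            for (int e = 0; e < 8; e++) {
                arrays[a][e] = "arr" + a + "[" + e + "]";
            }
        }

        int total = 8 * 8 * 8 * 8 * 8 * 8; // 262144: the base-8 numbers 000000 .. 777777
        for (int code = 0; code < total; code++) {
            StringBuilder line = new StringBuilder();
            for (int a = 0; a < 6; a++) {
                // Base-8 digit (5 - a) of 'code' picks the element of array a.
                int digit = (code >> (3 * (5 - a))) & 7;
                line.append(arrays[a][digit]).append(' ');
            }
            System.out.println(line.toString().trim());
        }
    }
}
The counting approach does exactly the same amount of work as the nested loops; its appeal is that the number of arrays no longer has to be hard-coded as a loop depth.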
For your future reference, you are trying to compute the Cartesian product of your 6 arrays. As to the efficiency of these approaches: computing the Cartesian product of 2 sets of n elements each is O(n^2), and there is no getting around that by clever programming. So for 6 sets, each of n elements, the computational complexity is going to be O(n^6).
