I have started learning Big O notation and analysing time complexities, and I have been experimenting with some code to try to understand its time complexity. Here's one of the examples I had, but I can't seem to figure out the time complexity of the sum method. I reckon it is O(n^3), but my friend says it should be O(n^2). Any insights as to which is the right answer?
public static double sum(double[] array)
{
    double sum = 0.0;
    for (int k = 0; k < size(array); k++)
        sum = sum + get(array, k);
    return sum;
}

public static double get(double[] array, int k)
{
    for (int x = 0; x < k; x++)
        if (x == k) return array[k];
    return -1;
}

public static int size(double[] array)
{
    int size = 0;
    for (int k = 0; k < array.length; k++)
        size++;
    return size;
}
Your friend is right: the time complexity is O(n^2).
On each iteration of the loop in sum, you call size(), which is O(n), and get(array, k), which does k iterations, i.e. O(n) in the worst case. Each iteration therefore costs O(n), and with n iterations the total is O(n^2). (Constant factors, such as the average of about 1.5n operations per iteration, are dropped in Big O notation.)
As a side note, get is buggy: since the loop condition is x < k, the test x == k can never succeed, so get always returns -1. That doesn't change the asymptotic cost, though.
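To see the quadratic growth concretely, here is a small instrumented sketch of the code above (the class name and the `ops` counter are my additions, purely for illustration) that counts every inner-loop iteration performed by `size` and `get`:

```java
public class SumCost {
    static long ops = 0; // counts inner-loop iterations (added for illustration)

    static int size(double[] array) {
        int size = 0;
        for (int k = 0; k < array.length; k++) { ops++; size++; }
        return size;
    }

    static double get(double[] array, int k) {
        for (int x = 0; x < k; x++) { ops++; if (x == k) return array[k]; }
        return -1; // note: x == k never holds inside the loop
    }

    static double sum(double[] array) {
        double sum = 0.0;
        for (int k = 0; k < size(array); k++) // size() re-runs on every loop check
            sum = sum + get(array, k);
        return sum;
    }

    public static void main(String[] args) {
        double[] a = new double[10];
        sum(a);
        // size() does n iterations per call and is called n+1 times (once per
        // loop check); get(a, k) does k iterations for k = 0..n-1, adding
        // n(n-1)/2 more. For n = 10 that is 11*10 + 45 = 155, roughly 1.5*n^2.
        System.out.println(ops);
    }
}
```

Doubling n roughly quadruples the printed count, which is the hallmark of O(n^2).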
I am trying to understand the time complexity of this program and why it is what it is. I've made some notes on what I think each line costs, but I'm unsure whether I understood it correctly.
public static int countSteps(int n) {
    int pow = 2;                          // O(1)
    int steps = 0;                        // O(1)
    for (int i = 0; i < n; i++) {         // O(n)
        if (i == pow) {                   // O(1)
            pow *= 2;                     // O(1)
            for (int j = 0; j < n; j++) { // O(n)
                steps++;                  // O(1)
            }
        }
        else {
            steps++;                      // O(1)
        }
    }
    return steps;                         // O(1)
}
The inner loop spends a lot of time iterating through n every time the if-statement is triggered. Does that affect the overall time complexity, or does it stay O(n)?
The outer loop alone makes it at least O(n). The inner loop runs only when i is a power of two, which happens about log2(n) times, and each run costs O(n). The total is therefore O(n + n log n) = O(n log n).
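A quick way to sanity-check this: the inner loop fires once for every power of two in [2, n-1], so the total step count is (n - p) + p*n = n + p*(n - 1), where p is the number of such powers (this formula is my derivation from the code above, not from the original answer):

```java
public class CountStepsDemo {
    static int countSteps(int n) {
        int pow = 2;
        int steps = 0;
        for (int i = 0; i < n; i++) {
            if (i == pow) {
                pow *= 2;
                for (int j = 0; j < n; j++) {
                    steps++;
                }
            } else {
                steps++;
            }
        }
        return steps;
    }

    public static void main(String[] args) {
        // For n = 10 the inner loop fires at i = 2, 4, 8 (p = 3),
        // giving (10 - 3) + 3*10 = 37 steps.
        System.out.println(countSteps(10));
        // p grows like log2(n), so steps ~ n + n*log2(n), i.e. O(n log n).
    }
}
```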
So I'm preparing for a technical interview, and one of my practice questions is the Kth smallest number.
I know that I can do a sort for O(n * log(n)) time and use a heap for O(n * log(k)). However, I also know I can partition it (similarly to quicksort) for an average case of O(n).
The actual calculated average number of iterations should be (this is the formula my code below computes, assuming the array is halved on each partition):
2n - 2 - log2(n)
I've double checked this math using WolframAlpha, and it agrees.
So I've coded my solution, and then I measured the actual average number of iterations on random data sets. For small values of n, it's pretty close. For example, n=5 might give me an actual of around 6.2 when I expect around 5.7. This slight overshoot is consistent.
This only gets worse as I increase the value of n. For example, for n=5000, I get around 15,000 for my actual average time complexity, when it should be slightly less than 10,000.
So basically, my question is where are these extra iterations coming from? Is my code wrong, or is it my math? My code is below:
import java.util.Arrays;
import java.util.Random;

public class Solution {
    static long tc = 0;

    static void swap(int[] arr, int i, int j) {
        int temp = arr[i];
        arr[i] = arr[j];
        arr[j] = temp;
    }

    static int kMin(int[] arr, int k) {
        arr = arr.clone();
        int pivot = pivot(arr);
        if (pivot > k) {
            return kMin(Arrays.copyOfRange(arr, 0, pivot), k);
        } else if (pivot < k) {
            return kMin(Arrays.copyOfRange(arr, pivot + 1, arr.length), k - pivot - 1);
        }
        return arr[k];
    }

    static int pivot(int[] arr) {
        Random rand = new Random();
        int pivot = rand.nextInt(arr.length);
        swap(arr, pivot, arr.length - 1);
        int i = 0;
        for (int j = 0; j < arr.length - 1; j++) {
            tc++;
            if (arr[j] < arr[arr.length - 1]) {
                swap(arr, i, j);
                i++;
            }
        }
        swap(arr, i, arr.length - 1);
        return i;
    }

    public static void main(String[] args) {
        int iterations = 10000;
        int n = 5000;
        for (int j = 0; j < iterations; j++) {
            Random rd = new Random();
            int[] arr = new int[n];
            for (int i = 0; i < arr.length; i++) {
                arr[i] = rd.nextInt();
            }
            int k = rd.nextInt(arr.length - 1);
            kMin(arr, k);
        }
        System.out.println("Actual: " + tc / (double) iterations);
        double expected = 2.0 * n - 2.0 - (Math.log(n) / Math.log(2));
        System.out.println("Expected: " + expected);
    }
}
As you and others have pointed out in the comments, your calculation assumed that the array was split in half on each iteration by the random pivot, which is incorrect. This uneven splitting has a significant impact: when the element you're trying to select is the actual median, for instance, the expected size of the array after one random pivot choice is 75% of the original, since you'll always choose the larger of the two arrays.
For an accurate estimate of the expected comparisons for each value of n and k, David Eppstein published an accessible analysis here and derives (approximately) this formula:
comparisons(n, k) ≈ 2 * (n + k*ln(n/k) + (n-k)*ln(n/(n-k)))
This is a very close estimate for your values, even though this assumes no duplicates in the array.
Calculating the expected number of comparisons for k from 1 to n-1, as you do, gives ~7.499 * 10^7 total comparisons when n=5000, or almost exactly 15,000 comparisons per call of Quickselect as you observed.
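For reference, here is a small sketch that evaluates that estimate numerically (the class and method names are mine, and the formula is the approximate expected-comparison count for random-pivot Quickselect on distinct values); averaging it over k reproduces the ~3n ≈ 15,000 comparisons per call observed above:

```java
public class QuickselectEstimate {
    // Approximate expected comparisons for selecting the k-th smallest of n
    // distinct items with random-pivot Quickselect:
    //   2 * (n + k*ln(n/k) + (n-k)*ln(n/(n-k)))
    static double expectedComparisons(int n, int k) {
        double a = (k == 0) ? 0 : k * Math.log((double) n / k);
        double b = (n - k == 0) ? 0 : (n - k) * Math.log((double) n / (n - k));
        return 2.0 * (n + a + b);
    }

    public static void main(String[] args) {
        int n = 5000;
        double total = 0;
        for (int k = 1; k < n; k++) {
            total += expectedComparisons(n, k);
        }
        System.out.println("total:    " + total);           // ~7.499e7
        System.out.println("per call: " + total / (n - 1)); // ~15000, i.e. ~3n
    }
}
```

The per-call average works out to about 3n rather than the 2n the halving assumption predicts, which accounts for the gap in the measurements.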
I'm trying to figure out the Big O and Big Omega of the following piece of code, which takes an array of ints and sorts them in ascending order. The worst case would be all elements in descending order, e.g. {5,4,3,2,1}, and the best case would be ascending order, e.g. {1,2,3,4,5}.
static int counter = 0;
static int counter1 = 0;
static int counter2 = 0;

public static int[] MyAlgorithm(int[] a) {
    int n = a.length;
    boolean done = true;
    int j = 0;
    while (j <= n - 2) {
        counter++;
        if (a[j] > a[j + 1]) {
            int temp = a[j];
            a[j] = a[j + 1];
            a[j + 1] = temp;
            done = false;
        }
        j = j + 1;
    }
    j = n - 1;
    while (j >= 1) {
        counter1++;
        if (a[j] < a[j - 1]) {
            int temp = a[j - 1];
            a[j - 1] = a[j];
            a[j] = temp;
            done = false;
        }
        j = j - 1;
    }
    if (!done) {
        counter2++;
        MyAlgorithm(a);
    }
    return a;
}
The worst case I got for each while loop was n-1 iterations, and for the recursion it was n/2 calls. The best case is n-1 iterations per while loop and zero recursion.
So my Big Omega is Omega(n) (no recursion). But for Big O, here is the confusing part for me: since there are n/2 recursive calls, does that mean I do n * n work (because of the n/2 recursion), i.e. Big O(n^2)? Or does it stay Big O(n)?
As you said, the best case is Omega(n). If all numbers in the array a are already in sorted order, the code iterates over the array twice, once per while loop; that is about 2n steps of O(1) each, and no recursion.
In the worst case you are correct in assuming O(n^2). As you saw, an array sorted in reverse order produces such a worst-case scenario: each call of MyAlgorithm carries the largest remaining misplaced number to the end (forward pass) and the smallest to the front (backward pass), so about n/2 recursive runs are needed before the middle elements reach their final positions, each run costing O(n). Hence, O(n/2 * n) = O(n^2).
Small side note: comparison-based sorting is in general Omega(n log n), so you can sort in O(n) only under special circumstances (e.g. counting sort on a bounded range of keys).
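Here is the reverse-order worst case with the recursion counter instrumented (a self-contained sketch of the code above; I renamed the method and kept only counter2, the recursive-call counter):

```java
import java.util.Arrays;

public class MyAlgorithmDemo {
    static int counter2 = 0; // counts recursive calls, as in the original

    static int[] myAlgorithm(int[] a) {
        int n = a.length;
        boolean done = true;
        for (int j = 0; j <= n - 2; j++) {  // forward pass: bubble large values right
            if (a[j] > a[j + 1]) {
                int temp = a[j]; a[j] = a[j + 1]; a[j + 1] = temp;
                done = false;
            }
        }
        for (int j = n - 1; j >= 1; j--) {  // backward pass: bubble small values left
            if (a[j] < a[j - 1]) {
                int temp = a[j - 1]; a[j - 1] = a[j]; a[j] = temp;
                done = false;
            }
        }
        if (!done) {
            counter2++;
            myAlgorithm(a);
        }
        return a;
    }

    public static void main(String[] args) {
        int[] a = {6, 5, 4, 3, 2, 1};       // reverse order: the worst case
        myAlgorithm(a);
        System.out.println(Arrays.toString(a)); // sorted ascending
        System.out.println(counter2);       // about n/2 recursive calls, each O(n)
    }
}
```

With counter2 growing like n/2 and each call doing about 2n comparisons, the product gives the O(n^2) worst case.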
My CS teacher asked us to "add a small change" to this code to make it run with N^3 - N^2 operations instead of the usual N^3. I cannot for the life of me figure it out, and I was wondering if anyone happened to know. I don't think he is talking about Strassen's method.
From looking at it, maybe it could take advantage of the fact that it only deals with square (n x n) matrices.
void multiply(int n, int A[][], int B[][], int C[][]) {
    for (int i = 0; i < n; i++)
    {
        for (int j = 0; j < n; j++)
        {
            C[i][j] = 0;
            for (int k = 0; k < n; k++)
            {
                C[i][j] += A[i][k] * B[k][j];
            }
        }
    }
}
No known algorithm achieves matrix multiplication in O(N^2). However, you can improve on O(N^3). In linear algebra there are algorithms like the Strassen algorithm, which reduces the time complexity to O(N^2.8074) by reducing the number of multiplications required for each 2x2 sub-matrix from 8 to 7.
An improved version of the Coppersmith-Winograd algorithm is among the fastest known matrix multiplication algorithms, with a time complexity of about O(N^2.3729).
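For illustration, here are Strassen's seven products for a single 2x2 multiplication (these are the standard textbook formulas; the class and method names are mine). Applied recursively to n/2 x n/2 blocks, saving one of the eight recursive multiplications is what yields the O(N^2.8074) bound:

```java
import java.util.Arrays;

public class Strassen2x2 {
    // Multiplies [[a11,a12],[a21,a22]] * [[b11,b12],[b21,b22]]
    // using Strassen's 7 multiplications instead of the naive 8.
    // Returns {c11, c12, c21, c22}.
    static int[] multiply2x2(int a11, int a12, int a21, int a22,
                             int b11, int b12, int b21, int b22) {
        int m1 = (a11 + a22) * (b11 + b22);
        int m2 = (a21 + a22) * b11;
        int m3 = a11 * (b12 - b22);
        int m4 = a22 * (b21 - b11);
        int m5 = (a11 + a12) * b22;
        int m6 = (a21 - a11) * (b11 + b12);
        int m7 = (a12 - a22) * (b21 + b22);
        return new int[] {
            m1 + m4 - m5 + m7,  // c11
            m3 + m5,            // c12
            m2 + m4,            // c21
            m1 - m2 + m3 + m6   // c22
        };
    }

    public static void main(String[] args) {
        // [[1,2],[3,4]] * [[5,6],[7,8]] = [[19,22],[43,50]]
        System.out.println(Arrays.toString(multiply2x2(1, 2, 3, 4, 5, 6, 7, 8)));
    }
}
```

The trade-off is more additions and subtractions per block, which is why Strassen only pays off for reasonably large matrices.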
Can anyone help analyze the time complexity of this code and explain why it is what it is? I'm comparing the array elements with each other, keeping one as the current max, but I'm not sure how to calculate the time complexity. Can anyone help me with this?
class Largest
{
    public static void main(String[] args)
    {
        int[] array = {33, 55, 13, 46, 87, 42, 10, 34};
        int max = array[0]; // assume array[0] is the max for the time being
        for (int i = 1; i < array.length; i++) // compare each remaining element with max
        {
            if (max < array[i])
            {
                max = array[i];
            }
        }
        System.out.println("Largest is: " + max);
    }
}
It is O(n):
// loop runs n-1 times: O(n)
for (int i = 1; i < array.length; i++)
{
    // one comparison operation: O(1)
    if (max < array[i])
    {
        // one assignment operation: O(1)
        max = array[i];
    }
}
You do 2 constant-time operations, n-1 times:
O(n) * [O(1) + O(1)] = O(n)
Your loop goes around n - 1 times, so the complexity is O(n).
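To confirm, here is a sketch of the code above with a comparison counter added (the counter and class name are my additions):

```java
public class LargestDemo {
    static int comparisons = 0; // added for illustration

    // Returns the max; the loop body runs exactly array.length - 1 times.
    static int largest(int[] array) {
        int max = array[0];
        for (int i = 1; i < array.length; i++) {
            comparisons++; // one comparison per remaining element
            if (max < array[i]) {
                max = array[i];
            }
        }
        return max;
    }

    public static void main(String[] args) {
        int[] array = {33, 55, 13, 46, 87, 42, 10, 34};
        System.out.println("Largest is: " + largest(array)); // 87
        System.out.println(comparisons); // 7 = n - 1, hence O(n)
    }
}
```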