I want to analyze the execution time complexity of the program below.
Please include an explanation with your answer.
private static void printSecondLargest(int[] arr) {
    int length = arr.length, temp;
    for (int i = 0; i < 2; i++) {
        for (int j = i + 1; j < length; j++) {
            if (arr[i] < arr[j]) {
                temp = arr[i];
                arr[i] = arr[j];
                arr[j] = temp;
            }
        }
    }
    System.out.println("Second largest is: " + arr[1]);
}
It's O(n) where n represents the length of the array.
The body of the innermost loop:
if (arr[i] < arr[j]) {
    temp = arr[i];
    arr[i] = arr[j];
    arr[j] = temp;
}
runs in constant time.
This body is executed arr.length - 1 times on the first pass of the outer loop and arr.length - 2 times on the second, for a total of 2 * arr.length - 3 executions. Thus the execution time is proportional to 2n - 3, which is O(n).
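If you want to verify the 2n - 3 count empirically, here is a small instrumented sketch (the class name and the counter variable are my own additions, not part of the original code):
// Hypothetical instrumentation: counts how often the inner-loop body
// of printSecondLargest executes for arrays of different lengths.
public class SwapCountDemo {
    static int countBodyExecutions(int[] arr) {
        int length = arr.length, temp, executions = 0;
        for (int i = 0; i < 2; i++) {
            for (int j = i + 1; j < length; j++) {
                executions++;                 // one constant-time body execution
                if (arr[i] < arr[j]) {
                    temp = arr[i];
                    arr[i] = arr[j];
                    arr[j] = temp;
                }
            }
        }
        return executions;
    }

    public static void main(String[] args) {
        for (int n = 2; n <= 6; n++) {
            // Prints 1, 3, 5, 7, 9 -- that is, 2n - 3 for each n.
            System.out.println("n = " + n + ": " + countBodyExecutions(new int[n]));
        }
    }
}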
It is clearly O(n). The outer loop runs only 2 times and the inner loop runs at most N times, so the overall complexity is O(2 * N), which is O(N).
The outer loop will run two times; the inner loop runs (length - 1) times on the first pass and (length - 2) times on the second.
Suppose length is N.
So the total number of inner iterations is (N - 1) + (N - 2) = 2N - 3.
That is (2N - 3), and in O notation it is O(N).
private static void printSecondLargest(int[] arr) {
    int length = arr.length, temp;                 // takes constant time
    for (int i = 0; i < 2; i++) {                  // the outer loop runs only two times, so it contributes a constant factor
        for (int j = i + 1; j < length; j++) {     // this loop runs on the order of n times if the array has n elements
            if (arr[i] < arr[j]) {
                temp = arr[i];                     // the swap also takes constant time
                arr[i] = arr[j];
                arr[j] = temp;
            }
        }
    }
    System.out.println("Second largest is: " + arr[1]);
}
So, per the calculation above, we neglect the constant-time parts and keep only the part that varies with the input size; the complexity of the code is therefore O(n).
I am writing code to sort an array into order. I looked at some sorting and merging algorithms, but I figured I could just go through the array, compare every two adjacent elements, swap them if needed, and repeat that until the array is sorted.
So: if array[i] > array[i++], swap, and repeat.
It is not working so far, and I also need a stopping condition to avoid a stack overflow. I need some help, please.
Array:
int[] newArray = new int[] {3,9,5,7,4,6,1};
SortArray s = new SortArray();
s.sortThisArray(newArray, 0, 0);
Recursive function:
public String sortThisArray(int[] array, double counter, double a)
{
    int swap0 = 0;
    int swap1 = 0;
    if (a > 1000)
    {
        return "reached the end";
    }
    for (int i = 0; i < array.length; i++)
    {
        if (array[i] > array[i++])
        {
            swap0 = array[i];
            swap1 = array[i++];
            array[i++] = swap0;
            array[i] = swap1;
            counter = counter++;
            a = array.length * counter;
            sortThisArray(array, counter, a);
        }
    }
    for (int j = 0; j < array.length; j++)
    {
        System.out.println(array[j]);
    }
    return "completed";
}
What you are looking for is the recursive bubble sort algorithm.
The main error is confusing i++ (which increments i every time it is evaluated) with i + 1, which is just the position in the array after i, without incrementing anything. Also, there is no need to use double for counter, and the a variable is not needed at all. You only need the length of the current segment, this way:
import java.util.*;

public class RecursiveBubbleSort {
    public static void main(String[] args) throws Exception {
        int[] newArray = new int[] {3, 9, 5, 7, 4, 6, 1};
        sortThisArray(newArray, newArray.length);
        System.out.println("Sorted array : ");
        System.out.println(Arrays.toString(newArray));
    }

    public static int[] sortThisArray(int[] array, int n) {
        if (n == 1) {
            return array; // finished sorting
        }
        int temp;
        for (int i = 0; i < n - 1; i++) {
            if (array[i + 1] < array[i]) {
                temp = array[i];
                array[i] = array[i + 1];
                array[i + 1] = temp;
            }
        }
        return sortThisArray(array, n - 1);
    }
}
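For the example array {3, 9, 5, 7, 4, 6, 1}, running this main method should print the sorted result [1, 3, 4, 5, 6, 7, 9].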
The ++ operator evaluates and increments in two separate steps. In the case of i++, we read i first and then increment it by 1. The other way around, ++i, first applies the increment to i and then reads it. So when you want to check the next element in the array, [i + 1] is what you are looking for. With the same reasoning, can you figure out what's wrong with this line?
counter = counter++;
And do we even need it?
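To make the difference concrete, here is a tiny sketch (my own illustration, not from the original post) showing both pitfalls:
// Hypothetical illustration of the post-increment pitfalls discussed above.
public class IncrementDemo {
    public static void main(String[] args) {
        int[] a = {10, 20, 30};
        int i = 0;
        System.out.println(a[i + 1]); // prints 20; i is still 0
        System.out.println(a[i++]);   // prints 10 (the old i), then i becomes 1
        System.out.println(i);        // prints 1

        int counter = 5;
        counter = counter++;          // the old value 5 is assigned back over the increment...
        System.out.println(counter);  // ...so this still prints 5, not 6
    }
}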
How do I calculate the time complexity of the following program?
int[] vars = { 2, 4, 5, 6 };
int len = vars.length;
int[] result = new int[len];

for (int i = 0; i < len; i++) {
    int value = 1;
    for (int k = 0; k < i; k++) {
        value = value * vars[k];
    }
    for (int j = i + 1; j < len; j++) {
        value = value * vars[j];
    }
    result[i] = value;
}
And how is the fragment above the same as the one below?
for (int i = 0; i < len; i++) {
    int value = 1;
    for (int j = 0; j < len; j++) {
        if (j != i) {
            value = value * vars[j];
        }
    }
    result[i] = value;
}
The i for loop has time complexity O(n), because it performs one iteration for every element of the array. For every element of the array, you then loop through it once more: partly in the k for loop and partly in the j for loop (each covers about half of the array on average). Together these are O(n) as well. If there are n elements in the array (4 in your example), the number of operations is proportional to n * (n - 1); in time-complexity analysis, constants and lower-order terms such as the -1 are ignored.
The number of operations your method performs is proportional to the number of elements multiplied by itself; therefore, overall, the method is O(n^2).
For the first fragment: each outer iteration does i multiplications in the k loop and n - i - 1 in the j loop, i.e. n - 1 per outer iteration, for roughly n(n - 1) operations in total, which is O(n^2).
For the second fragment: each outer iteration runs the inner loop n times (one of which is skipped by the if), for roughly n^2 operations in total, which is again O(n^2).
A general approach in determining the complexity is counting the iterations.
In your example, you have an outer for loop with two loops nested in it. Note: Instead of len, I'll write n.
The outer loop
for (int i = 0; i < n; i++)
iterates n times.
The number of iterations of the two nested loops is actually easier to count than it looks:
The second loop iterates i times and the third n - i - 1 times. If you add them together you get n - 1 (roughly n) iterations within each pass of the outer loop.
Finally, if the outer loop does n iterations and within each of these iterations the code loops roughly another n times, you get on the order of n^2 iterations. In the traditional notation of complexity theory you'd say that the algorithm has an upper bound of n^2, i.e. it is in O(n^2).
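If you want to sanity-check that count empirically, a small sketch like the following (my own instrumentation, with a hypothetical class name) tallies the multiplications performed by both fragments:
// Hypothetical instrumentation: count the multiplications in both fragments.
public class ProductCountDemo {
    public static void main(String[] args) {
        int[] vars = { 2, 4, 5, 6 };
        int len = vars.length;
        int[] result = new int[len];
        long ops = 0;

        // First fragment: two inner loops covering k < i and j > i.
        for (int i = 0; i < len; i++) {
            int value = 1;
            for (int k = 0; k < i; k++) { value = value * vars[k]; ops++; }
            for (int j = i + 1; j < len; j++) { value = value * vars[j]; ops++; }
            result[i] = value;
        }
        System.out.println("Fragment 1 multiplications: " + ops); // n * (n - 1) = 12

        // Second fragment: one inner loop that skips index i.
        ops = 0;
        for (int i = 0; i < len; i++) {
            int value = 1;
            for (int j = 0; j < len; j++) {
                if (j != i) { value = value * vars[j]; ops++; }
            }
            result[i] = value;
        }
        System.out.println("Fragment 2 multiplications: " + ops); // also n * (n - 1) = 12
    }
}
Both counts grow like n^2, which matches the analysis above.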
Here's a little exercise I'm working on about dynamic programming. I have the following function:
f(0) = 1, and f(n) = f(0) + f(1) + ... + f(n-1) for n ≥ 1.
I have to implement this function with two approaches (top-down with memoization and bottom-up).
Here's what I currently do for bottom up:
public static int functionBottomUp(int n) {
    int[] array = new int[n + 1];
    array[0] = 1;
    for (int i = 1; i < array.length; i++) {
        if (i == 1)
            array[i] = array[i - 1];
        else {
            for (int p = 0; p < i; p++)
                array[i] += array[p];
        }
    }
    return array[n];
}
And for memoization:
public static int functionMemoization(int n) {
    int[] array = new int[n + 1];
    for (int i = 0; i < n; i++)
        array[i] = 0;
    return compute(array, n);
}

private static int compute(int[] array, int n) {
    int ans = 0;
    if (array[n] > 0)
        return array[n];
    if (n == 0 || n == 1)
        ans = 1;
    else
        for (int i = 0; i < n; i++)
            ans += compute(array, i);
    array[n] = ans;
    return array[n];
}
I get correct outputs for both, but now I'm struggling to calculate the complexity of each.
First, the complexity of the plain recursive f(n) is 2^n, because f(3) makes 7 calls to f(0) and f(4) makes 15 calls to f(0) (I know this is not a formal proof, but it gives me an idea).
But now I'm stuck calculating the complexity of both functions.
Bottom-up: I would say the complexity is O(n) (because of for (int i = 1; i < array.length; i++)), but there is the inner loop for (int p = 0; p < i; p++) and I don't know whether it changes the complexity.
Memoization: clearly there is at least O(n) work because of the first loop, which initializes the array. But I don't know how the compute function modifies this complexity.
Could someone clarify this for me ?
Let's take a look at your functions. Here's the bottom-up DP version:
public static int functionBottomUp(int n) {
    int[] array = new int[n + 1];
    array[0] = 1;
    for (int i = 1; i < array.length; i++) {
        if (i == 1)
            array[i] = array[i - 1];
        else {
            for (int p = 0; p < i; p++)
                array[i] += array[p];
        }
    }
    return array[n];
}
To count up the work that's being done, we can look at how much work is required to complete loop iteration i for some arbitrary i. Notice that if i = 1, the work done is O(1). Otherwise, the loop runtime is taken up by this part here:
for (int p = 0; p < i; p++)
    array[i] += array[p];
The time complexity of this loop is proportional to i. This means that loop iteration i does (more or less) i work. Therefore, the total work done is (approximately)
1 + 2 + 3 + ... + n = Θ(n^2)
So the runtime here is Θ(n^2) rather than O(n) as you conjectured in your question.
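As an aside (this is my own variant, not part of the original code): if you keep a running sum of the entries computed so far, the inner loop disappears and the bottom-up version drops to Θ(n). A rough sketch:
// Hypothetical Θ(n) bottom-up variant: maintain a running sum of all
// previously computed entries instead of re-summing them on every step.
public static int functionBottomUpLinear(int n) {
    int[] array = new int[n + 1];
    array[0] = 1;
    int runningSum = 1;                 // invariant: sum of array[0..i-1]
    for (int i = 1; i <= n; i++) {
        array[i] = runningSum;          // f(i) = f(0) + ... + f(i-1)
        runningSum += array[i];
    }
    return array[n];
}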
Now, let's look at the top-down version:
public static int functionMemoization(int n) {
    int[] array = new int[n + 1];
    for (int i = 0; i < n; i++)
        array[i] = 0;
    return compute(array, n);
}

private static int compute(int[] array, int n) {
    int ans = 0;
    if (array[n] > 0)
        return array[n];
    if (n == 0 || n == 1)
        ans = 1;
    else
        for (int i = 0; i < n; i++)
            ans += compute(array, i);
    array[n] = ans;
    return array[n];
}
You initially do Θ(n) work to zero out the array, then call compute to compute all the values. You're eventually going to fill in all of array with values and will do so exactly once per array element, so one way to determine the time complexity is to determine, for each array entry, how much work is required to fill it. In this case, the work done is determined by this part:
for (int i = 0; i < n; i++)
    ans += compute(array, i);
Since you're memoizing values, when determining the work required to evaluate the function on value n, we can pretend each recursive call takes time O(1); the actual work will be accounted for when we sum across all the values. As before, the work done here is proportional to n. Therefore, summing over arguments from 1 to n, the work done is roughly
1 + 2 + 3 + ... + n = Θ(n^2)
Which again is more work than your estimated O(n).
However, there is a much faster way to evaluate this recurrence. Look at the first few values of f(n):
f(0) = 1
f(1) = 1
f(2) = 2
f(3) = 4
f(4) = 8
f(5) = 16
...
f(n) = 2^(n-1)
This pattern holds because, for n ≥ 2, f(n) = f(n-1) + (f(0) + ... + f(n-2)) = f(n-1) + f(n-1) = 2·f(n-1). Therefore, we get that
f(0) = 1
f(n) = 2^(n-1) if n > 0
Therefore, the following function evaluates f in constant time:
int f(int n) {
    return n == 0 ? 1 : 1 << (n - 1);
}
Assuming that you're working with fixed-sized integers (say, 32-bit or 64-bit integers), this takes time O(1). If you're working with arbitrary-precision integers, this will take time Θ(n) because you can't express 2^(n-1) without writing out Θ(n) bits, but if we're operating under this assumption the runtimes for the original code would also need to be adjusted to factor in the costs of the additions. For simplicity, I'm going to ignore it or leave it as an exercise to the reader. ^_^
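For completeness, an arbitrary-precision version is just as short (a sketch using java.math.BigInteger; the method name is my own):
import java.math.BigInteger;

// Hypothetical arbitrary-precision variant: the result 2^(n-1) has Θ(n) bits,
// so merely producing it already costs Θ(n) time.
static BigInteger fBig(int n) {
    return n == 0 ? BigInteger.ONE : BigInteger.ONE.shiftLeft(n - 1);
}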
Hope this helps!
I have two implementations for two different sorts, InsertionSort and ShellSort.
They are as follows:
InsertionSort:
for (int pos = 0; pos < arrayToBeSorted.length; pos++) {
    for (int secondMarker = pos; secondMarker > 0; secondMarker--) {
        int currentValue = arrayToBeSorted[secondMarker];
        int valueBeingCheckedAgainst = arrayToBeSorted[secondMarker - 1];
        if (currentValue > valueBeingCheckedAgainst) {
            break;
        }
        arrayToBeSorted[secondMarker] = arrayToBeSorted[secondMarker - 1];
        arrayToBeSorted[secondMarker - 1] = currentValue;
    }
}
ShellSort:
for (int gap = a.length / a.length; gap > 0; gap = (gap / 2)) {
    for (int i = gap; i < a.length; i++) {
        int tmp = a[i];
        int j = i;
        for (; j >= gap && tmp < (a[j - gap]); j -= gap) {
            a[j] = a[j - gap];
        }
        a[j] = tmp;
    }
}
I also have 10 arrays of integers, each holding 32000 integers. I take the time around each call to the static sortArray method in these classes. Here are the results:
For InsertionSort.sortArray:
Solving array with: 32000 elements.
Time in milliseconds:264
Time in milliseconds:271
Time in milliseconds:268
Time in milliseconds:263
Time in milliseconds:259
Time in milliseconds:257
Time in milliseconds:258
Time in milliseconds:260
Time in milliseconds:259
Time in milliseconds:261
And for ShellSort:
Solving array with: 32000 elements.
Time in milliseconds:357
Time in milliseconds:337
Time in milliseconds:167
Time in milliseconds:168
Time in milliseconds:165
Time in milliseconds:168
Time in milliseconds:167
Time in milliseconds:167
Time in milliseconds:166
Time in milliseconds:167
So how come there is so much difference between them? Aren't they basically the same algorithm?
Also, how is it possible that the first 2 runs of ShellSort take longer, but the rest are quicker?
These are the results for 128000 elements, InsertionSort first again:
Solving array with: 128000 elements.
Time in milliseconds:4292
Time in milliseconds:4267
Time in milliseconds:4241
Time in milliseconds:4252
Time in milliseconds:4253
Time in milliseconds:4248
Time in milliseconds:4261
Time in milliseconds:4260
Time in milliseconds:4333
Time in milliseconds:4261
ShellSort:
Solving array with: 128000 elements.
Time in milliseconds:5358
Time in milliseconds:5335
Time in milliseconds:2676
Time in milliseconds:2656
Time in milliseconds:2662
Time in milliseconds:2654
Time in milliseconds:2661
Time in milliseconds:2656
Time in milliseconds:2660
Time in milliseconds:2673
I am sure that the arrays I am passing to the methods are exactly the same, and they are quite random.
In your insertion sort, you are doing more work than necessary:
for (int pos = 0; pos < arrayToBeSorted.length; pos++) {
    for (int secondMarker = pos; secondMarker > 0; secondMarker--) {
        int currentValue = arrayToBeSorted[secondMarker];
        int valueBeingCheckedAgainst = arrayToBeSorted[secondMarker - 1];
        if (currentValue > valueBeingCheckedAgainst) {
            break;
        }
        arrayToBeSorted[secondMarker] = arrayToBeSorted[secondMarker - 1];
        arrayToBeSorted[secondMarker - 1] = currentValue;
    }
}
You read the value from the array in the inner loop, and while the value at the preceding position is not smaller, you write two values to the array.
In the shell sort,
for (int i = gap; i < a.length; i++) {
    int tmp = a[i];
    int j = i;
    for (; j >= gap && tmp < (a[j - gap]); j -= gap) {
        a[j] = a[j - gap];
    }
    a[j] = tmp;
}
you read the value to be placed once, outside the inner loop, have only a single write in the inner-loop body, and write the held value just once after the inner loop.
That is more efficient, so it's understandable that the shell sort is faster. That the first two shell sort runs are slower is probably because the wrapping
for (int gap = a.length / a.length; gap > 0; gap = (gap / 2)) {
confuses the JIT for a while before it notices that gap can be replaced with 1 and the wrapping loop eliminated.
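For comparison, here is a sketch of insertion sort rewritten in that same single-write style (my own variant, not the poster's code); it holds the value being inserted, shifts larger elements right, and writes the held value back exactly once:
// Hypothetical shift-based insertion sort (essentially the shell sort with gap = 1).
static void insertionSortShift(int[] a) {
    for (int i = 1; i < a.length; i++) {
        int tmp = a[i];
        int j = i;
        while (j > 0 && tmp < a[j - 1]) {
            a[j] = a[j - 1];   // single write per inner-loop iteration
            j--;
        }
        a[j] = tmp;            // the held value is written back exactly once
    }
}
Benchmarking this variant against the original swap-based version should close much of the gap you observed.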
Hi, I need to find the time and space complexity of this program. Please help, and if possible, suggest any optimizations that can be performed.
import java.util.Stack;

public class Sol {

    public int findMaxRectangleArea(int[][] as) {
        if (as.length == 0)
            return 0;
        int[][] auxillary = new int[as.length][as[0].length];
        for (int i = 0; i < as.length; ++i) {
            for (int j = 0; j < as[i].length; ++j) {
                auxillary[i][j] = Character.getNumericValue(as[i][j]);
            }
        }
        for (int i = 1; i < auxillary.length; ++i) {
            for (int j = 0; j < auxillary[i].length; ++j) {
                if (auxillary[i][j] == 1)
                    auxillary[i][j] = auxillary[i - 1][j] + 1;
            }
        }
        int max = 0;
        for (int i = 0; i < auxillary.length; ++i) {
            max = Math.max(max, largestRectangleArea(auxillary[i]));
        }
        return max;
    }

    private int largestRectangleArea(int[] height) {
        Stack<Integer> stack = new Stack<Integer>();
        int max = 0;
        int i = 0;
        while (i < height.length) {
            if (stack.isEmpty() || height[i] >= stack.peek()) {
                stack.push(height[i]);
                i++;
            } else {
                int count = 0;
                while (!stack.isEmpty() && stack.peek() > height[i]) {
                    count++;
                    int top = stack.pop();
                    max = Math.max(max, top * count);
                }
                for (int j = 0; j < count + 1; ++j) {
                    stack.push(height[i]);
                }
                i++;
            }
        }
        int count = 0;
        while (!stack.isEmpty()) {
            count++;
            max = Math.max(max, stack.pop() * count);
        }
        return max;
    }
}
Thank you in advance.
To find the space complexity, take a look at the variables you declare that are larger than a single primitive. I believe your space usage is determined by the array auxillary and the Stack stack. The size of the first one is pretty clear, and while I don't completely follow the second one, I can see its size will never be greater than the width of the array. So I would say the space complexity is O(size of auxillary), i.e. O(N * M), where N = as.length and M = as[0].length.
Now the time complexity is a bit trickier. You have two nested loops over the whole auxillary array, so the time complexity is at least O(N * M). You also have another loop that invokes largestRectangleArea once per row of auxillary. If I read that function correctly, it seems to be roughly linear in the row length, but I am not sure here. Since you know the logic better, you can probably pin down its complexity more precisely.
Hope this helps.
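If you do want largestRectangleArea to be guaranteed linear per row, the usual trick is to push indices instead of heights so that nothing is ever re-pushed; here is a sketch of that standard approach (my own code, not part of the question):
// Sketch of the standard index-based stack method: every index is pushed and
// popped at most once, so one call runs in O(M) for a row of width M.
private int largestRectangleAreaLinear(int[] height) {
    Stack<Integer> stack = new Stack<Integer>(); // indices of bars with non-decreasing heights
    int max = 0;
    for (int i = 0; i <= height.length; i++) {
        // A height of 0 past the end acts as a sentinel that flushes the stack.
        int current = (i == height.length) ? 0 : height[i];
        while (!stack.isEmpty() && height[stack.peek()] > current) {
            int top = height[stack.pop()];
            int width = stack.isEmpty() ? i : i - stack.peek() - 1;
            max = Math.max(max, top * width);
        }
        stack.push(i);
    }
    return max;
}
With that change the whole findMaxRectangleArea stays O(N * M) time, and the space remains O(N * M), dominated by the auxillary array.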