Complexity of dynamic programming versus memoization in this case? - java

Here's a little exercise I'm working on about dynamic programming. I have the following function:
I have to program this function with two approaches (top-down with memoization and bottom-up).
Here's what I currently do for bottom-up:
public static int functionBottomUp(int n) {
    int[] array = new int[n + 1];
    array[0] = 1;
    for (int i = 1; i < array.length; i++) {
        if (i == 1)
            array[i] = array[i - 1];
        else {
            for (int p = 0; p < i; p++)
                array[i] += array[p];
        }
    }
    return array[n];
}
And for memoization:
public static int functionMemoization(int n) {
    int[] array = new int[n + 1];
    for (int i = 0; i < n; i++)
        array[i] = 0;
    return compute(array, n);
}

private static int compute(int[] array, int n) {
    int ans = 0;
    if (array[n] > 0)
        return array[n];
    if (n == 0 || n == 1)
        ans = 1;
    else
        for (int i = 0; i < n; i++)
            ans += compute(array, i);
    array[n] = ans;
    return array[n];
}
I get correct outputs for both, but now I'm struggling to calculate the complexities of both.
First, the complexity of f(n) itself is 2^n, because f(3) makes 7 calls to f(0) and f(4) makes 15 calls to f(0) (I know this is not a formal proof, but it gives me an idea).
But now i'm stuck for calculating the complexity of both functions.
Bottom-up: I would say the complexity is O(n) (because of the for(int i = 1; i < array.length; i++)), but there is this inner loop for(int p = 0; p < i; p++) and I don't know if it changes the complexity.
Memoization: Clearly this is at most O(n) because of the first loop, which initializes the array. But I don't know how the compute function could modify this complexity.
Could someone clarify this for me?

Let's take a look at your functions. Here's the bottom-up DP version:
public static int functionBottomUp(int n) {
    int[] array = new int[n + 1];
    array[0] = 1;
    for (int i = 1; i < array.length; i++) {
        if (i == 1)
            array[i] = array[i - 1];
        else {
            for (int p = 0; p < i; p++)
                array[i] += array[p];
        }
    }
    return array[n];
}
To count up the work that's being done, we can look at how much work is required to complete loop iteration i for some arbitrary i. Notice that if i = 1, the work done is O(1). Otherwise, the loop runtime is taken up by this part here:
for (int p = 0; p < i; p++)
    array[i] += array[p];
The time complexity of this loop is proportional to i. This means that loop iteration i does (more or less) i work. Therefore, the total work done is (approximately)
1 + 2 + 3 + ... + n = Θ(n²)
So the runtime here is Θ(n²) rather than O(n) as you conjectured in your question.
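Incidentally, the bottom-up version can be brought down to O(n) by maintaining a running prefix sum instead of re-adding all the earlier entries on every iteration. A sketch (the class and method names here are mine, not from the question):

```java
class BottomUpLinear {
    // O(n) bottom-up: keep a running sum of all previous entries
    // instead of re-summing array[0..i-1] at each step.
    static int functionBottomUpLinear(int n) {
        int[] array = new int[n + 1];
        array[0] = 1;
        int runningSum = array[0]; // invariant: sum of array[0..i-1]
        for (int i = 1; i <= n; i++) {
            array[i] = runningSum;  // f(i) = f(0) + ... + f(i-1)
            runningSum += array[i]; // extend the prefix sum for the next step
        }
        return array[n];
    }
}
```

Each iteration now does constant work, so the whole loop is Θ(n).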
Now, let's look at the top-down version:
public static int functionMemoization(int n) {
    int[] array = new int[n + 1];
    for (int i = 0; i < n; i++)
        array[i] = 0;
    return compute(array, n);
}

private static int compute(int[] array, int n) {
    int ans = 0;
    if (array[n] > 0)
        return array[n];
    if (n == 0 || n == 1)
        ans = 1;
    else
        for (int i = 0; i < n; i++)
            ans += compute(array, i);
    array[n] = ans;
    return array[n];
}
You initially do Θ(n) work to zero out the array, then call compute to compute all the values. You're eventually going to fill in all of array with values and will do so exactly once per array element, so one way to determine the time complexity is to determine, for each array entry, how much work is required to fill it. In this case, the work done is determined by this part:
for (int i = 0; i < n; i++)
    ans += compute(array, i);
Since you're memoizing values, when determining the work required to evaluate the function on value n, we can pretend each recursive call takes time O(1); the actual work will be accounted for when we sum up across all n. As before, the work done here is proportional to n. Therefore, summing across arguments from 1 to n, the work done is roughly
1 + 2 + 3 + ... + n = Θ(n²)
which again is more work than your estimated O(n).
However, there is a much faster way to evaluate this recurrence. Look at the first few values of f(n):
f(0) = 1
f(1) = 1
f(2) = 2
f(3) = 4
f(4) = 8
f(5) = 16
...
f(n) = 2^(n-1)
Therefore, we get that
f(0) = 1
f(n) = 2^(n-1) if n > 0
Therefore, the following function evaluates f directly:
int f(int n) {
    return n == 0 ? 1 : 1 << (n - 1);
}
Assuming that you're working with fixed-size integers (say, 32-bit or 64-bit integers), this takes time O(1). If you're working with arbitrary-precision integers, it will take time Θ(n), because you can't express 2^(n-1) without writing out Θ(n) bits; but under that assumption the runtimes for the original code would also need to be adjusted to factor in the cost of the additions. For simplicity, I'm going to ignore it or leave it as an exercise to the reader. ^_^
Hope this helps!


Why is the computational complexity O(n^4)?

int sum = 0;
for (int i = 1; i < n; i++) {
    for (int j = 1; j < i * i; j++) {
        if (j % i == 0) {
            for (int k = 0; k < j; k++) {
                sum++;
            }
        }
    }
}
I don't understand how when j = i, 2i, 3i... the last for loop runs n times. I guess I just don't understand how we came to that conclusion based on the if statement.
Edit: I know how to compute the complexity for all the loops except for why the last loop executes i times based on the mod operator... I just don't see how it's i. Basically, why can't j % i go up to i * i rather than i?
Let's label the loops A, B and C:
int sum = 0;
// loop A
for (int i = 1; i < n; i++) {
    // loop B
    for (int j = 1; j < i * i; j++) {
        if (j % i == 0) {
            // loop C
            for (int k = 0; k < j; k++) {
                sum++;
            }
        }
    }
}
Loop A iterates O(n) times.
Loop B iterates O(i²) times per iteration of A. For each of these iterations:
j % i == 0 is evaluated, which takes O(1) time.
On 1/i of these iterations, loop C iterates j times, doing O(1) work per iteration. Since j is O(i²) on average, and this is only done for 1/i iterations of loop B, the average cost is O(i² / i) = O(i).
Multiplying all of this together, we get O(n × i² × (1 + i)) = O(n × i³). Since i is on average O(n), this is O(n⁴).
The tricky part of this is saying that the if condition is only true 1/i of the time:
Basically, why can't j % i go up to i * i rather than i?
In fact, j does go up to j < i * i, not just up to j < i. But the condition j % i == 0 is true if and only if j is a multiple of i.
The multiples of i within the range are i, 2*i, 3*i, ..., (i-1) * i. There are i - 1 of these, so loop C is reached i - 1 times despite loop B iterating i * i - 1 times.
The first loop consumes n iterations.
The second loop consumes n*n iterations. Imagine the case when i = n; then j goes up to n*n.
The third loop contributes another factor of n because it is only entered i times (once per multiple of i), where i is bounded by n in the worst case.
Thus, the code complexity is O(n×n×n×n).
I hope this helps you understand.
All the other answers are correct; I just want to add the following.
I wanted to see if the reduction of executions of the inner k-loop was sufficient to reduce the actual complexity below O(n⁴). So I wrote the following:
for (int n = 1; n < 363; ++n) {
    int sum = 0;
    for (int i = 1; i < n; ++i) {
        for (int j = 1; j < i * i; ++j) {
            if (j % i == 0) {
                for (int k = 0; k < j; ++k) {
                    sum++;
                }
            }
        }
    }
    long cubic = (long) Math.pow(n, 3);
    long hypCubic = (long) Math.pow(n, 4);
    double relative = (double) (sum / (double) hypCubic);
    System.out.println("n = " + n + ": iterations = " + sum +
            ", n³ = " + cubic + ", n⁴ = " + hypCubic + ", rel = " + relative);
}
After executing this, it becomes obvious that the complexity is in fact n⁴. The last lines of output look like this:
n = 356: iterations = 1989000035, n³ = 45118016, n⁴ = 16062013696, rel = 0.12383254507467704
n = 357: iterations = 2011495675, n³ = 45499293, n⁴ = 16243247601, rel = 0.12383580700180696
n = 358: iterations = 2034181597, n³ = 45882712, n⁴ = 16426010896, rel = 0.12383905075183874
n = 359: iterations = 2057058871, n³ = 46268279, n⁴ = 16610312161, rel = 0.12384227647628734
n = 360: iterations = 2080128570, n³ = 46656000, n⁴ = 16796160000, rel = 0.12384548432498857
n = 361: iterations = 2103391770, n³ = 47045881, n⁴ = 16983563041, rel = 0.12384867444612208
n = 362: iterations = 2126849550, n³ = 47437928, n⁴ = 17172529936, rel = 0.1238518469862343
What this shows is that the ratio between the actual iteration count and n⁴ is asymptotic towards a value around 0.124... (actually 0.125). While it does not give us the exact value, we can deduce the following:
Time complexity is n⁴/8 ~ f(n) where f is your function/method.
The Wikipedia page on Big O notation states, in the 'Family of Bachmann–Landau notations' table, that ~ means the limit of the ratio of the two sides is 1. Or:
f is equal to g asymptotically
(I chose 363 as the excluded upper bound because n = 362 is the last value for which we get a sensible result. After that, sum exceeds the int range and the relative value becomes negative.)
User kaya3 figured out the following:
The asymptotic constant is exactly 1/8 = 0.125, by the way; here's the exact formula via Wolfram Alpha.
Remove if and modulo without changing the complexity
Here's the original method:
public static long f(int n) {
int sum = 0;
for (int i = 1; i < n; i++) {
for (int j = 1; j < i * i; j++) {
if (j % i == 0) {
for (int k = 0; k < j; k++) {
sum++;
}
}
}
}
return sum;
}
If you're confused by the if and modulo, you can just refactor them away, with j jumping directly from i to 2*i to 3*i, and so on:
public static long f2(int n) {
    int sum = 0;
    for (int i = 1; i < n; i++) {
        for (int j = i; j < i * i; j = j + i) {
            for (int k = 0; k < j; k++) {
                sum++;
            }
        }
    }
    return sum;
}
To make it even easier to calculate the complexity, you can introduce an intermediary j2 variable, so that every loop variable is incremented by 1 at each iteration:
public static long f3(int n) {
    int sum = 0;
    for (int i = 1; i < n; i++) {
        for (int j2 = 1; j2 < i; j2++) {
            int j = j2 * i;
            for (int k = 0; k < j; k++) {
                sum++;
            }
        }
    }
    return sum;
}
You can use debugging or old-school System.out.println to check that the (i, j, k) triplets are the same in each method.
Closed form expression
As mentioned by others, you can use the fact that the sum of the first n integers is equal to n * (n+1) / 2 (see triangular numbers). If you use this simplification for every loop, you get:
public static long f4(int n) {
return (n - 1) * n * (n - 2) * (3 * n - 1) / 24;
}
It is obviously not the same complexity as the original code but it does return the same values.
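As a sanity check, here's a small sketch comparing the brute-force triple loop against that closed form for small n (class and method names are mine; bruteForce is a copy of the original f above, minus nothing):

```java
class ClosedFormCheck {
    // Copy of the original triple-loop method f.
    static long bruteForce(int n) {
        long sum = 0;
        for (int i = 1; i < n; i++)
            for (int j = 1; j < i * i; j++)
                if (j % i == 0)
                    for (int k = 0; k < j; k++)
                        sum++;
        return sum;
    }

    // The closed form (n - 1) * n * (n - 2) * (3n - 1) / 24.
    static long closedForm(int n) {
        return (long) (n - 1) * n * (n - 2) * (3L * n - 1) / 24;
    }

    public static void main(String[] args) {
        for (int n = 0; n <= 60; n++)
            if (bruteForce(n) != closedForm(n))
                throw new AssertionError("mismatch at n = " + n);
        System.out.println("closed form agrees up to n = 60");
    }
}
```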
If you google the first terms, you can notice that 0 0 0 2 11 35 85 175 322 546 870 1320 1925 2717 3731 appear in "Stirling numbers of the first kind: s(n+2, n).", with two 0s added at the beginning. It means that sum is the Stirling number of the first kind s(n, n-2).
Let's have a look at the first two loops.
The first one is simple, it's looping from 1 to n. The second one is more interesting. It goes from 1 to i squared. Let's see some examples:
e.g. n = 4
i = 1
j loops from 1 to 1^2
i = 2
j loops from 1 to 2^2
i = 3
j loops from 1 to 3^2
In total, the i and j loops combined have 1^2 + 2^2 + 3^2.
There is a formula for the sum of first n squares, n * (n+1) * (2n + 1) / 6, which is roughly O(n^3).
You have one last k loop which loops from 0 to j, and only when j % i == 0. Since j goes from 1 to i^2, j % i == 0 is true i times. Since the i loop iterates over n, you have one extra O(n).
So you have O(n^3) from the i and j loops and another O(n) from the k loop, for a grand total of O(n^4).

How would I write a function in Java that prints out perfect numbers less than n?

I am attempting to write a function in Processing that prints out all perfect numbers less than n for my homework pset. However, I am having trouble finding an algorithm that matches the problem.
I have written a for-loop that cycles through all of the numbers between 1 and n. Since a perfect number equals the sum of its proper divisors, I made an if-statement checking which numbers divide n and adding them onto a sum variable called "result". Then, at the end of this loop, if result equals n, I print it out.
void perfect(int n) {
    int result = 0;
    for (int i = 1; i < n; i++) {
        if (n % i == 0) {
            result = result + i;
        }
    }
    if (result == n) {
        println(n);
    }
}
Currently, my code is not printing out anything. When I removed the if-statement towards the end, it printed out all of the values of n, but not perfect numbers. I believe that there is an error somewhere in my code that is making it so that n is never equal to "result."
You will have to use a nested for-loop to get all the perfect numbers from 1 to n, as below:
int i, sum;
System.out.print("Perfect nos from 1 to n are ");
for (int j = 2; j <= n; j++) {
    sum = 1; // 1 divides every number
    for (i = 2; i < j; i++) {
        if (j % i == 0)
            sum = sum + i;
    }
    if (j == sum)
        System.out.print(j + ",");
}
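Wrapped up as a complete method, the same divisor-summing idea might look like this (isPerfect and perfectNumbersBelow are names chosen for this sketch, not from the original post):

```java
class PerfectNumbers {
    // Sums the proper divisors of m; m is perfect when the sum equals m.
    static boolean isPerfect(int m) {
        int sum = 1; // 1 divides every number
        for (int d = 2; d < m; d++)
            if (m % d == 0)
                sum += d;
        return m > 1 && sum == m;
    }

    // Prints every perfect number strictly less than n.
    static void perfectNumbersBelow(int n) {
        for (int candidate = 2; candidate < n; candidate++)
            if (isPerfect(candidate))
                System.out.println(candidate);
    }
}
```

For example, perfectNumbersBelow(100) prints 6 and 28.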

How to calculate the number of combinations of 3 integers less than n whose sum is greater than n * 2?

I am solving some Java algorithm-analysis questions and this problem has me stumped. This particular problem asks for the value that is returned by x(10), where x is the following function:
public static int x(int n) {
    int count = 0;
    for (int i = 0; i <= n; i++) {
        for (int j = 0; j <= n; j++) {
            for (int k = 0; k <= n; k++) {
                System.out.println(i + "," + j + "," + k);
                if (i + j + k > 2 * n)
                    count++;
            }
        }
    }
    return count;
}
Essentially, the problem is asking for the number of combinations of 3 integers less than n whose sum is greater than n * 2.
What is the fastest problem-solving technique for this problem, and just general "complicated" nested loop problems?
I set up a variable table and kept track of variables a, b, and c representing the 3 integers and a count variable which increments each time 'a+b+c > n*2' but after n=3, the tables became unnecessarily tedious. There must be a mathematical solution.
x(10) returns 220, but I do not know how the algorithm arrives at that answer and how to find, say, x(7).
Your code is incorrect.
First, the for loops should be corrected to
i < n,
j < n,
k < n,
because you mentioned "less than".
Also, since you mentioned "combination", note that your code doesn't remove repeated combinations. For example, if n = 5, there are only two combinations which satisfy the condition, (4, 4, 4) and (4, 4, 3), so the result is 2,
but your code will return a bigger number, which is incorrect.
Could the result of this problem be a mathematical expression? Think about the following equation:
n1 + n2 + n3 = 2 * n
This is a typical so-called "Diophantine equation", for which it is proved that
no general algorithm exists to solve them all. Since that equation is so closely related to the original problem, I guess not.
I've changed your code, using a HashSet to remove all repeated combinations; hope it's helpful.
public static int getCombinationNumber(int num) {
    HashSet<String> hs = new HashSet<>(); // Saves a unique representation of each combination
    int count = 0;
    for (int i = 0; i < num; i++)
        for (int j = 0; j < num; j++)
            for (int k = 0; k < num; k++) {
                int[] nums = {i, j, k};
                sort(nums); // Ensures all orderings of i, j, k share one canonical array
                String uniqueForm = Arrays.toString(nums); // Convert array to string in order to compare and save
                if (i + j + k > 2 * num && !hs.contains(uniqueForm)) {
                    count++;
                    hs.add(uniqueForm);
                    System.out.println(i + ", " + j + ", " + k);
                }
            }
    return count;
}
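As for where the original x(10) = 220 comes from: substituting a = n - i, b = n - j, c = n - k turns the condition i + j + k > 2n into a + b + c < n with a, b, c ≥ 0, and the number of such triples is the binomial coefficient C(n + 2, 3). A sketch checking this against the brute-force count (class and method names are mine; the println from the original is omitted):

```java
class TripleCount {
    // The original count: triples (i, j, k) in [0, n]^3 with i + j + k > 2n.
    static int bruteForce(int n) {
        int count = 0;
        for (int i = 0; i <= n; i++)
            for (int j = 0; j <= n; j++)
                for (int k = 0; k <= n; k++)
                    if (i + j + k > 2 * n)
                        count++;
        return count;
    }

    // Closed form: C(n + 2, 3) = (n + 2)(n + 1)n / 6.
    static int closedForm(int n) {
        return (n + 2) * (n + 1) * n / 6;
    }

    public static void main(String[] args) {
        System.out.println(bruteForce(10));  // 220, as stated in the question
        System.out.println(closedForm(10)); // also 220
        System.out.println(closedForm(7));  // x(7) without running the loops
    }
}
```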

how to find the sum of two elements in an array closest to zero

How do I find two elements from an array whose sum is closest to zero but not zero? (Note: -1 is closer to zero than +2.) I tried this...
int a[] = new int[n];
int b[] = new int[2];
int prev = 0;
for (int i = 0; i < n; a[i++] = in.nextInt());
for (int i = 0; i < a.length; i++) {
    for (int j = i + 1; j < a.length; j++) {
        int sum = a[i] + a[j];
        if (prev == 0)
            prev = sum;
        if (sum > 0) {
            if (sum < prev) {
                prev = sum;
                b[0] = a[i];
                b[1] = a[j];
            }
        } else if (sum < 0) {
            if (-sum < prev) {
                prev = sum;
                b[0] = a[i];
                b[1] = a[j];
            }
        }
    }
}
Sop(b[0] + " " + b[1]);
I have a few remarks. You are using 3 for loops, which can be reduced to just 2 nested for loops (the outer loop selecting the current element and the inner loop comparing it with the other elements).
Also, you have multiple if tests to check whether the sum is now closer to zero than the previous sum. These if tests can be reduced to just one by taking the absolute value of the sum instead of testing sum > 0 and sum < 0, which also helps readability.
This is what I came up with:
int array[] = new int[5];
array[0] = -3; array[1] = -2; array[2] = -1; array[3] = 1; array[4] = 2; // Fill array
int idx[] = new int[2]; // Will store the result (indices of the two elements to add)
double lowest_sum = Double.POSITIVE_INFINITY; // Of type double to be able to use infinity
for (int i = 0; i < array.length; i++) {
    // Outer loop --> uses a current element (array[i]) from left to right
    int current = array[i];
    for (int j = i + 1; j < array.length; j++) {
        // Inner loop --> checks all elements we haven't used as current yet
        int compare_with = array[j];
        if ((Math.abs(current + compare_with) < lowest_sum) && ((current + compare_with) != 0)) {
            // We found two elements whose sum is closer to zero
            lowest_sum = Math.abs(current + compare_with);
            idx[0] = i; // Index of the first element to add
            idx[1] = j; // Index of the second element to add
        }
    }
}
int res_idx1 = idx[0];
int res_idx2 = idx[1];
System.out.println("The first element to add is : " + array[res_idx1] + "\nThe second element to add is : " + array[res_idx2]);
Input: array = [-3, -2, -1, 1, 2], Output: The first element to add is : -3,
The second element to add is : 2
Note that this code will print one solution, not all solutions (if multiple solutions exist). It should be fairly trivial to edit the code so that it returns all solutions.
You can try:
int a[] = new int[n];
int b[] = new int[2];
int prev = 0;
for (int i = 0; i < n; a[i++] = in.nextInt());
for (int i = 0; i < a.length; i++) {
    for (int j = i + 1; j < a.length; j++) {
        int sum = a[i] + a[j];
        if (prev == 0)
            prev = sum;
        if (Math.abs(sum) > 0 && Math.abs(sum) < Math.abs(prev)) {
            prev = sum;
            b[0] = a[i];
            b[1] = a[j];
        }
    }
}
Sop(b[0] + " " + b[1]);
This problem can be solved in O(N log N). The most expensive operation in this case will be sorting your array. If your domain allows you to use a non-comparative sort, such as counting sort, then you'll be able to reduce the time complexity of the whole solution to linear.
The idea is that in a sorted array you can iterate elements in ascending and descending order in parallel and thus find all pairs with minimal/maximal sum in linear time. The only disadvantage of such an approach in application to your task is that you need to find the minimal absolute value of the sum, which means finding the minimum among positive sums and the maximum among negative sums. This requires two linear passes.
My solution is below. It is verified on randomized data against the brute-force O(N^2) solution.
// note: mutates argument!
static Solution solve(int a[]) {
    Arrays.sort(a);
    int i = 0;
    int j = a.length - 1;
    // -1 indicates uninitialized min value
    int minI = -1;
    int minJ = -1;
    int min = 0;
    // finding maximal sum among negative sums
    while (i < j) {
        int cur = a[i] + a[j];
        if (cur != 0 && (minI == -1 || Math.abs(cur) < Math.abs(min))) {
            min = cur;
            minI = i;
            minJ = j;
        }
        // if the current sum is below zero, increase it
        // by trying the next, larger element
        if (cur < 0) {
            i++;
        } else { // sum is already non-negative, move to the next element
            j--;
        }
    }
    i = 0;
    j = a.length - 1;
    // finding minimal sum among positive sums
    while (i < j) {
        int cur = a[i] + a[j];
        if (cur != 0 && (minI == -1 || Math.abs(cur) < Math.abs(min))) {
            min = cur;
            minI = i;
            minJ = j;
        }
        if (cur > 0) {
            j--;
        } else {
            i++;
        }
    }
    if (minI >= 0) {
        return new Solution(minI, minJ, min);
        //System.out.printf("a[%d]=%d, a[%d]=%d, sum=%d", minI, minJ, a[minI], a[minJ], min);
    } else {
        return null;
        //System.out.println("No solution");
    }
}
I just realized that sorting messes up the indices, so minI and minJ will not correspond to the indices in the original unsorted array. The fix is simple: the original array should be converted to an array of pairs (value, original_index) before sorting. I will not implement this fix in my example snippet, though, as it would further affect readability.
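A minimal sketch of that fix, pairing each value with its original index before sorting (the int[][] encoding and the name sortWithIndices are just one possible choice, not from the answer above):

```java
import java.util.Arrays;
import java.util.Comparator;

class IndexPreservingSort {
    // Sort (value, originalIndex) pairs by value, so that after sorting
    // we can still report positions in the unsorted input.
    static int[][] sortWithIndices(int[] a) {
        int[][] pairs = new int[a.length][2];
        for (int i = 0; i < a.length; i++) {
            pairs[i][0] = a[i]; // value
            pairs[i][1] = i;    // original index
        }
        Arrays.sort(pairs, Comparator.comparingInt(p -> p[0]));
        return pairs;
    }
}
```

The two-pointer passes would then compare pairs[i][0] and pairs[j][0], and report pairs[minI][1] and pairs[minJ][1] as the original indices.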

Does this Java LeetCode solution use quadratic time in the worst case?

I was doing an exercise on the LeetCode website. I tried this solution for the Longest Substring Without Repeating Characters problem. The judging system accepted the answer and reported a good runtime. When I tried to analyze the time complexity, I found it takes quadratic time when all characters in the input string are unique. The inner for loop will execute i - 1 times on each pass, which means it will execute (n-1) + (n-2) + ... + 1 = (n-1)n/2 times in total. Am I right?
public class Solution {
    public int lengthOfLongestSubstring(String s) {
        // Note: The Solution object is instantiated only once and is reused by each test case.
        if (s == null) return 0;
        char[] str = s.toCharArray();
        if (str.length == 0) return 0;
        int max = 1;
        int barrier = 0;
        for (int i = 1; i < str.length; i++) {
            for (int j = i - 1; j >= barrier; j--) {
                if (str[i] == str[j]) {
                    barrier = j + 1;
                    break;
                }
            }
            max = Math.max(max, i - barrier + 1);
        }
        return max;
    }
}
It does not seem completely quadratic to me, because int j = i - 1, so in the beginning j is significantly smaller than i. But towards the end j approaches i and you get quadratic time, though not for the whole string, just for the last part.
Yes, you're right it's quadratic. If every character is unique, then the condition will always be false, and barrier will remain 0. This gives you the worst-case scenario. If you ignore the constant-time statements, and replace barrier with 0, you end up with:
for (int i = 1; i < n; i++) {
    for (int j = i - 1; j >= 0; j--) {
        // ....
    }
}
The number of iterations here is:
0 + 1 + 2 + ... + (n - 1) = ((n - 1) * n) / 2 = (n^2 - n) / 2 = O(n^2)
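You can confirm that count empirically by replaying just the loop structure with a counter (a sketch; the class and method names are mine):

```java
class WorstCaseCount {
    // For a string of n distinct characters, barrier never moves,
    // so the inner loop runs i times at step i; count those iterations.
    static long innerIterations(int n) {
        long count = 0;
        for (int i = 1; i < n; i++)
            for (int j = i - 1; j >= 0; j--)
                count++;
        return count;
    }

    public static void main(String[] args) {
        int n = 1000;
        System.out.println(innerIterations(n));     // 499500
        System.out.println((long) n * (n - 1) / 2); // 499500, i.e. (n² - n) / 2
    }
}
```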
