Coin Change - Java solution fails to pass example 3

Problem: You are given coins of different denominations and a total amount of money amount. Write a function to compute the fewest number of coins that you need to make up that amount. If that amount of money cannot be made up by any combination of the coins, return -1.
Example 1:
Input: coins = [1, 2, 5], amount = 11
Output: 3
Explanation: 11 = 5 + 5 + 1
Example 2:
Input: coins = [2], amount = 3
Output: -1
You may assume that you have an infinite number of each kind of coin.
My code:
public int coinChange(int[] coins, int amount) {
    Arrays.sort(coins);
    int new_amount = amount;
    int count_coin = 0;
    int q = 0, r = 0, a = 0;
    int k = coins.length - 1;
    while (amount > 0 && k >= 0) {
        q = new_amount / coins[k];
        count_coin = count_coin + q;
        r = new_amount % coins[k];
        new_amount = r;
        a += q * coins[k];
        k--;
    }
    if (a == amount) {
        return count_coin;
    } else {
        return -1;
    }
}
My code works well for the given two examples. After working with these examples, I tried another test case.
Example 3:Input: coins = [186,419,83,408], amount = 6249
Output: 20
My output: -1
I fail to understand this example. If anyone has any idea about this example, or a better algorithm than mine, please share it with me.
I have seen the Coin Change (Dynamic Programming) link, but I cannot understand it.
I also studied Why does the greedy coin change algorithm not work for some coin sets?
but cannot understand what it is trying to say, so I raised this question.
Thank you in advance.

Your code uses a greedy approach that does not work properly for arbitrary coin denominations (for example, with coins {3, 4} and amount 6, greedy takes the 4 first and then gets stuck, even though 3 + 3 = 6 works)
Instead, use a dynamic programming approach (example)
For example, make an array A of length amount+1 and fill it with zeros, then set A[0] = 1 (so every stored value is "number of coins + 1", and 0 means "not reachable yet"). Then, for every coin denomination c, traverse the array upward from index c to amount, keeping the best result for every cell.
Pseudocode:
for (j = 0; j < coins.length; j++) {
    c = coins[j];
    for (i = c; i <= amount; i++) {
        if (A[i - c] > 0 && (A[i] == 0 || A[i - c] + 1 < A[i]))
            A[i] = A[i - c] + 1;
    }
}
result = (A[amount] > 0) ? A[amount] - 1 : -1;
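For reference, here is a minimal Java version of that pseudocode, using the same "count + 1" convention (the method signature is the one from the question; the rest is just the sketch above spelled out):
public int coinChange(int[] coins, int amount) {
    // A[i] holds (fewest coins needed for amount i) + 1; 0 means "not reachable yet"
    int[] A = new int[amount + 1];
    A[0] = 1;
    for (int c : coins) {
        for (int i = c; i <= amount; i++) {
            // try extending the reachable amount i - c by one more coin of value c
            if (A[i - c] > 0 && (A[i] == 0 || A[i - c] + 1 < A[i])) {
                A[i] = A[i - c] + 1;
            }
        }
    }
    return A[amount] > 0 ? A[amount] - 1 : -1;
}
On the question's own test, coins = [186, 419, 83, 408] with amount = 6249, this returns the expected 20, and it still returns 3 and -1 for the first two examples.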

Related

Minimum cars required to accommodate given people

I had one coding round where the question statement was like this:
You have a given number of friends and the seating capacity of their cars; now you need to find the minimum number of cars required to accommodate them all.
Example:
People = [1, 4, 1]
SeatingCapacity = [1, 5, 1]
In this case we need a minimum of 2 cars, as the person at index 0 can fit into the car at index 1.
Example 2:
People = [4, 4, 5, 3]
SeatingCapacity = [5, 5, 7, 3]
In this case the answer will be 3, as the people at index 3 can be accommodated in the cars at indexes 0, 1, 2 (or in the spare seats of the cars at indexes 1 and 2).
I wrote code like this
int numberOfCars(int[] p, int[] s) {
    int noOfCars = p.length;
    int extraSeats = 0;
    for (int i = 0; i < p.length; i++) {
        extraSeats += (s[i] - p[i]);
    }
    for (int i = 0; i < p.length; i++) {
        if (extraSeats - p[i] >= 0) {
            extraSeats -= p[i];
            noOfCars--;
        }
    }
    return noOfCars;
}
However, my code failed for many cases, and it was also flagged for a performance issue.
Can anyone please tell me which cases I missed?
This can be solved with just a greedy approach, like below:
people = [1,4,1]
p = sum(people) //6
cars = [1,5,1]
sort(cars, descending) //cars = [5,1,1]
idx = 0
while(p > 0) { p -= cars[idx]; idx += 1 }
answer = idx
Handle the corner case where total capacity in cars is less than number of people.
Complexity : sorting cars O(n log n) + while loop O(n) = O(n log n)
This would be my solution in Javascript:
function peopleCars (persons, seats) {
    let numberOfCars = 0;
    let people = persons.reduce((previousValue, currentValue) => previousValue + currentValue, 0); // Calculate the total number of persons
    seats.sort((a,b) => {return b-a}); // Sort the seats per car in descending order, so we fill the cars starting from the one that can take the most persons
    while (people > 0) {
        people -= seats[numberOfCars]; // Subtract the seats of the current car from the number of persons still to be placed
        numberOfCars += 1; // Increment the number of cars to be used
    }
    return numberOfCars;
}
// console.log (peopleCars( [2,1,2,2], [5,4,2,5]));
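For comparison, a Java version of the same greedy idea, with the "not enough seats" corner case handled, could look roughly like this (a sketch: the method name and the -1 return value are my own choices):
static int minimumCars(int[] people, int[] seats) {
    int remaining = 0;
    for (int p : people) remaining += p;       // total number of people to seat
    int[] capacity = seats.clone();
    java.util.Arrays.sort(capacity);           // ascending; we walk it backwards to take the biggest cars first
    int cars = 0;
    for (int i = capacity.length - 1; i >= 0 && remaining > 0; i--) {
        remaining -= capacity[i];              // fill the biggest remaining car
        cars++;
    }
    return remaining > 0 ? -1 : cars;          // -1: the total capacity is too small
}
With the examples from the question, minimumCars(new int[]{1, 4, 1}, new int[]{1, 5, 1}) gives 2 and minimumCars(new int[]{4, 4, 5, 3}, new int[]{5, 5, 7, 3}) gives 3.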

House robber problem: how to do it this way?

You are a professional robber planning to rob houses along a street. Each house has a certain amount of money stashed, the only constraint stopping you from robbing each of them is that adjacent houses have security system connected and it will automatically contact the police if two adjacent houses were broken into on the same night.
Given a list of non-negative integers representing the amount of money of each house, determine the maximum amount of money you can rob tonight without alerting the police.
Example 1:
Input: [1,2,3,1]
Output: 4
Explanation: Rob house 1 (money = 1) and then rob house 3 (money = 3).
Total amount you can rob = 1 + 3 = 4.
Example 2:
Input: [2,7,9,3,1]
Output: 12
Explanation: Rob house 1 (money = 2), rob house 3 (money = 9) and rob house 5 (money = 1).
Total amount you can rob = 2 + 9 + 1 = 12.
class Solution {
    public int rob(int[] nums) {
        int sim = 0;
        int sum = 0;
        int i, j;
        for (i = 0; i < nums.length; i++, i++) {
            sim += nums[i];
        }
        for (j = 1; j < nums.length; j++, j++) {
            sum += nums[j];
        }
        int r = Math.max(sim, sum);
        return r;
    }
}
How do I do this logic when the array length is odd?
Can we do it this way?
The output is correct for even lengths, though.
Your solution is skipping one house after robbing the previous one. This would not always give the maximum output. Consider this case: [100, 1, 1, 100]. According to your solution, sim == 101 and sum == 101; however, the correct solution would be 200 (robbing the 0th and 3rd houses).
I propose two possible solutions: 1. using recursion, 2. using dp.
Using recursion, you can choose either to rob a house and skip next one, or do not rob a house and go on to the next one. Thus, you will have two recursive cases which would result in O(2^n) time complexity and O(n) space complexity.
public int rob(int[] nums) {
    return robHelper(nums, 0, 0);
}

private int robHelper(int[] nums, int ind, int money) {
    if (ind >= nums.length) return money;
    int rec1 = robHelper(nums, ind + 1, money);
    int rec2 = robHelper(nums, ind + 2, money + nums[ind]);
    return Math.max(rec1, rec2);
}
Using DP optimizes the time and space complexity of the above solution. You can keep track of two values: currMax and prevMax. While prevMax is the max money excluding the previous house, currMax is the max money considering the previous house. Since prevMax is guaranteed not to include money from the previous house, you can add money from the current house to prevMax and compare it with currMax to find the total max money up to that point. Here is my solution using DP, with O(n) time complexity and O(1) space complexity:
public int rob(int[] nums) {
    int currmax = 0;
    int prevmax = 0;
    for (int i = 0; i < nums.length; i++) {
        int iSum = prevmax + nums[i];
        prevmax = currmax;
        currmax = Math.max(currmax, iSum);
    }
    return currmax;
}
As pointed out by siralexsir88 in the comments it is not enough to only check for the solutions for robbing the even/odd numbered houses since it may happen that the best strategy is to skip more than one house in a row.
The given example illustrates this fact: suppose you have [1, 3, 5, 2, 1, 7], here indexes 3 and 4 must be skipped to pick the latter 7.
Proposed solution
This problem is a typical example of dynamic programming and can be solved by building up a solution recursively.
For every house there are two options: you either rob it, or you don't. Let's keep track of the best solution for both cases and for each house: let's name R[i] the maximum profit up to the ith house if we rob the ith house. Let's define NR[i] the same way for not robbing the ith house.
For example, suppose we have [1, 3]. In this case:
R[0] = 1
NR[0] = 0
R[1] = 3 The best profit while robbing house #1 is 3
NR[1] = 1 The best profit while not robbing house #1 is 1
Let's also call P[i] the profit that gives us robbing the ith house.
We can build our solution recursively in terms of R and NR this way:
1) R[i] = NR[i-1] + P[i]
2) NR[i] = max(NR[i-1], R[i-1])
3) R[0] = P[0]
4) NR[0] = 0
Let's break it down.
The recursive relation 1) says that if we rob the ith house, we must not have robbed the previous house, and hence take the not-robbed best score for the previous house.
The recursive relation 2) says that if we do not rob the ith house, then our score is the best between the ones for robbing and not robbing the previous house. This makes sense because we are not adding anything to our total profit, we just keep the best profit so far.
3) and 4) are just the initial conditions for the first house, which should make sense up to this point.
Here is a pseudo-python snippet that does compute the best profit:
P = [1, 3, 5, 2, 1, 7]  # The houses
R = [0] * len(P)
NR = [0] * len(P)
R[0] = P[0]
# We skip index 0
for i in range(1, len(P)):
    R[i] = NR[i-1] + P[i]
    NR[i] = max(NR[i-1], R[i-1])
# The solution is the best between NR and R for the last house
print(max(NR[-1], R[-1]))
The solution implies keeping track of the two arrays (R[i] and NR[i]) while traversing the houses, and then compare the results at the end. If you just want the maximum profit, you may keep the results R and NR for the previous house and ditch them as you move on. However, if you want to know specifically which sequence of houses leads to the best result, you need to keep track of the whole array and once you are done, backtrack and reconstruct the solution.
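If it helps, here is one way that reconstruction step could look in Java, mirroring the R/NR recurrences above (a sketch only: the method and variable names are mine, and it assumes the usual java.util imports):
static List<Integer> bestHouses(int[] P) {
    int n = P.length;
    int[] R = new int[n];   // best profit up to house i if house i is robbed
    int[] NR = new int[n];  // best profit up to house i if house i is not robbed
    R[0] = P[0];
    for (int i = 1; i < n; i++) {
        R[i] = NR[i - 1] + P[i];
        NR[i] = Math.max(NR[i - 1], R[i - 1]);
    }
    List<Integer> houses = new ArrayList<>();
    int i = n - 1;
    boolean robbed = R[i] > NR[i];            // which state wins for the last house
    while (i >= 0) {
        if (robbed) {
            houses.add(i);                    // house i is part of the best plan
            robbed = false;                   // the previous house must not be robbed
        } else if (i > 0) {
            robbed = R[i - 1] >= NR[i - 1];   // NR[i] came from whichever of these was larger
        }
        i--;
    }
    Collections.reverse(houses);
    return houses;
}
For the example above, P = [1, 3, 5, 2, 1, 7], this returns [0, 2, 5], which matches the profit of 13 found by the snippet.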
private static int rob(int[] money) {
    int max = 0;
    for (int i = 0; i < money.length; i++) {
        int skips = 2;
        while (skips < money.length) {
            int sum = 0;
            for (int j = 0; j < money.length; j += skips) {
                sum += money[j];
            }
            if (sum > max) {
                max = sum;
            }
            skips++;
        }
    }
    return max;
}

Finding minimal "factorization" of an int to square-numbers

The problem I am trying to solve:
Given an int n, return the minimal "factorization" of this int to numbers which are all squares.
We define factorization here not in the usual manner: a factorization of k to m numbers (m1, m2, m3...) will be such that: m1 + m2 + m3 + ... = k.
For example: let n = 12. The optimal solution is: [4,4,4] since 4 is the square of 2 and 4 + 4 + 4 = 12. There is also [9,1,1,1] though it is not minimal since it's 4 numbers instead of 3 in the former.
My attempt to solve this:
My idea was given the number n we will perform the following algorithm:
First we will find the closest square number to n (for example, if n = 82 we will find 81).
Then we will compute, recursively, the result for the number we got minus the square closest to it.
Here is a flow example: assume n = 12 and our function is f; we compute f(12-9) UNION {9}, then f(12-4) UNION {4}, and then f(12-1) UNION {1}. From each we get a list of square combinations, and we take the minimal list from those. We save those in a HashMap to avoid duplications (dynamic-programming style).
Code attempt in Java (incomplete):
public List<Integer> getShortestSquareList(int n) {
    HashMap<Integer, List<Integer>> map = new HashMap<Integer, List<Integer>>();
    map.put(1, Arrays.asList(1));
    List<Integer> squareList = getSquareList(n);
    return internalGetShortestSquareList(n, map, squareList);
}

List<Integer> getSquareList(int n) {
    List<Integer> result = new ArrayList<Integer>();
    int i = 1;
    while (i * i <= n) {
        result.add(i * i);
        i++;
    }
    return result;
}

public int getClosestSquare(int n, List<Integer> squareList) {
    // getting the closestSquareIndex
}

public List<Integer> internalGetShortestSquareList(int n, HashMap<Integer, List<Integer>> map, List<Integer> squareList) {
    if (map.containsKey(n)) { return map.get(n); }
    int closestSquareIndex = getClosestSquare(n, squareList);
    List<Integer> minSquareList = null;
    int minSize = Integer.MAX_VALUE;
    for (int i = closestSquareIndex; i > -1; i--) {
        int square = squareList.get(i);
        List<Integer> tempSquares = new ArrayList<Integer>(Arrays.asList(square));
        tempSquares.addAll(internalGetShortestSquareList(n - square, map, squareList));
        if (tempSquares.size() < minSize) {
            minSize = tempSquares.size();
            minSquareList = tempSquares;
        }
    }
    map.put(n, minSquareList);
    return map.get(n);
}
My question:
It seems that my solution is not optimal (imo). I think that the time complexity for my solution is O(n)*O(Sqrt(n)) since the maximal recursion depth is n and the maximum number of children is Sqrt(n). My solution is probably full of bugs - which doesn't matter to me at the moment. I will appreciate any guidance to find a more optimal solution (pseudo-code or otherwise).
Based on #trincot's link, I would suggest a simple O(n sqrt n) algorithm. The idea is :
Use exhaustive search on the squares smaller than or equal to n to find out if n is a square itself, or a sum of any two or three squares less than n. This can be done in sqrt(n)^3 time, which is O(n sqrt n).
If this fails, then find a "factorization" of n into four squares.
To recursively find 4-factorization of a number m, there are three cases now:
m is a prime number and m mod 4 = 1. According to the math, we know that m is then a sum of two squares. Either a simple exhaustive search or more "mathy" methods should give an easy answer.
m is a prime number and m mod 4 = 3. This case still requires working out the details, but could be implemented using the math described in the link.
m is a composite number. This is the recursive case. First factorize m in two factors, i.e. integers u and v so that u*v=m. For performance reasons, they should be as close as possible, but this is a minor detail.
Afterwards, recursively find the 4-factorization of u and v.
Then, using the formula:
(a^2+b^2+c^2+d^2) (A^2+B^2+C^2+D^2) = (aA+bB+cC+dD)^2 + (aB-bA-cD+dC)^2 + (aC+bD-cA-dB)^2 + (aD-bC+cB-dA)^2
find the 4-factorization of m. Here I denoted u = (a^2+b^2+c^2+d^2) and v = (A^2+B^2+C^2+D^2), as their 4-factorization is known at this point.
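As a small illustration of that combination step, here is a sketch in Java that merges two 4-square representations using the identity above (the method name is mine, and it assumes the inputs really satisfy u = a^2+b^2+c^2+d^2 and v = A^2+B^2+C^2+D^2):
// Combine 4-square representations of u and v into one for u * v
// via Euler's four-square identity; entries may come out negative,
// but their squares still sum to u * v.
static int[] combine(int[] p, int[] q) {
    int a = p[0], b = p[1], c = p[2], d = p[3];
    int A = q[0], B = q[1], C = q[2], D = q[3];
    return new int[] {
        a * A + b * B + c * C + d * D,
        a * B - b * A - c * D + d * C,
        a * C + b * D - c * A - d * B,
        a * D - b * C + c * B - d * A
    };
}
For example, 5 = 2^2 + 1^2 and 13 = 3^2 + 2^2, and combine(new int[]{2, 1, 0, 0}, new int[]{3, 2, 0, 0}) gives {8, 1, 0, 0}, with 8^2 + 1^2 = 65 = 5 * 13.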
Much simpler solution:
This is a version of the Coin Change problem.
You can call the following method with coins as the list of the square numbers that are smaller than or equal to amount (n in your example).
Example: amount=12, coins={1,4,9}
public int coinChange(int[] coins, int amount) {
    int max = amount + 1;
    int[] dp = new int[amount + 1];
    Arrays.fill(dp, max);
    dp[0] = 0;
    for (int i = 1; i <= amount; i++) {
        for (int j = 0; j < coins.length; j++) {
            if (coins[j] <= i) {
                dp[i] = Math.min(dp[i], dp[i - coins[j]] + 1);
            }
        }
    }
    return dp[amount] > amount ? -1 : dp[amount];
}
The complexity is O(n*m), where m is the number of coins. So in your example it is the same complexity you mention, O(n*sqrt(n)).
It is solved with dynamic programming, using a bottom-up approach.
The code has been taken from here.
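A small driver for the square-sum version could then look like this (a sketch; the helper name is mine and it assumes it sits in the same class as the coinChange method above):
int minSquareTerms(int n) {
    int count = 0;
    while ((count + 1) * (count + 1) <= n) count++;   // how many perfect squares are <= n
    int[] coins = new int[count];
    for (int i = 1; i <= count; i++) {
        coins[i - 1] = i * i;                          // the "coins" are 1, 4, 9, ...
    }
    return coinChange(coins, n);                       // the DP method shown above
}
minSquareTerms(12) returns 3, matching the [4, 4, 4] answer from the question.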

How to calculate Big O of Dynamic programming (Memoization) algorithm

How would I go about calculating the big O of a DP algorithm? I've come to realize my methods for calculating algorithms don't always work. I would use simple tricks to extract what the Big O was. For example, if I were evaluating the non-memoized version of the algorithm below (removing the cache mechanism), I would look at the number of times the recursive method calls itself, in this case 3 times. I would then raise this value to n, giving O(3^n). With DP that isn't right at all, because the recursive stack doesn't go as deep. My intuition tells me that the Big O of the DP solution would be O(n^3). How would we verbally explain how we came up with this answer? More importantly, what is a technique that can be used to find the Big O of similar problems? Since it is DP, I'm sure the number of subproblems is important; how do we calculate the number of subproblems?
public class StairCase {
    public int getPossibleStepCombination(int n) {
        Integer[] memo = new Integer[n + 1];
        return getNumOfStepCombos(n, memo);
    }

    private int getNumOfStepCombos(int n, Integer[] memo) {
        if (n < 0) return 0;
        if (n == 0) return 1;
        if (memo[n] != null) return memo[n];
        memo[n] = getNumOfStepCombos(n - 1, memo) + getNumOfStepCombos(n - 2, memo) + getNumOfStepCombos(n - 3, memo);
        return memo[n];
    }
}
The first 3 lines do nothing but compare int values, access an array by index, and see if an Integer reference is null. Those things are all O(1), so the only question is how many times the method is called recursively.
This question is very complicated, so I usually cheat. I just use a counter to see what's going on. (I've made your methods static for this, but in general you should avoid static mutable state wherever possible).
static int counter = 0;

public static int getPossibleStepCombination(int n) {
    Integer[] memo = new Integer[n + 1];
    return getNumOfStepCombos(n, memo);
}

private static int getNumOfStepCombos(int n, Integer[] memo) {
    counter++;
    if (n < 0) return 0;
    if (n == 0) return 1;
    if (memo[n] != null) return memo[n];
    memo[n] = getNumOfStepCombos(n - 1, memo) + getNumOfStepCombos(n - 2, memo) + getNumOfStepCombos(n - 3, memo);
    return memo[n];
}

public static void main(String[] args) {
    for (int i = 0; i < 10; i++) {
        counter = 0;
        getPossibleStepCombination(i);
        System.out.print(i + " => " + counter + ", ");
    }
}
This program prints
0 => 1, 1 => 4, 2 => 7, 3 => 10, 4 => 13, 5 => 16, 6 => 19, 7 => 22, 8 => 25, 9 => 28,
so it looks like the final counter values are given by 3n + 1.
In a more complicated example, I might not be able to spot the pattern, so I enter the first few numbers (e.g. 1, 4, 7, 10, 13, 16) into the Online Encyclopedia of Integer Sequences and I usually get taken to a page containing a simple formula for the pattern.
Once you've cheated in this way to find out the rule, you can set about understanding why the rule works.
Here's how I understand where 3n + 1 comes from. For each value of n you only have to do the line
memo[n] = getNumOfStepCombos(n - 1, memo) + getNumOfStepCombos(n - 2, memo) + getNumOfStepCombos(n-3,memo);
exactly once. This is because we are recording the results and only doing this line if the answer has not already been calculated.
Therefore, when we start with n == 5 we run that line exactly 5 times; once for n == 5, once with n == 4, once with n == 3, once with n == 2 and once with n == 1. So that's 3 * 5 == 15 times the method getNumOfStepCombos gets called from itself. The method also gets called once from outside itself (from getPossibleStepCombination), so the total number of calls is 3n + 1.
Therefore this is an O(n) algorithm.
If an algorithm has lines that are not O(1) this counter method cannot be used directly, but you can often adapt the approach.
Paul's answer is technically not wrong but is a bit misleading. We should be calculating big O notation by how the function responds to changes in input size. Paul's answer of O(n) makes the complexity appear to be linear time when it really is exponential in the number of digits (or bits) required to represent the number n. So for example, n=10 has ~30 calculations and m=2 decimal digits, n=100 has ~300 calculations and m=3 digits, and n=1000 has ~3000 calculations and m=4 digits.
I believe that your function's complexity would be O(2^m) where m is number of bits needed to represent n. I referred to https://www.quora.com/Why-is-the-Knapsack-problem-NP-complete-even-when-it-has-complexity-O-nW for a lot of my answer.

Alternating factorial terms using Java

I'm trying to write a loop in Java that can output the sum of a series which has this form... 1! - 3! + 5! - 7! + ... up to n (user gives n as a positive odd number). For example, if the user inputs 5 for n, then the series should calculate the sum of 1! - 3! + 5! (hard part) & display it to the user with a basic print statement (easy part). If the user gives 9, then the sum calculated would come from 1! - 3! + 5! - 7! + 9!.
For ease, just assume the user always puts in a positive odd number at any time. I'm just concerned about trying to make a sum using a loop for now.
The closest code I've come up with to do this...
int counter = 1;
int prod = 1;
n = console.nextInt();
while (counter <= n)
{
    prod = prod * counter;
    counter++;
}
System.out.println(prod);
This does n!, but I'm finding it hard to get it to do as specified. Any pointers would be great.
As you calculate the factorials, keep a running total of the series so far. Whenever counter % 4 == 1, add the factorial to the running total. Whenever counter % 4 == 3, subtract the factorial from the running total.
You said "any pointers" - I assume that means you don't want me to write the code for you.
Update
This is closely based on your original code, so that it would be as easy as possible for you to understand. I have changed the bare minimum that I needed to change, to get this working.
int counter = 1;
long prod = 1;
long total = 0;
n = console.nextInt();
while (counter <= n)
{
    prod = prod * counter;
    if( counter % 4 == 1 ) {
        total += prod;
    } else if (counter % 4 == 3) {
        total -= prod;
    }
    counter++;
}
System.out.println(total);
First up, notice that I have changed prod to a long. That's because factorials get very big very fast. It would be even better to use a BigInteger, but I'm guessing you haven't learnt about these yet.
Now, there are those two conditions in there, for when to add prod to the total, and when to subtract prod from the total. These both work by checking the remainder when counter is divided by 4 - in other words, checking which factorial we're up to, and doing the right operation accordingly.
First of all, you need to introduce a variable int sum = 0; to store the value of the alternating series.
To only sum every second value, you should skip every second value. You can check that using the modulo operation, e.g. if( counter % 2 == 1 ).
If that is true, you can add/subtract the current value of prod to the sum.
To get the alternating part, you can use a boolean positive = true; like this:
if( positive ) {
    sum += prod;
} else {
    sum -= prod;
}
positive = !positive;
Based on the boolean, prod is either added or subtracted, and the flag is flipped afterwards.
Because factorials become very large very fast, it would be better to use variables of type long.
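Putting those pieces together, the whole loop for this second approach could look like the following sketch (long is used for the overflow reason mentioned above, and console is the same Scanner as in the question):
int counter = 1;
long prod = 1;
long sum = 0;
boolean positive = true;
int n = console.nextInt();
while (counter <= n) {
    prod = prod * counter;            // running factorial: counter!
    if (counter % 2 == 1) {           // only the odd factorials 1!, 3!, 5!, ... contribute
        if (positive) {
            sum += prod;
        } else {
            sum -= prod;
        }
        positive = !positive;         // alternate the sign for the next odd term
    }
    counter++;
}
System.out.println(sum);
For n = 5 this prints 115, i.e. 1! - 3! + 5!.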
