minimum sum subarray in O(N) by Kadane's algorithm - java

We all know about the maximum sum subarray and the famous Kadane's algorithm. But can we use the same algorithm to find minimum sum also?
My take is:
Change the signs of the elements and find the max sum in the resulting array, the same way we calculate the maximum sum subarray. Then change the signs of the
elements back to restore the array to its initial state.
Please help me correct the algorithm if it has any issues.
Corner case: I know there is an issue if all the elements are positive. We can handle this case with some preprocessing, i.e. traverse the array and, if all elements are positive, just return the minimum number from the array.
The above-mentioned algorithm works, and is well explained by dasblinkenlight.

Will the approach that I have mentioned work to find the minimum sum?
Yes, it will. You can re-state the problem of finding the minimum sum as finding a negative sum with the largest absolute value. When you switch the signs of your numbers and keep the rest of the algorithm in place, that's the number that the algorithm is going to return back to you.
I know there is an issue if all the elements are positive
No, there's no issue: consider the original Kadane's algorithm when all elements are negative. In this case the algorithm returns an empty sequence for the sum of zero - the highest one possible under the circumstances. In other words, when all elements are negative, your best solution is to take none of them.
Your modified algorithm is going to do the same in case when all numbers are positive: again, your best solution is to not take numbers at all.
If you add a requirement that the range returned back from the algorithm may not be empty, you could modify the algorithm slightly to find the smallest positive number (or the greatest negative number) in case when Kadane's algorithm returns an empty range as the optimal solution.
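For illustration, here is a minimal Java sketch of that sign-flip idea (not from the answer): it runs the usual max-sum Kadane on the negated values on the fly, so the input array never has to be modified and restored, and it allows the empty subarray (sum 0) as discussed above.

static int minSumViaSignFlip(int[] a) {
    int maxSoFar = 0;       // best max sum over the negated values; 0 = empty subarray
    int maxEndingHere = 0;
    for (int x : a) {
        maxEndingHere = Math.max(0, maxEndingHere - x);  // Kadane step on -x
        maxSoFar = Math.max(maxSoFar, maxEndingHere);
    }
    return -maxSoFar;       // flip the sign back: the minimum subarray sum of a
}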

static void subArraySumMin(int a[]) {
    int minEndingHere = a[0];   // minimum sum of a subarray ending at the current index
    int minSoFar = a[0];
    for (int i = 1; i < a.length; i++) {
        minEndingHere = Math.min(a[i], minEndingHere + a[i]);
        minSoFar = Math.min(minSoFar, minEndingHere);
    }
    System.out.println(minSoFar);
}

Just replace max with min.
// O(n)
public static int minSubArraySum(int[] arr) {
    int minSum = 0;
    int curSum = 0;
    for (int num : arr) {
        curSum += num;
        minSum = Math.min(minSum, curSum);
        curSum = Math.min(curSum, 0);
    }
    return minSum;
}
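A quick check of this version with made-up values:

int[] data = {3, -4, 2, -3, -1, 7, -5};
System.out.println(minSubArraySum(data));   // prints -6, from the subarray {-4, 2, -3, -1}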

Related

Get the Sum of all positive numbers in a Circularly ordered array in O(Log N)

I've been given an exercise in class that requires the following:
An array v formed by N integers is circularly ordered if either the array is ordered, or else v[N-1] ≤ v[0] and ∃k with 0<k<N such that ∀i≠k v[i] ≤ v[i+1].
Example:
Given a circularly ordered array with at most 10 positive items, calculate the sum of the positive values. For this last example the answer would be 27.
I've been required to implement it using a Divide-and-Conquer scheme in Java, given that the complexity must be O(log N) in the worst case, where N is the array size.
So far I've tried to pivot on a value until I find a positive one; then, knowing the other positive values are adjacent, it's possible to sum the at most 10 positive values in O(1).
I thought of doing a binary search to achieve O(log N) complexity, but this would not follow the divide-and-conquer pattern.
I can easily implement it with O(N) complexity like this:
public static int addPositives(int[] vector){
    return addPositives(vector, 0, vector.length - 1);
}
public static int addPositives(int[] vector, int i0, int iN){
    int k = (i0 + iN) / 2;
    if (iN - i0 > 1){
        return addPositives(vector, i0, k) + addPositives(vector, k + 1, iN);
    } else {
        int temp = 0;
        for (int i = i0; i <= iN; i++) {
            if (vector[i] > 0) temp += vector[i];
        }
        return temp;
    }
}
However, trying to land the O(log N) bound gets me nowhere. How could I achieve it?
You can improve your divide and conquer implementation to meet the required running time if you prune irrelevant branches of the recursion.
After you divide the current array into two sub-arrays, compare the first and last elements of each sub-array. If both are negative and the first is smaller than the last, you know for sure that all the elements in this sub-array are negative and you don't have to make the recursive call on it (since you know it will contribute 0 to the total sum).
You can also stop the recursion if all the elements in a sub-array are positive (which can also be verified by comparing the first and last elements of the sub-array) - in that case you have to sum all the elements of that sub-array, so there's no point to continue the recursion.
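Not part of the original answer, but here is a rough Java sketch of that pruning layered on the recursion from the question. The strict endpoint comparisons are deliberate: for a circularly ordered array, a strictly smaller first endpoint rules out a wrap point inside the range, so the range is sorted and the endpoint tests are safe; the corner case of equal endpoints simply falls through to the recursion.

public static int addPositives(int[] vector, int i0, int iN) {
    if (i0 == iN) return vector[i0] > 0 ? vector[i0] : 0;
    // Sorted range, everything negative: contributes nothing, prune it.
    if (vector[iN] < 0 && vector[i0] < vector[iN]) return 0;
    // Sorted range, everything positive: sum it directly.
    // The problem guarantees at most 10 positive items, so this loop is short.
    if (vector[i0] > 0 && vector[i0] < vector[iN]) {
        int sum = 0;
        for (int i = i0; i <= iN; i++) sum += vector[i];
        return sum;
    }
    int k = (i0 + iN) / 2;
    return addPositives(vector, i0, k) + addPositives(vector, k + 1, iN);
}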
My advice for the O(Log N) would be a direct comparison to meet the second of the two criteria: the last item being less than the first.
return vector[0] >= vector[iN-1]
If you want something with greater complexity, I forget the algorithm name, but you could take the array at the halfway point and do two ordered searches from there: from the middle to the start, and then from the middle to the end.

Checking whether there is a subset of size k of an array which has a sum multiple of n

Good evening. I have an array in Java with n integer numbers. I want to check if there is a subset of size k of the entries that satisfies the condition:
The sum of those k entries is a multiple of m.
How may I do this as efficiently as possible? There are n!/(k!(n-k)!) subsets that I need to check.
You can use dynamic programming. The state is (prefix length, sum modulo m, number of elements in the subset). Transitions are obvious: we either add one more number (increasing the number of elements in the subset and computing the new sum modulo m), or we just increase the prefix length (not adding the current number). If you just need a yes/no answer, you can store only the last layer of values and apply bit optimizations to compute transitions faster. The time complexity is O(n * m * k), or about n * m * k / 64 operations with bit optimizations. The space complexity is O(m * k). It looks feasible for a few thousand elements. By bit optimizations I mean using things like bitset in C++ that can perform an operation on a group of bits at the same time using bitwise operations.
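A minimal Java sketch of that DP (names are mine), storing only the last layer as suggested and leaving the bit optimizations out:

// dp[c][r] == true  <=>  some subset of the numbers seen so far has
// exactly c elements and sum ≡ r (mod m)
static boolean hasSubsetSumMultiple(int[] a, int k, int m) {
    boolean[][] dp = new boolean[k + 1][m];
    dp[0][0] = true;                        // the empty subset
    for (int x : a) {
        int r = ((x % m) + m) % m;          // safe for negative inputs
        for (int c = k; c >= 1; c--) {      // counts downwards: each number used at most once
            for (int s = 0; s < m; s++) {
                if (dp[c - 1][s]) dp[c][(s + r) % m] = true;
            }
        }
    }
    return dp[k][0];
}

Time is O(n * k * m) and space O(k * m), matching the figures quoted above; the bitset trick would pack each boolean row into machine words.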
I don't like this solution, but it may work for your needs
public boolean containsSubset(int[] a, int currentIndex, int currentSum, int depth, int divisor, int maxDepth){
    // you could make a, maxDepth, and divisor static as well
    // If depth is equal to maxDepth, then our subset has k elements; in addition the sum of the
    // elements must be divisible by our divisor, m.
    // If this condition is satisfied, then there exists a subset of size k whose sum is divisible by m.
    if (depth == maxDepth && currentSum % divisor == 0)
        return true;
    // If the depth is greater than or equal to maxDepth, our subset has k or more elements, so
    // adding more elements cannot satisfy the necessary conditions;
    // additionally we know that if it contained k elements and were divisible by m, it would have satisfied the condition above.
    if (depth >= maxDepth)
        return false;
    // boolean to be returned, initialized to false because we have not found any sets yet
    boolean ret = false;
    // iterate through all remaining elements of our array
    for (int i = currentIndex + 1; i < a.length; i++){
        // as an optimization, the loop bound above could be tightened to:
        // for (int i = currentIndex + 1; i < a.length - maxDepth + depth; i++){
        // by recursing, we add a[i] to our set; we then use an or operation over all the subsets that could
        // be constructed from the numbers we have so far, so that if any of them satisfies our condition (returns true)
        // then the value of the variable ret will be true
        ret |= containsSubset(a, i, currentSum + a[i], depth + 1, divisor, maxDepth);
    } // end for
    // return whether any set of numbers constructed from the numbers so far satisfied the condition
    return ret;
}
Then invoke this method as such
//this invokes our method with "no numbers added to our subset so far" so it will try adding
// all combinations of other elements to determine if the condition is satisfied.
boolean answer = containsSubset(myArray,-1,0,0,m,k);
EDIT:
You could probably optimize this by taking everything modulo (%) m and deleting repeats. For examples with large values of n and/or k, but small values of m, this could be a pretty big optimization.
EDIT 2:
The above optimization I listed isn't helpful. You may need the repeats to get the correct information. My bad.
Happy Coding! Let me know if you have any questions!
If numbers have lower and upper bounds, it might be better to:
Iterate all multiples of n where lower_bound * k < multiple < upper_bound * k
Check if there is a subset with sum multiple in the array (see Subset Sum problem) using dynamic programming.
Complexity is O(k^2 * (lower_bound + upper_bound)^2). This approach can be optimized further, I believe with careful thinking.
Otherwise you can enumerate all subsets of size k; there are n!/(k!(n-k)!) of them, so this is exponential in the worst case. Using backtracking (pseudocode-ish):
function find_subsets(array, k, index, current_subset):
    if current_subset.size = k:
        add current_subset to your solutions list
        return
    if index = array.size:
        return
    number := array[index]
    add number to current_subset
    find_subsets(array, k, index + 1, current_subset)
    remove number from current_subset
    find_subsets(array, k, index + 1, current_subset)

repeated element in Array

I have an array of N elements that contains the integers 1 to (N-1) - a sequence of integers starting from 1 up to the maximum N-1 - meaning that exactly one number is repeated. I want to write an algorithm that returns this repeated element. I have found a solution, but it only works if the array is sorted, which may not be the case:
int i = 0;
while (i < A[i])
{
    i++;
}
int rep = A[i];
I do not know why RC removed his comment but his idea was good.
With the knowledge of N you can easily calculate the sum of [1:N-1]. Then sum up all elements in your array, subtract the above sum, and you have your number.
This comes at a cost of O(n) and cannot be beaten.
However this only works with the preconditions you mentioned.
A more generic approach would be to sort the array and then simply walk through it. This would be O(n log(n)) and still better than your O(n²).
If you know the maximum number you can create a lookup table initialized with all zeros, walk through the array, and mark each value you see with a one; a value that is already marked is the duplicate. The complexity is also just O(n), but at the expense of memory.
If the value range is unknown, a similar approach can be used, but with a hash set instead of a lookup table.
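A small Java sketch of the sum idea (not from the answer), using long for the running sum to sidestep the overflow concern raised in the next answer:

// Array of length N containing 1..N-1 with exactly one value repeated:
// the repeated value is (sum of the array) - (1 + 2 + ... + (N-1)).
static int findRepeated(int[] a) {
    int n = a.length;
    long expected = (long) (n - 1) * n / 2;   // 1 + 2 + ... + (N-1)
    long actual = 0;
    for (int x : a) actual += x;
    return (int) (actual - expected);
}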
A linear counting pass will help you with complexity O(n):
final int n = ...;
final int a[] = createInput(n); // Expect each a[i] < n && a[i] >= 0
final int b[] = new int[n];     // b[v] counts how many times value v occurs
for (int i = 0; i < n; i++)
    b[a[i]]++;
for (int i = 0; i < n; i++)
    if (b[i] >= 2)
        return i;               // value i occurs more than once
throw new IllegalArgumentException("No duplicates found");
A possible solution is to sum all elements in the array and then compute the sum of the integers up to N-1. After that subtract the two values and voila - you have found your number. This is the solution proposed by vlad_tepesch and it is good, but has a drawback - you may overflow the integer type. To avoid this you can use a 64-bit integer.
However I want to propose a slight modification - compute the xor sum of the integers up to N-1 (that is, compute 1^2^3^...^(N-1)) and compute the xor sum of your array (i.e. a0^a1^...^a(N-1)). After that xor the two values and the result will be the repeated element.
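A sketch of that xor variant in Java (my code, not the answerer's), which avoids overflow entirely:

// XOR of 1..N-1 cancels against the XOR of the array, leaving the duplicate.
static int findRepeatedXor(int[] a) {
    int x = 0;
    for (int i = 1; i < a.length; i++) x ^= i;   // 1 ^ 2 ^ ... ^ (N-1)
    for (int v : a) x ^= v;                      // every value cancels once
    return x;                                    // the repeated value survives
}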

Splitting an array into two subarrays with minimal sum

My question: given an array, we have to split it into two sub-arrays such that the absolute difference between their sums is minimal, with the condition that the difference between the number of elements of the two arrays is at most one.
Let me give you an example. Suppose:
Example 1: 100 210 100 75 340
Answer :
Array1{100,210,100} and Array2{75,340} --> Difference = |410-415|=5
Example 2: 10 10 10 10 40
Answer : Array1{10,10,10} and Array2{10,40} --> Difference = |30-50|=20
Here we can see that although we could divide the array into {10,10,10,10} and {40}, we do not, because the constraint "the difference in the number of elements between the arrays should be at most 1" would be violated if we did so.
Can somebody provide a solution for this ?
My approach:
->Calculate sum of the array
->Divide the sum by 2
->Let the size of the knapsack=sum/2
->Consider the weights of the array values as 1. (If you have come across the knapsack problem, you may know about the weight concept.)
->Then consider the array values as the values of the weights.
->Calculate the answer which will be array1 sum.
->Total sum-answer=array2 sum
This approach fails.
Calculating the two arrays' sums is enough. We are not interested in which elements form the sums.
Thank you!
Source: This is an ICPC problem.
I have an algorithm that works in O(n³) time, but I have no hard proof it is optimal. It seems to work for every test input I give it (including some with negative numbers), so I figured it was worth sharing.
You start by splitting the input into two equally sized arrays (call them one[] and two[]). Start with one[0], and see which element in two[] would give you the best result if swapped. Whichever one gives the best result, swap. If none gives a better result, don't swap it. Then move on to the next element in one[] and do it again.
That part is O(n²) by itself. The problem is, it might not get the best results the first time through. If you just keep doing it until you don't make any more swaps, you end up with an ugly bubble-sort-like construction which makes it O(n³) total.
Here's some ugly Java code to demonstrate (also at ideone.com if you want to play with it):
import java.util.Arrays;

static int[] input = {1,2,3,4,5,-6,7,8,9,10,200,-1000,100,250,-720,1080,200,300,400,500,50,74};

public static void main(String[] args) {
    int[] two = new int[input.length / 2];
    int[] one = new int[input.length - two.length];
    int totalSum = 0;
    for (int i = 0; i < input.length; i++) {
        totalSum += input[i];
        if (i < one.length)
            one[i] = input[i];
        else
            two[i - one.length] = input[i];
    }
    float goal = totalSum / 2f;
    boolean swapped;
    do {
        swapped = false;
        for (int j = 0; j < one.length; j++) {
            int curSum = sum(one);
            float curBestDiff = Math.abs(goal - curSum);
            int curBestIndex = -1;
            for (int i = 0; i < two.length; i++) {
                int testSum = curSum - one[j] + two[i];
                float diff = Math.abs(goal - testSum);
                if (diff < curBestDiff) {
                    curBestDiff = diff;
                    curBestIndex = i;
                }
            }
            if (curBestIndex >= 0) {
                swapped = true;
                System.out.println("swapping " + one[j] + " and " + two[curBestIndex]);
                int tmp = one[j];
                one[j] = two[curBestIndex];
                two[curBestIndex] = tmp;
            }
        }
    } while (swapped);
    System.out.println(Arrays.toString(one));
    System.out.println(Arrays.toString(two));
    System.out.println("diff = " + Math.abs(sum(one) - sum(two)));
}

static int sum(int[] list) {
    int sum = 0;
    for (int i = 0; i < list.length; i++)
        sum += list[i];
    return sum;
}
Can you provide more information on the upper limit of the input?
For your algorithm, I think you are trying to pick floor(n/2) items and find the maximum sum of their values as the array1 sum... (If this is not your original thought then please ignore the following lines.)
If this is the case, then the knapsack size should be n/2 instead of sum/2,
but even so, I think it still doesn't work. The answer is min(|a - b|), and maximizing a is a different issue. For example, with {2,2,10,10} you will get a = 20, b = 4, while the answer is a = b = 12.
To answer the problem, I think I need more information about the upper limit of the input.
I cannot come up with a brilliant DP state, only a 3-dimensional one:
dp(i,n,v) := using the first i items, pick n of them making a sum of value v
each state is either 0 or 1 (false or true)
dp(i,n,v) = dp(i-1, n, v) | dp(i-1, n-1, v-V[i])
This DP state is so naive that it has a really high complexity, which usually cannot pass an ACM/ICPC problem, so if possible please provide more information and see if I can come up with a better solution... Hope I can help a bit :)
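For concreteness, a Java sketch of that DP (my code, with the prefix dimension dropped by updating the table in place); it assumes non-negative inputs, as in the question's examples, and the table can get large, as warned above. It only returns the minimal difference, not the split itself:

// dp[c][v] == true  <=>  some subset of the items processed so far has
// exactly c elements and total sum v
static int minSplitDifference(int[] a) {
    int n = a.length;
    int total = 0;
    for (int x : a) total += x;
    int half = n / 2;                              // size of the smaller part
    boolean[][] dp = new boolean[half + 1][total + 1];
    dp[0][0] = true;
    for (int x : a) {
        for (int c = half; c >= 1; c--) {          // counts downwards: 0/1 choice per item
            for (int v = total; v >= x; v--) {
                if (dp[c - 1][v - x]) dp[c][v] = true;
            }
        }
    }
    int best = Integer.MAX_VALUE;
    for (int v = 0; v <= total; v++) {             // v = sum of the half-sized part
        if (dp[half][v]) best = Math.min(best, Math.abs((total - v) - v));
    }
    return best;
}

On the first example, {100, 210, 100, 75, 340}, this returns 5, matching the expected answer.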
A DP solution will give O(n) time: use two arrays, iterate one from start to end calculating the running sum, and the other from end to start doing the same thing. Finally, iterate from start to end and take the minimal difference.

Efficiently determine the parity of a permutation

I have an int[] array of length N containing the values 0, 1, 2, .... (N-1), i.e. it represents a permutation of integer indexes.
What's the most efficient way to determine if the permutation has odd or even parity?
(I'm particularly keen to avoid allocating objects for temporary working space if possible....)
I think you can do this in O(n) time and O(n) space by simply computing the cycle decomposition.
You can compute the cycle decomposition in O(n) by simply starting with the first element and following the path until you return to the start. This gives you the first cycle. Mark each node as visited as you follow the path.
Then repeat for the next unvisited node until all nodes are marked as visited.
The parity of a cycle of length k is (k-1)%2, so you can simply add up the parities of all the cycles you have discovered to find the parity of the overall permutation.
Saving space
One way of marking the nodes as visited would be to add N to each value in the array when it is visited. You would then be able to do a final tidying O(n) pass to turn all the numbers back to the original values.
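A compact Java sketch of that cycle walk with the add-N marking trick (my code, not the answer's); it restores the array at the end and returns 0 for an even permutation, 1 for an odd one:

// Parity of a permutation p of 0..N-1 via cycle decomposition.
// Visited entries are marked in place by adding N, then restored afterwards.
static int parity(int[] p) {
    int n = p.length;
    int swaps = 0;                          // sum of (cycle length - 1) over all cycles
    for (int i = 0; i < n; i++) {
        if (p[i] >= n) continue;            // already visited
        int j = i;
        while (p[j] < n) {                  // walk the cycle, marking as we go
            int next = p[j];
            p[j] += n;
            j = next;
            swaps++;
        }
        swaps--;                            // a cycle of length k contributes k - 1
    }
    for (int i = 0; i < n; i++) p[i] -= n;  // tidying pass: restore original values
    return swaps % 2;                       // 0 = even, 1 = odd
}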
I selected the answer by Peter de Rivaz as the correct answer as this was the algorithmic approach I ended up using.
However I used a couple of extra optimisations so I thought I would share them:
Examine the size of data first
If it is greater than 64, use a java.util.BitSet to store the visited elements
If it is less than or equal to 64, use a long with bitwise operations to store the visited elements. This makes it O(1) space for many applications that only use small permutations.
Actually return the swap count rather than the parity. This gives you the parity if you need it, but is potentially useful for other purposes, and is no more expensive to compute.
Code below:
public int swapCount() {
    if (length() <= 64) {
        return swapCountSmall();
    } else {
        return swapCountLong();
    }
}

private int swapCountLong() {
    int n = length();
    int swaps = 0;
    BitSet seen = new BitSet(n);
    for (int i = 0; i < n; i++) {
        if (seen.get(i)) continue;
        seen.set(i);
        for (int j = data[i]; !seen.get(j); j = data[j]) {
            seen.set(j);
            swaps++;
        }
    }
    return swaps;
}

private int swapCountSmall() {
    int n = length();
    int swaps = 0;
    long seen = 0;
    for (int i = 0; i < n; i++) {
        long mask = (1L << i);
        if ((seen & mask) != 0) continue;
        seen |= mask;
        for (int j = data[i]; (seen & (1L << j)) == 0; j = data[j]) {
            seen |= (1L << j);
            swaps++;
        }
    }
    return swaps;
}
You want the parity of the number of inversions. You can do this in O(n * log n) time using merge sort, but either you lose the initial array, or you need extra memory on the order of O(n).
A simple algorithm that uses O(n) extra space and is O(n * log n):
inv = 0
mergesort A into a copy B
for i from 1 to length(A):
    binary search for position j of A[i] in B
    remove B[j] from B
    inv = inv + (j - 1)
That said, I don't think it's possible to do it in sublinear memory. See also:
https://cs.stackexchange.com/questions/3200/counting-inversion-pairs
https://mathoverflow.net/questions/72669/finding-the-parity-of-a-permutation-in-little-space
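For reference, a hedged Java sketch of the merge-sort route mentioned above (my code): it counts inversions over a scratch copy, so the original array is left intact, in O(n log n) time and O(n) extra space; the parity of the returned count is the parity of the permutation.

import java.util.Arrays;

static long countInversions(int[] a) {
    return sortAndCount(Arrays.copyOf(a, a.length), new int[a.length], 0, a.length - 1);
}

private static long sortAndCount(int[] a, int[] buf, int lo, int hi) {
    if (lo >= hi) return 0;
    int mid = (lo + hi) >>> 1;
    long inv = sortAndCount(a, buf, lo, mid) + sortAndCount(a, buf, mid + 1, hi);
    // merge the two sorted halves, counting cross inversions
    int i = lo, j = mid + 1, k = lo;
    while (i <= mid && j <= hi) {
        if (a[i] <= a[j]) {
            buf[k++] = a[i++];
        } else {
            inv += mid - i + 1;   // a[i..mid] are all greater than a[j]
            buf[k++] = a[j++];
        }
    }
    while (i <= mid) buf[k++] = a[i++];
    while (j <= hi) buf[k++] = a[j++];
    System.arraycopy(buf, lo, a, lo, hi - lo + 1);
    return inv;
}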
Consider this approach...
From the permutation, get the inverse permutation by swapping the rows and sorting according to the top row order. This is O(n log n).
Then, simulate performing the inverse permutation and count the swaps, for O(n). This should give the parity of the permutation, according to this
An even permutation can be obtained as the composition of an even
number and only an even number of exchanges (called transpositions) of
two elements, while an odd permutation can be obtained by (only) an odd
number of transpositions.
from Wikipedia.
Here's some code I had lying around which performs an inverse permutation; I just modified it a bit to count swaps. You can remove all mention of a; p contains the inverse permutation.
size_t
permute_inverse (std::vector<int> &a, std::vector<size_t> &p) {
    size_t cnt = 0;
    for (size_t i = 0; i < a.size(); ++i) {
        while (i != p[i]) {
            ++cnt;
            std::swap (a[i], a[p[i]]);
            std::swap (p[i], p[p[i]]);
        }
    }
    return cnt;
}
