How to determine the best postcondition in this question? - java

Consider the following program segment:
/** Precondition: a[0]...a[n-1] is an initialized array of integers, and 0 < n <= a.length. **/
int c = 0;
for (int i = 0; i < n; i++)
    if (a[i] >= 0)
    {
        a[c] = a[i];
        c++;
    }
n = c;
Which is the best postcondition for the segment?
1. a[0] to a[n-1] has been stripped of all positive integers.
2. a[0] to a[n-1] has been stripped of all negative integers.
3. a[0] to a[n-1] has been stripped of all nonnegative integers.
4. a[0] to a[n-1] has been stripped of all occurrences of zero.
5. The updated value of n is less than or equal to the value of n before execution of the segment.
This is a question on the AP CSA exam. The answer key says that 2. is the answer (it states that 5. is also correct but would not be the "best" postcondition). But I am just thinking: "what if ALL the elements in the initial list are negative numbers?" If that's the case, wouldn't it be impossible to strip away any negative number?

"what if ALL the elements in the initial list are negative numbers?" If that's the case, wouldn't it be impossible to strip away any negative number?
I thought so too at first, but note that at the end n is updated:
n = c
With that, if the array contains only negative numbers, n becomes zero, so a[0] to a[n-1] is an empty range, for which the statement "it has been stripped of all negative integers" is (vacuously) true.
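To see this concretely, here is a small test harness (my own sketch, not part of the exam question) that runs the segment on an all-negative array:
public class AllNegativeDemo {
    public static void main(String[] args) {
        int[] a = {-3, -1, -7};
        int n = a.length;
        // The exam segment:
        int c = 0;
        for (int i = 0; i < n; i++)
            if (a[i] >= 0) {
                a[c] = a[i];
                c++;
            }
        n = c;
        // n is now 0, so a[0] to a[n-1] is an empty range:
        System.out.println("n = " + n);   // prints n = 0
        for (int i = 0; i < n; i++)       // prints nothing
            System.out.println(a[i]);
    }
}
The last loop prints nothing, so the claim about a[0] to a[n-1] holds vacuously.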

Note that the algorithm changes the array's values in place, so it really does "strip away" the negative numbers. It does this by iterating through all elements of array a and moving each non-negative element down to the next free lower index.
Then at the end, it sets n to the number of elements that met the criterion (being non-negative), which is one past the index of the last element kept. That's why you have n = c; as the last statement.
So statement 2 is true. The updated n is always less than or equal to the original n, so statement 5 is also true.
One thing to notice: in Java you can still reach all elements if you iterate over the original array, because every Java array has a fixed length once it is created. Let's look at the following array:
int a[] = {-1,2,3,-4,5,-6,7,8,-9,-10};
So if I iterate using n at the end, I just get the values that meet the criteria:
for (int i = 0; i < n; i++) {
    System.out.print(a[i] + ",");
}
This will print 2,3,5,7,8, but if I iterate over a itself, I get all ten values, including the leftovers at the end of the array:
for (int e : a) {
    System.out.print(e + ",");
}
Now it will print 2,3,5,7,8,-6,7,8,-9,-10. So this algorithm leaves some garbage behind.

What is this algorithm doing?

I have the following pseudocode:
Input: Array A with n (= length) >= 2
Output: x
x = 0;
for i = 1 to n do
    for j = i+1 to n do
        if x < |A[i] - A[j]| then
            x = |A[i] - A[j]|;
        end if
    end for
end for
return x;
I converted it to real code to see better what it does:
public class Test
{
    public static void main (String[] args)
    {
        int A[] = {1,2,3,4,5,6,7,8,9};
        int x = 0;
        for (int i = 1; i < A.length; i++)
        {
            for (int j = i + 1; j < A.length; j++)
            {
                if (x < Math.abs(A[i] - A[j]))
                {
                    x = Math.abs(A[i] - A[j]);
                }
            }
        }
        System.out.println(x);
    }
}
The output was 7 with the array in the code.
I used another array (1 to 20) and the output was 18.
With an array of 1 to 30, the output was 28.
The pattern seems clear: the algorithm gives you the antepenultimate / third-from-last array value. Or am I wrong?
I think the pseudocode tries to find the greatest difference between any 2 elements within an array.
Your real code, however, starts from 1 instead of 0 and therefore excludes the first element of the array.
I think the pseudocode is trying to find the greatest difference between two numbers in an array. It should be the difference between the minimum and maximum values of the array.
I personally think this is a really poor algorithm, since it does the task in O(n^2). You can find the minimum and maximum values of an array in O(n), take the difference between them, and the result will be the same. Check the pseudocode:
Input: Array A with n (= length) >= 2
min = A[1]; max = A[1];
for i = 2 to n do
    if min > A[i] then
        min = A[i];
    end if
    if max < A[i] then
        max = A[i];
    end if
end for
return (max - min);
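For reference, a direct Java translation of that linear scan might look like this (a quick sketch; the class name is mine):
public class MaxDifference {
    public static void main(String[] args) {
        int[] A = {1, 2, 3, 4, 5, 6, 7, 8, 9};
        int min = A[0];
        int max = A[0];
        for (int i = 1; i < A.length; i++) {
            if (A[i] < min) min = A[i];   // track the smallest value seen so far
            if (A[i] > max) max = A[i];   // track the largest value seen so far
        }
        System.out.println(max - min);    // prints 8 for this array
    }
}
For the array {1,...,9} it prints 8, matching the corrected nested-loop version.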
The code gives the biggest difference between any two elements in the array.
There are 2 nested loops, each running over each element of the array. The second loop starts at the element after the first loop's element, so that each possible pair is considered only once.
The variable x is the current maximum, initialized to 0. If x is less than the absolute value of the current pair's difference, then we have a new maximum and it is stored.
However, because you directly copied the pseudocode's starting index of 1, you are inadvertently skipping the first element of the array, with index 0. So your Java code is giving you the maximum difference without considering the first element.
If you have an array of values between 1 and n, you are skipping the 1 (in index 0) and the returned value is n - 2, which happens to be the third-to-last value in the array. If you had shuffled the values in the array as a different test case, then you would see that the returned value would have changed to n - 1 as now both 1 and n would be considered (as long as n itself wasn't in the first position).
In any case, you would need to set the index of the first element to 0 so that the first element is considered. Then {1,2,3,4,5,6,7,8,9} would yield 8 (or any other order of those same elements).
Assuming all positive integers, the algorithm in a nutshell finds the difference between the maximum and the minimum value in the array. However, it will not work correctly unless you initialize i to 0 in the for loop.
for (int i = 0; i < A.length; i++)

What is wrong with my Java solution to Codility MissingInteger? [closed]

I am trying to solve the codility MissingInteger problem link:
Write a function:
class Solution { public int solution(int[] A); }
that, given a non-empty zero-indexed array A of N integers, returns the minimal positive integer that does not occur in A.
For example, given:
A[0] = 1
A[1] = 3
A[2] = 6
A[3] = 4
A[4] = 1
A[5] = 2
the function should return 5.
Assume that:
N is an integer within the range [1..100,000];
each element of array A is an integer within the range [−2,147,483,648..2,147,483,647].
Complexity:
expected worst-case time complexity is O(N);
expected worst-case space complexity is O(N), beyond input storage (not counting the storage required for input arguments).
Elements of input arrays can be modified.
My solution is:
import java.util.Iterator;
import java.util.TreeMap;

class Solution {
    TreeMap<Integer,Object> all = new TreeMap<Integer,Object>();
    public int solution(int[] A) {
        for(int i=0; i<A.length; i++)
            all.put(i+1, new Object());
        for(int i=0; i<A.length; i++)
            if(all.containsKey(A[i]))
                all.remove(A[i]);
        Iterator notOccur = all.keySet().iterator();
        if(notOccur.hasNext())
            return (int)notOccur.next();
        return 1;
    }
}
The test results show two wrong answers. Can anyone explain why? Especially the first one: if there is only one element in the array, shouldn't the only right answer be 1?
Here is my answer, got 100/100.
import java.util.HashSet;

class Solution {
    public int solution(int[] A) {
        int num = 1;
        HashSet<Integer> hset = new HashSet<Integer>();
        for (int i = 0; i < A.length; i++) {
            hset.add(A[i]);
        }
        while (hset.contains(num)) {
            num++;
        }
        return num;
    }
}
returns the minimal positive integer that does not occur in A.
So in an array with only one element, if that number is 1, you should return 2. If not, you should return 1.
I think you're probably misunderstanding the requirements a little. Your code is creating keys in a map based on the indexes of the given array, and then removing keys based on the values it finds there. This problem shouldn't have anything to do with the array's indexes: it should simply return the lowest possible positive integer that isn't a value in the given array.
So, for example, if you iterate from 1 to Integer.MAX_VALUE, inclusive, and return the first value that isn't in the given array, that would produce the correct answers. You'll need to figure out what data structures to use, to ensure that your solution scales at O(n).
I wrote my answer inspired by Denes's answer, but a simpler one.
int counter[] = new int[A.length];

// Count the items, only the positive numbers
for (int i = 0; i < A.length; i++)
    if (A[i] > 0 && A[i] <= A.length)
        counter[A[i] - 1]++;

// Return the first number that has count 0
for (int i = 0; i < counter.length; i++)
    if (counter[i] == 0)
        return i + 1;

// If no number has count 0, then all numbers in the sequence 1..A.length
// appear, so the first missing number is the next one after the sequence.
return A.length + 1;
returns the minimal positive integer that does not occur in A
The key here is that zero is not included in the above (as it is not a positive integer). So the function should never return 0. I believe this covers both of your failed cases above.
Edit: the question has been changed since this was written, so this answer isn't really relevant anymore.
Very little wrong.
Just the last line
return 1;
should read
return A.length + 1;
because at this point you've found & removed ALL KEYS from 1 to A.length since you have array entries matching each of them. The test demands that in this situation you must return the next integer above the greatest value found in array A.
All other eventualities (e.g. negative entries, missing 1, missing number between 1 and A.length) are covered by returning the first unremoved key found under iteration. Iteration here is done by "natural ordering", i.e. 1 .. max, by default for a TreeMap. The first unremoved key will therefore be the smallest missing integer.
This change should make the 2 incorrect tests okay again. So 50/50 for correctness.
Efficiency, of course, is another matter and one that carries another 50 points.
Your use of the TreeMap data structure here brings a time penalty when evaluating the test results. Simpler data structures (that essentially use your algorithm) would be faster.
This more primitive algorithm avoids sorting: it copies all entries > 0 onto a new array of length 100001 so that index x holds value x. It actually runs faster than Serdar's code with medium and large input arrays.
public int solution(int[] A)
{
    int i = 0,
        count = 0,
        N = A.length;
    int[] B = new int[100001];          // Initially all entries are zero
    for (i = 0; i < N; i++)             // Copy all entries > 0 into array B ...
    {
        if (A[i] > 0 && A[i] < 100001)
        {
            B[A[i]] = A[i];             // ... putting value x at index x in B ...
            count++;                    // ... and keep a count of positives
        }
    }
    for (i = 1; i < count + 1; i++)     // Find first empty element in B
    {
        if (B[i] == 0)
        {
            return i;                   // Index of empty element = missing int
        }
    }
    // No unfilled B elements above index 0 ?
    return count + 1;                   // => return int above highest filled element
}

Compute smaller and bigger values for an array position

I have the following problem I need to optimize. For a given array(with duplicated keys allowed), for each position i in the array, I need to compute all bigger values right of i, and all smaller values left of i. If we have:
1 1 4 3 5 6 7 and i = 3 (value 3), the count of smaller values to the left of i is 1 (no repeated keys), and to the right, the number of bigger values is 3.
The brute force solution of this problem is ~N^2, and with some extra space I can manage to compute the smaller values from the bigger ones, so reducing complexity to ~(N^2)/2.
My question is: is there a faster way to get it done? Maybe NlgN? I imagine there is a data structure out there I don't know which will allow me to do the computation faster.
EDIT: Thank you all for your replies and discussions. You can find two good solutions to the problem below. Always a pleasure learning from developers on Stack Overflow.
Here's an O(n log n) solution.
As hinted by @SayonjiNakate, the solution using a segment tree (I used a Fenwick tree in my implementation) runs in O(n log M) time, where M is the maximum possible value in the array.
Firstly, note that the problem "number of smaller elements on the left" is equivalent to the problem "number of greater elements on the right" by reversing and negating the array. So, in my explanation below I only describe the "number of smaller elements on the left", which I call "lesser_left_count".
Algorithm for lesser_left_count:
The idea is to be able to find the total of numbers smaller than a specific number.
Define an array tree of size up to MAX_VALUE, which stores the value 1 for seen numbers and 0 otherwise.
Then as we traverse the array, when we see a number num, just assign the value 1 to tree[num] (update operation). Then lesser_left_count for a number num is the sum from 1 to num-1 (sum operation) so far, since all smaller numbers to the left of current position would have been set to 1.
Simple right? If we use Fenwick tree, the update and sum operation can be done each in O(log M) time, where M is the maximum possible value in the array. Since we are iterating over the array, total time is O(n log M).
The only disadvantage of the naive solution is that it uses a lot of memory as M gets bigger (I set M = 2^20-1 in my code, which takes around 4MB of memory). This can be improved by mapping the distinct integers in the array to smaller integers (in a way that preserves the order). The mapping can be done simply in O(n log n) by sorting the array. So the number M can be reinterpreted as "the number of distinct elements in the array".
So memory wouldn't be a problem anymore, because if after this improvement you still needed huge memory, that would mean there are that many distinct numbers in your array, and even O(n) time would already be too high to compute on a normal machine anyway.
For the sake of simplicity, I didn't include that improvement in my code.
Oh, and since Fenwick tree only works for positive numbers, I converted the numbers in the array to be minimum 1. Note that this doesn't change the result.
Python code:
MAX_VALUE = 2**20-1
f_arr = [0]*MAX_VALUE

def reset():
    global f_arr, MAX_VALUE
    f_arr[:] = [0]*MAX_VALUE

def update(idx, val):
    global f_arr
    while idx < MAX_VALUE:
        f_arr[idx] += val
        idx += (idx & -idx)

def cnt_sum(idx):
    global f_arr
    result = 0
    while idx > 0:
        result += f_arr[idx]
        idx -= (idx & -idx)
    return result

def count_left_less(arr):
    reset()
    result = [0]*len(arr)
    for idx, num in enumerate(arr):
        cnt_prev = cnt_sum(num-1)
        if cnt_sum(num) == cnt_prev:  # If we haven't seen num before
            update(num, 1)
        result[idx] = cnt_prev
    return result

def count_left_right(arr):
    arr = [x for x in arr]
    min_num = min(arr)
    if min_num <= 0:                          # Got nonpositive numbers!
        arr = [x - min_num + 1 for x in arr]  # Shift so the minimum becomes 1
    left = count_left_less(arr)
    arr.reverse()                             # Reverse for greater_right_count
    max_num = max(arr)
    arr = [max_num + 1 - x for x in arr]      # Negate the entries, keep minimum 1
    right = count_left_less(arr)
    right.reverse()                           # Reverse the result, to align with original array
    return (left, right)

def main():
    arr = [1, 1, 3, 2, 4, 5, 6]
    (left, right) = count_left_right(arr)
    print('Original array:    ' + str(arr))
    print('Lesser left count: ' + str(left))
    print('Greater right cnt: ' + str(right))

if __name__ == '__main__':
    main()
will produce:
Original array: [1, 1, 3, 2, 4, 5, 6]
Lesser left count: [0, 0, 1, 1, 3, 4, 5]
Greater right cnt: [5, 5, 3, 3, 2, 1, 0]
or if you want Java code:
import java.util.Arrays;

class Main{
    static int MAX_VALUE = 1048575;
    static int[] fArr = new int[MAX_VALUE];

    public static void main(String[] args){
        int[] arr = new int[]{1,1,3,2,4,5,6};
        System.out.println("Original array: "+toString(arr));
        int[][] leftRight = lesserLeftRight(arr);
        System.out.println("Lesser left count: "+toString(leftRight[0]));
        System.out.println("Greater right cnt: "+toString(leftRight[1]));
    }

    public static String toString(int[] arr){
        String result = "[";
        for(int num: arr){
            if(result.length()!=1){
                result+=", ";
            }
            result+=num;
        }
        result+="]";
        return result;
    }

    public static void reset(){
        Arrays.fill(fArr,0);
    }

    public static void update(int idx, int val){
        while(idx < MAX_VALUE){
            fArr[idx]+=val;
            idx += (idx & -idx);
        }
    }

    public static int cntSum(int idx){
        int result = 0;
        while(idx > 0){
            result += fArr[idx];
            idx -= (idx & -idx);
        }
        return result;
    }

    public static int[] lesserLeftCount(int[] arr){
        reset();
        int[] result = new int[arr.length];
        for(int i=0; i<arr.length; i++){
            result[i] = cntSum(arr[i]-1);
            if(cntSum(arr[i])==result[i]) update(arr[i],1);
        }
        return result;
    }

    public static int[][] lesserLeftRight(int[] arr){
        int[] left = new int[arr.length];
        int min = Integer.MAX_VALUE;
        for(int i=0; i<arr.length; i++){
            left[i] = arr[i];
            if(min>arr[i]) min=arr[i];
        }
        // Shift so the smallest value becomes 1 (the Fenwick tree needs positive indices)
        for(int i=0; i<arr.length; i++) left[i] += 1 - min;
        left = lesserLeftCount(left);
        int[] right = new int[arr.length];
        int max = Integer.MIN_VALUE;
        for(int i=0; i<arr.length; i++){
            right[i] = arr[arr.length-1-i];
            if(max<right[i]) max=right[i];
        }
        for(int i=0; i<arr.length; i++) right[i] = max+1-right[i];
        right = lesserLeftCount(right);
        int[] rightFinal = new int[right.length];
        for(int i=0; i<right.length; i++) rightFinal[i] = right[right.length-1-i];
        return new int[][]{left, rightFinal};
    }
}
which will produce the same result.
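As an aside, the coordinate-compression improvement mentioned above (mapping distinct values to 1..D while preserving order, so that M becomes the number of distinct elements) could look roughly like the following Java sketch; it is my illustration, not part of the answer's code:
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

class Compress {
    // Maps each value to its rank (1..D, where D = number of distinct values),
    // preserving the relative order of the values.
    static int[] compress(int[] arr) {
        int[] sorted = arr.clone();
        Arrays.sort(sorted);                      // O(n log n)
        Map<Integer, Integer> rank = new HashMap<>();
        int next = 1;
        for (int v : sorted) {
            if (!rank.containsKey(v)) {
                rank.put(v, next++);              // first occurrence gets the next rank
            }
        }
        int[] result = new int[arr.length];
        for (int i = 0; i < arr.length; i++) {
            result[i] = rank.get(arr[i]);
        }
        return result;
    }
}
Feeding compress(arr) into lesserLeftCount would let MAX_VALUE shrink to the number of distinct elements plus one.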
Try the segment tree data structure used for solving RMQ.
It would give you exactly O(n log n).
Also look into the RMQ problem generally; your problem may be reducible to it.
Here's a relatively simple solution that's O(N lg(N)) that doesn't rely on the entries being among finitely many integers (in particular, it should work for any ordered data type).
We assume the output is to be stored in two arrays; lowleft[i] will at the end contain the number of distinct values x[j] with j < i and x[j] < x[i], and highright[i] will at the end contain the number of distinct values x[j] with j > i and x[j] > x[i].
Create a balanced tree data structure that maintains in each node, the number of nodes in the subtree rooted at that node. This is fairly standard, but not a part of the Java standard library I think; it's probably easiest to do an AVL tree or so. The type of the values in the nodes should be the type of the values in your array.
Now first iterate forward through the array. We start with an empty balanced tree. For every value x[i] we encounter, we enter it into the balanced tree (near the end there are O(N) entries in this tree, so this step takes O(lg(N)) time). When searching for the position to enter x[i], we keep track of the number of values less than x[i] by adding up the sizes of all left subtrees whenever we take the right subtree, and adding what will be the size of the left subtree of x[i]. We enter this number into lowleft[i].
If the value x[i] is already in the tree, we just carry on with the next iteration of this loop. If the value x[i] is not in there, we enter it and rebalance the tree, taking care to update the subtree sizes correctly.
Each iteration of this loop takes O(lg(N)) steps, for a total of O(N lg(N)). We now start with an empty tree and do the same thing iterating backward through the array, finding the position for every x[i] in the tree, and every time recording the size of all subtrees to the right of the new node as highright[i]. Total complexity therefore O(N lg(N)).
Here is an algorithm which should give you O(NlgN):
Iterate over the list once and build a map of key => indexList. So for every key (element in the array) you store a list of all the indices where that key occurs in the array. This will take O(N) (iterate over the list) + N*O(1) (appending N items to lists) steps, so this step is O(N). The second step requires that these lists are sorted, which they will be: as we iterate over the list from left to right, a newly inserted index in a list will always be larger than all the others already in there.
Iterate over the list again, and for each element search the index lists of all keys which are larger than the current element for the first index which is after the current index. This gives you the number of elements to the right of the current one which are larger than it. As the index lists are sorted, you can do a binary search, which will take O(k * lgN) steps with k being the number of keys larger than the current one. If the number of keys has an upper limit, then this is a constant as far as big-O is concerned. The second step here is to search all smaller keys and find the last index in the list which is prior to the current one. This gives you the number of elements to the left of the current one which are smaller. Same reasoning as above, this is O(k * lgN).
So assuming the number of keys is limited this should give you O(N) + N * 2 * O(lgN) so overall O(NlgN) if I'm not mistaken.
Edit: Pseudo code:
int[] list;
map<int => int[]> valueIndexMap;

for (int i = 0; i < list.length; ++i) {                  // N iterations
    int currentElement = list[i];                        // O(1)
    int[] indexList = valueIndexMap[currentElement];     // O(1)
    indexList.Append(i);                                 // O(1)
}

for (int i = 0; i < list.length; ++i) {                  // N iterations
    int currentElement = list[i];                        // O(1)
    int numElementsLargerToTheRight;
    int numElementsSmallerToTheLeft;
    for (int k = currentElement + 1; k < maxKeys; ++k) { // k iterations with k being const
        int[] indexList = valueIndexMap[k];              // O(1)
        int firstIndexBiggerThanCurrent = indexList.BinaryFindFirstEntryLargerThan(i); // O(lgN)
        numElementsLargerToTheRight += indexList.Length - firstIndexBiggerThanCurrent; // O(1)
    }
    for (int k = currentElement - 1; k >= 0; --k) {      // k iterations with k being const
        int[] indexList = valueIndexMap[k];              // O(1)
        int lastIndexSmallerThanCurrent = indexList.BinaryFindLastEntrySmallerThan(i); // O(lgN)
        numElementsSmallerToTheLeft += lastIndexSmallerThanCurrent;                    // O(1)
    }
}
Update: I tinkered around with a C# implementation in case anyone is interested;

Find sum of integer array without overflow

Given an array of integers (positive and negative), each having at most K bits (plus the sign bit), it is known that the sum of all the integers in the array also has at most K bits (plus the sign bit). Design an algorithm that computes the sum of the integers in the array, with all intermediate sums also having at most K bits (plus the sign bit). [Hint: find in what order you should add positive and negative numbers.]
This is a question from interview material not a homework
I am actually thinking of creating two separate arrays, one for positive and the other for negative numbers, sorting both of them, and then adding so that the most negative gets added to the most positive... But this seems to have O(n log n) time complexity (to sort) and O(n) space complexity. Please help!
First note that even if you let the intermediate results overflow, the final result will always be correct if it can be represented. This is because integral types of any size act like cyclic groups under addition in most languages, including Java (but not in C, where signed integer overflow is undefined behavior, nor in C#, which can be made to throw an overflow exception).
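For instance, summing {Integer.MAX_VALUE, 1, -2} from left to right overflows at an intermediate step, yet the wrapped arithmetic still lands on the correct total (a quick illustrative snippet of this point, not of the algorithm below):
public class WrapDemo {
    public static void main(String[] args) {
        int[] a = {Integer.MAX_VALUE, 1, -2};
        int sum = 0;
        for (int v : a) {
            sum += v;                // the second addition wraps to Integer.MIN_VALUE
        }
        System.out.println(sum);     // prints 2147483646, the mathematically correct total
    }
}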
If you still want to prevent overflow, here's how to perform it in-place and in linear time:
Split the array in place into its negative entries (in any order) and its positive entries (in any order). Zero can end up anywhere. In other words, perform one quick-sort partition step with the pivot being zero.
Let ni point to the start of the array (where the negative entries are located).
Let pi point to the end of the array.
Let sum be zero.
While pi >= ni:
    if sum is negative,
        add arr[pi] to the sum;
        if arr[pi] is negative (we've run out of positive addends) and sum is positive (an overflow has occurred), the result overflows;
        decrement pi.
    else
        add arr[ni] to the sum;
        if arr[ni] is positive and sum is negative, the result overflows;
        increment ni.
Finally, check whether sum has more than K bits. If it does, declare that the result overflows.
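A minimal Java sketch of these steps, assuming K = 31 so that "K bits plus a sign bit" is exactly Java's int range (class and method names are mine, not from the question); under that assumption the final K-bit check collapses into the int overflow checks themselves:
public class SafeSum {

    // Sums arr without any intermediate sum leaving the int range, or throws if the total itself overflows.
    static int safeSum(int[] arr) {
        // One partition step with pivot 0: negatives to the left, non-negatives to the right (modifies arr).
        int lo = 0, hi = arr.length - 1;
        while (lo <= hi) {
            if (arr[lo] < 0) {
                lo++;
            } else {
                int tmp = arr[lo]; arr[lo] = arr[hi]; arr[hi] = tmp;
                hi--;
            }
        }
        int ni = 0;                  // walks the negative entries from the left
        int pi = arr.length - 1;     // walks the non-negative entries from the right
        int sum = 0;
        while (ni <= pi) {
            if (sum < 0) {
                boolean addendNegative = arr[pi] < 0;      // true only once the non-negatives are used up
                sum += arr[pi--];
                if (addendNegative && sum >= 0) {          // negative + negative can never legally be >= 0
                    throw new ArithmeticException("sum overflows");
                }
            } else {
                boolean addendNonNegative = arr[ni] >= 0;  // true only once the negatives are used up
                sum += arr[ni++];
                if (addendNonNegative && sum < 0) {        // non-negative + non-negative can never legally be < 0
                    throw new ArithmeticException("sum overflows");
                }
            }
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(safeSum(new int[]{Integer.MAX_VALUE, 1, -2}));   // prints 2147483646
    }
}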
The main idea is to iterate over the array with two indexes, one for positive and one for negative elements. When the sum is negative we search for the next positive element (using the corresponding index) to add to the sum; otherwise, for the next negative one.
This code should work:
import java.util.function.IntPredicate;
import org.jetbrains.annotations.NotNull;

public final class ArrayAdder {

    @NotNull
    private final int[] array;
    private int sum;

    public ArrayAdder(@NotNull int[] array) {
        this.array = array;
    }

    public int sum() {
        sum = 0;
        final IntPredicate positive = v -> v > 0;
        final Index positiveIndex = new Index(positive);
        final Index negativeIndex = new Index(positive.negate());
        while (positiveIndex.index < array.length || negativeIndex.index < array.length) {
            sum += sum < 0 ? sum(positiveIndex, negativeIndex) : sum(negativeIndex, positiveIndex);
        }
        return sum;
    }

    private int sum(@NotNull Index mainIndex, @NotNull Index secondaryIndex) {
        int localSum = 0;
        // searching for the next suitable element
        while (mainIndex.index < array.length && secondaryIndex.sign.test(array[mainIndex.index])) {
            mainIndex.index++;
        }
        if (mainIndex.index < array.length) {
            localSum += array[mainIndex.index++];
        } else {
            // add the remaining elements
            for (; secondaryIndex.index < array.length; secondaryIndex.index++) {
                if (secondaryIndex.sign.test(array[secondaryIndex.index])) {
                    localSum += array[secondaryIndex.index];
                }
            }
        }
        return localSum;
    }

    private static final class Index {
        @NotNull
        private final IntPredicate sign;
        private int index;

        public Index(@NotNull IntPredicate sign) {
            this.sign = sign;
        }
    }
}
Option 1:
Sort the array in-place and iterate over half of it. At every step, add the i-th element to the (size-i-1)-th element.
This doesn't work if there are a few large numbers but many small negative numbers (or vice versa).
Option 2 (improvement):
Sort in-place.
Keep two indexes, one at the start and one at the end, and exit the loop when they meet. At every step, if the result so far is negative, add the value at the second index and advance it; if the result is positive, add the value at the first index and advance it. A sketch of this follows below.
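A rough Java sketch of option 2 (again assuming the values and the total fit in an int; the names are illustrative):
import java.util.Arrays;

public class SortedTwoPointerSum {
    public static int sum(int[] arr) {
        Arrays.sort(arr);                  // smallest (most negative) values first
        int lo = 0, hi = arr.length - 1;
        int sum = 0;
        while (lo <= hi) {
            if (sum < 0) {
                sum += arr[hi--];          // running sum is negative: add the largest remaining value
            } else {
                sum += arr[lo++];          // running sum is non-negative: add the smallest remaining value
            }
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(sum(new int[]{Integer.MAX_VALUE, 1, -2}));  // prints 2147483646
    }
}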

Understanding bitwise condition check for getting all possible sums of combinations in an array passed

I have an algorithm whose objective is to give all possible sums of all combinations in an array of integers.
private void arraySumPermutation(int value, int[] arr){
    int N = arr.length;
    for(int i=0; i<1<<N; i++){
        int sum = 0;
        for(int j=0; j<N; j++){
            if((i & 1<<j)>0){
                iCount++;
                sum += arr[j];
                //S.O.P(sum);
            }
        }
    }
}
I am not able to understand the inner if condition with the bitwise AND.
What is the objective of this inner if condition?
if((i & 1<<j)>0)
Let's represent combinations of an N-element set as N-bit numbers, where the jth bit is 1 if the jth item is included in the combination, and 0 otherwise. This way you can represent all possible combinations as numbers in the range [0, 2^N).
The outer loop iterates over these numbers (1 << N == 2^N).
The inner loop iterates over the items of the set, and the if condition checks whether the jth item is included in the current combination. In other words, it checks whether the jth bit of i is 1.
1<<j gives you a number where only the jth bit is 1, i & (1 << j) clears all bits of i other than that bit, and > 0 checks that the result is not 0.
Note that this code (with ints) only works for N < 31.
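For instance, with arr = {1, 2, 3} the outer loop runs i from 0 to 7; i = 5 is binary 101, so it picks arr[0] and arr[2] and yields the sum 4. A small self-contained sketch (names are mine) that prints every subset sum:
public class SubsetSums {
    public static void main(String[] args) {
        int[] arr = {1, 2, 3};
        int N = arr.length;
        for (int i = 0; i < (1 << N); i++) {       // each i encodes one subset
            int sum = 0;
            for (int j = 0; j < N; j++) {
                if ((i & (1 << j)) > 0) {          // is element j in subset i?
                    sum += arr[j];
                }
            }
            System.out.println("i=" + i + " sum=" + sum);
        }
    }
}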
