I found a problem online about producing a certain kind of sequence.
Given an array
A = [A1, A2, ..., AN]
rearrange it so that
A1 < A2 < ... < Am > Am+1 > ... > AN for some index m, with m between 1 and N inclusive.
I want to find the minimum number of swaps needed to accomplish this.
For example, given
1 8 10 3 7
swapping 3 and 7 gives the required sequence, so the answer is 1.
I found this code in the editorial:
int res = 0;
boolean[] done = new boolean[n];
for (int i = 0; i < n; i++) {
    // Find the position of the smallest unprocessed element.
    int index = -1;
    for (int j = 0; j < n; j++) {
        if (!done[j] && (index == -1 || values[j] < values[index]))
            index = j;
    }
    // Count unprocessed elements on each side of it.
    int left = 0, right = 0;
    for (int j = 0; j < index; j++)
        if (!done[j])
            left++;
    for (int j = index + 1; j < n; j++)
        if (!done[j])
            right++;
    // Moving it to the nearer end costs min(left, right) adjacent swaps.
    res += Math.min(left, right);
    done[index] = true;
}
return res;
I can't understand what this code is doing. How can I find the minimum number of swaps? Is this a standard algorithm question? And is the O(n^2) time complexity good enough?
(Note: swap seems to mean swap of two adjacent elements.)
The idea behind the code is as follows. The minimum element x must be moved either to the first or last position in the array. The number of swaps to move x does not depend on which swaps not involving x are performed. For every valid solution, the swaps not involving x are a solution for the array with x removed. Thus, there exists an optimal solution that moves x to the first or last position and then recursively deals with the remaining elements.
The structure of the code is to find the minimum unprocessed element (inner loop 1; done[j] is a flag indicating whether the element at position j has been processed), determine how many swaps are needed to move it to the first position (inner loop 2), and determine how many swaps are needed to move it to the last position (inner loop 3). We don't count swaps over processed elements because, when we're moving the current element, those elements already have been moved out of the way.
The running time could be improved to O(n log n) by substituting merge sort for the implicit selection sort and using a Fenwick tree to count the number of unprocessed elements in a range.
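That improvement could look like the following sketch (class and method names are mine, not the editorial's): sorting the positions by value replaces the implicit selection sort, and a Fenwick tree over positions, holding 1 for every still-unprocessed element, replaces the two counting loops.

```java
import java.util.Arrays;
import java.util.Comparator;

class MinSwapsBitonic {
    static int[] fen; // 1-indexed Fenwick tree over array positions

    static void update(int i, int delta) {
        for (; i < fen.length; i += i & -i) fen[i] += delta;
    }

    static int prefixSum(int i) { // sum of fen[1..i]
        int s = 0;
        for (; i > 0; i -= i & -i) s += fen[i];
        return s;
    }

    // Minimum adjacent swaps to make the array strictly increase then decrease.
    static int minSwaps(int[] values) {
        int n = values.length;
        Integer[] order = new Integer[n];
        for (int i = 0; i < n; i++) order[i] = i;
        // Process positions in increasing order of value (ties broken arbitrarily).
        Arrays.sort(order, Comparator.comparingInt(p -> values[p]));

        fen = new int[n + 1];
        for (int i = 1; i <= n; i++) update(i, 1); // all positions start unprocessed

        int res = 0;
        for (int p : order) {                            // p is 0-based
            int left = prefixSum(p);                     // unprocessed positions before p
            int right = prefixSum(n) - prefixSum(p + 1); // unprocessed positions after p
            res += Math.min(left, right);
            update(p + 1, -1);                           // mark p processed
        }
        return res;
    }
}
```

Each element does one O(log n) update and two O(log n) queries, so the whole pass is O(n log n).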
I don't know how to prove the recursive algorithm for this problem correct. I can't see how to apply mathematical induction to the proof (although I am familiar with mathematical induction).
The problem:
Given an array of integers nums and a positive integer k, find whether it's possible to divide this array into k non-empty subsets whose sums are all equal.
Example 1:
Input: nums = [4, 3, 2, 3, 5, 2, 1], k = 4
Output: True
Explanation: It's possible to divide it into 4 subsets (5), (1, 4), (2,3), (2,3) with equal sums.
Note:
1 <= k <= len(nums) <= 16. 0 < nums[i] < 10000.
The algorithm:
(https://leetcode.com/problems/partition-to-k-equal-sum-subsets/solution/)
I traced the case i = 0 in the first recursive call: groups[i] becomes v, and then search(groups, row, nums, target) is evaluated. However, at that point I can't see how the boolean value it returns influences the rest of the computation.
import java.util.Arrays;

class Solution {
    public boolean search(int[] groups, int row, int[] nums, int target) {
        if (row < 0) return true;
        int v = nums[row--];
        for (int i = 0; i < groups.length; i++) {
            if (groups[i] + v <= target) {
                groups[i] += v;
                if (search(groups, row, nums, target)) return true;
                groups[i] -= v;
            }
            if (groups[i] == 0) break;
        }
        return false;
    }

    public boolean canPartitionKSubsets(int[] nums, int k) {
        int sum = Arrays.stream(nums).sum();
        if (sum % k > 0) return false;
        int target = sum / k;

        Arrays.sort(nums);
        int row = nums.length - 1;
        if (nums[row] > target) return false;
        while (row >= 0 && nums[row] == target) {
            row--;
            k--;
        }
        return search(new int[k], row, nums, target);
    }
}
The method canPartitionKSubsets first computes the total sum of all the numbers.
If a valid partition exists, then the sum of the elements in each subset must be target = sum / k, so they check that sum is divisible by k.
They check whether the last number (the largest, since the array has just been sorted) is greater than target. If it were, that number couldn't fit in any group, so they return false in that case.
Now comes the call to search. But let's first make the interpretation of the variables clear.
In each call to search the variable groups represent the sums of the numbers currently being considered added to each one of the k groups. The variable row represents the position in the original list of the number currently being considered to be added to one of the groups.
Inside search, the loop tries all the cases of adding the number at position row to each one of the k groups. It adds it and recursively searches for a complete solution that way. If there is none, it removes the number from the group it was added to.
The groups are filled with numbers from the list in order, starting from group 0 up to group k-1.
They break the loop when they reach a group whose current sum is zero. For the problem statement exactly as you wrote it, this is an error in the algorithm. The step is there only to prune the search (empty groups are interchangeable, so if the number fails in one empty group it fails in all of them), but the sum-zero test identifies empty groups correctly only under the assumption that all numbers are positive, which, at least in your transcription, was not given. If the problem allows non-positive numbers, just remove that line from the code.
The algorithm works simply because it tries all cases. If you arrange all possibilities of placing some of the numbers from the list into some of the k groups in a tree, starting with placing none at the root and branching each time an additional number is placed, then the tree nodes correspond to the recursive calls and the leaves are the arrangements in which all numbers have been placed. The algorithm is doing depth-first search on this tree, except for the line
if (groups[i] == 0) break;
which is wrong for the problem as stated.
After reading the explanation above, it finally clicked for me.
As we know, row represents the position in the original list of the number currently being considered for placement into one of the groups. If row < 0, all the numbers have been added to groups, so we are done. Otherwise we try to put v into a suitable group. When the loop reaches an empty group, there is no point trying any further groups: empty groups are interchangeable, so if this one can't lead to a solution with v, neither can another. If the current group doesn't fit v, we try the next group.
Note: the problem guarantees nums[i] > 0. See the Note in the problem statement I edited in above.
1 <= k <= len(nums) <= 16. 0 < nums[i] < 10000.
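Since the real problem does guarantee nums[i] > 0, the code as given is safe to run. Here is a condensed, commented copy (static methods instead of an instance, behavior otherwise unchanged) that can be exercised on the example input:

```java
import java.util.Arrays;

class PartitionDemo {
    static boolean search(int[] groups, int row, int[] nums, int target) {
        if (row < 0) return true;            // every number has been placed: success
        int v = nums[row--];
        for (int i = 0; i < groups.length; i++) {
            if (groups[i] + v <= target) {
                groups[i] += v;              // tentatively place v in group i
                if (search(groups, row, nums, target)) return true;
                groups[i] -= v;              // backtrack
            }
            if (groups[i] == 0) break;       // empty groups are interchangeable (valid since nums[i] > 0)
        }
        return false;
    }

    static boolean canPartitionKSubsets(int[] nums, int k) {
        int sum = Arrays.stream(nums).sum();
        if (sum % k > 0) return false;       // total must split evenly into k parts
        int target = sum / k;
        Arrays.sort(nums);
        int row = nums.length - 1;
        if (nums[row] > target) return false;
        while (row >= 0 && nums[row] == target) { row--; k--; } // each such number fills a group alone
        return search(new int[k], row, nums, target);
    }
}
```

Calling PartitionDemo.canPartitionKSubsets(new int[]{4, 3, 2, 3, 5, 2, 1}, 4) returns true, matching Example 1.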
Suppose you have a method subArrayLeftShift(a, i) which shifts the subarray a[i..n-1] left, where n is the array length. That means the elements a[i+1], ..., a[n-1] each move one place to the left, and the original a[i] becomes the last one.
More formally, here is the function implementation:
public static void subArrayLeftShift(int[] a, int i){
    if (a.length == 0) return;
    int last = a.length - 1;
    int insertToLast = a[i];
    for (; i < last; i++){
        a[i] = a[i + 1];
    }
    a[last] = insertToLast;
}
Now for the question: implement a function that receives an unsorted array, and returns the minimal number of calls to subArrayLeftShift for sorting the array.
In the interview I couldn't find a way to do it. I managed to find the minimal number of calls for every example I wrote down to build intuition, but couldn't find a way to generalize it.
Do you know how to solve it?
I propose the following algorithm to solve the problem:
Find the minimum number in the array that is not in sorted position (i.e., has a smaller number to its right in the array). Let this number be x.
Count how many numbers in the array are greater than x. Let this count be y.
Since each call to the function sends the chosen number to the last position, the optimal strategy is to call the function for each out-of-place number in increasing order. We start with x. We continue with the next number bigger than x, so that it ends up to the right of x and is therefore sorted, and continue in the same fashion. How many calls in total? One for x, plus one for each of the y numbers bigger than x, so the total number of calls is 1 + y.
public static int minimumCalls(int[] a) {
    // x = the smallest value that has a smaller value somewhere to its right
    int x = Integer.MAX_VALUE;
    for (int i = 0; i < a.length - 1; i++) {
        for (int j = i + 1; j < a.length; j++) {
            if (a[i] > a[j]) {
                if (a[i] < x) x = a[i];
                break;
            }
        }
    }
    if (x == Integer.MAX_VALUE) return 0; // already sorted
    int minCalls = 1;                     // one call moves x itself
    for (int v : a) {
        if (v > x) minCalls++;            // one call for each value bigger than x
    }
    return minCalls;
}
The idea behind my thinking is that you must invoke the method once for the smallest value that has a smaller value after it in the array, and then once for every value greater than it, sending them to the end in increasing order. The name of the method, subArrayLeftShift, I feel, is designed to throw you off and drag your attention away from thinking of it this way.
It's much easier to think of each call as moving a single larger value to the end of the array than trying to shift the smaller ones to the left.
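One way to sanity-check the 1 + y formula is to actually perform the shifts it prescribes and confirm the array comes out sorted in exactly that many calls. A sketch (class and method names are mine):

```java
class ShiftSortCheck {
    // The given primitive: move a[i] to the end, shifting a[i+1..] left.
    static void subArrayLeftShift(int[] a, int i) {
        if (a.length == 0) return;
        int last = a.length - 1, v = a[i];
        for (; i < last; i++) a[i] = a[i + 1];
        a[last] = v;
    }

    // Predicted minimum: 0 if sorted; otherwise 1 + (count of elements > x),
    // where x is the smallest element with a smaller element to its right.
    static int predictedCalls(int[] a) {
        int x = Integer.MAX_VALUE;
        for (int i = 0; i < a.length; i++)
            for (int j = i + 1; j < a.length; j++)
                if (a[i] > a[j]) { x = Math.min(x, a[i]); break; }
        if (x == Integer.MAX_VALUE) return 0; // already sorted
        int y = 0;
        for (int v : a) if (v > x) y++;
        return 1 + y;
    }

    // Perform the strategy: repeatedly shift the smallest out-of-place element.
    static int sortAndCount(int[] a) {
        int calls = 0;
        while (true) {
            int xPos = -1; // position of smallest element with a smaller one after it
            for (int i = 0; i < a.length; i++)
                for (int j = i + 1; j < a.length; j++)
                    if (a[i] > a[j] && (xPos == -1 || a[i] < a[xPos])) { xPos = i; break; }
            if (xPos == -1) return calls; // sorted
            subArrayLeftShift(a, xPos);
            calls++;
        }
    }
}
```

For [2, 1, 3] both methods give 2 (one call cannot sort it), and for [3, 1, 2] both give 1.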
I have an assignment where I have to write an algorithm which 'splits' the array in two. Left side should be odd numbers, and right side should be even numbers. Both sides should be sorted in ascending order. I'm not allowed to use temp arrays or existing api.
I have managed to make a working method; the problem is that with an array of, say, 100 000 integers it takes approximately 15 seconds to finish. The requirement is 0.1 seconds, so I obviously have a lot to improve. I'm not looking for someone to spoon-feed me the answer, just a nudge in the right direction. Please don't write any working code for me, though I would like to know if and why something I've written is bad!
What I have so far:
public static void delsortering(int[] a){
    int oddnum = 0;
    int n = a.length;
    for(int k : a){ //finds how many odd numbers there are
        if((k & 1) != 0) oddnum++;
    }
    for(int i = 0; i < n; i++){
        if((a[i] & 1) != 0){ //finds odd numbers
            for(int j = 0; j < n; j++){
                if((a[j] & 1) == 0) //looks for even numbers to change pos with
                    swap(a, j, i); //swap and maxValue are my own helper methods
            }
        }
    }
    for (int i = 0; i < n; i++){
        int from = i < oddnum ? 0 : oddnum;
        int to = i < oddnum ? oddnum - i : n - i + oddnum;
        int m = maxValue(a, from, to); //finds max value in specified range
        swap(a, m, to - 1); //puts said max value at specified index
    }
}
Appreciate all the help I can get!
A better solution would be:
First, keep two variables that point to the first and last elements of the array, e.g. x = 0 and y = N-1.
Then move x to the right until you find an even number (everything it passes is odd!), and move y to the left until you find an odd number (everything it passes is even!).
Swap the values at x and y, advance both pointers, and repeat the procedure until x and y cross.
Then you have the array with evens on the right and odds on the left, but not yet ordered. You can count the number of odds during the above procedure, so you know the index k where the two parts are separated.
Sort array[0..k-1], then sort array[k..N-1].
Complexity: O(n) for the first part (x and y each sweep the array only once) and O(n log n) for the two sorts, so O(n log n) overall, which is better than the O(n^2) of your solution.
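A sketch of the two-pointer pass plus the two range sorts (class and method names are mine; Arrays.sort on subranges stands in for your own O(n log n) sort, since the assignment forbids library calls):

```java
import java.util.Arrays;

class OddEvenSort {
    // Partition odds to the left, evens to the right, then sort each half.
    static void delsortering(int[] a) {
        int x = 0, y = a.length - 1;
        while (x < y) {
            while (x < y && (a[x] & 1) != 0) x++; // a[x] odd: already on the correct side
            while (x < y && (a[y] & 1) == 0) y--; // a[y] even: already on the correct side
            if (x < y) {                          // a[x] even, a[y] odd: swap them
                int t = a[x]; a[x] = a[y]; a[y] = t;
                x++; y--;
            }
        }
        // k = number of odd values = start of the even half
        int k = 0;
        while (k < a.length && (a[k] & 1) != 0) k++;
        Arrays.sort(a, 0, k);          // stand-in for your own O(n log n) sort
        Arrays.sort(a, k, a.length);
    }
}
```

For example, {1, 2, 3, 4, 5} becomes {1, 3, 5, 2, 4}: odds sorted ascending, then evens sorted ascending.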
I was given this pseudocode:
Input: Array A with n (= length) >= 2
Output: x
x = 0;
for i = 1 to n do
for j = i+1 to n do
if x < |A[i] - A[j]| then
x = |A[i] - A[j]|;
end if
end for
end for
return x;
I converted it to real code to see more clearly what it does:
public class Test
{
public static void main (String[] args)
{
int A[] = {1,2,3,4,5,6,7,8,9};
int x = 0;
for (int i = 1; i < A.length; i++)
{
for (int j = i + 1; j < A.length; j++)
{
if (x < Math.abs(A[i] - A[j]))
{
x = Math.abs(A[i] - A[j]);
}
}
}
System.out.println(x);
}
}
The output was 7 with the array in the code.
I used another array (1 to 20) and the output was 18.
Array 1-30, the output was 28.
The pattern seems clear: the algorithm gives you the third-from-last array value. Or am I wrong?
I think the pseudocode tries to find the greatest difference between any two elements within an array.
Your real code, however, starts from 1 instead of 0 and therefore excludes the first element of the array.
I think the pseudocode is trying to find the greatest difference between two numbers in an array. That is the difference between the minimum and maximum values of the array.
I personally think this is a really poor algorithm, since it does the task in O(n^2). You can find the minimum and maximum values of an array in O(n), take the difference between them, and the result will be the same. Check this pseudocode:
Input: Array A with n (= length) >= 2
min = A[1]; max = A[1];
for i = 2 to n do
    if min > A[i] then
        min = A[i];
    end if
    if max < A[i] then
        max = A[i];
    end if
end for
return (max - min);
The code gives the biggest difference between any two elements in the array.
There are 2 nested loops, each running over each element of the array. The second loop starts at the element after the first loop's element, so that each possible pair is considered only once.
The variable x is the current maximum, initialized to 0. If x is less than the absolute value of the current pair's difference, then we have a new maximum and it is stored.
However, because you directly copied the pseudocode's starting index of 1, you are inadvertently skipping the first element of the array, with index 0. So your Java code is giving you the maximum difference without considering the first element.
If you have an array of values between 1 and n, you are skipping the 1 (in index 0) and the returned value is n - 2, which happens to be the third-to-last value in the array. If you had shuffled the values in the array as a different test case, then you would see that the returned value would have changed to n - 1 as now both 1 and n would be considered (as long as n itself wasn't in the first position).
In any case, you would need to set the index of the first element to 0 so that the first element is considered. Then {1,2,3,4,5,6,7,8,9} would yield 8 (or any other order of those same elements).
Assuming all positive integers, the algorithm in a nutshell finds the difference between the maximum and the minimum value in the array. However, it will not work correctly unless you initialize i to 0 in the for loop.
for (int i = 0; i < A.length; i++)
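Putting the fix together, a self-contained corrected version (class and method names are mine) that also includes the O(n) min/max variant suggested above:

```java
class MaxDifference {
    // Greatest |a[i] - a[j]| over all pairs, with the loop correctly starting at 0.
    static int maxDiff(int[] a) {
        int x = 0;
        for (int i = 0; i < a.length; i++)
            for (int j = i + 1; j < a.length; j++)
                x = Math.max(x, Math.abs(a[i] - a[j]));
        return x;
    }

    // O(n) version: the answer is always max(a) - min(a).
    static int maxDiffLinear(int[] a) {
        int min = a[0], max = a[0];
        for (int v : a) {
            min = Math.min(min, v);
            max = Math.max(max, v);
        }
        return max - min;
    }

    public static void main(String[] args) {
        int[] a = {1, 2, 3, 4, 5, 6, 7, 8, 9};
        System.out.println(maxDiff(a)); // prints 8, not 7
    }
}
```

Both methods agree on every input, which is a quick way to convince yourself that the O(n) version is equivalent.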
I have the following problem I need to optimize. For a given array (duplicate keys allowed), for each position i in the array, I need to compute the count of bigger values to the right of i and the count of smaller values to the left of i. For example, if we have
1 1 4 3 5 6 7 and i = 3 (value 3), the count of smaller values to the left of i is 1 (repeated keys are counted once), and the number of bigger values to the right is 3.
The brute-force solution to this problem is ~N^2, and with some extra space I can compute the smaller counts from the bigger ones, reducing the work to ~(N^2)/2.
My question is: is there a faster way to get it done? Maybe NlgN? I imagine there is a data structure out there I don't know which will allow me to do the computation faster.
EDIT: Thank you all for your replies and discussions. You can find two good solutions to the problem below. Always a pleasure learning from the developers on Stack Overflow.
Here's an O(n log n) solution.
As hinted by @SayonjiNakate, the solution using a segment tree (I used a Fenwick tree in my implementation) runs in O(n log M) time, where M is the maximum possible value in the array.
Firstly, note that the problem "number of smaller elements on the left" is equivalent to the problem "number of greater elements on the right" by reversing and negating the array. So, in my explanation below I only describe the "number of smaller elements on the left", which I call "lesser_left_count".
Algorithm for lesser_left_count:
The idea is to be able to find the total of numbers smaller than a specific number.
Define an array tree with size up to MAX_VALUE, which stores the value 1 for numbers already seen and 0 otherwise.
Then as we traverse the array, when we see a number num, just assign the value 1 to tree[num] (update operation). Then lesser_left_count for a number num is the sum from 1 to num-1 (sum operation) so far, since all smaller numbers to the left of current position would have been set to 1.
Simple right? If we use Fenwick tree, the update and sum operation can be done each in O(log M) time, where M is the maximum possible value in the array. Since we are iterating over the array, total time is O(n log M).
The only disadvantage of the naive solution is that it uses a lot of memory as M gets bigger (I set M = 2^20-1 in my code, which takes around 4MB of memory). This can be improved by mapping the distinct integers in the array to smaller integers (in a way that preserves the order). The mapping can be done simply in O(n log n) by sorting the array. So M can be reinterpreted as the number of distinct elements in the array.
So memory wouldn't be a problem anymore, because if after this improvement you still needed huge memory, that would mean there are that many distinct numbers in your array, and then the input is so large that even O(n) work would already be too much for a normal machine anyway.
For the sake of simplicity, I didn't include that improvement in my code.
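For reference, that compression step could look like this (a sketch, in Java to match the second listing below; it maps each value to its 1-based rank among the distinct values, so M becomes the number of distinct elements and the result stays Fenwick-friendly, i.e. strictly positive):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

class Compress {
    // Replace each value by its 1-based rank among distinct values (order-preserving).
    static int[] compress(int[] arr) {
        int[] sorted = arr.clone();
        Arrays.sort(sorted);
        Map<Integer, Integer> rank = new HashMap<>();
        for (int v : sorted)
            rank.putIfAbsent(v, rank.size() + 1); // distinct values get ranks 1, 2, ...
        int[] out = new int[arr.length];
        for (int i = 0; i < arr.length; i++) out[i] = rank.get(arr[i]);
        return out;
    }
}
```

For example, compress([5, -2, 5, 100]) yields [2, 1, 2, 3]: relative order is preserved, duplicates share a rank, and the smallest rank is 1.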
Oh, and since Fenwick tree only works for positive numbers, I converted the numbers in the array to be minimum 1. Note that this doesn't change the result.
Python code:
MAX_VALUE = 2**20-1
f_arr = [0]*MAX_VALUE

def reset():
    global f_arr, MAX_VALUE
    f_arr[:] = [0]*MAX_VALUE

def update(idx, val):
    global f_arr
    while idx < MAX_VALUE:
        f_arr[idx] += val
        idx += (idx & -idx)

def cnt_sum(idx):
    global f_arr
    result = 0
    while idx > 0:
        result += f_arr[idx]
        idx -= (idx & -idx)
    return result

def count_left_less(arr):
    reset()
    result = [0]*len(arr)
    for idx, num in enumerate(arr):
        cnt_prev = cnt_sum(num-1)
        if cnt_sum(num) == cnt_prev:  # If we haven't seen num before
            update(num, 1)
        result[idx] = cnt_prev
    return result

def count_left_right(arr):
    arr = [x for x in arr]
    min_num = min(arr)
    if min_num <= 0:  # Got nonpositive numbers!
        arr = [x - min_num + 1 for x in arr]  # Shift so the minimum becomes 1
    left = count_left_less(arr)
    arr.reverse()  # Reverse for greater_right_count
    max_num = max(arr)
    arr = [max_num+1-x for x in arr]  # Negate the entries, keeping the minimum at 1
    right = count_left_less(arr)
    right.reverse()  # Reverse the result, to align with original array
    return (left, right)

def main():
    arr = [1, 1, 3, 2, 4, 5, 6]
    (left, right) = count_left_right(arr)
    print 'Original array: ' + str(arr)
    print 'Lesser left count: ' + str(left)
    print 'Greater right cnt: ' + str(right)

if __name__ == '__main__':
    main()
will produce:
Original array: [1, 1, 3, 2, 4, 5, 6]
Lesser left count: [0, 0, 1, 1, 3, 4, 5]
Greater right cnt: [5, 5, 3, 3, 2, 1, 0]
or if you want Java code:
import java.util.Arrays;

class Main{
    static int MAX_VALUE = 1048575;
    static int[] fArr = new int[MAX_VALUE];

    public static void main(String[] args){
        int[] arr = new int[]{1,1,3,2,4,5,6};
        System.out.println("Original array: " + toString(arr));
        int[][] leftRight = lesserLeftRight(arr);
        System.out.println("Lesser left count: " + toString(leftRight[0]));
        System.out.println("Greater right cnt: " + toString(leftRight[1]));
    }

    public static String toString(int[] arr){
        String result = "[";
        for(int num : arr){
            if(result.length() != 1){
                result += ", ";
            }
            result += num;
        }
        result += "]";
        return result;
    }

    public static void reset(){
        Arrays.fill(fArr, 0);
    }

    public static void update(int idx, int val){
        while(idx < MAX_VALUE){
            fArr[idx] += val;
            idx += (idx & -idx);
        }
    }

    public static int cntSum(int idx){
        int result = 0;
        while(idx > 0){
            result += fArr[idx];
            idx -= (idx & -idx);
        }
        return result;
    }

    public static int[] lesserLeftCount(int[] arr){
        reset();
        int[] result = new int[arr.length];
        for(int i = 0; i < arr.length; i++){
            result[i] = cntSum(arr[i] - 1);
            if(cntSum(arr[i]) == result[i]) update(arr[i], 1);
        }
        return result;
    }

    public static int[][] lesserLeftRight(int[] arr){
        int[] left = new int[arr.length];
        int min = Integer.MAX_VALUE;
        for(int i = 0; i < arr.length; i++){
            left[i] = arr[i];
            if(min > arr[i]) min = arr[i];
        }
        for(int i = 0; i < arr.length; i++) left[i] += 1 - min; // shift so the minimum becomes 1
        left = lesserLeftCount(left);

        int[] right = new int[arr.length];
        int max = Integer.MIN_VALUE;
        for(int i = 0; i < arr.length; i++){
            right[i] = arr[arr.length - 1 - i];
            if(max < right[i]) max = right[i];
        }
        for(int i = 0; i < arr.length; i++) right[i] = max + 1 - right[i];
        right = lesserLeftCount(right);

        int[] rightFinal = new int[right.length];
        for(int i = 0; i < right.length; i++) rightFinal[i] = right[right.length - 1 - i];
        return new int[][]{left, rightFinal};
    }
}
which will produce same result.
Try the segment tree data structure used for solving RMQ.
It would give you exactly O(n log n).
And look into the RMQ problem generally; your problem may be reducible to it.
Here's a relatively simple solution that's O(N lg(N)) that doesn't rely on the entries being among finitely many integers (in particular, it should work for any ordered data type).
We assume the output is to be stored in two arrays; lowleft[i] will at the end contain the number of distinct values x[j] with j < i and x[j] < x[i], and highright[i] will at the end contain the number of distinct values x[j] with j > i and x[j] > x[i].
Create a balanced tree data structure that maintains in each node, the number of nodes in the subtree rooted at that node. This is fairly standard, but not a part of the Java standard library I think; it's probably easiest to do an AVL tree or so. The type of the values in the nodes should be the type of the values in your array.
Now first iterate forward through the array. We start with an empty balanced tree. For every value x[i] we encounter, we enter it into the balanced tree (near the end there are O(N) entries in this tree, so this step takes O(lg(N)) time). When searching for the position to enter x[i], we keep track of the number of values less than x[i] by adding up the sizes of all left subtrees whenever we take the right subtree, and adding what will be the size of the left subtree of x[i]. We enter this number into lowleft[i].
If the value x[i] is already in the tree, we just carry on with the next iteration of this loop. If the value x[i] is not in there, we enter it and rebalance the tree, taking care to update the subtree sizes correctly.
Each iteration of this loop takes O(lg(N)) steps, for a total of O(N lg(N)). We now start with an empty tree and do the same thing iterating backward through the array, finding the position for every x[i] in the tree, and every time recording the size of all subtrees to the right of the new node as highright[i]. Total complexity therefore O(N lg(N)).
Here is an algorithm which should give you O(NlgN):
Iterate over the list once and build a map of key => indexList. So for every key (element value in the array) you store a list of all the indices where that key occurs. This takes O(N) (iterating over the list) plus N*O(1) (appending N items to lists), so this step is O(N). The second step requires these lists to be sorted, which they automatically are: we iterate over the list from left to right, so a newly inserted index is always larger than all the indices already in its list.
Iterate over the list again, and for each element, search the index lists of all keys larger than the current element for the first index that is after the current index. This gives you the number of elements to the right of the current one which are larger than it. As the index lists are sorted you can do a binary search, which takes O(k * lg N) steps with k being the number of keys larger than the current one; if the number of keys has an upper limit, this is a constant as far as big-O is concerned. The second step is to search the lists of all smaller keys and find the number of indices prior to the current one. This gives you the number of elements to the left of the current one which are smaller. By the same reasoning this is O(k * lg N).
So, assuming the number of distinct keys is bounded, this gives O(N) + N * 2 * O(lg N), so overall O(N lg N), if I'm not mistaken.
Edit: Pseudo code:
int[] list;
map<int => int[]> valueIndexMap;

for (int i = 0; i < list.length; ++i) {               // N iterations
    int currentElement = list[i];                     // O(1)
    int[] indexList = valueIndexMap[currentElement];  // O(1)
    indexList.Append(i);                              // O(1)
}

for (int i = 0; i < list.length; ++i) {               // N iterations
    int currentElement = list[i];                     // O(1)
    int numElementsLargerToTheRight;
    int numElementsSmallerToTheLeft;
    for (int k = currentElement + 1; k < maxKeys; ++k) {  // k iterations, k bounded by a constant
        int[] indexList = valueIndexMap[k];               // O(1)
        int firstIndexBiggerThanCurrent = indexList.BinaryFindFirstEntryLargerThan(i); // O(lgN)
        numElementsLargerToTheRight += indexList.Length - firstIndexBiggerThanCurrent; // O(1)
    }
    for (int k = currentElement - 1; k >= 0; --k) {       // k iterations, k bounded by a constant
        int[] indexList = valueIndexMap[k];               // O(1)
        int lastIndexSmallerThanCurrent = indexList.BinaryFindLastEntrySmallerThan(i); // O(lgN)
        numElementsSmallerToTheLeft += lastIndexSmallerThanCurrent; // O(1)
    }
}
Update: I tinkered around with a C# implementation in case anyone is interested;