Are there any better options available in Java 8 for the problem below?
There are two arrays: an integral array and an incremental array. Apply each increment value to the elements of the integral array and, after applying each increment, compute the sum of the absolute values of the elements of the integral array.
import java.util.Arrays;

public class ArrayChallenge {
    public static void main(String[] args) {
        long[] nA = {-3, -2, 4, 5};
        long[] iA = {2, 4, -6};
        long[] sumArr = findAbsValueSum(nA, iA);
        System.out.println(Arrays.toString(sumArr));
    }

    public static long[] findAbsValueSum(long[] numArr, long[] incrArr) {
        long[] sumArr = new long[incrArr.length];
        for (int i = 0; i < incrArr.length; i++) {
            long sum = 0;
            for (int j = 0; j < numArr.length; j++) {
                sum = sum + Math.abs(numArr[j] + incrArr[i]);
                numArr[j] = numArr[j] + incrArr[i];
            }
            sumArr[i] = sum;
        }
        return sumArr;
    }
}
Result:
[14, 28, 14]
Are there any better options (performance-wise) to do the same in Java 8?
The code you pasted is about as efficient as it is going to get. Computers aren't magic; if you find some other language or library that has a sumAll function, it would just be doing this under the hood.
If you want it to be more efficient, you need to set up rules: restrict the input, or widen the things one is allowed to do, and then it can be made more efficient.
For example, if you tell me that numArr is known well in advance and therefore any work done to transform numArr into different, more efficient (for this specific task) data types is 'free', because the only thing that is relevant is to return the answer as fast as possible once an incrArr is available, then:
Sort numArr in place. (Free: can be done without knowing incrArr.)
Build an incremental sum array, where each entry is the sum of the absolute values of all numbers before that index; {-3, -2, 4, 5} turns into {0, 3, 5, 9, 14}. (Free: can be done without knowing incrArr.)
To calculate the sumAbs for a positive increment
For this example, let's say your increment (I) is 2.
First, do a binary search for the index at which -I occurs; we shall call this IDX(-I). Here, IDX(-2) = 1 (because numArr[1] is -2). If -I isn't in the list, use the index of the nearest smaller number (had -2 not been in your list, find -3 instead). (Cost: O(log n).)
For this number, and all numbers in numArr below this index, the answer is trivial: it is the sum of the absolute values of all those numbers, minus count*I, where count is how many numbers that covers. This is O(1) using the incremental sum array: roughly sumArr[IDX(-I)] - (IDX(-I) * I), minding the off-by-one introduced by the leading 0.
Next, find the index of 0 (cost: O(log n)). For all numbers 0 and up, the answer is again trivial. We need the sum of all positive numbers first, which is sumArr[sumArr.length - 1] - sumArr[IDX(0)], then add count*I for the count of numbers in that range, analogous to how we handled the negative numbers.
This leaves the interesting ones in between, such as -1, which contributes only 1 to the sum total (-1 + 2 = +1). There is no speedy way out: for this slice of the input only (from IDX(-I) to IDX(0), exclusive), we must iterate and do the math. This is technically O(n), except n is heavily limited; it can never be more than I unless there are duplicates in your list (and if there are, those can be handled in bulk as well by building a weight array in the free precalculation phase), and it is usually much less. It is the overlap: all values in the input between -I and 0.
If the increment is negative
The exact same algorithm applies, but reversed: For an increment such as -6, all numbers at 0 or below are trivial, as are all numbers at 6 or higher. The loop needs to only cover all numbers between 1 and 5, inclusive.
This results in an algorithm that is O(log n) + O(restricted n) instead of the O(n) algorithm you have. In purely mathematical terms it is still O(n), but in almost all scenarios it is orders of magnitude fewer operations.
Building the sum tables is itself O(n), so if the prep time is not 'free', there is no point to any of this, and what you described is as fast as it is going to get.
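The precomputation plus per-query binary search described above might be sketched like this (class and method names are illustrative, not from the original post). One note: splitting at -I already puts every element on the correct side of zero (x ≤ -I gives x + I ≤ 0, everything else gives x + I > 0), so in this integer setting the middle slice turns out not to need separate iteration and each query is a clean O(log n):

```java
import java.util.Arrays;

public class PresortedAbsSum {
    // Precomputed once ("free"): sorted values and their prefix sums.
    static long[] sorted;
    static long[] prefix; // prefix[i] = sorted[0] + ... + sorted[i-1]

    static void precompute(long[] numArr) {
        sorted = numArr.clone();
        Arrays.sort(sorted);
        prefix = new long[sorted.length + 1];
        for (int i = 0; i < sorted.length; i++) {
            prefix[i + 1] = prefix[i] + sorted[i];
        }
    }

    // Sum of |x + inc| over all x, in O(log n) per query.
    static long absSum(long inc) {
        // p = count of elements with x <= -inc, i.e. x + inc <= 0.
        int p = Arrays.binarySearch(sorted, -inc);
        if (p < 0) p = -p - 1; else p++;
        long negPart = -(prefix[p] + inc * p);                       // terms <= 0, negated
        long posPart = (prefix[sorted.length] - prefix[p])
                     + inc * (sorted.length - p);                    // terms > 0
        return negPart + posPart;
    }
}
```

For the cumulative semantics of the original problem, pass the running total of the increments as inc (2, then 6, then 0 for the sample data).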
The "stream" version may look like this:
import java.util.Arrays;
import java.util.stream.IntStream;

public static long[] findAbsValueSumStream(long[] numArr, long[] incrArr) {
    return Arrays.stream(incrArr)
            .map(inc -> IntStream.range(0, numArr.length)
                    .mapToLong(i -> Math.abs(numArr[i] += inc))
                    .sum()
            )
            .toArray();
}
Update
A shorter form can be used: the lambda
(i -> { long abs = Math.abs(numArr[i] + inc); numArr[i] += inc; return abs; })
may be replaced with the equivalent (i -> Math.abs(numArr[i] += inc)).
It provides the same output for the given test data:
long[] nA = {-3, -2, 4, 5};
long[] iA = {2, 4, -6};
long[] sumArr = findAbsValueSum(nA, iA);
System.out.println("loop: " + Arrays.toString(sumArr));
long[] sumArrStream = findAbsValueSumStream(nA, iA);
System.out.println("stream: " + Arrays.toString(sumArrStream));
Output:
loop: [14, 28, 14]
stream: [14, 28, 14]
However, the stream solution is unlikely to be more performant, because it uses a similar nested loop.
Moreover, a stream arguably should not be used here at all: one of the input arrays, numArr, is modified while the stream is processed, so the pipeline has side effects and cannot be run in parallel to increase performance; the results would be incorrect.
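One way to remove that side effect (a sketch; the method name is illustrative) is to precompute the running totals of the increments and leave numArr untouched. Each step of the outer stream then depends only on its own cumulative increment, so the pipeline is stateless and could even be run in parallel:

```java
import java.util.Arrays;

public class SideEffectFreeStream {
    public static long[] findAbsValueSumNoMutation(long[] numArr, long[] incrArr) {
        // Running totals of the increments: after step i, every element of
        // numArr has been shifted by cum[i] in the original algorithm.
        long[] cum = new long[incrArr.length];
        long running = 0;
        for (int i = 0; i < incrArr.length; i++) {
            running += incrArr[i];
            cum[i] = running;
        }
        // No shared mutable state: each step reads only numArr and its own cum[i].
        return Arrays.stream(cum)
                .map(c -> Arrays.stream(numArr).map(x -> Math.abs(x + c)).sum())
                .toArray();
    }
}
```

For the sample data this yields the same [14, 28, 14], and the input array is left unchanged.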
Instead of the original code segment
sum = sum + Math.abs(numArr[j] + incrArr[i]);
numArr[j] = numArr[j] + incrArr[i];
a first micro-optimization (swap the two lines so the absolute value is taken from the already-updated element):
numArr[j] = numArr[j] + incrArr[i];
sum = sum + Math.abs(numArr[j]);
and a further micro-optimization (using a compound assignment):
numArr[j] += incrArr[i];
sum = sum + Math.abs(numArr[j]);
and one more micro-optimization (using a local variable instead of a repeated array access):
var n = numArr[j] += incrArr[i];
sum = sum + Math.abs(n);
but the JIT compiler is likely already doing all of that (for bigger arrays).
Whether this really helps, or whether the difference is measurable at all, is unknown: these are micro-optimizations, and no testing/timing was done.
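Since no timing was done above, at minimum one can check that the rewritten loop body behaves identically to the original (a rough harness with illustrative names; for trustworthy timing numbers a proper benchmark tool such as JMH would be needed):

```java
public class MicroOptCheck {
    // original loop body
    static long pass1(long[] numArr, long inc) {
        long sum = 0;
        for (int j = 0; j < numArr.length; j++) {
            sum = sum + Math.abs(numArr[j] + inc);
            numArr[j] = numArr[j] + inc;
        }
        return sum;
    }

    // micro-optimized loop body: compound assignment plus a local variable
    static long pass2(long[] numArr, long inc) {
        long sum = 0;
        for (int j = 0; j < numArr.length; j++) {
            long n = numArr[j] += inc;
            sum += Math.abs(n);
        }
        return sum;
    }

    public static void main(String[] args) {
        long[] a = {-3, -2, 4, 5};
        long[] b = a.clone();
        System.out.println(pass1(a, 2) == pass2(b, 2));     // prints true
        System.out.println(java.util.Arrays.equals(a, b));  // prints true
    }
}
```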
Here is an efficient way to solve the above problem:
import java.util.Arrays;

public class BenchMarkedSolution {
    public static long[] findAbsValueSum(long[] numArr, long[] incrArr) {
        int n = numArr.length;
        Arrays.sort(numArr);
        long[] res = new long[incrArr.length];
        long[] cumArr = new long[n];
        long sum = 0, cumIncr = 0;
        for (int i = 0; i < n; ++i) {
            sum += numArr[i];
            cumArr[i] = sum;
        }
        for (int i = 0; i < incrArr.length; ++i) {
            cumIncr += incrArr[i];
            int p = Arrays.binarySearch(numArr, -cumIncr);
            p = p < 0 ? Math.max(-p - 2, 0) : p;
            res[i] = Math.abs(cumArr[p] + cumIncr * (p + 1))
                   + Math.abs((cumArr[n - 1] - cumArr[p]) + cumIncr * (n - p - 1));
        }
        return res;
    }

    public static void main(String[] args) {
        long[] numArr = {-3, -2, 4, 5};
        long[] incArr = {2, 4, -6};
        long[] sumArr = findAbsValueSum(numArr, incArr);
        System.out.println(Arrays.toString(sumArr));
    }
}
Related
I'm trying to understand the logic behind the following code; however, I'm unclear about two parts of it, partly because the math supporting the logic is not totally clear to me at the moment.
CONFUSION 1: Why do we put 0 with count = 1 in the map before we start computing the prefix sums of the array? How does that help?
CONFUSION 2: If I move map.put(sum, map.getOrDefault(sum, 0) + 1) after the if() condition, I get the correct solution. However, if I put it where shown in the code below, it gives the wrong result. Why does the position matter, when we are looking up the count for sum - k in the map?
public int subarraySum(int[] nums, int k) {
    HashMap<Integer, Integer> prefixSumMap = new HashMap<>();
    prefixSumMap.put(0, 1); // CONFUSION 1
    int sum = 0;
    int count = 0;
    for (int i = 0; i < nums.length; i++) {
        sum += nums[i];
        prefixSumMap.put(sum, prefixSumMap.getOrDefault(sum, 0) + 1); // CONFUSION 2
        if (prefixSumMap.containsKey(sum - k)) {
            count += prefixSumMap.get(sum - k);
        }
    }
    return count;
}
You may find this interesting. I modified the method to use longs, to prevent integer overflow from producing negative numbers.
Both of these methods work just fine for positive numbers. Even though the first one is much simpler, they both return the same count for the test array.
public static void main(String[] args) {
    Random r = new Random();
    long[] vals = r.longs(10_000_000, 1, 1000).toArray();
    long k = 29329;
    System.out.println(positiveValues(vals, k));
    System.out.println(anyValues(vals, k));
}
public static int positiveValues(long[] array, long k) {
    Map<Long, Long> map = new HashMap<>(Map.of(0L, 1L));
    int count = 0;
    long sum = 0;
    for (long v : array) {
        sum += v;
        map.put(sum, 1L);
        if (map.containsKey(sum - k)) {
            count++;
        }
    }
    return count;
}
public static int anyValues(long[] nums, long k) {
    HashMap<Long, Long> prefixSumMap = new HashMap<>();
    prefixSumMap.put(0L, 1L);
    long sum = 0;
    int count = 0;
    for (int i = 0; i < nums.length; i++) {
        sum += nums[i];
        prefixSumMap.put(sum, prefixSumMap.getOrDefault(sum, 0L) + 1L);
        if (prefixSumMap.containsKey(sum - k)) {
            count += prefixSumMap.get(sum - k);
        }
    }
    return count;
}
Additionally, the statement
long v = prefixSumMap.getOrDefault(sum, 0L) + 1L;
always evaluates to 1 for positive arrays, because a previous sum can never be re-encountered when all values are positive.
That statement, and the one that computes count by taking a value from the map, exist to allow the array to contain both positive and negative numbers. The same holds for a negative k with all-positive values.
For the following input:
long[] vals = {1,2,3,-3,0,3};
The subarrays that sum to 3 are
(1+2), (3), (1+2+3-3), (1+2+3-3+0), (3-3+0+3), (0+3), (3)
Since adding negative numbers can reproduce previous sums, those need to be accounted for. The solution for positive values does not do this.
This will also work for all-negative values. If k is positive, no subarray will be found, since all sums will be negative; if k is negative, one or more subarrays may be found.
#1: put(0, 1) is a convenience so you don't have to have an extra if statement checking if sum == k.
Say k = 6 and you have input [1,2,3,4], then after you've processed the 3 you have sum = 6, which of course means that subarray [1, 2, 3] needs to be counted. Since sum - k is 0, get(sum - k) returns a 1 to add to count, which means we don't need a separate if (sum == k) { count++; }
#2: prefixSumMap.put(sum, prefixSumMap.getOrDefault(sum, 0)+1) means that the first time a sum is seen, it does a put(sum, 1). The second time, it becomes put(sum, 2), third time put(sum, 3), and so on.
Basically the map is a map of sum to the number of times that sum has been seen.
E.g. if k = 3 and input is [0, 0, 1, 2, 4], by the time the 2 has been processed, sum = 3 and the map contains { 0=3, 1=1, 3=1 }, so get(sum - k), aka get(0), returns 3, because there are 3 subarrays summing to 3: [0, 0, 1, 2], [0, 1, 2], and [1, 2]
Similarly, if k = 4 and input is [1, 2, 0, 0, 4], by the time the 4 has been processed, sum = 7 and the map contains { 0=1, 1=1, 3=3, 7=1 }, so get(sum - k), aka get(3), returns 3, because there are 3 subarrays summing to 4: [0, 0, 4], [0, 4], and [4].
Note: This all assumes that values cannot be negative.
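To make Confusion 2 concrete, here is a sketch with the put moved after the lookup, which is the ordering the explanations above assume (class name is illustrative). With the put before the lookup, the case k == 0 would wrongly count the empty subarray ending at each index:

```java
import java.util.HashMap;

public class SubarraySumDemo {
    static int subarraySum(int[] nums, int k) {
        HashMap<Integer, Integer> prefixSumMap = new HashMap<>();
        prefixSumMap.put(0, 1); // the empty prefix, so sum == k is counted
        int sum = 0, count = 0;
        for (int n : nums) {
            sum += n;
            // look up first: only prefixes strictly before this index may match
            count += prefixSumMap.getOrDefault(sum - k, 0);
            // then record the current prefix sum
            prefixSumMap.merge(sum, 1, Integer::sum);
        }
        return count;
    }
}
```

With [0, 0, 1, 2, 4] and k = 3 this returns 3, matching the explanation above, and it also handles negatives: [1, 2, 3, -3, 0, 3] with k = 3 gives 7.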
I have a weird homework assignment: write a program with a method that takes an array of non-negative integers (elements may repeat) and a value sum as parameters. The method then prints all the combinations of elements of the array whose sum equals sum. The weird part is that the teacher forces us to strictly follow this structure:
public class Combinations {
    public static void printCombinations(int[] arr, int sum) {
        // Body of the method
    }

    public static void main(String[] args) {
        // Create 2-3 arrays of integers and 2-3 sums here, then call the above
        // method with these arrays and sums to test the correctness of your method
    }
}
We are not allowed to add more methods or more parameters to the current program. I have researched and understood several ways to do this recursively, but with this restriction I don't really know how to do it, so I would appreciate your help.
EDIT: The array can have repeated elements. Here's an example run of the program.
arr = {1, 3, 2, 2, 25} and sum = 3
Outputs:
(1, 2) // 1st and 3rd element
(1, 2) // 1st and 4th element
(3) // 2nd element
Since printCombinations() accepts only the integer array and the sum as parameters, and you are not allowed to add any additional methods, I couldn't think of a way to use recursion without an extra method.
Here is a solution; let me know if this helps. It is not the best way!
public static void main(String[] args) throws Exception {
    int arr[] = {1, 3, 2, 2, 25, 1, 1};
    int sum = 8;
    printCombinations(arr, sum);
}

public static void printCombinations(int arr[], int sum) {
    int count = 0;
    int actualSum = sum;
    while (count < arr.length) {
        int j = 0;
        int arrCollection[] = new int[arr.length];
        for (int k = 0; k < arrCollection.length; k++) {
            arrCollection[k] = -99; // as the array can contain only +ve integers
        }
        for (int i = count; i < arr.length; i++) {
            sum = sum - arr[i];
            if (sum < 0) {
                sum = sum + arr[i];
            } else if (sum > 0) {
                arrCollection[j++] = arr[i];
            } else if (sum == 0) {
                System.out.println("");
                arrCollection[j++] = arr[i];
                int countElements = 0;
                for (int k = 0; k < arrCollection.length; k++) {
                    if (arrCollection[k] != -99) {
                        countElements++;
                        System.out.print(arrCollection[k] + " ");
                    }
                }
                if (countElements == 1) {
                    i = arr.length - 1;
                }
                sum = sum + arr[i];
                j--;
            }
        }
        count++;
        sum = actualSum;
    }
}
This is extremely well suited to a recursive algorithm.
Think about function, let's call it fillRemaining, that gets the current state of affairs in parameters. For example, usedItems would be a list that holds the items that were already used, availableItems would be a list that holds the items that haven't been tried, currentSum would be the sum of usedItems and goal would be the sum you are searching for.
Then, in each call of fillRemaining, you just have to walk over availableItems and check each one of them. If currentSum + item == goal, you have found a solution. If currentSum + item > goal, you skip the item because it's too large. If currentSum + item < goal, you add item to usedItems and remove it from availableItems, and call fillRemaining again. Of course, in this call currentSum should also be increased by item.
So in printCombinations, you initialize availableItems to contain all elements of arr, and usedItems to empty list. You set currentSum to 0 and goal to sum, and call fillRemaining. It should do the magic.
With the restriction of not being able to add any other methods or parameters, you can also make fields for availableItems, usedItems, currentSum and goal. This way, you don't have to pass them as parameters, but you can still use them. The fields will have to be static, and you would set them in main as described above.
If adding fields is not allowed either, then you have to somehow simulate nested loops of variable depth. In effect, that simulates what would otherwise be passed via the call stack, but the algorithm stays the same.
In effect, this algorithm does a depth-first search of the (pruned) tree of all possible combinations. Beware, however: there are 2^n combinations, so the worst-case time complexity is also O(2^n).
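The fillRemaining idea above, with static fields standing in for parameters, might be sketched like this (names are taken from the description; this is an illustration, not the only possible layout). One caveat: when the running sum hits the goal exactly, this sketch records the solution but does not try to extend it with zeros:

```java
import java.util.ArrayList;
import java.util.List;

public class FillRemaining {
    // Static fields stand in for the parameters, as suggested above.
    static int[] arr;
    static int goal;
    static List<Integer> usedItems = new ArrayList<>();
    static List<List<Integer>> solutions = new ArrayList<>();

    // start is the first index still available; scanning forward only means
    // each element is used at most once and no combination repeats in a
    // different order. Assumes non-negative values, per the assignment.
    static void fillRemaining(int start, int currentSum) {
        for (int i = start; i < arr.length; i++) {
            int item = arr[i];
            if (currentSum + item == goal) {
                List<Integer> solution = new ArrayList<>(usedItems);
                solution.add(item);
                solutions.add(solution);
            } else if (currentSum + item < goal) {
                usedItems.add(item);                      // take the item
                fillRemaining(i + 1, currentSum + item);  // recurse on the rest
                usedItems.remove(usedItems.size() - 1);   // backtrack
            }
            // currentSum + item > goal: item is too large, skip it
        }
    }

    public static void main(String[] args) {
        arr = new int[]{1, 3, 2, 2, 25};
        goal = 3;
        fillRemaining(0, 0);
        for (List<Integer> s : solutions) {
            System.out.println(s);
        }
    }
}
```

For arr = {1, 3, 2, 2, 25} and goal = 3, this prints [1, 2], [1, 2], and [3], matching the example run in the question.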
I think all algorithms that can be solved with recursion can also be solved with explicit stacks (see the solution below), but it is very often easier to solve a problem recursively before attempting the stack-based version.
My recursive take on this problem in Java would be something like this:
public static void printCombinations(int[] array, int pos, int sum, int[] acc) {
    if (Arrays.stream(acc).sum() == sum) {
        System.out.println(Arrays.toString(acc));
    }
    for (int i = pos + 1; i < array.length; i++) {
        int[] newAcc = new int[acc.length + 1];
        System.arraycopy(acc, 0, newAcc, 0, acc.length);
        newAcc[acc.length] = array[i];
        printCombinations(array, i, sum, newAcc);
    }
}
This function you can call like this:
printCombinations(new int[]{1, 3, 2, 2, 25}, -1, 3, new int[]{});
And it will print this:
[1, 2]
[1, 2]
[3]
Basically it goes through all possible subsets of the array and keeps those with the sum of 3 in this case. It is not great; there are for sure better, more efficient ways to do this. But my point here is simply to show that you can convert this algorithm to a stack-based implementation.
Here it goes how you can implement the same algorithm using stacks instead of recursion:
public static void printCombinationsStack(int[] array, int sum) {
    Stack<Integer> stack = new Stack<>();
    stack.push(0);
    while (true) {
        int i = stack.peek();
        if (i == array.length - 1) {
            stack.pop();
            if (stack.isEmpty()) {
                break;
            }
            int last = stack.pop();
            stack.push(last + 1);
        } else {
            stack.push(i + 1);
        }
        if (stack.stream().map(e -> array[e]).mapToInt(Integer::intValue).sum() == sum) {
            System.out.println(stack.stream().map(e -> Integer.toString(array[e]))
                    .collect(Collectors.joining(",")));
        }
    }
}
This method can be called like this:
printCombinationsStack(new int[]{1, 3, 2, 2, 25}, 3);
And it outputs also:
1,2
1,2
3
How I came to this conversion of a recursive to a stack based algorithm:
If you observe the positions in the acc array on the first algorithm above, then you will see a pattern which can be emulated by a stack. If you have an initial array with 4 elements, then the positions which are in the acc array are always these:
[]
[0]
[0, 1]
[0, 1, 2]
[0, 1, 2, 3]
[0, 1, 3]
[0, 2]
[0, 2, 3]
[0, 3]
[1]
[1, 2]
[1, 2, 3]
[1, 3]
[2]
[2, 3]
[3]
There is a pattern here which can easily be emulated with stacks:
The default operation is always to push onto the stack, unless you reach the last position in the array. You push 0 first, which is the first position in the array. When you reach the last position of the array, you pop once from the stack, then pop again and push that second popped item back onto the stack, incremented by one.
If the stack is empty, you break the loop: you have gone through all possible combinations.
This seems to be a duplicate; please go through the link below for a correct solution with exact code complexity details:
find-a-pair-of-elements-from-an-array-whose-sum-equals-a-given-number
I have these two algorithms, implemented from pseudocode. My question is: how can I count the primitive operations and derive T(n) for both algorithms, and how do I find the time complexity (Big-Oh) of each?
import java.util.Arrays;

public class PrefixAverages1 {
    static double array[] = new double[10];

    public static void prefixAverages() {
        for (int i = 0; i < 10; i++) {
            double s = array[i];
            for (int j = 0; j < 10; j++) {
                s = s + array[j];
            }
            array[i] = s / (i + 1);
            System.out.println(Arrays.toString(array));
        }
    }

    public static double[] prefixAverages(double[] inArray) {
        double[] outArray = new double[inArray.length];
        return outArray;
    }

    public static void main(String... args) {
        System.out.println(
            Arrays.equals(
                prefixAverages(new double[] {5, 6, 7, 8}),
                new double[] {2, 2.5, 3.5, 4}
            )
        );
    }
}
Prefix2
import java.util.Arrays;

public class PrefixAverages2 {
    static double array[] = new double[10];

    public static void prefixAverages() {
        double s = 0;
        for (int i = 0; i < 10; i++) {
            s = s + array[i];
            array[i] = s / (i + 1);
        }
        array[0] = 10;
        System.out.println(Arrays.toString(array));
    }

    public static double[] prefixAverages(double[] inArray) {
        double[] outArray = new double[inArray.length];
        return outArray;
    }

    public static void main(String... args) {
        System.out.println(
            Arrays.equals(
                prefixAverages(new double[] {3, 4, 5, 6}),
                new double[] {2, 3.5, 4, 5}
            )
        );
    }
}
First, the primitive operations to count are the additions (or subtractions) and multiplications (or divisions) in your code. You can count them from your pseudocode.
So s = s + array[j]; counts as one such operation, and so does array[i] = s / (i + 1);.
The big O (complexity) is basically the relation you have in your algorithm between the number of elements and the operations required.
In your case, for example, you have 10 elements (from the new double[10] and i < 10 parts), and algorithm 1 requires 10×(10+1) operations.
This is analyzed as:
You have an outer loop with 10 runs.
You have an inner loop with 10 runs as well (this cannot be different, because you cannot get the result any other way), meaning the outer and inner counts are the same in this algorithm; say N = 10.
You also have one division inside the outer loop per run, so +1 operation there.
So, 10 (outer) × (10 (inner) + 1 (division)) = 110.
To get complexity, consider that:
If you double the number of elements, how is the number of primitive operations affected?
Let's see:
Complexity(N) = N×(N+1), so Complexity(2N) = 2N×(2N+1) = 4N^2 + 2N.
But because in complexity analysis what really matters is the highest degree, we get:
Complexity(2N) ~ 4N^2. The constant factors in front are also of no interest, so finally:
Complexity(2N) ~ N^2, meaning your first algorithm is O(N^2).
You can do the maths for your next algorithm.
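As a sanity check on the 10×(10+1) = 110 count, the loops can be instrumented with a counter (a sketch counting only the inner-loop additions and the one division per outer iteration):

```java
public class OpCounter {
    public static void main(String[] args) {
        int n = 10;
        long ops = 0;
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                ops++; // one addition: s = s + array[j]
            }
            ops++;     // one division: array[i] = s / (i + 1)
        }
        System.out.println(ops); // prints 110, i.e. n * (n + 1)
    }
}
```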
P.S. The addition in the denominator, (i + 1), does not count as a separate operation here.
P.S.2: This is arguably not a Stack Overflow question, though, as it is not a programming one.
Given this array
int [] myArray = {5,-11,2,3,14,5,-14,2};
You are to find the maximum sum of the values in any downsequence in an unsorted array of integers. If the array has length zero, then maxSeqValue must return Integer.MIN_VALUE.
You should print the number 19, because the downsequence with the maximum sum is 14, 5.
A downsequence is a series of non-increasing numbers.
This is the code I used, but I guess there are some cases it still doesn't account for.
Any ideas? Thanks in advance.
public class MaxDownSequence {
    public int maxSeqValue(int[] a) {
        int sum = Integer.MIN_VALUE;
        int maxsum = Integer.MIN_VALUE;
        for (int i = 1; i < a.length; i++) {
            if (a[i] < a[i - 1]) {
                sum = a[i] + a[i - 1];
                if (sum > maxsum) {
                    maxsum = sum;
                }
            } else {
                sum = a[i];
                if (sum > maxsum) {
                    maxsum = sum;
                }
            }
        }
        if (a.length == 0) {
            return Integer.MIN_VALUE;
        } else {
            return maxsum;
        }
    }

    public static void main(String args[]) {
        MaxDownSequence mySeq = new MaxDownSequence();
        int[] myArray = {5, -11, 2, 3, 14, 5, -14, 2};
        System.out.println(mySeq.maxSeqValue(myArray));
    }
}
Take the input {3, 2, 1}: the answer should be 6, but your program gives 5.
Your approach is correct: every time you look at a number in the array, you check whether it is less than (this should actually be <=) the previous element.
If it is, you update sum as sum = a[i] + a[i-1]; this is incorrect. sum in your program represents the running sum of the current downsequence, so you should not be overwriting it.
Dynamic programming is the way to go.
http://en.wikipedia.org/wiki/Dynamic_programming
I know, maybe that doesn't help at all, but since I don't want to post a solution for your problem the best thing I can do is to give you this hint :)
You haven't considered sequences of more than two numbers. If you had [3, 2, 1], the result should be 6, but your code would give 5, because it only looks at the sum of the current number and the previous one. Instead, you should keep track of the current downsequence and add the current number to its running total. Once you hit a number that breaks the downsequence, update maxsum if needed, then reset the running total.
I'm also not sure why you have the else in the loop. If a[i] is not less than a[i-1], then it is not a downsequence, so maxsum should surely not be updated there. If you take just the first three numbers of your sample array, your code returns 2: the first downsequence [5, -11] gives a sum of -6, and on the next iteration it just looks at 2, which is greater than -6, so maxsum is updated.
No need for:
if (a.length==0){
return Integer.MIN_VALUE;
}
If the array length is 0, you never enter the loop and therefore never change maxsum, so it will still equal Integer.MIN_VALUE; you can just return maxsum at the end regardless.
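Putting the corrections above together (a running total per downsequence, a <= comparison, and no special empty-array case), a fixed version might look like this (a sketch; the class name is illustrative, and the restart inside a run handles a negative prefix such as the -14 in 14, 5, -14):

```java
public class MaxDownSequenceFixed {
    public static int maxSeqValue(int[] a) {
        int maxSum = Integer.MIN_VALUE;
        int running = 0;
        for (int i = 0; i < a.length; i++) {
            if (i > 0 && a[i] <= a[i - 1]) {
                // still non-increasing: extend the running total, or restart
                // inside the same downsequence if that is larger
                running = Math.max(a[i], running + a[i]);
            } else {
                // downsequence broken (or first element): start fresh
                running = a[i];
            }
            maxSum = Math.max(maxSum, running);
        }
        return maxSum; // Integer.MIN_VALUE when the array is empty
    }
}
```

For {5, -11, 2, 3, 14, 5, -14, 2} this returns 19 (the downsequence 14, 5), and for {3, 2, 1} it returns 6.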
You are supposed to keep a running sum, I think, meaning sum = sum + a[i]. Just make sure to initialize the sum to the first element of the array and you are in business.
package sree;

public class MaximumSumSequence {
    private final int[] theArray;

    private MaximumSumSequence(int[] theArray) {
        this.theArray = theArray;
    }

    private void maximumSequence() {
        int currentMax = 0, currentSum = 0, start = 0, end = 0, nextStart = 0;
        for (int i = 0; i < theArray.length; i++) {
            currentSum += theArray[i];
            if (currentMax < currentSum) {
                currentMax = currentSum;
                start = nextStart;
                nextStart = end;
                end = i;
            } else if (currentSum < 0) {
                currentSum = 0;
            }
        }
        System.out.println("Max Sum :" + currentMax);
        System.out.println("Start :" + start);
        System.out.println("End :" + end);
    }

    public static void main(String[] args) {
        //int[] anArray = {4, -1, 2, -2, -1, -3};
        int[] anArray = {-2, 1, -3, 4, -1, 2, 1, -5, 4};
        new MaximumSumSequence(anArray).maximumSequence();
    }
}
Does anyone have a good algorithm for splitting an ordered list of integers, e.g.:
[1, 3, 6, 7, 8, 10, 11, 13, 14, 17, 19, 23, 25, 27, 28]
into a given number of evenly sized, ordered sublists? E.g. for 4 it would be:
[1, 3, 6] [7, 8, 10, 11] [13, 14, 17, 19] [23, 25, 27, 28]
The requirement being that each of the sublists are ordered and as similar in size as possible.
Splitting the list evenly means you will have two sizes of lists: size S and size S+1.
With N sublists and X elements in the original, you get:
floor(X/N) elements in the smaller sublists (S), and X % N is the number of larger sublists (of size S+1).
Then iterate over the original array and (following your example) create the small lists first.
Something like this maybe:
private static List<Integer[]> splitOrderedDurationsIntoIntervals(Integer[] durations, int numberOfIntervals) {
    int sizeOfSmallSublists = durations.length / numberOfIntervals;
    int sizeOfLargeSublists = sizeOfSmallSublists + 1;
    int numberOfLargeSublists = durations.length % numberOfIntervals;
    int numberOfSmallSublists = numberOfIntervals - numberOfLargeSublists;
    List<Integer[]> sublists = new ArrayList<>(numberOfIntervals);
    int numberOfElementsHandled = 0;
    for (int i = 0; i < numberOfIntervals; i++) {
        int size = i < numberOfSmallSublists ? sizeOfSmallSublists : sizeOfLargeSublists;
        Integer[] sublist = new Integer[size];
        System.arraycopy(durations, numberOfElementsHandled, sublist, 0, size);
        sublists.add(sublist);
        numberOfElementsHandled += size;
    }
    return sublists;
}
Here is my own recursive solution, inspired by merge sort and breadth-first tree traversal:
private static void splitOrderedDurationsIntoIntervals(Integer[] durations, List<Integer[]> intervals, int numberOfIntervals) {
    int middle = durations.length / 2;
    Integer[] lowerHalf = Arrays.copyOfRange(durations, 0, middle);
    Integer[] upperHalf = Arrays.copyOfRange(durations, middle, durations.length);
    if (lowerHalf.length > upperHalf.length) {
        intervals.add(lowerHalf);
        intervals.add(upperHalf);
    } else {
        intervals.add(upperHalf);
        intervals.add(lowerHalf);
    }
    if (intervals.size() < numberOfIntervals) {
        int largestElementLength = intervals.get(0).length;
        if (largestElementLength > 1) {
            Integer[] duration = intervals.remove(0);
            splitOrderedDurationsIntoIntervals(duration, intervals, numberOfIntervals);
        }
    }
}
I was hoping someone might have a suggestion for an iterative solution.
Here's a solution in Python. You can translate it to Java; you need a way to get a piece of a list and then return it. You cannot use the generator approach, but you can append each sublist to a new list.
pseudocode...
private static void splitOrderedDurationsIntoIntervals(Integer[] durations, List<List<Integer>> intervals, int numberOfIntervals) {
    int numPerInterval = durations.length / numberOfIntervals;
    // make sure you have somewhere to put the results
    for (int i = 0; i < numberOfIntervals; i++) {
        intervals.add(new ArrayList<>());
    }
    // run once through the list and put each element in the right sub-list
    for (int i = 0; i < durations.length; i++) {
        int idx = Math.min(i / numPerInterval, numberOfIntervals - 1);
        intervals.get(idx).add(durations[i]);
    }
}
That code will need a bit of tidying up, but I'm sure you get the point. I also suspect that the uneven-sized interval will end up at the end rather than at the beginning; if you really want it the other way round, you can probably arrange that by reversing the order of the loop.
Here is an answer in a more iterative fashion.
public static void splitList(List<Integer> startList, List<List<Integer>> resultList,
        int subListNumber) {
    final int subListSize = startList.size() / subListNumber;
    int index = 0;
    int stopIndex = subListSize;
    for (int i = subListNumber; i > 0; i--) {
        if (i == 1) {
            stopIndex = startList.size(); // let the last sublist absorb any remainder
        }
        resultList.add(new ArrayList<Integer>(startList.subList(index, stopIndex)));
        index = stopIndex;
        stopIndex = Math.min(index + subListSize, startList.size());
    }
}
You might consider something like this:
public static int[][] divide(int[] initialList, int sublistCount)
{
    if (initialList == null)
        throw new NullPointerException("initialList");
    if (sublistCount < 1)
        throw new IllegalArgumentException("sublistCount must be greater than 0.");

    // without remainder, length / # lists will always be the minimum
    // number of items in a given subset
    int min = initialList.length / sublistCount;

    // ceiling division determines the maximum number of items in a
    // given subset. example: in a 15-item sample with 4 subsets,
    // (15 + 4 - 1) / 4 = 18 / 4 = 4, while min is 3.
    // in a 16-item sample, (16 + 4 - 1) / 4 = 19 / 4 = 4, equal to min.
    int max = (initialList.length + sublistCount - 1) / sublistCount;

    // this is the meat and potatoes of the algorithm. here we determine
    // how many lists have the min count and the max count. we start out
    // with all at max and work our way down.
    int sublistsHandledByMax = sublistCount;
    int sublistsHandledByMin = 0;
    while ((sublistsHandledByMax * max) + (sublistsHandledByMin * min)
            != initialList.length)
    {
        sublistsHandledByMax--;
        sublistsHandledByMin++;
    }

    // now we copy the items into their new sublists.
    int[][] items = new int[sublistCount][];
    int currentInputIndex = 0;
    for (int listIndex = 0; listIndex < sublistCount; listIndex++)
    {
        if (listIndex < sublistsHandledByMin)
            items[listIndex] = new int[min];
        else
            items[listIndex] = new int[max];

        // there's probably a better way to do array copies now.
        // it's been a while since I did Java :)
        System.arraycopy(initialList, currentInputIndex, items[listIndex], 0, items[listIndex].length);
        currentInputIndex += items[listIndex].length;
    }
    return items;
}
One subtlety: max must be a ceiling division (add sublistCount - 1, not min - 1, before dividing). With min - 1, an 18-item array split into 10 sublists loops forever, because min and max both come out as 1 and the while condition can never be satisfied.
This should be fairly fast. Good luck :)