Safe packing - Choco-solver - Java

Problem:
You need to pack several items into your shopping bag without squashing anything. The items are to be placed one on top of the other. Each item has a weight and a strength, defined as the maximum weight that can be placed above that item without it being squashed. A packing order is safe if no item in the bag is squashed, that is, if, for each item, that item's strength is at least the combined weight of everything placed above it. For example, take three items packed in this order: apples (weight 5) on top, bread (strength 4) beneath them, and a third item at the bottom.
This packing is not safe: the bread is squashed because the weight above it, 5, is greater than its strength, 4. Swapping the apples and the bread, however, gives a safe packing.
Goal:
I need to find all the solutions to this problem with Choco solver, and then test whether this particular solution is enumerated:
N=3, WS = { (5,6), (4,4), (10,10) }
What I tried:
First I wrote my CSP model
Then I wrote my choco code like that
public static void main(String[] args) {
    int[][] InitialList = {{5, 6}, {4, 4}, {10, 10}};
    int N = InitialList.length;
    Model model = new Model("SafePacking");

    // Create IntVars of weights and strengths
    IntVar[] weights = new IntVar[N], strengths = new IntVar[N];
    for (int i = 0; i < N; i++) {
        weights[i] = model.intVar("Weight" + i, InitialList[i][0]);
        strengths[i] = model.intVar("Strength" + i, InitialList[i][1]);
    }

    // Create IntVar of positions
    IntVar[] positions = model.intVarArray("P", N, 0, N - 1);
    model.allDifferent(positions).post();

    for (int i = 0; i < N; i++) {
        int sum = 0;
        for (int j = 0; j < N; j++)
            if (positions[j].getValue() < positions[i].getValue())
                sum += weights[j].getValue();
        model.arithm(model.intVar(sum), "<=", strengths[i]).post();
    }

    Solution solution = model.getSolver().findSolution();
    System.out.println(solution);
}
But I've got this result :
Solution: P[0]=0, P[1]=1, P[2]=2
Which is a wrong solution.
Did I miss something?

The computation of sum assumes that the position variables already have a defined value, which they do not have at the time the model is built. Calling getValue() on an uninstantiated IntVar in Choco simply returns the variable's current lower bound, so every comparison in that loop is 0 < 0, every sum is 0, and every posted constraint is trivially satisfied.
To make your model work, you need to build up the sum as an IntVar instead, so that the solver reasons about it during search.
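For illustration, here is a minimal sketch of that idea (a fragment, not the full program; it assumes Choco 4.x, where Constraint.reify() returns a BoolVar, and the names weightValues and WeightAbove are mine): reify "item j is above item i" into a BoolVar, tie the weighted sum of those booleans to an IntVar with a scalar constraint, and bound that IntVar by the item's strength. findAllSolutions() then enumerates every safe packing, so you can check whether the expected ordering appears.
// Sketch: replace the faulty loop with reified "j is above i" booleans.
// Assumes Choco 4.x, where Constraint.reify() returns a BoolVar.
int[] weightValues = {5, 4, 10};   // the constant weights from InitialList
for (int i = 0; i < N; i++) {
    BoolVar[] above = new BoolVar[N];
    for (int j = 0; j < N; j++) {
        above[j] = (j == i)
            ? model.boolVar(false)   // an item is never above itself
            : model.arithm(positions[j], "<", positions[i]).reify();
    }
    // weightAbove = total weight of the items placed above item i
    IntVar weightAbove = model.intVar("WeightAbove" + i, 0, 5 + 4 + 10);
    model.scalar(above, weightValues, "=", weightAbove).post();
    model.arithm(weightAbove, "<=", strengths[i]).post();
}
// Enumerate every safe packing instead of just the first one
List<Solution> solutions = model.getSolver().findAllSolutions();
solutions.forEach(System.out::println);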

Related

How to make a random super increasing array

I'm doing a lab about implementing a Merkle-Hellman knapsack. The assignment says I need to generate a superincreasing array automatically, but I don't know how to do it. Is there any way to do this? Thanks for reading. Here is what I tried:
Random random = new Random();
int[] wInt = random.ints(8, 1, 999).toArray();
for (int i = 0; i < 8 - 1; i++) {
    for (int j = i + 1; j < wInt.length; j++) {
        if (wInt[i] > wInt[j]) {
            int temp = wInt[i];
            wInt[i] = wInt[j];
            wInt[j] = temp;
        }
    }
}
A superincreasing sequence is one where each term is greater than the sum of all the preceding terms. Therefore, one way to generate a superincreasing sequence would be to keep track of the sum of all elements currently in the sequence, then to form the next element by adding some random, positive number onto that running sum. In pseudocode:
sequence = []
total = 0                # sum of the elements already in the sequence
while the sequence has fewer than n items in it:
    next = total + some random positive number
    append next to the sequence
    total += next
I'll leave it as an exercise to translate this into your Programming Language of Choice.
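For reference, a direct Java translation might look like the sketch below (the class name, the use of SecureRandom, and the bound of 100 on each random step are arbitrary choices of mine):
import java.security.SecureRandom;
import java.util.Arrays;

public class SuperIncreasing {
    // Builds a superincreasing sequence of n terms: each term is strictly
    // greater than the sum of all terms before it.
    static long[] generate(int n) {
        SecureRandom random = new SecureRandom();
        long[] sequence = new long[n];
        long total = 0;                  // sum of the elements so far
        for (int i = 0; i < n; i++) {
            sequence[i] = total + random.nextInt(100) + 1;  // random positive step
            total += sequence[i];
        }
        return sequence;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(generate(8)));
    }
}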

How do I get my program to store every possible combination of 55 bits of an 81 bit BigInteger?

I'm making a Sudoku program, and I wanted to store every combination of x bits in an 81-bit integer into a list. I want to be able to then shuffle this list, iterate through it, and each on-bit will represent a cell that is to be removed from an existing Sudoku grid, x depending on difficulty. My program then tests this unique puzzle to see if it's solvable, if not, continue to the next combination. Do you guys understand? Is there a better way?
Currently I have a for-loop with a BigInteger, adding 1 every iteration and testing whether the resulting number has exactly 55 bits set. But this takes a LOOOOOONG time; I don't think there's enough time in the universe to do it this way.
LOOP: for (BigInteger big = new BigInteger("36028797018963967");
           big.compareTo(new BigInteger("2417851639229258349412351")) < 0;
           big = big.add(BigInteger.ONE))
{
    int count = 0;
    for (int i = 0; i < 81; i++)
    {
        if (big.testBit(i)) count++;
        if (count > 55) continue LOOP;
    }
    // just printing first, no arraylist yet
    if (count == 55) System.out.println(big.toString(2));
}
As you already noticed, storing all combinations in a list and then shuffling them is not a viable option.
Instead, you can obtain a shuffled stream of all combinations, by using the Streamplify library.
import org.beryx.streamplify.combination.Combinations;
...
SudokuGrid grid = new SudokuGrid();
int[] solvedPuzzle = IntStream.range(0, 81).map(i -> grid.get(i)).toArray();
int k = 55;
new Combinations(81, k)
    .shuffle()
    .parallelStream()
    .map(removals -> {
        int[] puzzle = new int[81];
        System.arraycopy(solvedPuzzle, 0, puzzle, 0, 81);
        for (int i : removals) {
            puzzle[i] = 0;
        }
        return puzzle;
    })
    .filter(puzzle -> resolveGrid(new SudokuSolver(new Candidates(puzzle))))
    //.limit(10)
    .forEach(puzzle -> System.out.println(Arrays.toString(puzzle)));
You probably don't want to generate all puzzles of a given difficulty, but only a few of them.
You can achieve this by putting a limit (see the commented line in the above code).
Certainly there are methods that will finish before you die of old age. For example:
Make an array (or BitSet, as David Choweller suggested in the comments) to represent the bits, and turn on as many as you need until you have enough. Then convert that back into a BigInteger.
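A rough sketch of that suggestion (my own illustration; the 81 and 55 are the values from the question), picking 55 distinct random bit positions and setting them directly on a BigInteger:
import java.math.BigInteger;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Choose 55 of the 81 bit positions at random, without repeats,
// and build the corresponding BigInteger mask.
List<Integer> positions = new ArrayList<>();
for (int i = 0; i < 81; i++) {
    positions.add(i);
}
Collections.shuffle(positions);

BigInteger mask = BigInteger.ZERO;
for (int i = 0; i < 55; i++) {
    mask = mask.setBit(positions.get(i));
}
System.out.println(mask.toString(2));   // binary pattern with exactly 55 ones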
I appreciate any feedback. The following seems to be a better option than my initial idea, since I believe holding a list of all possible combinations would definitely cause an out-of-memory error. It's not perfect, but this option takes out a random cell, tests whether the grid is still solvable, puts the removed number back if not, and continues removing the next random cell until enough cells have been taken out, or starts over.
int[] candidates = new int[81];
SudokuGrid grid = new SudokuGrid();
LOOP: while (true)
{
    ArrayList<Integer> removals = new ArrayList<Integer>();
    for (int i = 0; i < 81; i++)
    {
        removals.add(i);
        candidates[i] = grid.get(i);
    }
    Collections.shuffle(removals);
    int k = 55;
    for (int i = 0; i < k; i++)
    {
        int num = candidates[removals.get(i)];
        candidates[removals.get(i)] = 0;
        Candidates cand = new Candidates(candidates);
        SudokuSolver solver = new SudokuSolver(cand);
        if (!resolveGrid(solver))
        {
            candidates[removals.get(i)] = num;
            k++;
            if (k > removals.size())
                continue LOOP;
        }
    }
    break;
}
This takes about 5 seconds to solve. It's a bit slower than I wanted it to be, but a lot of it depends on the way I coded the solving strategies.

Dynamic programming with Combination sum inner loop and outer loop interchangeable?

I am a little confused about the dynamic programming solution for combination sum: given a list of numbers and a target total, count how many ways you can sum up to this target. Numbers can be reused multiple times. I am confused about whether the inner loop and outer loop are interchangeable. Can someone explain the difference between the following two versions, in which cases we would use one but not the other, or whether they are the same?
int[] counts = new int[total + 1];
counts[0] = 1;

// (1)
for (int i = 0; i <= total; i++) {
    for (int j = 0; j < nums.length; j++) {
        if (i >= nums[j])
            counts[i] += counts[i - nums[j]];
    }
}

// (2)
for (int j = 0; j < nums.length; j++) {
    for (int i = nums[j]; i <= total; i++) {
        counts[i] += counts[i - nums[j]];
    }
}
The two versions are indeed different, yielding different results.
I'll use nums = {2, 3} for all examples below.
Version 1 finds the number of combinations with ordering of elements from nums whose sum is total. It does so by iterating through all "subtotals" and counting how many combinations have the right sum, but it doesn't keep track of the elements. For example, the count for 5 will be 2. This is the result of taking the first element (with value 2) together with the 1 combination counted in counts[3], plus the second element (value 3) together with the 1 combination counted in counts[2]. Notice that both combinations use a single 2 and a single 3, but they represent the 2 different ordered lists [2, 3] and [3, 2].
Version 2, on the other hand, finds the number of combinations without ordering of elements from nums whose sum is total. It also counts how many combinations have the right sum (for each subtotal), but contrary to version 1 it "uses up" each element completely before moving on to the next element, thus avoiding different orderings of the same group. When counting subtotals with the first element (2), all counts are initially 0 (except the 0-sum sentinel), and every even subtotal gets a count of 1. When the next element is used, it is as if it comes after all the 2's are already in the group, so, contrary to version 1, only [2, 3] is counted, and not [3, 2].
By the way, the order of elements in nums doesn't affect the results, as can be understood by the logic explained.
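As a quick concrete check of this claim (my own snippet, using nums = {2, 3} and total = 5 with the corrected loops from the question):
int total = 5;
int[] nums = {2, 3};

int[] ordered = new int[total + 1];
int[] unordered = new int[total + 1];
ordered[0] = 1;
unordered[0] = 1;

// Version 1: subtotal outer, elements inner -> counts ordered lists
for (int i = 1; i <= total; i++)
    for (int n : nums)
        if (i >= n) ordered[i] += ordered[i - n];

// Version 2: elements outer, subtotal inner -> counts unordered multisets
for (int n : nums)
    for (int i = n; i <= total; i++)
        unordered[i] += unordered[i - n];

System.out.println(ordered[5]);    // 2  ([2, 3] and [3, 2])
System.out.println(unordered[5]);  // 1  ({2, 3})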
Dynamic programming works by filling out entries in a table assuming that previous entries in the table have been fully completed.
In this case, counts[i] depends on counts[i - nums[j]] for every entry j in nums.
In this code snippet
// (1)
for (int i = 0; i < total; i++) {
    for (int j = 0; j < nums.length; j++) {
        if (i >= nums[j])
            counts1[i] += counts1[i - nums[j]];
    }
}
We fill the table from 0 to total, in that order. This is the action of the outer loop. The inner loop goes through our different nums and updates the current entry in the table based on the previous values, which are all assumed to be complete.
Now look at this snippet
// (2)
for (int j = 0; j < nums.length; j++) {
    for (int i = nums[j]; i < total; i++) {
        counts2[i] += counts2[i - nums[j]];
    }
}
Here we are iterating through our different nums in the outer loop and updating the running totals as we go. This breaks the concept of dynamic programming described above: none of our entries can ever be assumed to be complete until we are completely finished with the table.
Are they the same? The answer is no they are not. The following code illustrates the fact:
public class dyn {
    public static void main(String[] args) {
        int total = 50;
        int[] nums = new int[]{1, 5, 10};
        int[] counts1 = new int[total];
        int[] counts2 = new int[total];
        counts1[0] = 1;
        counts2[0] = 1;
        // (1)
        for (int i = 0; i < total; i++) {
            for (int j = 0; j < nums.length; j++) {
                if (i >= nums[j])
                    counts1[i] += counts1[i - nums[j]];
            }
        }
        // (2)
        for (int j = 0; j < nums.length; j++) {
            for (int i = nums[j]; i < total; i++) {
                counts2[i] += counts2[i - nums[j]];
            }
        }
        for (int k = 0; k < total; k++) {
            System.out.print(counts1[k] + ",");
        }
        System.out.println("");
        for (int k = 0; k < total; k++) {
            System.out.print(counts2[k] + ",");
        }
    }
}
This will output 2 different lists.
They are different because we are updating our counts[i] with incomplete information from earlier in the table. counts[6] assumes you have the entry for counts[5] and counts[1], which in turn assume you have the entries for counts[4], counts[3], counts[2], and counts[0]. Thus, each entry is dependent on (in the worst case all of) the previous entries in the table.
Addendum:
Interesting (perhaps obvious) side-note:
The two methods produce the same list up until the smallest pairwise sum of entries in nums.
Why?
This is when the information from previous entries becomes incomplete (with respect to the first loop). That is, if we have int[] nums = new int[]{3, 6}, then counts[3 + 6] will not be computed correctly, because either counts[3] or counts[6] will not yet agree with the result obtained using the first loop, depending on which stage of the computation we are at.
In light of criticism of my previous answer, I thought I'd take a more mathematical approach.
As in @Amit's answer, I will use nums = {2, 3} in the examples below.
Recurrence Relations
The first loop computes
S(n) = S(n-3) + S(n-2)
Or, more generally, for some set {x_1, x_2, x_3, ... ,x_k}:
S(n) = S(n- x_1) + S(n- x_2) + ... + S(n- x_k)
It should be clear that each S(n) is dependent on (possibly all) previous values, and so we must start on 0 and populate the table upwards to our desired total.
The second loop computes a recurrence S_2(n) with the following definitions:
S_1(n) = S_1(n-2)
S_2(n) = S_1(n) + S_2(n-3)
More generally, for some set {x_1, x_2, x_3, ... ,x_k}:
S_1(n) = S_1(n- x_1)
S_2(n) = S_1(n) + S_2(n- x_2)
...
S_k(n) = S_{k-1}(n) + S_k(n- x_k)
Each entry in this sequence is like those from the first loop; it is dependent on the previous entries. But unlike the first loop, it is also dependent on earlier sequences.
Put perhaps more concretely:
S_2 is dependent on not only (possibly all) previous entries of S_2, but also on previous entries of S_1.
Thus, when we want to compute the first recurrence, we begin at 0 and compute each entry, for each number in our nums.
When we want to compute the second recurrence, we compute each intermediate recurrence one at a time, each time storing the result in counts.
In Plain English
What do these two recurrences compute? As @Amit's answer explains, they compute the number of combinations that sum to total, with and without preserving order. It's easy to see why, again using our example of nums = {2, 3}:
Note my use of the word list to denote something ordered, and the word set to denote something unordered.
I use append to mean adding to the former, and add to denote adding to the latter.
If you know
how many lists of numbers add to 2,
and how many add to 3,
and I ask you
how many add to 5?
You can append a 3 to every one of the former lists, and a 2 to every one of the latter lists.
Thus (how many add to 5) = (how many add to 3) + (how many add to 2)
Which is our first recurrence.
For the second recurrence,
If you know
how many sets of just 2's add to 5 (0)
how many sets of just 2's and 3's add to 2 (1)
You can just take all of the first number, and you can add a 3 to all the sets in the second number.
Note how "sets of just 2's" is a special case of "sets of just 2's and 3's". "sets of just 2's and 3's" depends on "sets of just 2's", just like in our recurrence!
Recursive functions written in Java
The following recursive function computes the values for the first loop, with example values 3 and 2.
public static int r(int n) {
    if (n < 0)
        return 0;
    if (n == 0)
        return 1;
    return r(n - 2) + r(n - 3);
}
The following set of recursive functions computes the values for the second loop, with example values 3 and 2.
public static int r1(int n) {
    if (n < 0)
        return 0;
    if (n == 0)
        return 1;
    return r1(n - 2);
}

public static int r2(int n) {
    if (n < 0) {
        return 0;
    }
    return r1(n) + r2(n - 3);
}
I have checked them up to 10 and they appear to be correct.

Matrix manipulation: logic not fetching correct answer for higher order NXN matrix data

I came across the below problem related to matrix manipulation.
Problem statement
There is an N x N matrix, divided into N * N cells, and each cell has a predefined value which is given as input. The iteration has to happen K times, where K is also given in the test input. At each iteration we have to pick the optimum (minimum) row or column sum. The final output is the cumulative sum of the optimum values saved at the end of each iteration.
Step 1. Sum up each individual row and column and find the minimum of these sums (it could be a row or a column; we just need the minimum line).
Step 2. Store the sum found above separately.
Step 3. Increment each element of the minimum-sum row or column by 1.
Repeat steps 1 to 3 for K iterations, adding up the sum recorded at each iteration (step 2); the output is the total obtained after the Kth iteration.
Sample data (the first line gives N and K, followed by the N x N matrix):
2 4
1 3
2 4
Output data
22
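Reading the first line as N = 2 and K = 4, one possible trace of the sample (ties broken arbitrarily) is: the column sums start at {3, 7} and the row sums at {4, 6}, so the first minimum is 3; after incrementing that column, the minimum on the following iterations is 5, then 6, then 8, and 3 + 5 + 6 + 8 = 22.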
I was able to write code (in Java) and tested it against some sample test cases; the output was fine. The code works for sample matrices of lower order, say 2x2, 4x4, even up to 44x40 (which needs fewer iterations). However, when the matrix size is increased to 100x100 (with more complex iteration counts), the expected output values differ from my actual output in the tens and hundreds digits, seemingly at random. Since I am not able to find a pattern of output vs. input, it is taking a toll on me to debug the 500th loop iteration to identify the issue. Is there a better approach to solving this kind of problem involving large matrix manipulation? Has anyone come across similar issues and solved them?
I am mainly interested in knowing the correct approach to solve the given matrix problem, and what data structure to use in Java. At present, I am using primitive data structures and arrays (int[] or long[]) to solve it. I appreciate any help in this regard.
Which data structure?
What you need here is a data structure which allows you to efficiently query and update the minimum-sum line. The one most commonly used for this is a heap: https://en.wikipedia.org/wiki/Heap_(data_structure)
For your purposes it's probably best to just implement the simplest kind, an array-based binary heap.
See here: https://en.wikipedia.org/wiki/Binary_heap
And here: http://courses.cs.washington.edu/courses/cse373/11wi/homework/5/BinaryHeap.java
...for implementation details.
Procedure:
Initialize your heap to size M + N, where M and N are the number of rows and columns.
Before the loop, pre-compute the sum of each row and column, and add them as objects to the heap. Also keep two arrays A and B which store the row and column objects separately.
Now heapify the heap array with respect to the line-sum attribute. This ensures the heap follows the criterion of the binary heap structure (each parent no larger than its children, since we want the minimum at the root). Read the sources above to find out how to implement this (quite easy for a fixed array).
For each iteration, look at the first element in the heap array. This is always the one with the smallest line sum; record that sum. If it is a row object, then increment its sum attribute by N (the number of columns), and increment each object in B (the list of columns) by 1. Do the symmetric thing if it's a column: increment its sum by M and each object in A by 1.
After this, always heapify again before the next iteration.
At the end, return the accumulated total of the sums you recorded at each iteration. A rough sketch follows below.
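Here is my own illustration of this procedure (a sketch only, assuming a square N x N matrix so M == N, and simply rebuilding the heap with a standard sift-down heapify at each iteration; the Line and HeapLines names are made up):
class Line {
    long sum;
    boolean isRow;
    Line(long sum, boolean isRow) { this.sum = sum; this.isRow = isRow; }
}

public class HeapLines {

    // Standard sift-down for a min-heap ordered by Line.sum.
    static void siftDown(Line[] heap, int i, int size) {
        while (true) {
            int l = 2 * i + 1, r = 2 * i + 2, smallest = i;
            if (l < size && heap[l].sum < heap[smallest].sum) smallest = l;
            if (r < size && heap[r].sum < heap[smallest].sum) smallest = r;
            if (smallest == i) return;
            Line tmp = heap[i]; heap[i] = heap[smallest]; heap[smallest] = tmp;
            i = smallest;
        }
    }

    static void heapify(Line[] heap, int size) {
        for (int i = size / 2 - 1; i >= 0; i--) siftDown(heap, i, size);
    }

    static long solve(int[][] m, int K) {
        int n = m.length;
        Line[] rows = new Line[n], cols = new Line[n];
        for (int i = 0; i < n; i++) { rows[i] = new Line(0, true); cols[i] = new Line(0, false); }
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++) { rows[i].sum += m[i][j]; cols[j].sum += m[i][j]; }

        Line[] heap = new Line[2 * n];
        System.arraycopy(rows, 0, heap, 0, n);
        System.arraycopy(cols, 0, heap, n, n);

        long answer = 0;
        for (int step = 0; step < K; step++) {
            heapify(heap, heap.length);          // restore the min-heap property
            Line min = heap[0];                  // line with the smallest sum
            answer += min.sum;                   // record it (step 2 of the problem)
            min.sum += n;                        // each of its n cells grows by 1
            for (Line other : (min.isRow ? cols : rows))
                other.sum += 1;                  // every crossing line gains 1
        }
        return answer;
    }

    public static void main(String[] args) {
        int[][] m = { {1, 3}, {2, 4} };
        System.out.println(solve(m, 4));         // prints 22 for the sample data
    }
}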
Time complexity:
The original naive solution (looping through all columns and rows every time) is O(K * M * N).
Using a heap, the update plus re-heapify at each step is O(M + N) (for a binary heap over the M + N line sums).
This means the total complexity is roughly O(M * N + K * max(M, N)), FAR smaller. The max(M, N) term is to compensate for the fact that at each iteration it may be either the rows or the columns which are incremented.
As a side note, there are other heap structures with even better time complexity than the binary heap, e.g. binomial heaps, Fibonacci heaps, etc. These, however, are far more complicated and have higher constant-factor overheads as a result, so for your project I feel they are not necessary; many of them need phenomenal data-set sizes to justify the constant-factor overhead.
Besides, they all support the same external operations as the binary heap, as defined by the abstract heap data type.
(heapify is an internal operation specific to the binary heap structure. Quite a few of the others are theoretically superior because they do this operation implicitly and "lazily".)
O(KN + N*N) Solution:
You can work with just the sums of the columns and rows, without storing or manipulating the matrix itself after the first pass.
First sum all the columns and rows into a 2*N array: the first row holds the column sums (a[0][0] is the sum of the first column, a[0][1] the sum of the second column, ...), and the second row holds the row sums (a[1][0] is the sum of the first row, etc.).
Then do the following for each iteration:
Find the min in array a.
Add it to the answer.
Add N to the row or column sum that was selected as the min.
If the min is a row, add one to all the column sums; if it is a column, add one to all the row sums.
If any further explanation is needed, don't hesitate to comment. A minimal sketch of this approach follows below.
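This is my own minimal sketch of the sums-only idea (assuming a square N x N matrix and using long sums to avoid overflow; minLineSum is a made-up name):
static long minLineSum(int[][] m, int K) {
    int n = m.length;
    long[] rowSum = new long[n], colSum = new long[n];
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++) {
            rowSum[i] += m[i][j];
            colSum[j] += m[i][j];
        }

    long answer = 0;
    for (int step = 0; step < K; step++) {
        // Find the global minimum among all row and column sums.
        long best = Long.MAX_VALUE;
        int bestIdx = 0;
        boolean bestIsRow = true;
        for (int i = 0; i < n; i++) {
            if (rowSum[i] < best) { best = rowSum[i]; bestIdx = i; bestIsRow = true; }
            if (colSum[i] < best) { best = colSum[i]; bestIdx = i; bestIsRow = false; }
        }
        answer += best;
        if (bestIsRow) {
            rowSum[bestIdx] += n;                        // its own n cells each grow by 1
            for (int j = 0; j < n; j++) colSum[j] += 1;  // every column crosses it once
        } else {
            colSum[bestIdx] += n;
            for (int i = 0; i < n; i++) rowSum[i] += 1;
        }
    }
    return answer;   // e.g. 22 for the 2x2 sample with K = 4
}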
This is what I am currently doing to solve the above problem...
void matrixManipulation() throws IOException {
    int N = Reader.nextInt();
    int[][] matrix = new int[N][N];
    int K = Reader.nextInt();
    for (int i = 0; i < N; i++) {
        for (int j = 0; j < N; j++) {
            matrix[i][j] = Reader.nextInt();
        }
    }
    // System.out.println("********Inital position**********");
    // for (int i = 0; i < N; i++) {
    //     for (int j = 0; j < N; j++) {
    //         System.out.print(matrix[i][j]);
    //     }
    //     System.out.println();
    // }
    // System.out.println("********Inital position**********");
    CalculateSum calculateSum = new CalculateSum();
    int[] row = new int[N];
    int[] row_clone = new int[N];
    int[] col = new int[N];
    int[] col_clone = new int[N];
    int test = 0;
    for (int kk = 0; kk < K; kk++) {
        row = calculateSum.calculateRowSum(matrix, N);
        row_clone = row.clone();
        /* just sort it, either Arrays.sort or any other ---starts here */
        // for (int i = 1; i < row.length; i++) {
        //     row_orignial[i] = row[i];
        // }
        // Arrays.sort(row);
        Node root1 = insert(null, row[0], 0, row.length);
        for (int i = 1; i < row.length; i++) {
            insert(root1, row[i], 0, row.length);
        }
        sortArrayInOrderTrvsl(root1, row, 0);
        /* just sort it, either Arrays.sort or any other ---ends here */
        col = calculateSum.calculateColumnSum(matrix, N);
        col_clone = col.clone();
        /* just sort it, either Arrays.sort or any other ---starts here */
        // for (int i = 1; i < col.length; i++) {
        //     col_orignial[i] = col[i];
        // }
        // Arrays.sort(col);
        Node root2 = insert(null, col[0], 0, col.length);
        for (int i = 1; i < row.length; i++) {
            insert(root2, col[i], 0, col.length);
        }
        sortArrayInOrderTrvsl(root2, col, 0);
        /* just sort it, either Arrays.sort or any other ---ends here */
        int pick = 0;
        boolean rowflag = false;
        int rowNumber = 0;
        int colNumber = 0;
        if (row[0] < col[0]) {
            pick = row[0]; // value
            rowflag = true;
            for (int i = 0; i < N; i++) {
                if (pick == row_clone[i])
                    rowNumber = i;
            }
        } else if (row[0] > col[0]) {
            pick = col[0]; // value
            rowflag = false;
            for (int i = 0; i < N; i++) {
                if (pick == col_clone[i])
                    colNumber = i;
            }
        } else if (row[0] == col[0]) {
            pick = col[0];
            rowflag = false;
            for (int i = 0; i < N; i++) {
                if (pick == col_clone[i])
                    colNumber = i;
            }
        }
        test = test + pick;
        if (rowflag) {
            matrix = rowUpdate(matrix, N, rowNumber);
        } else {
            matrix = columnUpdate(matrix, N, colNumber);
        }
        System.out.println(test);
        // System.out.println("********Update Count" + kk + " position**********");
        // for (int i = 0; i < N; i++) {
        //     for (int j = 0; j < N; j++) {
        //         System.out.print(matrix[i][j]);
        //     }
        //     System.out.println();
        // }
        // System.out.println("********Update Count" + kk + " position**********");
    }
    // System.out.println("********Final position**********");
    // for (int i = 0; i < N; i++) {
    //     for (int j = 0; j < N; j++) {
    //         System.out.print(matrix[i][j]);
    //     }
    //     System.out.println();
    // }
    // System.out.println("********Final position**********");
    // System.out.println(test);
}

How can I avoid duplicates when generating a random list of pairs of numbers?

I have the following code to set 10 random values to true in a boolean[][]:
bommaker = new boolean[10][10];
int a = 0;
int b = 0;
for (int i = 0; i <= 9; i++) {
    a = randomizer.nextInt(9);
    b = randomizer.nextInt(9);
    bommaker[a][b] = true;
}
However, with this code it is possible that the same pair is generated more than once, which leaves fewer than 10 values set to true. I need to build in a check for whether a value is already taken, and if it is, redo the randomizing. But I have no idea how to do that. Can someone help me?
simplest solution, not the best:
for (int i = 0; i <= 9; i++) {
    do {
        a = randomizer.nextInt(10);
        b = randomizer.nextInt(10);
    } while (bommaker[a][b]);
    bommaker[a][b] = true;
}
Your problem is similar to drawing cards at random from a deck, if I'm not mistaken...
But first, the following:
randomizer.nextInt(9)
will not do what you want, because it returns an integer in the range [0..8] inclusive (instead of [0..9]).
Here's Jeff's take on the subject of shuffling:
http://www.codinghorror.com/blog/2007/12/shuffling.html
To pick x spots at random, you could shuffle your 100 spots and keep the first x (here, the first 10).
Now of course, given that only 10% of all the spots will be taken, simply retrying when a spot is already taken is going to work too, in reasonable time.
But if you were to pick, say, 50 spots out of 100, then shuffling a list of [0..99] and keeping the first 50 values would be best.
For example, here's how you could code it in Java (if speed were an issue, you'd use an array of primitives and a shuffle over that primitive array):
List<Integer> l = new ArrayList<Integer>();
for (int i = 0; i < 100; i++) {
    l.add(i);
}
Collections.shuffle(l);
for (int i = 0; i < n; i++) {                // n = number of spots to pick, e.g. 10
    a[l.get(i) / 10][l.get(i) % 10] = true;  // a = your boolean[10][10] (bommaker)
}
