Memory Choke on Branch And Bound Knapsack Implementation - java

I wrote this implementation of the branch and bound knapsack algorithm based on the pseudo-Java code from here. Unfortunately, it's choking on memory for large instances of the problem, like this. Why is this? How can I make this implementation more memory efficient?
The input file at the link is formatted this way:
numberOfItems maxWeight
profitOfItem1 weightOfItem1
.
.
.
profitOfItemN weightOfItemN
// http://books.google.com/books?id=DAorddWEgl0C&pg=PA233&source=gbs_toc_r&cad=4#v=onepage&q&f=true
import java.util.Comparator;
import java.util.LinkedList;
import java.util.PriorityQueue;
class ItemComparator implements Comparator<Item> {
    public int compare(Item i1, Item i2) {
        if (i1.valueWeightQuotient < i2.valueWeightQuotient)
            return 1;
        if (i2.valueWeightQuotient < i1.valueWeightQuotient)
            return -1;
        // valueWeightQuotients are equal; break ties by weight
        if (i1.weight < i2.weight)
            return 1;
        if (i2.weight < i1.weight)
            return -1;
        return 0;
    }
}
class Node {
    int level;
    int profit;
    int weight;
    double bound;
}
class NodeComparator implements Comparator<Node> {
    public int compare(Node n1, Node n2) {
        if (n1.bound < n2.bound)
            return 1;
        if (n2.bound < n1.bound)
            return -1;
        return 0;
    }
}
class Solution {
    long weight;
    long value;
}
public class BranchAndBound {

    static Solution branchAndBound2(LinkedList<Item> items, double W) {
        double timeStart = System.currentTimeMillis();
        int n = items.size();
        int[] p = new int[n];
        int[] w = new int[n];
        for (int i = 0; i < n; i++) {
            p[i] = (int) items.get(i).value;
            w[i] = (int) items.get(i).weight;
        }
        Node u;
        Node v = new Node(); // tree root
        int maxProfit = 0;
        int usedWeight = 0;
        NodeComparator nc = new NodeComparator();
        PriorityQueue<Node> PQ = new PriorityQueue<Node>(n, nc);
        v.level = -1; // v initialized to level -1: dummy root
        v.profit = 0;
        v.weight = 0;
        v.bound = bound(v, W, n, w, p);
        PQ.add(v);
        while (!PQ.isEmpty()) {
            v = PQ.poll();
            if (v.bound > maxProfit) { // check if node is still promising
                u = new Node(); // child that includes the next item
                u.level = v.level + 1;
                u.weight = v.weight + w[u.level];
                u.profit = v.profit + p[u.level];
                if (u.weight <= W && u.profit > maxProfit) {
                    maxProfit = u.profit;
                    usedWeight = u.weight;
                }
                u.bound = bound(u, W, n, w, p);
                if (u.bound > maxProfit) {
                    PQ.add(u);
                }
                u = new Node(); // child that does not include the next item
                u.level = v.level + 1;
                u.weight = v.weight;
                u.profit = v.profit;
                u.bound = bound(u, W, n, w, p);
                if (u.bound > maxProfit) {
                    PQ.add(u);
                }
            }
        }
        Solution solution = new Solution();
        solution.value = maxProfit;
        solution.weight = usedWeight;
        double timeStop = System.currentTimeMillis();
        double elapsedTime = timeStop - timeStart;
        System.out.println("* Time spent in branch and bound (milliseconds): " + elapsedTime);
        return solution;
    }

    static double bound(Node u, double W, int n, int[] w, int[] p) {
        int j = 0;
        int k = 0;
        int totWeight = 0;
        double result = 0;
        if (u.weight >= W)
            return 0;
        result = u.profit;
        totWeight = u.weight;
        if (u.level < w.length) {
            j = u.level + 1;
        }
        int weightSum;
        while ((j < n) && ((weightSum = totWeight + w[j]) <= W)) {
            totWeight = weightSum; // grab as many whole items as possible
            result = result + p[j];
            j++;
        }
        k = j; // use k for consistency with the formula in the text
        if (k < n) {
            result = result + ((W - totWeight) * p[k] / w[k]); // grab a fraction of the excluded kth item
        }
        return result;
    }
}

I got a slightly speedier implementation by taking away all the generic Collection instances and using plain arrays instead.

Not sure whether you still need insight into the algorithm or whether your tweaks have solved your problem, but with a breadth-first branch and bound algorithm like the one you've implemented there's always going to be the potential for a memory usage problem. You're hoping, of course, that you'll be able to rule out a sufficient number of branches as you go along to keep the number of nodes in your priority queue relatively small, but in the worst case you could end up holding nearly as many nodes in memory as there are possible subsets of items for the knapsack. The worst case is, of course, highly unlikely, but for large problem instances even an average tree could end up populating your priority queue with millions of nodes.
If you're going to be throwing lots of unforeseen large problem instances at your code and need the peace of mind of knowing that no matter how many branches the algorithm has to consider you'll never run out of memory, I'd consider a depth-first branch and bound algorithm, like the Horowitz-Sahni algorithm outlined in section 2.5.1 of this book: http://www.or.deis.unibo.it/knapsack.html. For some problem instances this approach will be less efficient in terms of the number of possible solutions that have to be considered before the optimal one is found, but then again for some problem instances it will be more efficient; it really depends on the structure of the tree.
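To make the difference concrete, here is a minimal depth-first sketch that reuses the same greedy fractional bound as the code above. This is an illustration, not the exact Horowitz-Sahni algorithm: it assumes p[] and w[] are already sorted by decreasing profit/weight ratio, and the class and method names are invented for the example. Memory use stays at O(n) for the recursion stack no matter how many branches get explored.

public class DepthFirstKnapsack {
    static int maxProfit = 0;

    static void search(int level, int profit, int weight, int n, int[] p, int[] w, int W) {
        if (weight > W) return;                     // infeasible branch
        if (profit > maxProfit) maxProfit = profit; // best feasible selection so far
        if (level == n) return;                     // all items decided
        if (bound(level, profit, weight, n, p, w, W) <= maxProfit) return; // prune
        search(level + 1, profit + p[level], weight + w[level], n, p, w, W); // include item
        search(level + 1, profit, weight, n, p, w, W);                       // exclude item
    }

    // The same greedy fractional upper bound as the breadth-first version.
    static double bound(int level, int profit, int weight, int n, int[] p, int[] w, int W) {
        double result = profit;
        int totWeight = weight;
        int j = level;
        while (j < n && totWeight + w[j] <= W) {
            totWeight += w[j];
            result += p[j];
            j++;
        }
        if (j < n)
            result += (double) (W - totWeight) * p[j] / w[j];
        return result;
    }

    public static void main(String[] args) {
        int[] p = {40, 30, 50, 10}; // textbook sample, presorted by p[i]/w[i]
        int[] w = {2, 5, 10, 5};
        search(0, 0, 0, p.length, p, w, 16);
        System.out.println(maxProfit); // prints 90
    }
}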

Related

Implementing Union-Find Algorithm for Kruskal's Algorithm to find Minimum Spanning Tree in Java

I am trying to solve the following Leetcode problem (https://leetcode.com/problems/connecting-cities-with-minimum-cost), and my approach is to figure out the total weight of the minimum spanning tree (MST) of the input graph using Kruskal's Algorithm with the Union-Find data structure. However, my code passes only 51/63 of the test cases, returning an incorrect result on the following test case, which is hard to debug by hand since the input graph is so large.
50
[[2,1,22135],[3,1,13746],[4,3,37060],[5,2,48513],[6,3,49607],[7,1,97197],[8,2,95909],[9,2,82668],[10,2,48372],[11,4,17775],[12,2,6017],[13,1,51409],[14,2,12884],[15,7,98902],[16,14,52361],[17,8,11588],[18,12,86814],[19,17,49581],[20,4,41808],[21,11,77039],[22,10,80279],[23,16,61659],[24,12,89390],[25,24,10042],[26,12,78278],[27,15,30756],[28,6,2883],[29,8,3478],[30,7,29321],[31,12,47542],[32,20,35806],[33,3,26531],[34,12,16321],[35,27,82484],[36,7,55920],[37,24,21253],[38,23,90537],[39,7,83795],[40,36,70353],[41,34,76983],[42,14,63416],[43,15,39590],[44,9,86794],[45,3,31968],[46,19,32695],[47,17,40287],[48,1,27993],[49,12,86349],[50,11,52080],[17,27,65829],[42,45,87517],[14,23,96130],[5,50,3601],[10,17,2017],[26,44,4118],[26,29,93146],[1,9,56934],[22,43,5984],[3,22,13404],[13,28,66475],[11,14,93296],[16,44,71637],[7,37,88398],[7,29,56056],[2,34,79170],[40,44,55496],[35,46,14494],[32,34,25143],[28,36,59961],[10,49,58317],[8,38,33783],[8,28,19762],[34,41,69590],[27,37,26831],[15,23,53060],[5,11,7570],[20,42,98814],[18,34,96014],[13,43,94702],[1,46,18873],[44,45,43666],[22,40,69729],[4,25,28548],[8,46,19305],[15,22,39749],[33,48,43826],[14,15,38867],[13,22,56073],[3,46,51377],[13,15,73530],[6,36,67511],[27,38,76774],[6,21,21673],[28,49,72219],[40,50,9568],[31,37,66173],[14,29,93641],[4,40,87301],[18,46,41318],[2,8,25717],[1,7,3006],[9,22,85003],[14,45,33961],[18,28,56248],[1,31,10007],[3,24,23971],[6,28,24448],[35,39,87474],[10,50,3371],[7,18,26351],[19,41,86238],[3,8,73207],[11,34,75438],[3,47,35394],[27,32,69991],[6,40,87955],[2,18,85693],[5,37,50456],[8,20,59182],[16,38,58363],[9,39,58494],[39,43,73017],[10,15,88526],[16,23,48361],[4,28,59995],[2,3,66426],[6,17,29387],[15,38,80738],[12,43,63014],[9,11,90635],[12,20,36051],[13,25,1515],[32,40,72665],[10,40,85644],[13,40,70642],[12,24,88771],[14,46,79583],[30,49,45432],[21,34,95097],[25,48,96934],[2,35,79611],[9,26,71147],[11,37,57109],[35,36,67266],[42,43,15913],[3,30,44704],[4,32,46266],[5,10,94508],[31,39,45742],[12,25,56618],[10,45,79396],[15,28,78005],[19,32,94010],[36,46,4417],[6,35,7762],[10,13,12161],[49,50,60013],[20,23,6891],[9,50,63893],[35,43,74832],[10,24,3562],[6,8,47831],[29,32,82689],[7,47,71961],[14,41,82402],[20,33,38732],[16,26,24131],[17,34,96267],[21,46,81067],[19,47,41426],[13,24,68768],[1,25,78243],[2,27,77645],[11,25,96335],[31,45,30726],[43,44,34801],[3,42,22953],[12,23,34898],[37,43,32324],[18,44,18539],[8,13,59737],[28,37,67994],[13,14,25013],[22,41,25671],[1,6,57657],[8,11,83932],[42,48,24122],[4,15,851],[9,29,70508],[7,32,53629],[3,4,34945],[2,32,64478],[7,30,75022],[14,19,55721],[20,22,84838],[22,25,6103],[8,49,11497],[11,32,22278],[35,44,56616],[12,49,18681],[18,43,56358],[24,43,13360],[24,47,59846],[28,43,36311],[17,25,63309],[1,14,30207],[39,48,22241],[13,26,94146],[4,33,62994],[40,48,32450],[8,19,8063],[20,29,56772],[10,27,21224],[24,30,40328],[44,46,48426],[22,45,39752],[6,43,96892],[2,30,73566],[26,36,43360],[34,36,51956],[18,20,5710],[7,22,72496],[3,39,9207],[15,30,39474],[11,35,82661],[12,50,84860],[14,26,25992],[16,39,33166],[25,41,11721],[19,40,68623],[27,28,98119],[19,43,3644],[8,16,84611],[33,42,52972],[29,36,60307],[9,36,44224],[9,48,89857],[25,26,21705],[29,33,12562],[5,34,32209],[9,16,26285],[22,37,80956],[18,35,51968],[37,49,36399],[18,42,37774],[1,30,24687],[23,43,55470],[6,47,69677],[21,39,6826],[15,24,38561]]
I'm having trouble understanding why my code fails this test case, since I believe I am implementing the steps of Kruskal's Algorithm properly:
Sorting the connections in increasing order of weight.
Building the MST by going through each connection in the sorted list and selecting that connection if it does not result in a cycle in the MST.
Below is my Java code:
class UnionFind {
    // parents[i] = parent node of node i.
    // If a node is the root node of a component, we define its parent
    // to be itself.
    int[] parents;

    public UnionFind(int n) {
        this.parents = new int[n];
        for (int i = 0; i < n; i++) {
            this.parents[i] = i;
        }
    }

    // Merges two nodes into the same component.
    public void union(int node1, int node2) {
        int node1Component = find(node1);
        int node2Component = find(node2);
        this.parents[node1Component] = node2Component;
    }

    // Returns the component that a node is in.
    public int find(int node) {
        while (this.parents[node] != node) {
            node = this.parents[node];
        }
        return node;
    }
}
class Solution {
    public int minimumCost(int n, int[][] connections) {
        UnionFind uf = new UnionFind(n + 1);
        // Sort edges by increasing cost.
        Arrays.sort(connections, new Comparator<int[]>() {
            @Override
            public int compare(final int[] a1, final int[] a2) {
                return a1[2] - a2[2];
            }
        });
        int edgeCount = 0;
        int connectionIndex = 0;
        int weight = 0;
        // Greedy algorithm: choose the edge with the smallest weight
        // which does not form a cycle. We know that an edge between
        // two nodes will result in a cycle if those nodes are already
        // in the same component.
        for (int i = 0; i < connections.length; i++) {
            int[] connection = connections[i];
            int nodeAComponent = uf.find(connection[0]);
            int nodeBComponent = uf.find(connection[1]);
            if (nodeAComponent != nodeBComponent) {
                weight += connection[2];
                edgeCount++;
            }
            if (edgeCount == n - 1) {
                break;
            }
        }
        // An MST, by definition, must have (n - 1) edges.
        if (edgeCount == n - 1) {
            return weight;
        }
        return -1;
    }
}
As @geobreze stated, I forgot to unite the components (disjoint sets) of node A and node B. Below is the corrected code:
if (nodeAComponent != nodeBComponent) {
    uf.union(nodeAComponent, nodeBComponent);
    weight += connection[2];
    edgeCount++;
}
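As a side note, not part of the fix: a plain find like the one above can degenerate into a long chain walk as components merge. An optional improvement is path compression; here is a sketch of the question's find method with path halving added:

// Optional: find with path halving, so repeated finds flatten the tree.
public int find(int node) {
    while (this.parents[node] != node) {
        // Point each visited node at its grandparent, halving the path.
        this.parents[node] = this.parents[this.parents[node]];
        node = this.parents[node];
    }
    return node;
}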

Find the max path from root to leaf of an n-ary tree without including values of two adjacent nodes in the sum

I recently got interviewed and was asked the following question.
Given an n-ary tree, find the maximum path from root to leaf such that the path does not contain values from any two adjacent nodes.
(Another edit: The nodes would only have positive values.)
(Edit from comments: Adjacent nodes are nodes that share a direct edge. Because it's a tree, that means parent and child. So if I include the parent, I cannot include the child, and vice versa.)
For example:
     5
    / \
   8   10
  / \  / \
 1   3 7  9
In the above example, the maximum path without two adjacent nodes would be 14, along the path 5->10->9. I include 5 and 9 in the final sum but not 10, because that would violate the no-two-adjacent-nodes condition.
I suggested the following algorithm. While I was fairly sure about it, my interviewer did not seem confident about it. Hence, I wanted to double check if my algorithm was correct or not. It seemed to work on various test cases I could think of:
For each node X, let F(X) be the maximum sum from root to X without two adjacent values in the maximum sum.
The formula for calculating F(X) = Max(F(parent(X)), val(X) + F(grandParent(X)));
Solution would have been
Solution = Max(F(Leaf Nodes))
This was roughly the code I came up with:
import java.util.List;

class Node
{
    int coins;
    List<Node> edges;

    public Node(int coins, List<Node> edges)
    {
        this.coins = coins;
        this.edges = edges;
    }
}
class Tree
{
    int maxPath = Integer.MIN_VALUE;

    private boolean isLeafNode(Node node)
    {
        int size = node.edges.size();
        for (int i = 0; i < size; i++)
        {
            if (node.edges.get(i) != null)
                return false;
        }
        return true;
    }

    // previous[0] = max value obtained from parent
    // previous[1] = max value obtained from grandparent
    private void helper(Node node, int[] previous)
    {
        int max = Math.max(previous[0], node.coins + previous[1]);
        // leaf node
        if (isLeafNode(node))
        {
            maxPath = Math.max(maxPath, max);
            return;
        }
        int[] temp = new int[2];
        temp[0] = max;
        temp[1] = previous[0];
        for (int i = 0; i < node.edges.size(); i++)
        {
            if (node.edges.get(i) != null)
            {
                helper(node.edges.get(i), temp);
            }
        }
    }

    public int findMax(Node node)
    {
        int[] prev = new int[2];
        prev[0] = 0;
        prev[1] = 0;
        if (node == null) return 0;
        helper(node, prev);
        return maxPath;
    }
}
Edit: Forgot to mention that my primary purpose in asking this question is to know if my algorithm was correct rather than ask for a new algorithm.
Edit: I have a reason to believe that my algorithm should also have worked.
I was scouring the internet for similar questions and came across this question:
https://leetcode.com/problems/house-robber/?tab=Description
It is pretty similar to the problem above, except that it is now an array instead of a tree.
The formula F(X) = Max(F(X-1), a[x] + F(X-2)) works in this case.
Here is my accepted code:
public class Solution {
    public int rob(int[] nums) {
        int[] dp = new int[nums.length];
        if (nums.length < 1) return 0;
        dp[0] = nums[0];
        if (nums.length < 2) return nums[0];
        dp[1] = Math.max(nums[0], nums[1]);
        for (int i = 2; i < nums.length; i++)
        {
            dp[i] = Math.max(dp[i - 1], dp[i - 2] + nums[i]);
        }
        return dp[nums.length - 1];
    }
}
The natural solution would be to compute for each node X two values: max path from X to leaf including X and max path from X to leaf, excluding X, let's call them MaxPath(X) and MaxExcluded(X).
For leaf L MaxPath(L) is Value(L) and MaxExcluded(L) is 0.
For internal node X:
MaxPath(X) = Value(X) + Max over child Y of: MaxExcluded(Y)
MaxExcluded(X) = Max over child Y of : Max(MaxExcluded(Y), MaxPath(Y))
The first line means that if you include X, you have to exclude its children. The second means that if you exclude X, you are free to either include or exclude its children.
It's a simple recursive function on nodes which can be computed going leaves-to-parents in O(size of the tree).
Edit: The recursive relation does also work top-down, and in this case you can indeed eliminate storing two values by the observation that MaxExcluded(Y) is actually MaxPath(Parent(Y)), which gives the solution given in the question.
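For concreteness, here is a bottom-up sketch of the two-value recursion described above, reusing the Node class from the question (coins, edges); the helper name is invented. The answer for the whole tree is the larger of the two values returned for the root.

// Returns {MaxPath(x), MaxExcluded(x)} for the subtree rooted at x.
static int[] maxPair(Node x) {
    int bestChildPath = 0;      // max over children y of MaxPath(y)
    int bestChildExcluded = 0;  // max over children y of MaxExcluded(y)
    for (Node y : x.edges) {
        if (y == null) continue;
        int[] pair = maxPair(y);
        bestChildPath = Math.max(bestChildPath, pair[0]);
        bestChildExcluded = Math.max(bestChildExcluded, pair[1]);
    }
    int maxPath = x.coins + bestChildExcluded;                    // include x, so exclude children
    int maxExcluded = Math.max(bestChildExcluded, bestChildPath); // exclude x, children are free
    return new int[] { maxPath, maxExcluded };
}

On the example tree this returns {14, 10} at the root, and Math.max of the two gives the expected 14.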
Implementation of what @Rafał Dowgird explained.
/*           5
 *         8   10
 *       1  3  7  9
 *      4 5 6 11 13 14 3 4
 */
public class app1 {
    public static void main(String[] args) {
        Node root = new Node(5);
        root.left = new Node(8);
        root.right = new Node(10);
        root.left.left = new Node(1);
        root.left.right = new Node(3);
        root.right.left = new Node(7);
        root.right.right = new Node(9);
        root.left.left.left = new Node(4);
        root.left.left.right = new Node(5);
        root.left.right.left = new Node(6);
        root.left.right.right = new Node(11);
        root.right.left.left = new Node(13);
        root.right.left.right = new Node(14);
        root.right.right.right = new Node(4);
        System.out.println(findMaxPath(root));
    }

    private static int findMaxPath(Node root) {
        if (root == null) return 0;
        int maxInclude = root.data + findMaxPathExcluded(root);
        int maxExcludeLeft = Math.max(findMaxPath(root.left), findMaxPathExcluded(root.left));
        int maxExcludeRight = Math.max(findMaxPath(root.right), findMaxPathExcluded(root.right));
        return Math.max(maxInclude, Math.max(maxExcludeLeft, maxExcludeRight));
    }

    private static int findMaxPathExcluded(Node root) {
        if (root == null) return 0;
        int left1 = root.left != null ? findMaxPath(root.left.left) : 0;
        int right1 = root.left != null ? findMaxPath(root.left.right) : 0;
        int left2 = root.right != null ? findMaxPath(root.right.left) : 0;
        int right2 = root.right != null ? findMaxPath(root.right.right) : 0;
        return Math.max(left1, Math.max(right1, Math.max(left2, right2)));
    }
}
class Node {
    int data;
    Node left;
    Node right;

    Node(int data) {
        this.data = data;
    }
}

Tukey's ninther for different shufflings of the same data

While implementing improvements to quicksort partitioning, I tried to use Tukey's ninther to find the pivot (borrowing almost everything from Sedgewick's implementation in QuickX.java).
My code below gives different results each time the array of integers is shuffled.
import java.util.Random;

public class TukeysNintherDemo {

    public static int tukeysNinther(Comparable[] a, int lo, int hi) {
        int N = hi - lo + 1;
        int mid = lo + N / 2;
        int delta = N / 8;
        int m1 = median3a(a, lo, lo + delta, lo + 2 * delta);
        int m2 = median3a(a, mid - delta, mid, mid + delta);
        int m3 = median3a(a, hi - 2 * delta, hi - delta, hi);
        int tn = median3a(a, m1, m2, m3);
        return tn;
    }

    // return the index of the median element among a[i], a[j], and a[k]
    private static int median3a(Comparable[] a, int i, int j, int k) {
        return (less(a[i], a[j]) ?
                (less(a[j], a[k]) ? j : less(a[i], a[k]) ? k : i) :
                (less(a[k], a[j]) ? j : less(a[k], a[i]) ? k : i));
    }

    private static boolean less(Comparable x, Comparable y) {
        return x.compareTo(y) < 0;
    }

    public static void shuffle(Object[] a) {
        Random random = new Random(System.currentTimeMillis());
        int N = a.length;
        for (int i = 0; i < N; i++) {
            int r = i + random.nextInt(N - i); // between i and N-1
            Object temp = a[i];
            a[i] = a[r];
            a[r] = temp;
        }
    }

    public static void show(Comparable[] a) {
        int N = a.length;
        if (N > 20) {
            System.out.format("a[0]= %d\n", a[0]);
            System.out.format("a[%d]= %d\n", N - 1, a[N - 1]);
        } else {
            for (int i = 0; i < N; i++) {
                System.out.print(a[i] + ",");
            }
        }
        System.out.println();
    }

    public static void main(String[] args) {
        Integer[] a = new Integer[]{17, 15, 14, 13, 19, 12, 11, 16, 18};
        shuffle(a); // shuffle the data, as in the sample runs below
        System.out.print("data= ");
        show(a);
        int tn = tukeysNinther(a, 0, a.length - 1);
        System.out.println("ninther=" + a[tn]);
    }
}
Running this a couple of times gives:
data= 11,14,12,16,18,19,17,15,13,
ninther=15
data= 14,13,17,16,18,19,11,15,12,
ninther=14
data= 16,17,12,19,18,13,14,11,15,
ninther=16
Will Tukey's ninther give different values for different shufflings of the same dataset? When I tried to find the median of medians by hand, I found that the calculations in the code are correct, which means that the same dataset yields different results, unlike the true median of the dataset. Is this the proper behaviour? Can someone with more knowledge of statistics comment?
Tukey's ninther examines 9 items and calculates the median using only those.
For different random shuffles, you may very well get a different Tukey's ninther, because different items may be examined. After all, you always examine the same array slots, but a different shuffle may have put different items in those slots.
The key here is that Tukey's ninther is not the median of the given array. It is an attempted approximation of the median, made with very little effort: we only have to read 9 items and make at most 12 comparisons to get it. This is much faster than computing the actual median, and it has a smaller chance of producing an undesirable pivot than the 'median of three'. Note that the chance still exists.
Does this answer your question?
On a side note, does anybody know if quicksort using Tukey's ninther still requires shuffling? I'm assuming yes, but I'm not certain.

Hash by Chaining VS Double Probing

I'm trying to compare chaining and double probing.
I need to insert 40 integers into a table of size 100.
When I measure the time with nanoTime (in Java), I find that double probing is faster.
I think that's because the insert method for chaining creates a new LinkedListEntry every time, and that adds time.
So how can it be that chaining is faster than double probing? (That's what I read on Wikipedia.)
Thanks!!
This is the chaining code:
public class LastChain
{
    int tableSize;
    Node[] st;

    LastChain(int size) {
        tableSize = size;
        st = new Node[tableSize];
        for (int i = 0; i < tableSize; i++)
            st[i] = null;
    }

    private class Node
    {
        int key;
        Node next;

        Node(int key, Node next)
        {
            this.key = key;
            this.next = next;
        }
    }

    public void put(Integer key)
    {
        int i = hash(key);
        Node first = st[i];
        for (Node x = st[i]; x != null; x = x.next)
            if (key.equals(x.key))
            {
                return;
            }
        st[i] = new Node(key, first);
    }

    private int hash(int key)
    {
        return key % tableSize;
    }
}
And this is the relevant code for double probing:
public class HashDouble1 {
    private Integer[] hashArray;
    private int arraySize;
    private Integer bufItem; // for deleted items

    HashDouble1(int size) {
        arraySize = size;
        hashArray = new Integer[arraySize];
        bufItem = new Integer(-1);
    }

    public int hashFunc1(int key) {
        return key % arraySize;
    }

    public int hashFunc2(int key) {
        return 7 - key % 7;
    }

    public void insert(Integer key) {
        int hashVal = hashFunc1(key); // hash the key
        int stepSize = hashFunc2(key); // get step size
        // until empty cell or -1
        while (hashArray[hashVal] != null && hashArray[hashVal] != -1) {
            hashVal += stepSize; // add the step
            hashVal %= arraySize; // for wraparound
        }
        hashArray[hashVal] = key; // insert item
    }
}
Measured this way, insertion with double probing comes out faster than chaining. How can I fix this?
Chaining works best with high load factors. Try using 90 strings (not a well-placed selection of integers) in a table of 100.
Also, chaining is much easier to implement removal/deletion for.
Note: in HashMap, an Entry object is created whether it is chained or not, so there is no saving there.
Java has the special "feature" that objects take up a lot of memory.
Thus, for large datasets (where this will have any relevance) double probing will be good.
But as a very first thing, please change your Integer[] into int[]: the memory usage will be about a quarter of what it was, and the performance will jump nicely.
But as always with performance questions: measure, measure, measure, as your case will always be special.
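To illustrate that change, here is a minimal sketch of the double-probing insert over a primitive int[]. It assumes non-negative keys so that -1 can serve as the empty marker, and it leaves out deletions for simplicity; the class name is invented.

// Sketch: double probing over int[] instead of Integer[].
public class HashDoubleInt {
    private static final int EMPTY = -1; // assumes keys are non-negative
    private final int[] hashArray;
    private final int arraySize;

    HashDoubleInt(int size) {
        arraySize = size;
        hashArray = new int[arraySize];
        java.util.Arrays.fill(hashArray, EMPTY);
    }

    public void insert(int key) {
        int hashVal = key % arraySize; // first hash: home slot
        int stepSize = 7 - key % 7;    // second hash: step size in [1, 7]
        while (hashArray[hashVal] != EMPTY) {
            hashVal = (hashVal + stepSize) % arraySize; // probe with wraparound
        }
        hashArray[hashVal] = key;
    }
}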

Implementing branch and bound for knapsack

I'm having a headache implementing this (awful) pseudo-Java code (I wonder: why the hell do people write it like that?) for the branch and bound knapsack problem. This is my implementation so far, which outputs a maximum of 80 (when it should print 90 for the items in the textbook sample). I created a Comparator (on a LinkedList) to sort the elements by Pi/Wi before passing them to the algorithm, but this input is already presorted. I'm debugging right now (and updating the posted code), because I guess it's an array indexing problem... or is there a mistake in the bounding function?
input:
4 16 //# items maxWeight
40 2 // profit weight
30 5
50 10
10 5
class Node
{
    int level;
    int profit;
    int weight;
    double bound;
}
public class BranchAndBound {

    static int branchAndBound(LinkedList<Item> items, int W) {
        int n = items.size();
        int[] p = new int[n];
        int[] w = new int[n];
        for (int i = 0; i < n; i++) {
            p[i] = (int) items.get(i).value;
            w[i] = (int) items.get(i).weight;
        }
        Node u = new Node();
        Node v = new Node(); // tree root
        int maxProfit = 0;
        LinkedList<Node> Q = new LinkedList<Node>();
        v.level = -1;
        v.profit = 0;
        v.weight = 0; // v initialized to -1, dummy root
        Q.offer(v); // place the dummy at the root
        while (!Q.isEmpty()) {
            v = Q.poll();
            if (v.level == -1) {
                u.level = 0;
            } else if (v.level != (n - 1)) {
                u.level = v.level + 1; // set u to be a child of v
            }
            u = new Node();
            u.weight = v.weight + w[u.level]; // set u to the child that
            u.profit = v.profit + p[u.level]; // includes the next item
            double bound = bound(u, W, n, w, p);
            u.bound = bound;
            if (u.weight <= W && u.profit > maxProfit) {
                maxProfit = u.profit;
            }
            if (bound > maxProfit) {
                Q.add(u);
            }
            u = new Node();
            u.weight = v.weight; // set u to the child that does
            u.profit = v.profit; // NOT include the next item
            bound = bound(u, W, n, w, p);
            u.bound = bound;
            if (bound > maxProfit) {
                Q.add(u);
            }
        }
        return maxProfit;
    }

    public static float bound(Node u, int W, int n, int[] w, int[] p) {
        int j = 0;
        int k = 0;
        int totWeight = 0;
        float result = 0;
        if (u.weight >= W)
            return 0;
        result = u.profit;
        j = u.level + 1;
        totWeight = u.weight;
        while ((j < n) && (totWeight + w[j] <= W)) {
            totWeight = totWeight + w[j]; // grab as many items as possible
            result = result + p[j];
            j++;
        }
        k = j; // use k for consistency with the formula in the text
        if (k < n)
            result = result + (W - totWeight) * p[k] / w[k]; // grab a fraction of the kth item
        return result;
    }
}
I have only tested it with the given example, but it looks like wherever the pseudocode says
enqueue(Q, u)
you should add a copy of u to the linked list, rather than passing a reference to u and continuing to manipulate it.
In other words, define a copy constructor for the class Node and do
Q.offer(new Node(u));
instead of
Q.offer(u);
In fact, the code you give above only allocates two instances of the class Node per call to branchAndBound(..).
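For illustration, a minimal sketch of such a copy constructor on the question's Node class (the no-argument constructor is kept so the existing new Node() calls still compile):

class Node
{
    int level;
    int profit;
    int weight;
    double bound;

    Node() { }

    // Copy constructor: snapshots the state of another node, so the queued
    // copy is unaffected by later changes to the original.
    Node(Node other)
    {
        this.level = other.level;
        this.profit = other.profit;
        this.weight = other.weight;
        this.bound = other.bound;
    }
}

With that in place, each enqueue becomes Q.offer(new Node(u)); and every queued node keeps its own state.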
