Implementing branch and bound for knapsack - Java

I'm having a headache implementing this (awful) pseudo-Java code (I wonder: why the hell do people do that?) for the branch-and-bound knapsack problem. This is my implementation so far, which outputs a maximum of 80 when it should print 90 for the items in the textbook sample. I created a Comparator (on a LinkedList) to sort the elements by Pi/Wi before passing them to the algorithm, but this input is already presorted. I'm debugging right now (and updating the posted code), because I guess it's an array indexing problem... or is there a mistake in the bounding function?
input:
4 16 //# items maxWeight
40 2 // profit weight
30 5
50 10
10 5
class Node
{
    int level;
    int profit;
    int weight;
    double bound;
}
public class BranchAndBound {

    static int branchAndBound(LinkedList<Item> items, int W) {
        int n = items.size();
        int[] p = new int[n];
        int[] w = new int[n];
        for (int i = 0; i < n; i++) {
            p[i] = (int) items.get(i).value;
            w[i] = (int) items.get(i).weight;
        }
        Node u = new Node();
        Node v = new Node(); // tree root
        int maxProfit = 0;
        LinkedList<Node> Q = new LinkedList<Node>();
        v.level = -1;
        v.profit = 0;
        v.weight = 0; // v initialized to -1, dummy root
        Q.offer(v);   // place the dummy at the root
        while (!Q.isEmpty()) {
            v = Q.poll();
            if (v.level == -1) {
                u.level = 0;
            } else if (v.level != (n - 1)) {
                u.level = v.level + 1; // set u to be a child of v
            }
            u = new Node();
            u.weight = v.weight + w[u.level]; // set u to the child that
            u.profit = v.profit + p[u.level]; // includes the next item
            double bound = bound(u, W, n, w, p);
            u.bound = bound;
            if (u.weight <= W && u.profit > maxProfit) {
                maxProfit = u.profit;
            }
            if (bound > maxProfit) {
                Q.add(u);
            }
            u = new Node();
            u.weight = v.weight; // set u to the child that
            u.profit = v.profit; // does NOT include the next item
            bound = bound(u, W, n, w, p);
            u.bound = bound;
            if (bound > maxProfit) {
                Q.add(u);
            }
        }
        return maxProfit;
    }

    public static float bound(Node u, int W, int n, int[] w, int[] p) {
        int j = 0;
        int k = 0;
        int totWeight = 0;
        float result = 0;
        if (u.weight >= W)
            return 0;
        else {
            result = u.profit;
            j = u.level + 1;
            totWeight = u.weight;
            while ((j < n) && (totWeight + w[j] <= W)) {
                totWeight = totWeight + w[j]; // grab as many items as possible
                result = result + p[j];
                j++;
            }
            k = j; // use k for consistency with formula in text
            if (k < n)
                result = result + (W - totWeight) * p[k] / w[k]; // grab fraction of kth item
            return result;
        }
    }
}

I have only tested it with the given example, but it looks like wherever the pseudocode says
enqueue(Q, u)
you should add a copy of u to the linked list, rather than passing a reference to u and continuing to manipulate it.
In other words, define a copy constructor for the class Node and do
Q.offer(new Node(u));
instead of
Q.offer(u);
In fact, the code you give above only allocates two instances of the class Node per call to branchAndBound(..)
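For example, the copy constructor could look something like this (just a sketch, using the Node fields from the code above):

class Node {
    int level;
    int profit;
    int weight;
    double bound;

    Node() {}

    // copy constructor: Q.offer(new Node(u)) then snapshots the current state of u
    // instead of sharing a reference that keeps getting mutated afterwards
    Node(Node other) {
        this.level = other.level;
        this.profit = other.profit;
        this.weight = other.weight;
        this.bound = other.bound;
    }
}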

Related

Implementing Union-Find Algorithm for Kruskal's Algorithm to find Minimum Spanning Tree in Java

I am trying to solve the following Leetcode problem (https://leetcode.com/problems/connecting-cities-with-minimum-cost), and my approach is to figure out the total weight of the minimum spanning tree (MST) of the input graph using Kruskal's Algorithm with a Union-Find data structure. However, my code only passes 51/63 of the test cases, returning an incorrect result on the following test case, which is too hard to debug by hand since the input graph is so large.
50
[[2,1,22135],[3,1,13746],[4,3,37060],[5,2,48513],[6,3,49607],[7,1,97197],[8,2,95909],[9,2,82668],[10,2,48372],[11,4,17775],[12,2,6017],[13,1,51409],[14,2,12884],[15,7,98902],[16,14,52361],[17,8,11588],[18,12,86814],[19,17,49581],[20,4,41808],[21,11,77039],[22,10,80279],[23,16,61659],[24,12,89390],[25,24,10042],[26,12,78278],[27,15,30756],[28,6,2883],[29,8,3478],[30,7,29321],[31,12,47542],[32,20,35806],[33,3,26531],[34,12,16321],[35,27,82484],[36,7,55920],[37,24,21253],[38,23,90537],[39,7,83795],[40,36,70353],[41,34,76983],[42,14,63416],[43,15,39590],[44,9,86794],[45,3,31968],[46,19,32695],[47,17,40287],[48,1,27993],[49,12,86349],[50,11,52080],[17,27,65829],[42,45,87517],[14,23,96130],[5,50,3601],[10,17,2017],[26,44,4118],[26,29,93146],[1,9,56934],[22,43,5984],[3,22,13404],[13,28,66475],[11,14,93296],[16,44,71637],[7,37,88398],[7,29,56056],[2,34,79170],[40,44,55496],[35,46,14494],[32,34,25143],[28,36,59961],[10,49,58317],[8,38,33783],[8,28,19762],[34,41,69590],[27,37,26831],[15,23,53060],[5,11,7570],[20,42,98814],[18,34,96014],[13,43,94702],[1,46,18873],[44,45,43666],[22,40,69729],[4,25,28548],[8,46,19305],[15,22,39749],[33,48,43826],[14,15,38867],[13,22,56073],[3,46,51377],[13,15,73530],[6,36,67511],[27,38,76774],[6,21,21673],[28,49,72219],[40,50,9568],[31,37,66173],[14,29,93641],[4,40,87301],[18,46,41318],[2,8,25717],[1,7,3006],[9,22,85003],[14,45,33961],[18,28,56248],[1,31,10007],[3,24,23971],[6,28,24448],[35,39,87474],[10,50,3371],[7,18,26351],[19,41,86238],[3,8,73207],[11,34,75438],[3,47,35394],[27,32,69991],[6,40,87955],[2,18,85693],[5,37,50456],[8,20,59182],[16,38,58363],[9,39,58494],[39,43,73017],[10,15,88526],[16,23,48361],[4,28,59995],[2,3,66426],[6,17,29387],[15,38,80738],[12,43,63014],[9,11,90635],[12,20,36051],[13,25,1515],[32,40,72665],[10,40,85644],[13,40,70642],[12,24,88771],[14,46,79583],[30,49,45432],[21,34,95097],[25,48,96934],[2,35,79611],[9,26,71147],[11,37,57109],[35,36,67266],[42,43,15913],[3,30,44704],[4,32,46266],[5,10,94508],[31,39,45742],[12,25,56618],[10,45,79396],[15,28,78005],[19,32,94010],[36,46,4417],[6,35,7762],[10,13,12161],[49,50,60013],[20,23,6891],[9,50,63893],[35,43,74832],[10,24,3562],[6,8,47831],[29,32,82689],[7,47,71961],[14,41,82402],[20,33,38732],[16,26,24131],[17,34,96267],[21,46,81067],[19,47,41426],[13,24,68768],[1,25,78243],[2,27,77645],[11,25,96335],[31,45,30726],[43,44,34801],[3,42,22953],[12,23,34898],[37,43,32324],[18,44,18539],[8,13,59737],[28,37,67994],[13,14,25013],[22,41,25671],[1,6,57657],[8,11,83932],[42,48,24122],[4,15,851],[9,29,70508],[7,32,53629],[3,4,34945],[2,32,64478],[7,30,75022],[14,19,55721],[20,22,84838],[22,25,6103],[8,49,11497],[11,32,22278],[35,44,56616],[12,49,18681],[18,43,56358],[24,43,13360],[24,47,59846],[28,43,36311],[17,25,63309],[1,14,30207],[39,48,22241],[13,26,94146],[4,33,62994],[40,48,32450],[8,19,8063],[20,29,56772],[10,27,21224],[24,30,40328],[44,46,48426],[22,45,39752],[6,43,96892],[2,30,73566],[26,36,43360],[34,36,51956],[18,20,5710],[7,22,72496],[3,39,9207],[15,30,39474],[11,35,82661],[12,50,84860],[14,26,25992],[16,39,33166],[25,41,11721],[19,40,68623],[27,28,98119],[19,43,3644],[8,16,84611],[33,42,52972],[29,36,60307],[9,36,44224],[9,48,89857],[25,26,21705],[29,33,12562],[5,34,32209],[9,16,26285],[22,37,80956],[18,35,51968],[37,49,36399],[18,42,37774],[1,30,24687],[23,43,55470],[6,47,69677],[21,39,6826],[15,24,38561]]
I'm having trouble understanding why my code fails a test case, since I believe I am implementing the steps of Kruskal's Algorithm properly:
Sorting the connections in increasing order of weight.
Building the MST by going through each connection in the sorted list and selecting that connection if it does not result in a cycle in the MST.
Below is my Java code:
class UnionFind {
    // parents[i] = parent node of node i.
    // If a node is the root node of a component, we define its parent
    // to be itself.
    int[] parents;

    public UnionFind(int n) {
        this.parents = new int[n];
        for (int i = 0; i < n; i++) {
            this.parents[i] = i;
        }
    }

    // Merges two nodes into the same component.
    public void union(int node1, int node2) {
        int node1Component = find(node1);
        int node2Component = find(node2);
        this.parents[node1Component] = node2Component;
    }

    // Returns the component that a node is in.
    public int find(int node) {
        while (this.parents[node] != node) {
            node = this.parents[node];
        }
        return node;
    }
}
class Solution {
    public int minimumCost(int n, int[][] connections) {
        UnionFind uf = new UnionFind(n + 1);
        // Sort edges by increasing cost.
        Arrays.sort(connections, new Comparator<int[]>() {
            @Override
            public int compare(final int[] a1, final int[] a2) {
                return a1[2] - a2[2];
            }
        });
        int edgeCount = 0;
        int connectionIndex = 0;
        int weight = 0;
        // Greedy algorithm: Choose the edge with the smallest weight
        // which does not form a cycle. We know that an edge between
        // two nodes will result in a cycle if those nodes are already
        // in the same component.
        for (int i = 0; i < connections.length; i++) {
            int[] connection = connections[i];
            int nodeAComponent = uf.find(connection[0]);
            int nodeBComponent = uf.find(connection[1]);
            if (nodeAComponent != nodeBComponent) {
                weight += connection[2];
                edgeCount++;
            }
            if (edgeCount == n - 1) {
                break;
            }
        }
        // MST, by definition, must have (n - 1) edges.
        if (edgeCount == n - 1) {
            return weight;
        }
        return -1;
    }
}
As @geobreze stated, I forgot to unite the components (disjoint sets) of node A and node B. Below is the corrected code:
if (nodeAComponent != nodeBComponent) {
    uf.union(nodeAComponent, nodeBComponent);
    weight += connection[2];
    edgeCount++;
}

Determining Whether Graph G = (V,E) Contains a Negative-Weight Cycle

In this program, I am given an input text file that gives information about a weighted, directed graph
G = (V, E, w)
The first line of the input text file stores V (the number of vertices) and E (the number of edges).
The following lines store data about edges (u, v) in order u, v, weight.
I'm trying to implement a code that considers this input and determines whether G contains a negative-weight cycle.
So far, I've tried to use the Bellman-Ford algorithm: I start by initializing a dist[] array that sets the distance from the source to every other vertex to some really high number (making sure the distance from src to src is 0).
Next, I relax all edges |V| - 1 times.
Finally, I check for negative-weight cycles by iterating through the array of edges again, checking to see if we get a shorter path.
However, when I try to do that second step of relaxing the edges, I keep getting an index out of bounds error.
NOTE: To examine the code below, just scroll down to the method isNegativeCycle(). I just included some of the other stuff in case anyone needs background information.
public class P1 {
    // instance variables
    static int V; // number of vertices
    static int E; // number of edges

    // vertex class
    public class Vertex {
        int ID; // the name of the vertex
    }

    // edge class
    public class Edge {
        Vertex source; // the source vertex - it's a directed graph
        Vertex dest;   // the destination vertex
        int weight;    // the weight of the edge
    }

    // graph class where all the magic happens
    public class Graph {
        // Each graph has an array of edges
        Edge edgearray[];

        // constructor
        public Graph(int n, int m) {
            V = n;
            E = m;
            edgearray = new Edge[E];
            for (int i = 0; i < E; i++) {
                edgearray[i] = new Edge();
            }
        }

        // THIS IS THE IMPORTANT METHOD
        public String isNegativeCycle(Graph G, int src) {
            int dist[] = new int[V];
            Arrays.fill(dist, Integer.MAX_VALUE);
            dist[src] = 0; // cos the distance from A to A is 0
            // Relax all edges |V| - 1 times
            for (int i = 1; i <= V - 1; i++) {
                for (int j = 0; j < E; j++) {
                    int u = G.edgearray[j].source.ID;
                    int v = G.edgearray[j].dest.ID;
                    int weight = G.edgearray[j].weight;
                    // THIS IS WHERE I GET THE INDEX OUT OF BOUNDS ERROR
                    if (dist[u] != Integer.MAX_VALUE && (dist[u] + weight) < dist[v]) {
                        dist[v] = dist[u] + weight;
                    }
                }
            }
            // check for a negative cycle
            for (int a = 0; a < E; a++) {
                int u = G.edgearray[a].source.ID;
                int v = G.edgearray[a].dest.ID;
                double weight = G.edgearray[a].weight;
                if (dist[u] != Integer.MAX_VALUE && dist[u] + weight < dist[v]) {
                    return "YES";
                }
            }
            return "NO";
        }
    } // end of graph class

    // main method
    public static void main(String[] args) {
        P1 instance = new P1();
        int n;
        int m;
        int counter = 0;
        boolean fl = true;
        String infileName = args[0];
        Graph G = instance.new Graph(V, E);
        File infile = new File(infileName);
        Scanner fileReader = null;
        try {
            fileReader = new Scanner(infile);
            while (fileReader.hasNextLine()) {
                // if we're reading the first line
                if (fl == true) {
                    String[] temp = fileReader.nextLine().split(" ");
                    n = Integer.parseInt(temp[0]);
                    V = n;
                    m = Integer.parseInt(temp[1]);
                    E = m;
                    G = instance.new Graph(V, E);
                    fl = false;
                }
                // if we're reading any line other than the first line
                else {
                    String[] temp = fileReader.nextLine().split(" ");
                    //G.addEdge(temp[0], temp[1], Double.parseDouble(temp[2]));
                    Vertex newsrc = instance.new Vertex();
                    Vertex newdest = instance.new Vertex();
                    newsrc.ID = Integer.parseInt(temp[0]);
                    newdest.ID = Integer.parseInt(temp[1]);
                    Edge newEdge = instance.new Edge();
                    newEdge.source = newsrc;
                    newEdge.dest = newdest;
                    newEdge.weight = Integer.parseInt(temp[2]);
                    G.edgearray[counter] = newEdge;
                    counter++;
                }
            }
        }
        catch (FileNotFoundException e) {
            System.out.println("File not found.");
        }
        System.out.println(G.isNegativeCycle(G, 0));
    }
}
My current input file doesn't really matter at this point, but after this code runs, I expect the output to be "YES." Thank you!
I should've included my input file. In my input file, the vertex names start at 1. Thus, when calling isNegativeCycle, I should've passed in a 1 instead of a 0. In addition, I made the dist[] array one size larger.
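A sketch of the two changes described above (indices shifted so that 1-based vertex IDs are valid):

// inside isNegativeCycle(): make room for vertex IDs 1..V, leaving index 0 unused
int dist[] = new int[V + 1];
Arrays.fill(dist, Integer.MAX_VALUE);
dist[src] = 0;

// ... and in main(), start from vertex 1 instead of vertex 0
System.out.println(G.isNegativeCycle(G, 1));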

Dijkstra's Algorithm using Adjacency List and Priority Queue in Java

I am having trouble understanding why my dijkstraShortestPaths(int startVertex) function is not working properly. I am following pseudocode for my project, but I do not understand what I am doing wrong.
Only the walk for my start vertex is showing up for my algorithm.
I also have a DFS, but I am not sure if I should be using it in my dijkstraShortestPath method, and if I do, how do I implement it?
I think my issue is either in the "while loop" or the way I am initializing my priority queue named "pq".
Link to FULL code:
https://www.dropbox.com/s/b848b9ts5lrfn01/Graph%20copy.java?dl=0
Link to pseudocode:
https://www.dropbox.com/s/tyia0sr3t9r8snf/Dijkstra%27s%20Algorithm%20%281%29.docx?dl=0
Link to requirements:
https://www.dropbox.com/s/rq8km8rp4jvyxvp/Project%202%20Description-1%20%282%29.docx?dl=0
Below is the code for my Dijkstra Algorithm.
public void dijkstraShortestPaths(int startVertex) {
    // Initialize vars and arrays
    int count = 0, start = startVertex;
    int[] d;
    int[] parent;
    d = new int[nVertices];
    parent = new int[nVertices];
    DistNode u;
    // 10000 is MAX/Infinity
    for (int i = 0; i < nVertices; i++) {
        parent[i] = -1;
        d[i] = 10000;
    }
    // Initialize start vertex distance to 0
    d[startVertex] = 0;
    // Set up priority queue
    PriorityQueue<DistNode> pq = new PriorityQueue<DistNode>();
    for (int i = 0; i < adjList[start].size(); i++) {
        pq.add(new DistNode(adjList[start].get(i).destVertex, adjList[start].get(i).weight));
    }
    System.out.print(pq);

    while (count < nVertices && !pq.isEmpty()) {
        // remove the DistNode with the minimum d[u] value
        u = pq.remove();
        count++;
        System.out.println("\n\nu.vertex: " + u.vertex);
        // for each v in adjList[u] (adjacency list for vertex u)
        for (int i = 0; i < adjList[u.vertex].size(); i++) {
            // v
            int v = adjList[u.vertex].get(i).destVertex;
            System.out.println("v = " + v);
            // w(u,v)
            int vWt = adjList[u.vertex].get(i).weight;
            System.out.println("vWt = " + vWt + "\n");
            if ((d[u.vertex] + vWt) < d[v]) {
                d[v] = d[u.vertex] + vWt;
                parent[v] = u.vertex;
                pq.add(new DistNode(v, d[v]));
            }
        }
    }
    printShortestPaths(start, d, parent);
}
The problem with your use of a PriorityQueue is that the content of the priority queue is unrelated to the content of the array d. Also, the statement:
pq.add(new DistNode(v,d[v]));
should replace any DistNode already in pq for vertex v; otherwise, you may visit the same vertex multiple times.
I'm not sure that the PriorityQueue is the right tool for the job.
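For what it's worth, a common workaround when using java.util.PriorityQueue (which has no decrease-key or replace operation) is "lazy deletion": push a new entry on every improvement, and simply skip an entry if it is already stale when it gets popped. A rough sketch, assuming DistNode stores the vertex and its tentative distance (called dist here, which is an assumption about your class) and compares by that distance, with the same d[], parent[], and adjList fields as above:

pq.add(new DistNode(start, 0)); // seed with the start vertex instead of pre-loading its neighbors
while (!pq.isEmpty()) {
    DistNode u = pq.remove();
    if (u.dist > d[u.vertex]) {
        continue; // stale entry: a shorter path to u.vertex was already settled
    }
    for (int i = 0; i < adjList[u.vertex].size(); i++) {
        int v = adjList[u.vertex].get(i).destVertex;
        int vWt = adjList[u.vertex].get(i).weight;
        if (d[u.vertex] + vWt < d[v]) {
            d[v] = d[u.vertex] + vWt;
            parent[v] = u.vertex;
            pq.add(new DistNode(v, d[v])); // may leave an older, stale entry behind in pq
        }
    }
}

With the staleness check, duplicate entries for a vertex are harmless; only the one matching the current d[] value is ever expanded.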

Find the max path from root to leaf of an n-ary tree without including values of two adjacent nodes in the sum

I recently got interviewed and was asked the following question.
Given an n-ary tree, find the maximum path from root to leaf such that the maximum path does not contain values from any two adjacent nodes.
(Another edit: The nodes would only have positive values.)
(Edit from comments: Adjacent nodes means nodes that share a direct edge. Because it's a tree, that means parent-child. So if I include the parent, I cannot include the child, and vice versa.)
For example:
       5
      / \
     8   10
    / \  / \
   1   3 7  9
In the above example, the maximum path without two adjacent nodes would be 14, along the path 5->10->9. I include 5 and 9 in the final sum but not 10, because including it would violate the no-two-adjacent-nodes condition.
I suggested the following algorithm. While I was fairly sure about it, my interviewer did not seem confident about it. Hence, I wanted to double check if my algorithm was correct or not. It seemed to work on various test cases I could think of:
For each node X, let F(X) be the maximum sum from root to X without two adjacent values in the maximum sum.
The formula for calculating F(X) is: F(X) = Max(F(parent(X)), val(X) + F(grandParent(X)))
The solution would then be:
Solution = Max(F(Leaf Nodes))
This was roughly the code I came up with:
class Node
{
    int coins;
    List<Node> edges;

    public Node(int coins, List<Node> edges)
    {
        this.coins = coins;
        this.edges = edges;
    }
}

class Tree
{
    int maxPath = Integer.MIN_VALUE;

    private boolean isLeafNode(Node node)
    {
        int size = node.edges.size();
        for (int i = 0; i < size; i++)
        {
            if (node.edges.get(i) != null)
                return false;
        }
        return true;
    }

    // previous[0] = max value obtained from parent
    // previous[1] = max value obtained from grandparent
    private void helper(Node node, int[] previous)
    {
        int max = Math.max(previous[0], node.coins + previous[1]);
        // leaf node
        if (isLeafNode(node))
        {
            maxPath = Math.max(maxPath, max);
            return;
        }
        int[] temp = new int[2];
        temp[0] = max;
        temp[1] = previous[0];
        for (int i = 0; i < node.edges.size(); i++)
        {
            if (node.edges.get(i) != null)
            {
                helper(node.edges.get(i), temp);
            }
        }
    }

    public int findMax(Node node)
    {
        int[] prev = new int[2];
        prev[0] = 0;
        prev[1] = 0;
        if (node == null) return 0;
        helper(node, prev);
        return maxPath;
    }
}
Edit: Forgot to mention that my primary purpose in asking this question is to know whether my algorithm was correct, rather than to ask for a new algorithm.
Edit: I have a reason to believe that my algorithm should also have worked.
I was scouring the internet for similar questions and came across this question:
https://leetcode.com/problems/house-robber/?tab=Description
It is pretty similar to the problem above, except that it is an array instead of a tree.
The recurrence F(X) = Max(F(X-1), a[X] + F(X-2)) works in this case.
Here is my accepted code:
public class Solution {
    public int rob(int[] nums) {
        int[] dp = new int[nums.length];
        if (nums.length < 1) return 0;
        dp[0] = nums[0];
        if (nums.length < 2) return nums[0];
        dp[1] = Math.max(nums[0], nums[1]);
        for (int i = 2; i < nums.length; i++) {
            dp[i] = Math.max(dp[i - 1], dp[i - 2] + nums[i]);
        }
        return dp[nums.length - 1];
    }
}
The natural solution would be to compute for each node X two values: max path from X to leaf including X and max path from X to leaf, excluding X, let's call them MaxPath(X) and MaxExcluded(X).
For leaf L MaxPath(L) is Value(L) and MaxExcluded(L) is 0.
For internal node X:
MaxPath(X) = Value(X) + Max over child Y of: MaxExcluded(Y)
MaxExcluded(X) = Max over child Y of : Max(MaxExcluded(Y), MaxPath(Y))
The first line means that if you include X, you have to exclude its children. The second means that if you exclude X, you are free to either include or exclude its children.
It's a simple recursive function on nodes which can be computed going leaves-to-parents in O(size of the tree).
Edit: The recursive relation does also work top-down, and in this case you can indeed eliminate storing two values by the observation that MaxExcluded(Y) is actually MaxPath(Parent(Y)), which gives the solution given in the question.
Implementation of what @RafałDowgird explained.
/*          5
 *        8   10
 *       1 3  7  9
 *      4 5 6 11 13 14 3 4
 */
public class app1 {
    public static void main(String[] args) {
        Node root = new Node(5);
        root.left = new Node(8);             root.right = new Node(10);
        root.left.left = new Node(1);        root.left.right = new Node(3);
        root.right.left = new Node(7);       root.right.right = new Node(9);
        root.left.left.left = new Node(4);   root.left.left.right = new Node(5);
        root.left.right.left = new Node(6);  root.left.right.right = new Node(11);
        root.right.left.left = new Node(13); root.right.left.right = new Node(14);
        root.right.right.right = new Node(4);
        System.out.println(findMaxPath(root));
    }

    private static int findMaxPath(Node root) {
        if (root == null) return 0;
        int maxInclude = root.data + findMaxPathExcluded(root);
        int maxExcludeLeft = Math.max(findMaxPath(root.left), findMaxPathExcluded(root.left));
        int maxExcludeRight = Math.max(findMaxPath(root.right), findMaxPathExcluded(root.right));
        return Math.max(maxInclude, Math.max(maxExcludeLeft, maxExcludeRight));
    }

    private static int findMaxPathExcluded(Node root) {
        if (root == null) return 0;
        int left1 = root.left != null ? findMaxPath(root.left.left) : 0;
        int right1 = root.left != null ? findMaxPath(root.left.right) : 0;
        int left2 = root.right != null ? findMaxPath(root.right.left) : 0;
        int right2 = root.right != null ? findMaxPath(root.right.right) : 0;
        return Math.max(left1, Math.max(right1, Math.max(left2, right2)));
    }
}

class Node {
    int data;
    Node left;
    Node right;

    Node(int data) {
        this.data = data;
    }
}
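For reference, here is a more direct sketch of the two-value recursion @RafałDowgird described, computing MaxPath and MaxExcluded together in one pass so each node is visited only once. It assumes the same binary Node class as above; the names maxPair and findMaxPath2 are just placeholders.

// returns {maxPathIncludingRoot, maxPathExcludingRoot}; assumes non-negative node values
static int[] maxPair(Node node) {
    if (node == null) {
        return new int[]{0, 0};
    }
    int[] left = maxPair(node.left);
    int[] right = maxPair(node.right);
    // include this node => its children must be excluded
    int include = node.data + Math.max(left[1], right[1]);
    // exclude this node => children may be included or excluded
    int exclude = Math.max(Math.max(left[0], left[1]), Math.max(right[0], right[1]));
    return new int[]{include, exclude};
}

static int findMaxPath2(Node root) {
    int[] pair = maxPair(root);
    return Math.max(pair[0], pair[1]);
}

On the example tree from the question (5 / 8, 10 / 1, 3, 7, 9) this returns 14, matching the expected answer.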

Memory Choke on Branch And Bound Knapsack Implementation

I wrote this implementation of the branch and bound knapsack algorithm based on the pseudo-Java code from here. Unfortunately, it chokes on memory for large instances of the problem, like this one. Why is that? How can I make this implementation more memory efficient?
The input on the file on the link is formatted this way:
numberOfItems maxWeight
profitOfItem1 weightOfItem1
.
.
.
profitOfItemN weightOfItemN
// http://books.google.com/books?id=DAorddWEgl0C&pg=PA233&source=gbs_toc_r&cad=4#v=onepage&q&f=true
import java.util.Comparator;
import java.util.LinkedList;
import java.util.PriorityQueue;
class ItemComparator implements Comparator {
    public int compare(Object item1, Object item2) {
        Item i1 = (Item) item1;
        Item i2 = (Item) item2;
        if ((i1.valueWeightQuotient) < (i2.valueWeightQuotient))
            return 1;
        if ((i2.valueWeightQuotient) < (i1.valueWeightQuotient))
            return -1;
        else { // valueWeightQuotients are equal
            if ((i1.weight) < (i2.weight)) {
                return 1;
            }
            if ((i2.weight) < (i1.weight)) {
                return -1;
            }
        }
        return 0;
    }
}

class Node {
    int level;
    int profit;
    int weight;
    double bound;
}

class NodeComparator implements Comparator {
    public int compare(Object o1, Object o2) {
        Node n1 = (Node) o1;
        Node n2 = (Node) o2;
        if ((n1.bound) < (n2.bound))
            return 1;
        if ((n2.bound) < (n1.bound))
            return -1;
        return 0;
    }
}

class Solution {
    long weight;
    long value;
}

public class BranchAndBound {

    static Solution branchAndBound2(LinkedList<Item> items, double W) {
        double timeStart = System.currentTimeMillis();
        int n = items.size();
        int[] p = new int[n];
        int[] w = new int[n];
        for (int i = 0; i < n; i++) {
            p[i] = (int) items.get(i).value;
            w[i] = (int) items.get(i).weight;
        }
        Node u;
        Node v = new Node(); // tree root
        int maxProfit = 0;
        int usedWeight = 0;
        NodeComparator nc = new NodeComparator();
        PriorityQueue<Node> PQ = new PriorityQueue<Node>(n, nc);
        v.level = -1;
        v.profit = 0;
        v.weight = 0; // v initialized to -1, dummy root
        v.bound = bound(v, W, n, w, p);
        PQ.add(v);
        while (!PQ.isEmpty()) {
            v = PQ.poll();
            u = new Node();
            if (v.bound > maxProfit) { // check if node is still promising
                u.level = v.level + 1; // set u to the child that includes the next item
                u.weight = v.weight + w[u.level];
                u.profit = v.profit + p[u.level];
                if (u.weight <= W && u.profit > maxProfit) {
                    maxProfit = u.profit;
                    usedWeight = u.weight;
                }
                u.bound = bound(u, W, n, w, p);
                if (u.bound > maxProfit) {
                    PQ.add(u);
                }
                u = new Node();
                u.level = v.level + 1;
                u.weight = v.weight; // set u to the child that does not include the next item
                u.profit = v.profit;
                u.bound = bound(u, W, n, w, p);
                if (u.bound > maxProfit)
                    PQ.add(u);
            }
        }
        Solution solution = new Solution();
        solution.value = maxProfit;
        solution.weight = usedWeight;
        double timeStop = System.currentTimeMillis();
        double elapsedTime = timeStop - timeStart;
        System.out.println("* Time spent in branch and bound (milliseconds): " + elapsedTime);
        return solution;
    }

    static double bound(Node u, double W, int n, int[] w, int[] p) {
        int j = 0;
        int k = 0;
        int totWeight = 0;
        double result = 0;
        if (u.weight >= W)
            return 0;
        else {
            result = u.profit;
            totWeight = u.weight; // this is why it doesn't work
            if (u.level < w.length) {
                j = u.level + 1;
            }
            int weightSum;
            while ((j < n) && ((weightSum = totWeight + w[j]) <= W)) {
                totWeight = weightSum; // grab as many items as possible
                result = result + p[j];
                j++;
            }
            k = j; // use k for consistency with formula in text
            if (k < n) {
                result = result + ((W - totWeight) * p[k] / w[k]); // grab fraction of excluded kth item
            }
            return result;
        }
    }
}
I got a slightly speedier implementation by taking out all of the generic Collection instances and using plain arrays instead.
Not sure whether you still need insight into the algorithm or whether your tweaks have solved your problem, but with a breadth-first branch and bound algorithm like the one you've implemented there's always going to be the potential for a memory usage problem. You're hoping, of course, that you'll be able to rule out enough branches as you go along to keep the number of nodes in your priority queue relatively small, but in the worst case you could end up holding in memory up to as many nodes as there are possible combinations of item selections in the knapsack. The worst case is, of course, highly unlikely, but for large problem instances even an average tree could end up populating your priority queue with millions of nodes.
If you're going to be throwing lots of unforeseen large problem instances at your code and need the peace of mind of knowing that no matter how many branches the algorithm has to consider you'll never run out of memory, I'd consider a depth-first branch and bound algorithm, like the Horowitz-Sahni algorithm outlined in section 2.5.1 of this book: http://www.or.deis.unibo.it/knapsack.html. For some problem instances this approach will be less efficient in terms of the number of possible solutions that have to be considered before the optimal one is found, but for others it will be more efficient--it really depends on the structure of the tree.
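For illustration only, here is a minimal sketch of the depth-first flavor of branch and bound (a generic depth-first search with the same fractional bound, not the exact Horowitz-Sahni formulation from the book linked above). The class and method names are made up, and items are assumed to be pre-sorted by decreasing profit/weight ratio, as in the code above. The point is that memory use is bounded by the recursion depth, O(n), rather than by a frontier of nodes in a priority queue.

public class DepthFirstKnapsack {
    static int maxProfit = 0;

    static void dfs(int level, int profit, int weight, int[] p, int[] w, int W) {
        if (weight <= W && profit > maxProfit) {
            maxProfit = profit; // record a new incumbent solution
        }
        if (level == p.length || weight > W) {
            return; // no items left to decide, or already over capacity
        }
        if (bound(level, profit, weight, p, w, W) <= maxProfit) {
            return; // prune: even the fractional relaxation can't beat the incumbent
        }
        // branch 1: take item 'level'
        dfs(level + 1, profit + p[level], weight + w[level], p, w, W);
        // branch 2: skip item 'level'
        dfs(level + 1, profit, weight, p, w, W);
    }

    // fractional (linear relaxation) bound over the items from 'level' onward,
    // same idea as the bound() method above
    static double bound(int level, int profit, int weight, int[] p, int[] w, int W) {
        if (weight >= W) return 0;
        double result = profit;
        int totWeight = weight;
        int j = level;
        while (j < p.length && totWeight + w[j] <= W) {
            totWeight += w[j];
            result += p[j];
            j++;
        }
        if (j < p.length) {
            result += (double) (W - totWeight) * p[j] / w[j];
        }
        return result;
    }

    public static void main(String[] args) {
        int[] p = {40, 30, 50, 10}; // textbook sample from the first question
        int[] w = {2, 5, 10, 5};
        dfs(0, 0, 0, p, w, 16);
        System.out.println(maxProfit); // expect 90
    }
}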
