I am trying to get the biggest and second biggest distance in a HashMap in Java.
Basically, from a HashMap populated with (x,y) values, I plan to pick a point, set it as a fixed point, and calculate the distance from this point to all the other points. After all possible distances are calculated, I change the fixed point to the next element in the HashMap. With this process, I aim to get the biggest and second biggest distance in the HashMap.
HashMap<Integer, Integer> corners = getPotentialCorners(image);
HashMap<Integer, Integer> extremeCorners = new HashMap<>();
int Blue = new Color(0, 0, 255).getRGB();
int currentNumberX;
int currentNumberY;
int pivotVarX;
int pivotVarY;
double distance;
double Highest = 0;
double Highest2 = 1;
int xHighest = 0;
int yHighest = 0;
int xHighest2 = 0;
int yHighest2 = 0;
for (int i : corners.keySet()) {
    currentNumberX = i;
    currentNumberY = corners.get(currentNumberX);
    for (int j : corners.keySet()) {
        pivotVarX = j;
        pivotVarY = corners.get(pivotVarX);
        distance = Math.abs(Math.sqrt(Math.pow((pivotVarX - currentNumberX), 2) + Math.pow((pivotVarY - currentNumberY), 2)));
        if (pivotVarX != currentNumberX) {
            if (Highest > Highest2) {
                xHighest = currentNumberX;
                yHighest = currentNumberY;
                Highest2 = distance;
            }
            if (distance > Highest2) {
                Highest2 = distance;
                xHighest2 = currentNumberX;
                yHighest2 = currentNumberY;
            }
        }
    }
}
When I debug this code, I always get one correct point, but the other point is ALWAYS (0,0). I know the issue lies with my process of getting the second highest point (Highest2, xHighest2, yHighest2), but I do not know how to fix it.
As others pointed out, instead of a HashMap it is better to use a List<Point>, which you can easily iterate as:
for (Point p : myList) {
    ...
}
or, if you need more control over which elements to iterate over, you can use an integer counter:
for (int j = i + 1; j < corners.size(); j++) {
    Point p = corners.get(j);
    ...
}
instead of having to use keySet() and get(), with all the problems of points with identical x-values mapping to the same key.
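To see why the HashMap loses data, here is a minimal sketch with two corners that share an x-coordinate:
import java.util.HashMap;
import java.util.Map;

public class KeyCollisionDemo {
    public static void main(String[] args) {
        Map<Integer, Integer> corners = new HashMap<>();
        corners.put(5, 10); // point (5, 10)
        corners.put(5, 20); // point (5, 20) overwrites it: x is the key
        System.out.println(corners.size()); // prints 1, not 2
    }
}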
Also, there are some trivial speed improvements possible:
No need to use the slow Math.sqrt() function (or Math.abs(), since a square root is never negative) when you are only comparing larger/smaller distances. You can just compare the squared distances.
The latest Java compilers know how to optimize Math.pow(int, 2), but to make sure you don't get the overhead of a function call, you can help the compiler by writing: (p.x-q.x)*(p.x-q.x) + (p.y-q.y)*(p.y-q.y)
Renaming current and pivot to p and q for conciseness, your code would look like:
List<Point> corners = getPotentialCorners(image);
Double highest = null;
Double highest2 = null;
Point highestP = null, highestQ = null;
Point highestP2 = null, highestQ2 = null;
for (int i = 0; i < corners.size() - 1; i++) {
    Point p = corners.get(i);
    for (int j = i + 1; j < corners.size(); j++) {
        Point q = corners.get(j);
        double distanceSq = (p.x - q.x) * (p.x - q.x) + (p.y - q.y) * (p.y - q.y);
        if (highest == null || distanceSq >= highest) {
            // shift highest to second highest
            highest2 = highest;
            highestP2 = highestP;
            highestQ2 = highestQ;
            highest = distanceSq;
            highestP = p;
            highestQ = q;
        } else if (highest2 == null || distanceSq > highest2) {
            highest2 = distanceSq;
            highestP2 = p;
            highestQ2 = q;
        }
    }
}
I have a matrix that represents a grid and would like to find out all possible places an object can move to.
An object can only move horizontally or vertically.
Let's assume that the example below is the grid I'm looking at, which is represented as a 2d matrix. The object is the *, the 0s are empty spaces that an object can move to, and the 1s are walls which the object cannot jump over or go on to.
What is the best way to find all possible movements of this object provided that it can only move horizontally or vertically?
I'd like to print a message saying: "There are 9 places the object can go to." The 9 is for the example below, but I would like it to work for any configuration of the below grid. So all I have to do is give the current coordinates of the * and it will give me the number of possible positions it can move to.
A thing to note is that the *'s original position is not considered in the calculations, which is why for the example below the message would print 9 and not 10.
I have an isaWall method that tells me whether a cell is a wall or not. The isaWall method is in a Cell class, and each cell is represented by its coordinates. I looked into using algorithms like BFS or DFS, but I didn't quite understand how to implement them in this case, as I am not too familiar with them. I thought of using the cells as nodes of the graph, but wasn't too sure how to traverse it, because in the examples of BFS and DFS I saw online you usually have a destination node and a source node (the source being the position of the *), but I don't really have a destination node in this case. I would really appreciate some help.
00111110
01000010
100*1100
10001000
11111000
EDIT: I checked the website that was recommended in the comments and tried to implement my own version. Unfortunately, it didn't work. I understand that I have to expand the "frontier", and I basically just translated the expansion code to Java, but it still doesn't work. The website continues explaining the process, but in my case there is no destination cell to go to. I'd really appreciate an example or a clearer explanation pertaining to my case.
EDIT2: I'm still quite confused by it, can someone please help?
While BFS/DFS are commonly used to find connections between a start and end point, that isn't really what they are. BFS/DFS are "graph traversal algorithms," which is a fancy way of saying that they find every point reachable from a start point. DFS (Depth First Search) is easier to implement, so we'll use that for your needs (note: BFS is used when you need to find how far away any point is from the start point, and DFS is used when you only need to go to every point).
I don't know exactly how your data is structured, but I'll assume your map is an array of integers and define some basic functionality (for simplicity's sake I made the start cell 2):
Map.java
import java.awt.*;

public class Map {
    public final int width;
    public final int height;
    private final Cell[][] cells;
    private final Move[] moves;
    private Point startPoint;

    public Map(int[][] mapData) {
        this.width = mapData[0].length;
        this.height = mapData.length;
        cells = new Cell[height][width];
        // define valid movements
        moves = new Move[]{
            new Move(1, 0),
            new Move(-1, 0),
            new Move(0, 1),
            new Move(0, -1)
        };
        generateCells(mapData);
    }

    public Point getStartPoint() {
        return startPoint;
    }

    public void setStartPoint(Point p) {
        if (!isValidLocation(p)) throw new IllegalArgumentException("Invalid point");
        startPoint.setLocation(p);
    }

    public Cell getStartCell() {
        return getCellAtPoint(getStartPoint());
    }

    public Cell getCellAtPoint(Point p) {
        if (!isValidLocation(p)) throw new IllegalArgumentException("Invalid point");
        return cells[p.y][p.x];
    }

    private void generateCells(int[][] mapData) {
        boolean foundStart = false;
        for (int i = 0; i < mapData.length; i++) {
            for (int j = 0; j < mapData[i].length; j++) {
                /*
                  0 = empty space
                  1 = wall
                  2 = starting point
                */
                if (mapData[i][j] == 2) {
                    if (foundStart) throw new IllegalArgumentException("Cannot have more than one start position");
                    foundStart = true;
                    startPoint = new Point(j, i);
                } else if (mapData[i][j] != 0 && mapData[i][j] != 1) {
                    throw new IllegalArgumentException("Map input data must contain only 0, 1, 2");
                }
                cells[i][j] = new Cell(j, i, mapData[i][j] == 1);
            }
        }
        if (!foundStart) throw new IllegalArgumentException("No start point in map data");
        // Add all cells adjacencies based on up, down, left, right movement
        generateAdj();
    }

    private void generateAdj() {
        for (int i = 0; i < cells.length; i++) {
            for (int j = 0; j < cells[i].length; j++) {
                for (Move move : moves) {
                    Point p2 = new Point(j + move.getX(), i + move.getY());
                    if (isValidLocation(p2)) {
                        cells[i][j].addAdjCell(cells[p2.y][p2.x]);
                    }
                }
            }
        }
    }

    private boolean isValidLocation(Point p) {
        if (p == null) throw new IllegalArgumentException("Point cannot be null");
        return (p.x >= 0 && p.y >= 0) && (p.y < cells.length && p.x < cells[p.y].length);
    }

    private class Move {
        private int x;
        private int y;

        public Move(int x, int y) {
            this.x = x;
            this.y = y;
        }

        public int getX() {
            return x;
        }

        public int getY() {
            return y;
        }
    }
}
Cell.java
import java.util.LinkedList;

public class Cell {
    public final int x;
    public final int y;
    public final boolean isWall;
    private final LinkedList<Cell> adjCells;

    public Cell(int x, int y, boolean isWall) {
        if (x < 0 || y < 0) throw new IllegalArgumentException("x, y must be greater than 0");
        this.x = x;
        this.y = y;
        this.isWall = isWall;
        adjCells = new LinkedList<>();
    }

    public void addAdjCell(Cell c) {
        if (c == null) throw new IllegalArgumentException("Cell cannot be null");
        adjCells.add(c);
    }

    public LinkedList<Cell> getAdjCells() {
        return adjCells;
    }
}
Now to write our DFS function. A DFS recursively touches every reachable cell once with the following steps:
Mark the current cell as visited
Loop through each adjacent cell
If a cell has not already been visited, DFS that cell and add the number of cells it reached to the current tally
Return the tally plus one (for the current cell)
You can see a visualization of this here. With all the helper functionality we wrote already, this is pretty simple:
MapHelper.java
class MapHelper {
    public static int countReachableCells(Map map) {
        if (map == null) throw new IllegalArgumentException("Arguments cannot be null");
        boolean[][] visited = new boolean[map.height][map.width];
        // subtract one to exclude starting point
        return dfs(map.getStartCell(), visited) - 1;
    }

    private static int dfs(Cell currentCell, boolean[][] visited) {
        visited[currentCell.y][currentCell.x] = true;
        int touchedCells = 0;
        for (Cell adjCell : currentCell.getAdjCells()) {
            if (!adjCell.isWall && !visited[adjCell.y][adjCell.x]) {
                touchedCells += dfs(adjCell, visited);
            }
        }
        return ++touchedCells;
    }
}
And that's it! Let me know if you need any explanations about the code.
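For example, feeding in the grid from your question (with the * encoded as a 2, per the assumption above), a small demo would be:
public class MapDemo {
    public static void main(String[] args) {
        int[][] mapData = {
            {0, 0, 1, 1, 1, 1, 1, 0},
            {0, 1, 0, 0, 0, 0, 1, 0},
            {1, 0, 0, 2, 1, 1, 0, 0},
            {1, 0, 0, 0, 1, 0, 0, 0},
            {1, 1, 1, 1, 1, 0, 0, 0}
        };
        Map map = new Map(mapData);
        System.out.println("There are " + MapHelper.countReachableCells(map)
                + " places the object can go to.");
    }
}
This prints 9 for the grid above, matching the count you expected.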
So let's say we have a code block that we want to execute 70% of the time and another one 30% of the time.
if (Math.random() < 0.7)
    70percentmethod();
else
    30percentmethod();
Simple enough. But what if we want it to be easily expandable to say, 30%/60%/10% etc.?
Here, every change would require adding to and adjusting all the if statements, which isn't exactly great to use: it's slow and mistake-inducing.
So far I've found large switches to be decently useful for this use case, for example:
switch (rand(0, 10)) {
    case 0:
    case 1:
    case 2:
    case 3:
    case 4:
    case 5:
    case 6:
    case 7: 70percentmethod(); break;
    case 8:
    case 9:
    case 10: 30percentmethod(); break;
}
Which can be very easily changed to:
switch (rand(0, 10)) {
    case 0: 10percentmethod(); break;
    case 1:
    case 2:
    case 3:
    case 4:
    case 5:
    case 6:
    case 7: 60percentmethod(); break;
    case 8:
    case 9:
    case 10: 30percentmethod(); break;
}
But these have their drawbacks as well, being cumbersome and split into a predetermined number of divisions.
Something ideal would be based on a "frequency number" system I guess, like so:
(1,a),(1,b),(2,c) -> 25% a, 25% b, 50% c
then if you added another one:
(1,a),(1,b),(2,c),(6,d) -> 10% a, 10% b, 20% c, 60% d
So you simply add up the numbers, make the sum equal 100%, and then split accordingly.
I suppose it wouldn't be that much trouble to make a handler for it with a customized hashmap or something, but I'm wondering if there's some established way/pattern or lambda for it before I go all spaghetti on this.
EDIT: See the edit at the end for a more elegant solution; I'll leave this in though.
You can use a NavigableMap to store these methods mapped to their percentages.
NavigableMap<Double, Runnable> runnables = new TreeMap<>();
runnables.put(0.3, this::thirtyPercentMethod);
runnables.put(1.0, this::seventyPercentMethod);

public static void runRandomly(Map<Double, Runnable> runnables) {
    double percentage = Math.random();
    for (Map.Entry<Double, Runnable> entry : runnables.entrySet()) {
        if (percentage < entry.getKey()) {
            entry.getValue().run();
            return; // make sure you only call one method
        }
    }
    throw new RuntimeException("map not filled properly for " + percentage);
}

// or, because I'm still practicing streams by using them for everything
public static void runRandomly(Map<Double, Runnable> runnables) {
    double percentage = Math.random();
    runnables.entrySet().stream()
            .filter(e -> percentage < e.getKey())
            .findFirst()
            .orElseThrow(() -> new RuntimeException("map not filled properly for " + percentage))
            .getValue().run();
}
The NavigableMap is sorted by its keys (a HashMap, by contrast, gives no guarantees about entry order), so you get the entries ordered by their percentages. This is relevant because if you have two items (3,r1),(7,r2), they result in the entries r1 = 0.3 and r2 = 1.0, and they need to be evaluated in this order (if they were evaluated in the reverse order, the result would always be r2).
As for the splitting, it should go something like this:
With a tuple class like this
static class Pair<X, Y>
{
    public Pair(X f, Y s)
    {
        first = f;
        second = s;
    }

    public final X first;
    public final Y second;
}
You can create a map like this
// the parameter contains the (1,m1), (1,m2), (3,m3) pairs
private static Map<Double, Runnable> splitToPercentageMap(Collection<Pair<Integer, Runnable>> runnables)
{
    // this adds all Runnables to lists of same int value,
    // overall those lists are sorted by that int (so least probable first)
    double total = 0;
    Map<Integer, List<Runnable>> byNumber = new TreeMap<>();
    for (Pair<Integer, Runnable> e : runnables)
    {
        total += e.first;
        List<Runnable> list = byNumber.getOrDefault(e.first, new ArrayList<>());
        list.add(e.second);
        byNumber.put(e.first, list);
    }

    Map<Double, Runnable> targetList = new TreeMap<>();
    double current = 0;
    for (Map.Entry<Integer, List<Runnable>> e : byNumber.entrySet())
    {
        for (Runnable r : e.getValue())
        {
            double percentage = (double) e.getKey() / total;
            current += percentage;
            targetList.put(current, r);
        }
    }
    return targetList;
}
And all of this added to a class:
class RandomRunner {
    private List<Pair<Integer, Runnable>> runnables = new ArrayList<>();

    public void add(int value, Runnable toRun) {
        runnables.add(new Pair<>(value, toRun));
    }

    public void remove(Runnable toRemove) {
        for (Iterator<Pair<Integer, Runnable>> r = runnables.iterator(); r.hasNext(); ) {
            if (toRemove == r.next().second) {
                r.remove();
                break;
            }
        }
    }

    public void runRandomly() {
        // split list, use code from above
    }
}
EDIT:
Actually, the above is what you get if you get an idea stuck in your head and don't question it properly.
Keeping the RandomRunner class interface, this is much easier:
class RandomRunner {
    List<Runnable> runnables = new ArrayList<>();

    public void add(int value, Runnable toRun) {
        // add the methods as often as their weight indicates.
        // this should be fine for smaller numbers;
        // if you get lists with millions of entries, optimize
        for (int i = 0; i < value; i++) {
            runnables.add(toRun);
        }
    }

    public void remove(Runnable r) {
        Iterator<Runnable> myRunnables = runnables.iterator();
        while (myRunnables.hasNext()) {
            if (myRunnables.next() == r) {
                myRunnables.remove();
            }
        }
    }

    public void runRandomly() {
        if (runnables.isEmpty()) return;
        // roll n-sided die
        int runIndex = ThreadLocalRandom.current().nextInt(0, runnables.size());
        runnables.get(runIndex).run();
    }
}
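A quick usage sketch (the two method names are placeholders for whatever you actually want to run):
RandomRunner runner = new RandomRunner();
runner.add(7, () -> seventyPercentMethod());
runner.add(3, () -> thirtyPercentMethod());
runner.runRandomly(); // runs seventyPercentMethod roughly 70% of the time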
All these answers seem quite complicated, so I'll just post the keep-it-simple alternative:
double rnd = Math.random();
if ((rnd -= 0.6) < 0)
    60percentmethod();
else if ((rnd -= 0.3) < 0)
    30percentmethod();
else
    10percentmethod();
Adding or changing a branch doesn't require touching the other lines, and one can quite easily see what happens without digging into auxiliary classes. A small downside is that it doesn't enforce that the percentages sum to 100%.
I am not sure if there is a common name for this, but I think I learned it as the "wheel of fortune" back in university.
It basically just works as you described: It receives a list of values and "frequency numbers" and one is chosen according to the weighted probabilities.
list = (1,a),(1,b),(2,c),(6,d)
total = list.sum()
rnd = random(0, total)
sum = 0
for i from 0 to list.size():
    sum += list[i]
    if sum >= rnd:
        return list[i]
return list.last()
The list can be a function parameter if you want to generalize this.
This also works with floating point numbers and the numbers don't have to be normalized. If you normalize (to sum up to 1 for example), you can skip the list.sum() part.
EDIT:
Due to demand, here is an actual compiling Java implementation and usage example:
import java.util.ArrayList;
import java.util.Random;

public class RandomWheel<T>
{
    private static final class RandomWheelSection<T>
    {
        public double weight;
        public T value;

        public RandomWheelSection(double weight, T value)
        {
            this.weight = weight;
            this.value = value;
        }
    }

    private ArrayList<RandomWheelSection<T>> sections = new ArrayList<>();
    private double totalWeight = 0;
    private Random random = new Random();

    public void addWheelSection(double weight, T value)
    {
        sections.add(new RandomWheelSection<T>(weight, value));
        totalWeight += weight;
    }

    public T draw()
    {
        double rnd = totalWeight * random.nextDouble();
        double sum = 0;
        for (int i = 0; i < sections.size(); i++)
        {
            sum += sections.get(i).weight;
            if (sum >= rnd)
                return sections.get(i).value;
        }
        return sections.get(sections.size() - 1).value;
    }

    public static void main(String[] args)
    {
        RandomWheel<String> wheel = new RandomWheel<String>();
        wheel.addWheelSection(1, "a");
        wheel.addWheelSection(1, "b");
        wheel.addWheelSection(2, "c");
        wheel.addWheelSection(6, "d");

        for (int i = 0; i < 100; i++)
            System.out.print(wheel.draw());
    }
}
While the selected answer works, it is unfortunately asymptotically slow for your use case. Instead, you could use something called alias sampling. Alias sampling (or the alias method) is a technique for selecting elements with a weighted distribution. If the weights of those elements don't change, you can do selection in O(1) time! If they do change, you can still get amortized O(1) time as long as the ratio between the number of selections you make and the changes you make to the alias table (changing the weights) is high. The currently selected answer suggests an O(N) algorithm; the next best thing is O(log(N)) given sorted probabilities and binary search, but nothing is going to beat the O(1) time I suggested.
This site provides a good overview of the alias method that is mostly language agnostic. Essentially, you create a table where each entry represents the outcome of two probabilities: there is a single threshold per table entry; below the threshold you get one value, above it you get another. You spread the larger probabilities across multiple table entries in order to create a probability graph with an area of one for all probabilities combined.
Say you have the probabilities A, B, C, and D, which have the values 0.1, 0.1, 0.1 and 0.7 respectively. The alias method spreads the probability of 0.7 across all the others. One index corresponds to each probability: you would have 0.1 and 0.15 in each of A's, B's and C's indices, and 0.25 in D's index. With this you normalize each probability so that you end up with a 0.4 chance of getting A and a 0.6 chance of getting D in A's index (0.1/(0.1 + 0.15) and 0.15/(0.1 + 0.15) respectively), the same for B's and C's indices, and a 100% chance of getting D in D's index (0.25/0.25 is 1).
Given an unbiased uniform PRNG (Math.random()) for indexing, you get an equal probability of choosing each index, and the per-index coin flip provides the weighted probability. You have a 25% chance of landing on the A or the D slot, but within A's slot you only have a 40% chance of picking A and a 60% chance of picking D. 0.40 * 0.25 = 0.1, our original probability, and if you add up all of D's probabilities strewn throughout the other indices, you get 0.70 again.
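For the A, B, C, D example, the finished table looks like this (the threshold is the chance of keeping the column's own value; otherwise its alias is returned):
index:      A    B    C    D
threshold:  0.4  0.4  0.4  1.0
alias:      D    D    D    -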
So to do random selection, you only need to generate a random index from 0 to N and then do a coin flip; no matter how many items you add, this is very fast and constant in cost. Making an alias table doesn't take that many lines of code either: my Python version takes 80 lines including import statements and line breaks, and the version presented in the Pandas article is similarly sized (and it's C++).
For your Java implementation, one could map between probabilities and list indices for the functions you must execute, creating a list of functions which are executed as you index into it; alternatively, you could use function objects (functors) which have a method that you use to pass parameters in to execute.
ArrayList<YourFunctionObject> functionList;
// add functions
AliasSampler aliasSampler = new AliasSampler(listOfProbabilities);
// somewhere later, with some type T and some parameter values
int index = aliasSampler.sampleIndex();
T result = functionList.get(index).apply(parameters);
EDIT:
I've created a Java version of the AliasSampler, using classes; it provides the sampleIndex method and should be usable as shown above.
import java.util.ArrayList;
import java.util.Collections;
import java.util.Random;

public class AliasSampler {
    private ArrayList<Double> binaryProbabilityArray;
    private ArrayList<Integer> aliasIndexList;
    // reuse one Random instead of creating one per call
    private Random random = new Random();

    AliasSampler(ArrayList<Double> probabilities) {
        // java 8 needed here; allow for floating-point rounding in the sum
        assert Math.abs(probabilities.stream().mapToDouble(Double::doubleValue).sum() - 1.0) < 1e-9;
        int n = probabilities.size();
        // probabilityArray is the list of incoming probabilities scaled by the
        // number of probabilities. This allows us to figure out which probabilities
        // need to be spread to others since they are too large,
        // ie [0.1 0.1 0.1 0.7] = [0.4 0.4 0.4 2.80]
        ArrayList<Double> probabilityArray = new ArrayList<>();
        for (Double probability : probabilities) {
            probabilityArray.add(probability * n);
        }
        binaryProbabilityArray = new ArrayList<Double>(Collections.nCopies(n, 0.0));
        aliasIndexList = new ArrayList<Integer>(Collections.nCopies(n, 0));
        ArrayList<Integer> lessThanOneIndexList = new ArrayList<Integer>();
        ArrayList<Integer> greaterThanOneIndexList = new ArrayList<Integer>();
        for (int index = 0; index < probabilityArray.size(); index++) {
            double probability = probabilityArray.get(index);
            if (probability < 1.0) {
                lessThanOneIndexList.add(index);
            } else {
                greaterThanOneIndexList.add(index);
            }
        }
        // while we still have indices to check in each list, we spread the probability
        // of the larger entries: in our example this takes the greater-than-one element (2.80)
        // and repeatedly removes 0.6, spreading it to different indices, so
        // (((2.80 - 0.6) - 0.6) - 0.6) will equal 1.0, and the rest will be 0.4 + 0.6 = 1.0 as well.
        while (lessThanOneIndexList.size() != 0 && greaterThanOneIndexList.size() != 0) {
            // https://stackoverflow.com/questions/16987727/removing-last-object-of-arraylist-in-java
            // last element removal is equivalent to pop, java does this in constant time
            int lessThanOneIndex = lessThanOneIndexList.remove(lessThanOneIndexList.size() - 1);
            int greaterThanOneIndex = greaterThanOneIndexList.remove(greaterThanOneIndexList.size() - 1);
            double probabilityLessThanOne = probabilityArray.get(lessThanOneIndex);
            binaryProbabilityArray.set(lessThanOneIndex, probabilityLessThanOne);
            aliasIndexList.set(lessThanOneIndex, greaterThanOneIndex);
            probabilityArray.set(greaterThanOneIndex, probabilityArray.get(greaterThanOneIndex) + probabilityLessThanOne - 1);
            if (probabilityArray.get(greaterThanOneIndex) < 1) {
                lessThanOneIndexList.add(greaterThanOneIndex);
            } else {
                greaterThanOneIndexList.add(greaterThanOneIndex);
            }
        }
        // if there are any probabilities left in either index list, they can't be spread
        // across the other indices, so they are set with probability 1.0. They still have
        // the probabilities they should at this step; it works out mathematically.
        while (greaterThanOneIndexList.size() != 0) {
            int greaterThanOneIndex = greaterThanOneIndexList.remove(greaterThanOneIndexList.size() - 1);
            binaryProbabilityArray.set(greaterThanOneIndex, 1.0);
        }
        while (lessThanOneIndexList.size() != 0) {
            int lessThanOneIndex = lessThanOneIndexList.remove(lessThanOneIndexList.size() - 1);
            binaryProbabilityArray.set(lessThanOneIndex, 1.0);
        }
    }

    public int sampleIndex() {
        int index = random.nextInt(binaryProbabilityArray.size());
        double r = Math.random();
        if (r < binaryProbabilityArray.get(index)) {
            return index;
        } else {
            return aliasIndexList.get(index);
        }
    }
}
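A usage sketch with the probabilities from the example above; over many draws, index 3 should come up about 70% of the time:
import java.util.ArrayList;
import java.util.Arrays;

public class AliasSamplerDemo {
    public static void main(String[] args) {
        ArrayList<Double> probabilities = new ArrayList<>(Arrays.asList(0.1, 0.1, 0.1, 0.7));
        AliasSampler sampler = new AliasSampler(probabilities);
        int[] counts = new int[4];
        for (int i = 0; i < 100000; i++) {
            counts[sampler.sampleIndex()]++;
        }
        System.out.println(Arrays.toString(counts)); // roughly [10000, 10000, 10000, 70000]
    }
}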
You could compute the cumulative probability for each class, pick a random number from [0; 1) and see where that number falls.
class WeightedRandomPicker {
    private static Random random = new Random();

    public static int choose(double[] probabilities) {
        double randomVal = random.nextDouble();
        double cumulativeProbability = 0;
        for (int i = 0; i < probabilities.length; ++i) {
            cumulativeProbability += probabilities[i];
            if (randomVal < cumulativeProbability) {
                return i;
            }
        }
        return probabilities.length - 1; // to account for numerical errors
    }

    public static void main(String[] args) {
        double[] probabilities = new double[]{0.1, 0.1, 0.2, 0.6}; // the final value is optional
        for (int i = 0; i < 20; ++i) {
            System.out.printf("%d\n", choose(probabilities));
        }
    }
}
The following is a bit like daniu's answer, but makes use of the methods provided by TreeMap:
private final NavigableMap<Double, Runnable> map = new TreeMap<>();
{
    map.put(0.3d, this::branch30Percent);
    map.put(1.0d, this::branch70Percent);
}

private final SecureRandom random = new SecureRandom();

private void branch30Percent() {}

private void branch70Percent() {}

public void runRandomly() {
    final Runnable value = map.tailMap(random.nextDouble(), true).firstEntry().getValue();
    value.run();
}
This way there is no need to iterate the whole map until the matching entry is found; instead, TreeMap's capability of finding an entry whose key compares in a specific way to a given key is used. This will, however, only make a difference if the number of entries in the map is large. It does save a few lines of code, though.
I'd do something like this:
class RandomMethod {
    private final Runnable method;
    private final int probability;

    RandomMethod(Runnable method, int probability) {
        this.method = method;
        this.probability = probability;
    }

    public int getProbability() { return probability; }

    public void run() { method.run(); }
}

class MethodChooser {
    private final List<RandomMethod> methods;
    private final int total;

    MethodChooser(final List<RandomMethod> methods) {
        this.methods = methods;
        this.total = methods.stream().collect(
            Collectors.summingInt(RandomMethod::getProbability)
        );
    }

    public void chooseMethod() {
        final Random random = new Random();
        final int choice = random.nextInt(total);
        int count = 0;
        for (final RandomMethod method : methods) {
            count += method.getProbability();
            if (choice < count) {
                method.run();
                return;
            }
        }
    }
}
Sample usage:
MethodChooser chooser = new MethodChooser(Arrays.asList(
    new RandomMethod(Blah::aaa, 1),
    new RandomMethod(Blah::bbb, 3),
    new RandomMethod(Blah::ccc, 1)
));

IntStream.range(0, 100).forEach(
    i -> chooser.chooseMethod()
);
I wrote this implementation of the branch and bound knapsack algorithm based on the pseudo-Java code from here. Unfortunately, it chokes on memory for large instances of the problem, like this one. Why is this? How can I make this implementation more memory-efficient?
The input file at the link is formatted this way:
numberOfItems maxWeight
profitOfItem1 weightOfItem1
.
.
.
profitOfItemN weightOfItemN
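For reference, here is a minimal sketch of a reader for that format. The Item class is not shown in the question, so the one below is a hypothetical stand-in with the fields the code further down expects (value, weight, valueWeightQuotient):
import java.io.File;
import java.io.FileNotFoundException;
import java.util.LinkedList;
import java.util.Scanner;

// hypothetical Item class, shaped to match the fields used by the code below
class Item {
    long value;
    long weight;
    double valueWeightQuotient;
}

class InstanceReader {
    // returns the item list; the knapsack capacity is written into maxWeight[0]
    static LinkedList<Item> read(String path, double[] maxWeight) throws FileNotFoundException {
        LinkedList<Item> items = new LinkedList<>();
        try (Scanner in = new Scanner(new File(path))) {
            int numberOfItems = in.nextInt();
            maxWeight[0] = in.nextDouble();
            for (int i = 0; i < numberOfItems; i++) {
                Item item = new Item();
                item.value = in.nextLong();   // profitOfItemI
                item.weight = in.nextLong();  // weightOfItemI
                item.valueWeightQuotient = (double) item.value / item.weight;
                items.add(item);
            }
        }
        return items;
    }
}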
// http://books.google.com/books?id=DAorddWEgl0C&pg=PA233&source=gbs_toc_r&cad=4#v=onepage&q&f=true
import java.util.Comparator;
import java.util.LinkedList;
import java.util.PriorityQueue;

class ItemComparator implements Comparator<Item> {
    public int compare(Item i1, Item i2) {
        if (i1.valueWeightQuotient < i2.valueWeightQuotient)
            return 1;
        if (i2.valueWeightQuotient < i1.valueWeightQuotient)
            return -1;
        else { // valueWeightQuotients are equal
            if (i1.weight < i2.weight) {
                return 1;
            }
            if (i2.weight < i1.weight) {
                return -1;
            }
        }
        return 0;
    }
}

class Node {
    int level;
    int profit;
    int weight;
    double bound;
}

class NodeComparator implements Comparator<Node> {
    public int compare(Node n1, Node n2) {
        if (n1.bound < n2.bound)
            return 1;
        if (n2.bound < n1.bound)
            return -1;
        return 0;
    }
}

class Solution {
    long weight;
    long value;
}

public class BranchAndBound {

    static Solution branchAndBound2(LinkedList<Item> items, double W) {
        double timeStart = System.currentTimeMillis();
        int n = items.size();
        int[] p = new int[n];
        int[] w = new int[n];
        for (int i = 0; i < n; i++) {
            p[i] = (int) items.get(i).value;
            w[i] = (int) items.get(i).weight;
        }

        Node u;
        Node v = new Node(); // tree root
        int maxProfit = 0;
        int usedWeight = 0;
        NodeComparator nc = new NodeComparator();
        PriorityQueue<Node> PQ = new PriorityQueue<Node>(n, nc);
        v.level = -1;
        v.profit = 0;
        v.weight = 0; // v initialized to -1, dummy root
        v.bound = bound(v, W, n, w, p);
        PQ.add(v);
        while (!PQ.isEmpty()) {
            v = PQ.poll();
            u = new Node();
            if (v.bound > maxProfit) { // check if node is still promising
                u.level = v.level + 1; // set u to the child that includes the next item
                u.weight = v.weight + w[u.level];
                u.profit = v.profit + p[u.level];
                if (u.weight <= W && u.profit > maxProfit) {
                    maxProfit = u.profit;
                    usedWeight = u.weight;
                }
                u.bound = bound(u, W, n, w, p);
                if (u.bound > maxProfit) {
                    PQ.add(u);
                }
                u = new Node();
                u.level = v.level + 1;
                u.weight = v.weight; // set u to the child that does not include the next item
                u.profit = v.profit;
                u.bound = bound(u, W, n, w, p);
                if (u.bound > maxProfit)
                    PQ.add(u);
            }
        }
        Solution solution = new Solution();
        solution.value = maxProfit;
        solution.weight = usedWeight;
        double timeStop = System.currentTimeMillis();
        double elapsedTime = timeStop - timeStart;
        System.out.println("* Time spent in branch and bound (milliseconds):" + elapsedTime);
        return solution;
    }

    static double bound(Node u, double W, int n, int[] w, int[] p) {
        int j = 0;
        int k = 0;
        int totWeight = 0;
        double result = 0;
        if (u.weight >= W)
            return 0;
        else {
            result = u.profit;
            totWeight = u.weight; // that's why it doesn't
            if (u.level < w.length) {
                j = u.level + 1;
            }
            int weightSum;
            while ((j < n) && ((weightSum = totWeight + w[j]) <= W)) {
                totWeight = weightSum; // grab as many items as possible
                result = result + p[j];
                j++;
            }
            k = j; // use k for consistency with formula in text
            if (k < n) {
                result = result + ((W - totWeight) * p[k] / w[k]); // grab fraction of excluded kth item
            }
            return result;
        }
    }
}
I got a slightly speedier implementation by taking away all the generic Collection instances and using arrays instead.
Not sure whether you still need insight into the algorithm or whether your tweaks have solved your problem, but with a breadth-first branch and bound algorithm like the one you've implemented, there's always going to be the potential for a memory usage problem. You're hoping, of course, that you'll be able to rule out a sufficient number of branches as you go along to keep the number of nodes in your priority queue relatively small, but in the worst case you could end up with as many nodes in memory as there are possible permutations of item selections in the knapsack. That worst case is, of course, highly unlikely, but for large problem instances even an average tree could end up populating your priority queue with millions of nodes.
If you're going to be throwing lots of unforeseen large problem instances at your code and need the peace of mind of knowing that no matter how many branches the algorithm has to consider you'll never run out of memory, I'd consider a depth-first branch and bound algorithm, like the Horowitz-Sahni algorithm outlined in section 2.5.1 of this book: http://www.or.deis.unibo.it/knapsack.html. For some problem instances this approach will be less efficient, in terms of the number of possible solutions that have to be considered before the optimal one is found, but for other instances it will be more efficient; it really depends on the structure of the tree.
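To make the contrast concrete, here is a minimal sketch of a depth-first branch and bound (not the full Horowitz-Sahni algorithm, just the depth-first skeleton) that reuses the Node class and bound() function from your code. Its memory use is bounded by the O(n) recursion depth instead of by a priority queue:
static int bestProfit = 0;

// 'level' is the index of the last decided item, matching the dummy-root
// convention in the question's code (level == -1 means nothing decided yet)
static void dfsKnapsack(int level, int profit, int weight,
                        int n, int[] w, int[] p, double W) {
    if (weight > W) return;                       // infeasible branch
    if (profit > bestProfit) bestProfit = profit; // update incumbent
    if (level == n - 1) return;                   // all items decided
    Node u = new Node();
    u.level = level;
    u.profit = profit;
    u.weight = weight;
    if (bound(u, W, n, w, p) <= bestProfit) return; // prune: bound cannot beat incumbent
    // branch 1: include the next item
    dfsKnapsack(level + 1, profit + p[level + 1], weight + w[level + 1], n, w, p, W);
    // branch 2: exclude the next item
    dfsKnapsack(level + 1, profit, weight, n, w, p, W);
}

// initial call, mirroring the dummy root: dfsKnapsack(-1, 0, 0, n, w, p, W);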
I am writing a program that takes ordered pairs and determines if they are Reflexive, Symmetric and Transitive...
Given these points: (1,1)(1,2)(2,2)(4,4)(2,1)(3,3)
Reflexive : all these are present: (1,1)(2,2)(3,3)(4,4)
Symmetric: if (a, b) is present then (b, a) is present
Transitive: if (a, b) and (b, c) is present then (a,c) must also be present...
I am having problems because I started by using linked lists but decided that arrays would be easier. I was told to use a Point[] array, since it would be easier than parallel arrays... This is what I have, and I am not sure if it is even right?? I can't even seem to get numbers to store into my array!! Help please!!!
/****************************************************************
 * Name: Cheryl Minor Date: March 8, 2011
 *
 * Program: Write a program that checks whether a relation R
 * is an equivalence relation. If R is an equivalence relation
 * the program outputs the equivalence classes of R.
 ****************************************************************/
import java.io.*;
import java.util.*;

public class Problem17
{
    static class Point {
        int x;
        int y;
    }

    public static void main(String[] args)
    {
        Point[] line = new Point[6];
        for (int i = 0; i < line.length; i++)
        {
            line[i] = new Point();
        }

        line[0].x = 1;
        line[1].x = 1;
        line[2].x = 2;
        line[3].x = 4;
        line[4].x = 2;
        line[5].x = 3;

        line[0].y = 1;
        line[1].y = 2;
        line[2].y = 2;
        line[3].y = 4;
        line[4].y = 1;
        line[5].y = 3;

        System.out.println(line);
    }
}
There is one universal solution to all computer programs (aka the golden bullet): divide et impera (invented by the Romans).
Solve your problem step by step.
... a program that takes ordered pairs ... Given these points: (1,1)(1,2)(2,2)(4,4)(2,1)(3,3)
I have a first dissonance here. :)
You can write something like this. This algorithm can be improved, but this way it is more readable:
Point[] list = new Point[5];
...
for (int i = 0; i < list.length; i++) {
    // checking Reflexive
    if (list[i].x == list[i].y) System.out.println("Reflexive" + list[i]);

    for (int j = i + 1; j < list.length; j++) {
        // checking Symmetric
        if (list[i].x == list[j].y && list[i].y == list[j].x) ...

        for (int k = j + 1; k < list.length; k++) {
            // checking Transitive
            if (list[i].x == list[k].x && list[i].y == list[j].x && ...
        }
    }
}
Small advice:
add a constructor Point(x, y)
add a toString() to Point
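Putting both pieces of advice together, here is a minimal self-contained sketch of the full check for your sample points (the contains helper and the class name are just for illustration):
public class RelationCheck {
    static class Point {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
        public String toString() { return "(" + x + "," + y + ")"; }
    }

    // helper: is the pair (a, b) present in the relation?
    static boolean contains(Point[] r, int a, int b) {
        for (Point p : r)
            if (p.x == a && p.y == b) return true;
        return false;
    }

    public static void main(String[] args) {
        Point[] r = {
            new Point(1, 1), new Point(1, 2), new Point(2, 2),
            new Point(4, 4), new Point(2, 1), new Point(3, 3)
        };
        boolean reflexive = true, symmetric = true, transitive = true;
        for (Point p : r) {
            // reflexive: every element that appears must relate to itself
            if (!contains(r, p.x, p.x) || !contains(r, p.y, p.y)) reflexive = false;
            // symmetric: (a, b) present implies (b, a) present
            if (!contains(r, p.y, p.x)) symmetric = false;
        }
        // transitive: (a, b) and (b, c) present implies (a, c) present
        for (Point p : r)
            for (Point q : r)
                if (p.y == q.x && !contains(r, p.x, q.y)) transitive = false;
        System.out.println("Reflexive: " + reflexive + ", Symmetric: " + symmetric
                + ", Transitive: " + transitive);
    }
}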