It's my first time with CGAL. Some of you may ask why I have to learn CGAL for something like this, but it's a new project that I must do (and... yes, I must use CGAL and Java combined) :/ Long story short... I only have:
Two double arrays, representing x and y coordinates of my vertices. Let's call them double[] x, y;.
Both arrays have S random values.
Two vertices u and w are connected if distance(x[u], y[u], x[w], y[w]) < CONSTANT (of course I actually test distanceSquared(x[u], y[u], x[w], y[w]) < CONSTANT_SQUARED, so I avoid calling sqrt()).
x and y are filled randomly with values from 0 to UPPER_LIMIT, no other info is given.
Question: do x and y describe a connected graph?
Right now I have two algorithms:
Algorithm 1:
Build an adjacency list (ArrayList<Integer>[] adjLists;) for each vertex (only the upper triangular part of the distance matrix is explored). Complexity O(|V|^2) (V = vertex set).
Recursive graph exploration with vertex marking and counting; if the number of visited vertices equals S, the graph has only one connected component, i.e. it is connected. Complexity O(|E|) (E = edge set). A sketch of this approach follows.
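For reference, a minimal sketch of what Algorithm 1 looks like in plain Java (this is an illustration, not the question's original code; CONSTANT_SQUARED is the squared threshold from above, and an explicit stack replaces the recursion only to avoid StackOverflowError on large inputs):

    import java.util.ArrayDeque;
    import java.util.ArrayList;
    import java.util.Deque;
    import java.util.List;

    public class ConnectivityCheck {
        static final double CONSTANT_SQUARED = 100.0; // assumed threshold, adjust as needed

        static boolean isConnected(double[] x, double[] y) {
            final int s = x.length;
            if (s == 0) return true;
            // Build the adjacency lists (upper triangle only, O(S^2)).
            List<List<Integer>> adj = new ArrayList<>();
            for (int i = 0; i < s; i++) adj.add(new ArrayList<>());
            for (int u = 0; u < s; u++) {
                for (int w = u + 1; w < s; w++) {
                    double dx = x[u] - x[w], dy = y[u] - y[w];
                    if (dx * dx + dy * dy < CONSTANT_SQUARED) {
                        adj.get(u).add(w);
                        adj.get(w).add(u);
                    }
                }
            }
            // Depth-first search from vertex 0, counting visited vertices (O(S + E)).
            boolean[] visited = new boolean[s];
            Deque<Integer> stack = new ArrayDeque<>();
            stack.push(0);
            visited[0] = true;
            int count = 1;
            while (!stack.isEmpty()) {
                int u = stack.pop();
                for (int w : adj.get(u)) {
                    if (!visited[w]) {
                        visited[w] = true;
                        count++;
                        stack.push(w);
                    }
                }
            }
            return count == s;
        }
    }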
Algorithm 2:
private static boolean algorithmGraph(double[] x, double[] y) {
    int unchecked, inside = 0, current = 0;
    double switchVar;
    while (current <= inside && inside != S - 1) {
        unchecked = inside + 1;
        while (unchecked < S) {
            if ((x[current] - x[unchecked]) * (x[current] - x[unchecked]) + (y[current] - y[unchecked]) * (y[current] - y[unchecked]) <= CONSTANT_SQUARED) {
                inside++;
                // switch x coordinates | unchecked <-> inside
                switchVar = x[unchecked];
                x[unchecked] = x[inside];
                x[inside] = switchVar;
                // switch y coordinates | unchecked <-> inside
                switchVar = y[unchecked];
                y[unchecked] = y[inside];
                y[inside] = switchVar;
            }
            unchecked++;
        }
        current++;
    }
    return inside == S - 1;
}
Funny thing: the second one is slower. I do not use any data structures and the code is iterative and in-place, but the heavy use of swapping makes it slow as hell.
The problem spec changed and now I must do it with CGAL and Java. I'll read through "https://github.com/CGAL/cgal-swig-bindings" to learn how to use CGAL within Java... but I'd like some help with this specific instance of CGAL code. Are there faster algorithms already implemented in CGAL?
Thank you for your time, guys! Happy coding!
I believe that, without a method of spatial indexing, the best performance you are going to achieve in the worst-case-scenario (all connected) is going to be O(n*(n-1)/2).
If you can afford to build a spatial index (i.e. you have enough memory to pay for the boost in speed), you may consider an R-tree and its variants - insertion and search are O(log2(n)) per point: this will get your "outlier detection by examining distances" approach down to a cost of O(n*log2(n)) in the worst-case scenario.
I am creating javafx.Text objects (maintained in an instance of LinkedList) and placing them on a javafx.Group (i.e. sourceGroup.getChildren().add(Text)). Each Text instance holds only one letter (not an entire word).
I have a click event that returns the x and y coordinates of the click. I want a cursor to drop in front of the clicked letter. This needs to be done in constant time, so I can't just iterate over my LinkedList and examine each Text's x and y values.
There are certain restrictions on the libraries I can use. I can essentially only use JavaFX and java.util classes.
I was reading that HashMap lookups essentially take place in constant time. My idea to drop the cursor is to:
1) While adding a Text to the LinkedList instance, update four HashMaps: one for the upper X value, one for the lower X value, and the same for the Y values.
2) When it comes time to drop a cursor, grab the x and y coordinates of the mouse click and perform a series of intersections (this part I'm not sure how to do yet) which should return the Text or subset of Texts that fall within the X range and the Y range.
My Question:
Is there a better/more efficient way to do this? Am I being terribly inefficient with this idea?
Just add a click listener to each text item, and, when the mouse is clicked on the text, reposition the cursor based upon the text bounds in parent.
It's your homework, so you may or may not wish to look at the following code...
import javafx.application.Application;
import javafx.geometry.*;
import javafx.scene.Scene;
import javafx.scene.control.ScrollPane;
import javafx.scene.layout.FlowPane;
import javafx.scene.layout.Pane;
import javafx.scene.shape.Line;
import javafx.scene.text.Text;
import javafx.stage.Stage;
import java.util.stream.Collectors;

public class SillySong extends Application {
    private static final String lyrics =
            "Mares eat oats and does eat oats and little lambs eat ivy. ";
    private static final int CURSOR_HEIGHT = 16;
    private static final int INSET = 2;
    private static final int N_LYRIC_REPEATS = 10;

    private Line cursor = new Line(INSET, INSET, INSET, INSET + CURSOR_HEIGHT);

    @Override
    public void start(Stage stage) {
        FlowPane textPane = new FlowPane();
        for (int i = 0; i < N_LYRIC_REPEATS; i++) {
            lyrics.codePoints()
                    .mapToObj(this::createTextNode)
                    .collect(Collectors.toCollection(textPane::getChildren));
        }
        textPane.setPadding(new Insets(INSET));

        Pane layout = new Pane(textPane, cursor) {
            @Override
            protected void layoutChildren() {
                super.layoutChildren();
                layoutInArea(textPane, 0, 0, getWidth(), getHeight(), 0, new Insets(0), HPos.LEFT, VPos.TOP);
            }
        };

        ScrollPane scrollPane = new ScrollPane(layout);
        scrollPane.setFitToWidth(true);

        stage.setScene(new Scene(scrollPane, 200, 150));
        stage.show();
    }

    private Text createTextNode(int c) {
        Text text = new Text(new String(Character.toChars(c)));
        text.setOnMouseClicked(event -> {
            Bounds bounds = text.getBoundsInParent();
            cursor.setStartX(bounds.getMinX());
            cursor.setStartY(bounds.getMinY());
            cursor.setEndX(bounds.getMinX());
            cursor.setEndY(bounds.getMinY() + CURSOR_HEIGHT);
        });
        return text;
    }

    public static void main(String[] args) {
        launch(args);
    }
}
This was just a basic sample; if you want to study something more full-featured, look at the source of RichTextFX.
Truly, new TextField() is simpler :-)
So, what's really going on in the sample above? Where did all your fancy hash tables for click support go? How is JavaFX determining you clicked on a given text node? Is it using some kind of tricky algorithm for spatial indexing such as a quadtree or a kdtree?
Nah, it is just doing a straight depth first search of the scene graph tree and returning the first node it finds that intersects the click point, taking care to loop through children in reverse order so that the last added child to a parent group receives click processing priority over earlier children if the two children overlap.
For a parent node (Parent.java source):
@Deprecated
@Override protected void impl_pickNodeLocal(PickRay pickRay, PickResultChooser result) {
    double boundsDistance = impl_intersectsBounds(pickRay);
    if (!Double.isNaN(boundsDistance)) {
        for (int i = children.size()-1; i >= 0; i--) {
            children.get(i).impl_pickNode(pickRay, result);
            if (result.isClosed()) {
                return;
            }
        }
        if (isPickOnBounds()) {
            result.offer(this, boundsDistance, PickResultChooser.computePoint(pickRay, boundsDistance));
        }
    }
}
For a leaf node (Node.java):
@Deprecated
protected void impl_pickNodeLocal(PickRay localPickRay, PickResultChooser result) {
    impl_intersects(localPickRay, result);
}
So you don't need to implement your own geometry processing and pick handling with a complicated supporting algorithm, as JavaFX already provides an appropriate structure (the scene graph) and is fully capable of processing click handling events for it.
Addressing additional questions or concerns
I know that searching trees is fast and efficient, but it isn't constant time right?
Searching trees is not necessarily fast or efficient. Search speed depends upon the depth and width of the tree and whether the tree is ordered, allowing a binary search. The scene graph is not a binary search tree (https://en.wikipedia.org/wiki/Binary_search_tree), a red-black tree, a b-tree, or any other kind of tree that is optimized for search. The hit-testing algorithm that JavaFX uses, as can be seen above, is a depth-first traversal of the tree, which is linear in time: O(n).
If you wanted, you could subclass Parent, Region, Pane or Group and implement your own search algorithm for picking by overriding functions such as impl_pickNodeLocal. For example, if you constrain your field to a fixed-width font, calculating which letter a click will hit is a trivial function that can be done in constant time via a simple mathematical equation and no additional data structures, as sketched below.
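As a rough illustration of that constant-time calculation (plain arithmetic, not JavaFX API; CHAR_WIDTH, LINE_HEIGHT and CHARS_PER_LINE are hypothetical constants you would derive from your monospaced font metrics):

    // Hypothetical fixed-width hit test: every glyph occupies a CHAR_WIDTH x LINE_HEIGHT cell
    // and lines wrap after CHARS_PER_LINE characters (all three values are assumptions).
    public class FixedWidthHitTest {
        static final double CHAR_WIDTH = 8.0;    // assumed glyph advance in pixels
        static final double LINE_HEIGHT = 16.0;  // assumed line height in pixels
        static final int CHARS_PER_LINE = 40;    // assumed wrap width in characters

        /** Maps a click position (local to the text pane) to a character index, in O(1). */
        static int characterIndexAt(double clickX, double clickY) {
            int column = Math.min((int) (clickX / CHAR_WIDTH), CHARS_PER_LINE - 1);
            int row = (int) (clickY / LINE_HEIGHT);
            return row * CHARS_PER_LINE + column;
        }
    }

The returned index would still need to be clamped to the actual text length before positioning the cursor.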
Starting to get really off-topic aside
But, even if you can implement a custom hit-processing algorithm, you need to consider whether you really should. Obviously the default hit-testing algorithm in JavaFX is sufficient for most applications, and further optimizations of it for the general use case are currently deemed unnecessary. If there existed some well-known golden algorithm or data structure that greatly improved its quality and there was sufficient demand for it, the hit-testing algorithm would have been further optimized already. So it is probably best to use the default implementation unless you are experiencing performance issues and you have a specific use case (such as a mapping application) where an alternate algorithm or data structure (such as an r-tree) can be used to effect a performance boost. Even then, you would want to benchmark various approaches on different sizes and types of data sets to validate the performance on those data sets.
You can see evidence of the optimization approach I described above in the multiply algorithm for BigIntegers in the JDK. You might think that number multiplication is a constant-time operation, but it's not for large numbers, because the digits of the numbers are spread across many bytes and it is necessary to process all of the bytes to perform the multiplication. There are various algorithms for processing the bytes for multiplication, but the choice of the "most efficient" one depends upon the properties of the numbers themselves (e.g. their size). For smaller numbers a straight loop for long multiplication is the most efficient, for larger numbers the Karatsuba algorithm is used, and for larger numbers again the Toom-Cook algorithm is used. The thresholds for just how large a number must be to switch to a different algorithm would have been chosen via analysis and benchmarking. Also, if the number is being multiplied by itself (it is being squared), a more efficient algorithm can be used to perform the square (so that is an example of a special edge case that is being specifically optimized for).
/**
 * Returns a BigInteger whose value is {@code (this * val)}.
 *
 * @implNote An implementation may offer better algorithmic
 * performance when {@code val == this}.
 *
 * @param val value to be multiplied by this BigInteger.
 * @return {@code this * val}
 */
public BigInteger multiply(BigInteger val) {
    if (val.signum == 0 || signum == 0)
        return ZERO;

    int xlen = mag.length;

    if (val == this && xlen > MULTIPLY_SQUARE_THRESHOLD) {
        return square();
    }

    int ylen = val.mag.length;

    if ((xlen < KARATSUBA_THRESHOLD) || (ylen < KARATSUBA_THRESHOLD)) {
        int resultSign = signum == val.signum ? 1 : -1;
        if (val.mag.length == 1) {
            return multiplyByInt(mag,val.mag[0], resultSign);
        }
        if (mag.length == 1) {
            return multiplyByInt(val.mag,mag[0], resultSign);
        }
        int[] result = multiplyToLen(mag, xlen,
                                     val.mag, ylen, null);
        result = trustedStripLeadingZeroInts(result);
        return new BigInteger(result, resultSign);
    } else {
        if ((xlen < TOOM_COOK_THRESHOLD) && (ylen < TOOM_COOK_THRESHOLD)) {
            return multiplyKaratsuba(this, val);
        } else {
            return multiplyToomCook3(this, val);
        }
    }
}
I have created a game board (5x5) and I now want to decide as fast as possible whether a move is legal. For example, a piece at (0,0) wants to go to (1,1): is that legal? First I tried to work this out with computations, but that seemed bothersome. I would rather hard-code the possible moves for each position on the board and then iterate through all the possible moves to see if they match the destination of the piece. I have problems getting this down on paper. This is what I would like:
//game piece is at 0,0 now, decide if 1,1 is legal
Point destination = new Point(1,1);
destination.findIn(legalMoves[0][0]);
The first problem I face is that I don't know how to put a list of possible moves in an array at, for example, index [0][0]. This must be fairly obvious, but I have been stuck on it for some time. I would like to create an array in which each cell holds a list of Point objects. So in pseudo-code: legalMoves[0][0] = {Point(1,1),Point(0,1),Point(1,0)}
I am not sure if this is efficient, but it makes more logical sense to me than something like [[1,1],[0,1],[1,0]]; I am not sold on either.
The second problem I have is that instead of creating the object in an instance variable legalMoves at every start of the game, I would rather read it from disk. I think it should be quicker this way? Is the Serializable interface the way to go?
My 3rd small problem is that for the 25 positions the legal moves are unbalanced. Some have 8 possible legal moves, others have 3. Maybe this is not a problem at all.
You are looking for a structure that will give you the candidate for a given point, i.e. Point -> List<Point>.
Typically, I would go for a Map<Point, List<Point>>.
You can initialise this structure statically at program start or dynamically when needed. For instance, here I use two helper arrays that contain the possible translations from a point, and these will yield the neighbours of the point.
// (-1 1) (0 1) (1 1)
// (-1 0) (----) (1 0)
// (-1 -1) (0 -1) (1 -1)
// from (1 0) anti-clockwise:
static int[] xOffset = {1,1,0,-1,-1,-1,0,1};
static int[] yOffset = {0,1,1,1,0,-1,-1,-1};
The following Map contains the actual neighbours for a Point, with a function that computes, stores and returns these neighbours. You can choose to initialise all neighbours in one pass, but given the small numbers, I would not expect this to be a problem performance-wise.
static Map<Point, List<Point>> neighbours = new HashMap<>();

static List<Point> getNeighbours(Point a) {
    List<Point> nb = neighbours.get(a);
    if (nb == null) {
        nb = new ArrayList<>(xOffset.length); // size the list
        for (int i=0; i < xOffset.length; i++) {
            int x = a.getX() + xOffset[i];
            int y = a.getY() + yOffset[i];
            if (x>=0 && y>=0 && x < 5 && y < 5) {
                nb.add(new Point(x, y));
            }
        }
        neighbours.put(a, nb);
    }
    return nb;
}
Now checking a legal move is a matter of finding the point in the neighbours:
static boolean isLegalMove(Point from, Point to) {
    boolean legal = false;
    for (Point p : getNeighbours(from)) {
        if (p.equals(to)) {
            legal = true;
            break;
        }
    }
    return legal;
}
Note: the class Point must define equals() and hashCode() for the map to behave as expected.
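For reference, a minimal sketch of what such a Point class could look like with the required equals() and hashCode() (this is an assumption about your Point type, not code from the question; java.awt.Point already provides both methods, but its getX()/getY() return double):

    // Minimal immutable Point suitable as a HashMap key (hypothetical, not from the question).
    final class Point {
        private final int x;
        private final int y;

        Point(int x, int y) { this.x = x; this.y = y; }

        int getX() { return x; }
        int getY() { return y; }

        @Override
        public boolean equals(Object o) {
            if (this == o) return true;
            if (!(o instanceof Point)) return false;
            Point other = (Point) o;
            return x == other.x && y == other.y;
        }

        @Override
        public int hashCode() {
            return 31 * x + y;
        }
    }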
The first problem I face is that I don't know how to put a list of possible moves in an array at for example index [0][0]
Since the board is 2D, and the number of legal moves could generally be more than one, you would end up with a 3D data structure:
Point[][][] legalMoves = new Point[5][5][];
legalMoves[0][0] = new Point[] { new Point(1,1), new Point(0,1), new Point(1,0) };
instead of creating the object at every start of the game with an instance variable legalMoves, I would rather have it read from disk. I think that it should be quicker this way? Is the serializable class the way to go?
This cannot be answered without profiling. I cannot imagine that computing legal moves of any kind for a 5x5 board could be so intense computationally as to justify any kind of additional I/O operation.
for the 25 positions the legal moves are unbalanced. Some have 8 possible legal moves, others have 3. Maybe this is not a problem at all.
This can be handled nicely with the 3D "jagged array" described above, so it is not a problem at all.
I'm trying to write a time-efficient algorithm that can detect a group of overlapping circles and make a single circle in the "middle" of the group to represent that group. The practical application of this is representing GPS locations over a map; the conversion into Cartesian coordinates is already handled, so that's not relevant. The desired effect is that at different zoom levels, clusters of close-together points just appear as a single circle (which will have the number of points printed in the centre in the final version).
In this example the circles just have a radius of 15, so for the collision detection the distance calculation (Pythagoras) is not square-rooted and is compared to 225 instead. I was trying anything to shave off time, but the problem is this really needs to happen very quickly because it's a user-facing bit of code that needs to be snappy and good looking.
I've given this a go and it works pretty well with small data sets. Two big problems: it takes too long, and it can run out of memory if all the points are on top of one another.
The route I've taken is to calculate the distance between each pair of points in a first pass, then take the shortest distance first and start to combine from there; anything that's been combined becomes ineligible for combination on that pass, and the whole list is passed back around to the distance calculations again until nothing changes.
To be honest I think it needs a radical shift in approach, and I think it's a little beyond me. I've refactored my code into one class for ease of posting and generated random points to give an example.
package mergepoints;

import java.awt.Point;
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class Merger {

    public static void main(String[] args) {
        Merger m = new Merger();
        m.subProcess(m.createRandomList());
    }

    private List<Plottable> createRandomList() {
        List<Plottable> points = new ArrayList<>();
        for (int i = 0; i < 50000; i++) {
            Plottable p = new Plottable();
            p.location = new Point((int) Math.floor(Math.random() * 1000),
                    (int) Math.floor(Math.random() * 1000));
            points.add(p);
        }
        return points;
    }

    private List<Plottable> subProcess(List<Plottable> visible) {
        List<PlottableTuple> tuples = new ArrayList<PlottableTuple>();
        // create a tuple to store distance and matching objects together,
        for (Plottable p : visible) {
            PlottableTuple tuple = new PlottableTuple();
            tuple.a = p;
            tuples.add(tuple);
        }
        // work out each Plottable relative distance from
        // one another and order them by shortest first.
        // We may need to do this multiple times for one set so going in own
        // method.
        // this is the bit that takes ages
        setDistances(tuples);
        // Sort so that smallest distances are at the top.
        // parse the set and combine any pair less than the smallest distance in
        // to a combined pin.
        // any plottable thats been combined is no longer eligible for combining
        // so ignore on this parse.
        List<PlottableTuple> sorted = new ArrayList<>(tuples);
        Collections.sort(sorted);
        Set<Plottable> done = new HashSet<>();
        Set<Plottable> mergedSet = new HashSet<>();
        for (PlottableTuple pt : sorted) {
            if (!done.contains(pt.a) && pt.distance <= 225) {
                Plottable merged = combine(pt, done);
                done.add(pt.a);
                for (PlottableTuple tup : pt.others) {
                    done.add(tup.a);
                }
                mergedSet.add(merged);
            }
        }
        // if we haven't processed anything we are done, just return visible
        // list.
        if (done.size() == 0) {
            return visible;
        } else {
            // change the list to represent the new combined plottables and
            // repeat the process.
            visible.removeAll(done);
            visible.addAll(mergedSet);
            return subProcess(visible);
        }
    }

    private Plottable combine(PlottableTuple pt, Set<Plottable> done) {
        List<Plottable> plottables = new ArrayList<>();
        plottables.addAll(pt.a.containingPlottables);
        for (PlottableTuple otherTuple : pt.others) {
            if (!done.contains(otherTuple.a)) {
                plottables.addAll(otherTuple.a.containingPlottables);
            }
        }
        int x = 0;
        int y = 0;
        for (Plottable p : plottables) {
            Point position = p.location;
            x += position.x;
            y += position.y;
        }
        x = x / plottables.size();
        y = y / plottables.size();
        Plottable merged = new Plottable();
        merged.containingPlottables.addAll(plottables);
        merged.location = new Point(x, y);
        return merged;
    }

    private void setDistances(List<PlottableTuple> tuples) {
        System.out.println("pins: " + tuples.size());
        int loops = 0;
        // Start from the first item and loop through, then repeat but starting
        // with the next item.
        for (int startIndex = 0; startIndex < tuples.size() - 1; startIndex++) {
            // Get the data for the start Plottable
            PlottableTuple startTuple = tuples.get(startIndex);
            Point startLocation = startTuple.a.location;
            for (int i = startIndex + 1; i < tuples.size(); i++) {
                loops++;
                PlottableTuple compareTuple = tuples.get(i);
                double distance = distance(startLocation, compareTuple.a.location);
                setDistance(startTuple, compareTuple, distance);
                setDistance(compareTuple, startTuple, distance);
            }
        }
        System.out.println("loops " + loops);
    }

    private void setDistance(PlottableTuple from, PlottableTuple to,
            double distance) {
        if (distance < from.distance || from.others == null) {
            from.distance = distance;
            from.others = new HashSet<>();
            from.others.add(to);
        } else if (distance == from.distance) {
            from.others.add(to);
        }
    }

    private double distance(Point a, Point b) {
        if (a.equals(b)) {
            return 0.0;
        }
        double result = (((double) a.x - (double) b.x) * ((double) a.x - (double) b.x))
                + (((double) a.y - (double) b.y) * ((double) a.y - (double) b.y));
        return result;
    }

    class PlottableTuple implements Comparable<PlottableTuple> {
        public Plottable a;
        public Set<PlottableTuple> others;
        public double distance;

        @Override
        public int compareTo(PlottableTuple other) {
            return (new Double(distance)).compareTo(other.distance);
        }
    }

    class Plottable {
        public Point location;
        private Set<Plottable> containingPlottables;

        public Plottable(Set<Plottable> plots) {
            this.containingPlottables = plots;
        }

        public Plottable() {
            this.containingPlottables = new HashSet<>();
            this.containingPlottables.add(this);
        }

        public Set<Plottable> getContainingPlottables() {
            return containingPlottables;
        }
    }
}
Map all your circles onto a 2D grid first. You then only need to compare the circles in a cell with the other circles in that cell and in its 8 neighboring cells (you can reduce those 9 cells to five by using a brick pattern instead of a regular grid).
If you only need to be really approximate, then you can just group all the circles that fall into a cell together. You will probably also want to merge cells that only have a small number of circles with their neighbors, but this will be fast.
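A minimal sketch of the grid idea in plain Java, assuming java.awt.Point, non-negative coordinates, and the 15-unit merge radius from the question (the cell size equals one radius, so all candidates for a point lie in the 3x3 block of cells around it):

    import java.awt.Point;
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class GridBuckets {
        static final double MERGE_RADIUS = 15.0;      // from the question
        static final double CELL_SIZE = MERGE_RADIUS; // one radius per cell

        /** Buckets points by grid cell; key = cellX * 100000 + cellY (assumes coordinates >= 0). */
        static Map<Long, List<Point>> buildGrid(List<Point> points) {
            Map<Long, List<Point>> grid = new HashMap<>();
            for (Point p : points) {
                grid.computeIfAbsent(cellKey(p.x, p.y), k -> new ArrayList<>()).add(p);
            }
            return grid;
        }

        static long cellKey(double x, double y) {
            return (long) (x / CELL_SIZE) * 100000L + (long) (y / CELL_SIZE);
        }

        /** Returns the points within MERGE_RADIUS of p, looking only at the 3x3 block of cells around it. */
        static List<Point> neighboursOf(Point p, Map<Long, List<Point>> grid) {
            List<Point> result = new ArrayList<>();
            long cx = (long) (p.x / CELL_SIZE), cy = (long) (p.y / CELL_SIZE);
            for (long dx = -1; dx <= 1; dx++) {
                for (long dy = -1; dy <= 1; dy++) {
                    List<Point> cell = grid.get((cx + dx) * 100000L + (cy + dy));
                    if (cell == null) continue;
                    for (Point q : cell) {
                        double ddx = p.x - q.x, ddy = p.y - q.y;
                        if (q != p && ddx * ddx + ddy * ddy <= MERGE_RADIUS * MERGE_RADIUS) {
                            result.add(q);
                        }
                    }
                }
            }
            return result;
        }
    }

Each neighbour query then only touches the points in a handful of cells instead of all 50,000 points, which is where the speed-up comes from.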
This problem is going to take a reasonable amount of computation no matter how you do it; the question then is: can you do all the computation up-front so that at run-time it's just doing a look-up? I would build a tree-like structure where each layer is all the points that need to be drawn for a given zoom level. It takes more computation up-front, but at run-time you are simply drawing a list of points, which is fast.
My idea is to decide what the resolution of each zoom level is (i.e. at zoom level 1 points closer than 15 get merged; at zoom level 2 points closer than 30 get merged), then go through your points making groups of points that are within 15 of each other and pick one point to represent each group at the higher zoom. Now you have a 2-layer tree. Then you pass over the second layer grouping all points that are within 30 of each other, and so on all the way up to your highest zoom level. Now save this tree structure to file, and at run-time you can very quickly change zoom levels by simply drawing all points at the appropriate tree level. If you need to add or remove points, that can be done dynamically by figuring out where to attach them to the tree.
There are two downsides to this method that come to mind: 1) it will take a long time to compute the tree, but you only have to do this once, and 2) you'll have to think really carefully about how you build the tree, based on how you want the groupings to be done at higher levels. For example, in the image below the top level may not be the right grouping that you want. Maybe instead of building the tree based off the previous layer, you always want to go back to the original points. That said, some loss of precision always happens when you're trying to trade off for faster run-time.
EDIT
So you have a problem which requires O(n^2) comparisons, you say it has to be done in real-time, can not be pre-computed, and has to be fast. Good luck with that.
Let's analyze the problem a bit: if you do no pre-computation, then in order to decide which points can be merged you have to compare every pair of points, which is O(n^2) comparisons. I suggested building a tree beforehand, O(n^2 log n) once, but then runtime is just a lookup, O(1). You could also do something in between where you do some work before and some at run-time, but that's how these problems always go: you have to do a certain amount of computation, and you can play games by doing some of it earlier, but at the end of the day you still have to do the computation.
For example, if you're willing to do some pre-computation, you could try keeping two copies of the list of points, one sorted by x-value and one sorted by y-value; then, instead of comparing every pair of points, you can do 4 binary searches to find all the points within, say, a 30-unit box around the current point. This is more complicated, so it would be slower for a small number of points (say < 100), but it would reduce the overall complexity to O(n log n), making it faster for large amounts of data.
EDIT 2
If you're worried about multiple points at the same location, then why don't you do a first pass removing the redundant points? Then you'll have a smaller "search list":
list searchList = new list()
for pt1 in points :
    boolean clean = true
    for pt2 in searchList :
        if distance(pt1, pt2) < epsilon :
            clean = false
            break
    if clean :
        searchList.add(pt1)

// Now you have a smaller list to act on with only 1 point per cluster
// ... I guess this is actually the same as my first suggestion if you make one of these search lists per zoom level. huh.
EDIT 3: Graph Traversal
A totally new approach would be to build a graph out of the points and do some sort of longest-edge-first graph traversal on them. So pick a point, draw it, and traverse its longest edge; draw that point, and so on. Repeat this until you come to a point which doesn't have any untraversed edges longer than your zoom resolution. The number of edges per point gives you an easy way to trade off speed for correctness. If the number of edges per point were small and constant, say 4, then with a bit of cleverness you could build the graph in O(n) time and also traverse it to draw points in O(n) time. Fast enough to do it on the fly with no pre-computation.
Just a wild guess and something that occurred to me while reading responses from others.
Do a multi-step comparison. Assume your combining distance at the current zoom level is 20 meters. First, compute |X1 - X2|. If this is bigger than 20 meters then you are done: the points are too far apart. Next, compute |Y1 - Y2| and do the same thing to reject combining the points.
You could stop here and be happy if you are good with using only horizontal/vertical distances as your metric for combining. Much less math (no squaring or square roots). Pythagoras wouldn't be happy but your users might.
If you really insist on exact answers, do the two subtraction/comparison steps above. If the points are within horizontal and vertical limits, THEN you do the full Pythagoras check with square roots.
Assuming all your points are not highly clustered very close to the combining limit, this should save some CPU cycles.
This is still approximately an O(n^2) technique, but the math should be simpler. If you have the memory, you could store the distance between each pair of points and then you never have to compute it again. This could take up more memory than you have, and it also grows at a rate of approximately O(n^2), so be careful.
Also, you could make a linked list or sorted array of all your points, sorted in order of increasing X or increasing Y (I don't think you need both, just one). Then walk through the list in sorted order. For each point, check the neighbors until (X1 - X2) is bigger than your combining distance, and then stop. You don't have to compare every pair of points for O(N^2); you only have to compare neighbors that are close in one dimension, which quickly prunes your large list to a small one. As you move through the list, you only have to compare points that have a bigger X than your current candidate, because you already compared and combined with all previous X values. This gets you closer to the O(n) complexity you want. Of course, you would need to check the Y dimension and fully qualify the points to be combined before you actually do it; don't just use the X distance to make your combining decision. A sketch of this sweep follows.
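A minimal sketch of that sorted sweep, assuming java.awt.Point and the 15-unit combining distance from the question (it only collects candidate pairs; the actual merging policy is left to the caller):

    import java.awt.Point;
    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;

    public class SortedSweep {
        static final double COMBINE_DISTANCE = 15.0; // from the question

        /** Returns pairs of points within COMBINE_DISTANCE, scanning in X order and stopping early. */
        static List<Point[]> closePairs(List<Point> points) {
            List<Point> sorted = new ArrayList<>(points);
            sorted.sort(Comparator.comparingInt((Point p) -> p.x)); // O(n log n)

            List<Point[]> pairs = new ArrayList<>();
            for (int i = 0; i < sorted.size(); i++) {
                Point a = sorted.get(i);
                // Only look forward; earlier points were already compared against a.
                for (int j = i + 1; j < sorted.size(); j++) {
                    Point b = sorted.get(j);
                    if (b.x - a.x > COMBINE_DISTANCE) {
                        break; // everything further right is even farther away in X
                    }
                    double dx = a.x - b.x, dy = a.y - b.y;
                    if (dx * dx + dy * dy <= COMBINE_DISTANCE * COMBINE_DISTANCE) {
                        pairs.add(new Point[] { a, b });
                    }
                }
            }
            return pairs;
        }
    }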
I've been trying to use my collision detection to stop objects from going through each other. I can't figure out how to do it though.
When objects collide, I've tried reversing the direction of their velocity vector (so it moves away from where it's colliding) but sometimes the objects get stuck inside each other.
I've tried switching their velocities but this just parents objects to each other.
Is there a simple way to limit objects' movement so that they don't go through other objects? I've been using the rectangle intersects for collisions, and I've also tried circle collision detection (using distance between objects).
Ideas?
package objects;

import java.awt.Rectangle;
import custom.utils.Vector;
import sprites.Picture;
import render.Window;

// Super class (game objects)
public class Entity implements GameObject{

    private Picture self;
    protected Vector position;
    protected Vector velocity = new Vector(0,0);
    private GameObject[] obj_list = new GameObject[0];
    private boolean init = false;

    // Takes in a "sprite"
    public Entity(Picture i){
        self = i;
        position = new Vector(i.getXY()[0],i.getXY()[1]);
        ObjectUpdater.addObject(this);
    }

    public Object getIdentity() {
        return this;
    }

    // position handles
    public Vector getPosition(){
        return position;
    }

    public void setPosition(double x,double y){
        position.setValues(x,y);
        self.setXY(position);
    }

    public void setPosition(){
        position.setValues((int)Window.getWinSize()[0]/2,(int)Window.getWinSize()[1]/2);
    }

    // velocity handles
    public void setVelocity(double x,double y){ // Use if you're too lazy to make a vector
        velocity.setValues(x, y);
    }

    public void setVelocity(Vector xy){ // Use if you already have a vector
        velocity.setValues(xy.getValues()[0], xy.getValues()[1]);
    }

    public Vector getVelocity(){
        return velocity;
    }

    // interface for all game objects (so they all update at the same time)
    public boolean checkInit(){
        return init;
    }

    public Rectangle getBounds() {
        double[] corner = position.getValues(); // Get the corner for the bounds
        int[] size = self.getImageSize(); // Get the size of the image
        return new Rectangle((int)Math.round(corner[0]),(int)Math.round(corner[1]),size[0],size[1]); // Make the bound
    }

    // I check for collisions here; this grabs all the objects and checks for collisions on each.
    private void checkCollision(){
        if (obj_list.length > 0){
            for (GameObject i: obj_list){
                if (getBounds().intersects(i.getBounds()) && i != this){
                    // What happens here?
                }
            }
        }
    }

    public void updateSelf(){
        checkCollision();
        position = position.add(velocity);
        setPosition(position.getValues()[0],position.getValues()[1]);
        init = true;
    }

    public void pollObjects(GameObject[] o){
        obj_list = o;
    }
}
Hopefully it's not too difficult to read.
Edit:
So I've been using the rectangle intersection method to calculate the position of an object and to modify its velocity. It's working pretty well. The only problem is that some objects push others, but that's no big deal. Collision is pretty much an extra for the mini game I'm creating. Thanks a lot for the help.
All that being said, I'd still really appreciate elaboration on mentioned ideas since I'm not totally sure how to implement them into my project.
Without seeing your code, I can only guess what's happening. I suspect that your objects are getting stuck because they overshoot the boundaries of other objects, ending up inside them. Make sure that each object's step is not just velocity * delta_time, but that the step size is limited by potential collisions. When there is a collision, calculate the time at which it occurred (which is somewhere within the delta_time) and follow the bounce to determine the final object location. Alternatively, just set the objects to be touching and change the velocities according to the law of conservation of momentum.
EDIT After seeing your code, I can expand my answer. First, let me clarify some of my terminology that you asked about. Since each call to updateSelf simply adds the velocity vector to the current position, what you have in effect is a unit time increment (delta time is always 1). Put another way, your "velocity" is actually the distance (velocity * delta time) traveled since the last call to updateSelf. I would recommend using an explicit (float) time increment as part of your simulation.
Second, the general problem of tracking collisions among multiple moving objects is very difficult. Whatever time increment is used, it is possible for an object to undergo many collisions in that increment. (Imagine an object squeezed between two other objects. In any given time interval, there is no limit to the number of times the object might bounce back and forth between the two surrounding ones.) Also, an object might (within the resolution of the computations) collide with multiple objects at the same time. The problem is even more complicated if the objects actually change size as they move (as your code suggests they may be doing).
Third, you have a significant source of errors because you are rounding all object positions to integer coordinates. I would recommend representing your objects with floating-point objects (Rectangle2D.Float rather than with Rectangle; Point2D.Float rather than Vector). I would also recommend replacing the position field with a rectangular bounds field that captures both the position and size. That way, you don't have to create a new object at each call to getBounds(). If the object sizes are constant, this would also simplify the bounds updating.
Finally, there's a significant problem with having the collision detection logic inside each object: when object A discovers that it would have hit object B, then it is also the case that object B would have hit object A! However, object B does its own calculations independently of object A. If you update A first, then B might miss the collision, and vice versa. It would be better to move the entire collision detection and object movement logic to a global algorithm and keep each game object relatively simple.
One approach (which I recommend) is to write an "updateGame" method that advances the game state by a given time increment. It would use an auxiliary data structure that records collisions, which might look like this:
public class Collision {
    public int objectIndex1;  // index of first object involved in collision
    public int objectIndex2;  // index of second object
    public int directionCode; // encoding of the direction of the collision
    public float time;        // time of collision
}
The overall algorithm advances the game from the current time to a new time defined by a parameter deltaTime. It might be structured something like this:
void updateGame(float deltaTime) {
    float step = deltaTime;
    do {
        Collision hit = findFirstCollision(step);
        if (hit != null) {
            step = Math.max(hit.time, MIN_STEP);
            updateObjects(step);
            updateVelocities(hit);
        } else {
            updateObjects(step);
        }
        deltaTime -= step;
        step = deltaTime;
    } while (deltaTime > 0);
}

/**
 * Finds the earliest collision that occurs within the given time
 * interval. It uses the current position and velocity of the objects
 * at the start of the interval. If no collisions occur, returns null.
 */
Collision findFirstCollision(float deltaTime) {
    Collision result = null;
    for (int i = 0; i < obj_list.length; ++i) {
        for (int j = i + 1; j < obj_list.length; ++j) {
            Collision hit = findCollision(i, j, deltaTime);
            if (hit != null) {
                if (result == null || hit.time < result.time) {
                    result = hit;
                }
            }
        }
    }
    return result;
}

/**
 * Calculate if there is a collision between obj_list[i1] and
 * obj_list[i2] within deltaTime, given their current positions
 * and velocities. If there is, return a new Collision object
 * that records i1, i2, the direction of the hit, and the time
 * at which the objects collide. Otherwise, return null.
 */
Collision findCollision(int i1, int i2, float deltaTime) {
    // left as an exercise for the reader
    return null; // placeholder so the sketch compiles
}

/**
 * Move every object by its velocity * step
 */
void updateObjects(float step) {
    for (GameObject obj : obj_list) {
        Point2D.Float pos = obj.getPosition();
        Point2D.Float velocity = obj.getVelocity();
        obj.setPosition(
            pos.getX() + step * velocity.getX(),
            pos.getY() + step * velocity.getY()
        );
    }
}

/**
 * Update the velocities of the two objects involved in a
 * collision. Note that this does not always reverse velocities
 * along the direction of collision (one object might be hit
 * from behind by a faster object). The algorithm should assume
 * that the objects are at the exact position of the collision
 * and just update the velocities.
 */
void updateVelocities(Collision collision) {
    // TODO - implement some physics simulation
}
The MIN_STEP constant is a minimum time increment to ensure that the game update loop doesn't get stuck updating such small time steps that it doesn't make progress. (With floating point, it's possible that deltaTime -= step; could leave deltaTime unchanged.)
Regarding the physics simulation: the Wikipedia article on Elastic collision provides some nice math for this problem.
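To make that concrete, here is a minimal sketch of the simplest special case, an equal-mass, perfectly elastic collision, where the velocity components along the collision normal are exchanged and the tangential components are left alone (this is standard physics, not code from the answer above; the float[] {x, y} vectors are a stand-in for whatever vector type your game uses):

    // Equal-mass, perfectly elastic 2D collision (hypothetical helper, not part of the answer's code).
    public class ElasticCollision {
        /** c1/c2 are object centres, v1/v2 their velocities, all as {x, y}; v1 and v2 are updated in place. */
        static void resolveElasticEqualMass(float[] c1, float[] v1, float[] c2, float[] v2) {
            // Unit normal pointing from object 1 to object 2.
            float nx = c2[0] - c1[0];
            float ny = c2[1] - c1[1];
            float len = (float) Math.sqrt(nx * nx + ny * ny);
            if (len == 0) return; // centres coincide; collision direction undefined
            nx /= len;
            ny /= len;

            // Velocity components along the normal.
            float p1 = v1[0] * nx + v1[1] * ny;
            float p2 = v2[0] * nx + v2[1] * ny;

            // For equal masses the normal components are simply exchanged;
            // the tangential components stay unchanged.
            float diff = p2 - p1;
            v1[0] += diff * nx;
            v1[1] += diff * ny;
            v2[0] -= diff * nx;
            v2[1] -= diff * ny;
        }
    }

For unequal masses, the formulas in the linked Wikipedia article generalize this by weighting the exchanged components by the mass ratio.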