I have a list of vertices and a list of regions (which are square/rectangle shaped). A vertex has x and y coordinates, and a region has (x, y, height and width). How can I efficiently check which vertex lies in which region, for every vertex/region pair?
EDIT:
This is the code I wrote to do this.
if (!g.getVertices().isEmpty()) {
    for (int i = 0; i < g.getVertices().size(); i++) {
        Vertex v = g.getVertices().get(i);
        Point vertexPoint = new Point(v.getX(), v.getY());
        for (int j = 0; j < g.getNumberOfRegions(); j++) {
            int x = g.getRegions().get(j).getX();
            int y = g.getRegions().get(j).getY();
            int height = g.getRegions().get(j).getHeight();
            int width = g.getRegions().get(j).getWidth();
            Grid regionGrid = new Grid(j + 1, x, y, height, width);
            // java.awt.Rectangle takes (x, y, width, height) in that order
            Rectangle regionRectangle = new Rectangle(x, y, width, height);
            if (regionRectangle.contains(vertexPoint)) {
                System.out.println("Vertex " + v + " lies inside region " + regionGrid.getRegionID());
            }
        }
    }
}
EDIT 2: I used this to generate the regions, but I need a way to assign each region in the grid a regionID from left to right. For example:
1 - 2 - 3
4 - 5 - 6
7 - 8 - 9
for a 3x3 grid. At the moment it is in the following form:
1 - 1 - 1
2 - 2 - 2
3 - 3 - 3
for (int i = 0; i < rowValue; i++) {
    for (int j = 0; j < columnValue; j++) {
        Grid r = new Grid(0, 20 + i * size, 20 + j * size, size, size);
        r.setRegionID(j + 1);
        g.addRegion(r);
    }
}
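One way to get that left-to-right numbering is to derive the ID from both loop counters instead of only the inner one. A rough sketch, reusing the names from the snippet above (the Grid constructor signature is assumed from that snippet), with x advancing with the column counter and y with the row counter:
for (int row = 0; row < rowValue; row++) {
    for (int col = 0; col < columnValue; col++) {
        int regionID = row * columnValue + col + 1; // 1..rowValue*columnValue, numbered row by row
        Grid r = new Grid(regionID, 20 + col * size, 20 + row * size, size, size);
        r.setRegionID(regionID);
        g.addRegion(r);
    }
}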
Checking if a vertex is inside a square or a circle can be done in O(1); you can do it with a library function or with elementary math, so the simplest algorithm you can write is O(#vertices * #regions). You can try to optimise by sorting the vertices and regions by the X-axis and then by the Y-axis and skipping checks that are guaranteed to return false, but it seems that in the worst case you will still have O(#vertices * #regions) time.
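For the "elementary math" option, the O(1) containment test is just four comparisons. A minimal sketch, assuming a hypothetical Region type with the same getters as in the question and that (x, y) is the region's top-left corner:
// O(1) point-in-rectangle test; points on the border count as inside.
static boolean contains(Vertex v, Region r) {
    return v.getX() >= r.getX() && v.getX() <= r.getX() + r.getWidth()
        && v.getY() >= r.getY() && v.getY() <= r.getY() + r.getHeight();
}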
You can probably use the core Java libraries themselves:
List<Rectangle2D.Double> rectangles = Arrays.asList(
        new Rectangle2D.Double(0d, 0d, 100d, 100d),
        new Rectangle2D.Double(100d, 0d, 100d, 100d),
        new Rectangle2D.Double(0d, 100d, 100d, 100d),
        new Rectangle2D.Double(100d, 100d, 100d, 100d));
Point2D.Double aPoint = new Point2D.Double(30d, 40d);
for (Rectangle2D.Double rectangle : rectangles) {
    if (rectangle.contains(aPoint)) {
        System.out.println(rectangle + " has the point " + aPoint);
    }
}
Working with plane geometry is much easier using JTS. You can try converting the objects you are using to the JTS-specific types.
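For example, an axis-aligned region maps naturally onto a JTS Envelope, and a vertex onto a Coordinate/Point. A minimal sketch (classes from org.locationtech.jts.geom; the numbers are placeholders):
GeometryFactory gf = new GeometryFactory();
// an axis-aligned region as an Envelope(minX, maxX, minY, maxY)
Envelope region = new Envelope(0, 100, 0, 100);
Point vertex = gf.createPoint(new Coordinate(30, 40));
if (region.contains(vertex.getCoordinate())) {
    System.out.println(vertex + " lies inside " + region);
}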
Related
How to create a grid coverage when each cell is 5 m?
I found this:
GridCoverage2D coverage = reader.read(null);
// direct access
DirectPosition position = new DirectPosition2D(crs, x, y);
double[] sample = (double[]) coverage.evaluate(position); // assume double
// resample with the same array
sample = coverage.evaluate(position, sample);
Source : https://docs.geotools.org/latest/userguide/library/coverage/grid.html
I didn't find many tutorials about how to create a grid coverage with GeoTools...
To create an empty coverage you need to use the GridCoverageFactory and one of its create methods. Since you are not constructing it from an existing image, you need to provide some memory for your raster to be stored in (this can also hold any initial values you want); for this your choices are a float[][] or a WritableRaster. Finally, you need an Envelope to say where the coverage is and what its resolution is (otherwise it is just an array of numbers). I favour using a ReferencedEnvelope so that I know what the units are, so in the example below I have used EPSG:27700, which is the OSGB national grid, so I know that it is in metres and I can define the origin somewhere in the South Downs. By specifying the lower-left X and Y coordinates, and the upper-right X and Y as resolution times the width and height (plus the lower-left corner), the maths all works out so that the size of my pixels is resolution.
So keeping it simple for now you could do something like:
float[][] data;
int width = 100;
int height = 200;
data = new float[width][height];
int resolution = 5;
for (int i = 0; i < width; i++) {
    for (int j = 0; j < height; j++) {
        data[i][j] = 0.0f;
    }
}
GridCoverageFactory gcf = new GridCoverageFactory();
CoordinateReferenceSystem crs = CRS.decode("EPSG:27700");
int llx = 500000;
int lly = 105000;
ReferencedEnvelope referencedEnvelope = new ReferencedEnvelope(llx, llx + (width * resolution), lly,
        lly + (height * resolution), crs);
GridCoverage2D gc = gcf.create("name", data, referencedEnvelope);
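To read a value back out of the coverage you just created, the evaluate call from the question works the same way. A quick sketch (since the coverage was built from a float[][], a float[] buffer is used here; the sample position is just an arbitrary point inside the envelope):
DirectPosition position = new DirectPosition2D(crs, llx + 10, lly + 10);
float[] sample = gc.evaluate(position, new float[1]);
System.out.println("value at " + position + " = " + sample[0]);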
If you want more bands in your coverage then you need to use a WritableRaster as the base for your coverage.
Random rn = new Random();
WritableRaster raster2 = RasterFactory.createBandedRaster(java.awt.image.DataBuffer.TYPE_INT, width,
        height, 3, null);
for (int i = 0; i < width; i++) {
    for (int j = 0; j < height; j++) {
        raster2.setSample(i, j, 0, rn.nextInt(200));
        raster2.setSample(i, j, 1, rn.nextInt(200));
        raster2.setSample(i, j, 2, rn.nextInt(200));
    }
}
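and then wrap that raster in a coverage with the same factory and envelope as before (a one-line sketch; the factory also accepts a WritableRaster):
GridCoverage2D gc2 = gcf.create("rgb-coverage", raster2, referencedEnvelope);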
I have a BufferedImage that I want to loop through. I want to loop through all pixels inside a circle with radius radius, centred at x,y.
I do not want to loop through it in a square fashion. It would also be nice if I could do this and cut the O complexity in the process, but this is not needed. Since the area of a circle is pi * r^2 and a square's would be 4 * r^2, that would mean I could get 4/pi better O complexity if I looped in a perfect circle. If the circle at x,y with a radius of radius happens to be larger than the dimensions of the BufferedImage, then prevent going out of bounds (I believe this can be done with an if statement at each check).
Examples: O means a recorded pixel while X means it was not looped over.
Radius 1
X O X
O O O
X O X
Radius 2
X X O X X
X O O O X
O O O O O
X O O O X
X X O X X
I think the proper way to do this is with trigonometric functions, but I can't quite get it into my head. I know one easy part is that all pixels up, left, right, and down within radius of the origin are included. I would like some advice in case anyone has any.
private LinkedList<Integer> getPixelColorsInCircle(final int x, final int y, final int radius)
{
    final BufferedImage img; // Obtained somewhere else in the program via function call.
    final LinkedList<Integer> ll = new LinkedList<>();
    for (...)
        for (...)
        {
            int px = ...; // pixel coordinates derived from the loop variables
            int py = ...;
            ll.add(img.getRGB(px, py)); // Add the pixel
        }
    return ll;
}
Having the center of the circle O(x,y) and the radius r, the following coordinates (j,i) will cover the circle:
for (int i = y - r; i <= y + r; i++) {
    // scan left from the centre column until we leave the circle
    for (int j = x; (j - x) * (j - x) + (i - y) * (i - y) <= r * r; j--) {
        // in the circle
    }
    // then scan right from the centre column
    for (int j = x + 1; (j - x) * (j - x) + (i - y) * (i - y) <= r * r; j++) {
        // in the circle
    }
}
Description of the approach:
Go from the top row to the bottom row of the circle, along the vertical line through its centre.
In each row, move horizontally outward from the centre column until you reach a coordinate outside the circle, so you only ever test two pixels per row that lie outside of the circle.
Continue until the lowest row.
As it's only an approximation of a circle, be prepared for it to look like a square for small values of r.
Ah, and in terms of Big-O, doing 4 times fewer operations doesn't change the complexity: a constant-factor speedup is not a Big-O improvement.
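A sketch of how this scan could be dropped into the skeleton from the question, with the bounds check the OP mentioned (the parameters are renamed to cx/cy to avoid shadowing, and getImage() is a hypothetical stand-in for however the image is obtained):
private LinkedList<Integer> getPixelColorsInCircle(final int cx, final int cy, final int radius) {
    final BufferedImage img = getImage(); // hypothetical accessor; obtained elsewhere in the program
    final LinkedList<Integer> ll = new LinkedList<>();
    final int r2 = radius * radius;
    for (int i = cy - radius; i <= cy + radius; i++) {
        if (i < 0 || i >= img.getHeight()) continue; // this row is outside the image
        // scan left from the centre column, then right, stopping at the circle edge
        for (int j = cx; (j - cx) * (j - cx) + (i - cy) * (i - cy) <= r2; j--) {
            if (j >= 0 && j < img.getWidth()) ll.add(img.getRGB(j, i));
        }
        for (int j = cx + 1; (j - cx) * (j - cx) + (i - cy) * (i - cy) <= r2; j++) {
            if (j >= 0 && j < img.getWidth()) ll.add(img.getRGB(j, i));
        }
    }
    return ll;
}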
While xentero's answer works, I wanted to check its actual performance (inCircle1) against the algorithm that OP thinks is too complex (inCircle2):
public static ArrayList<Point> inCircle1(Point c, int r) {
    ArrayList<Point> points = new ArrayList<>(r*r); // pre-allocate
    int r2 = r*r;
    // iterate through all y-coordinates (rows) touched by the circle
    for (int i = c.y-r; i <= c.y+r; i++) {
        // scan the left half of this row, stopping at the circle edge
        for (int j = c.x; (j-c.x)*(j-c.x) + (i-c.y)*(i-c.y) <= r2; j--) {
            points.add(new Point(j, i));
        }
        // scan the right half of this row, stopping at the circle edge
        for (int j = c.x+1; (j-c.x)*(j-c.x) + (i-c.y)*(i-c.y) <= r2; j++) {
            points.add(new Point(j, i));
        }
    }
    return points;
}
public static ArrayList<Point> inCircle2(Point c, int r) {
    ArrayList<Point> points = new ArrayList<>(r*r); // pre-allocate
    int r2 = r*r;
    // iterate through all y-coordinates (rows) of the bounding square
    for (int i = c.y-r; i <= c.y+r; i++) {
        int di2 = (i-c.y)*(i-c.y);
        // iterate through all x-coordinates of the bounding square
        for (int j = c.x-r; j <= c.x+r; j++) {
            // test if in-circle
            if ((j-c.x)*(j-c.x) + di2 <= r2) {
                points.add(new Point(j, i));
            }
        }
    }
    return points;
}
public static <R extends Collection<?>> R timing(Supplier<R> operation) {
    long start = System.nanoTime();
    R result = operation.get();
    System.out.printf("%d points found in %dns\n", result.size(), System.nanoTime() - start);
    return result;
}
public static void testCircles(int r, int x, int y) {
    Point center = new Point(x, y);
    ArrayList<Point> in1 = timing(() -> inCircle1(center, r));
    ArrayList<Point> in2 = timing(() -> inCircle2(center, r));
    HashSet<Point> all = new HashSet<>(in1);
    assert(all.size() == in1.size()); // no duplicates
    assert(in1.size() == in2.size()); // both are same size
    all.removeAll(in2);
    assert(all.isEmpty()); // both are equal
}
public static void main(String ... args) {
    for (int i = 100; i < 200; i++) {
        int x = i / 2, y = i + 1;
        System.out.println("r = " + i + " c = [" + x + ", " + y + "]");
        testCircles(i, x, y);
    }
}
While this is by no means a precise benchmark (not much warm-up, machine doing other things, not smoothing outliers via n-fold repetition), the results on my machine are as follows:
[snip]
119433 points found in 785873ns
119433 points found in 609290ns
r = 196 c = [98, 197]
120649 points found in 612985ns
120649 points found in 584814ns
r = 197 c = [98, 198]
121905 points found in 619738ns
121905 points found in 572035ns
r = 198 c = [99, 199]
123121 points found in 664703ns
123121 points found in 778216ns
r = 199 c = [99, 200]
124381 points found in 617287ns
124381 points found in 572154ns
That is, there is no significant difference between the two, and the "complex" one is often faster. My explanation is that integer operations are really, really fast, and examining a few extra points in the corners of the square that do not fall into the circle is cheap compared to the cost of processing all the points that do fall into the circle (that is, the expensive part is calling points.add, and it is called exactly the same number of times in both variants).
In the words of Knuth:
programmers have spent far too much time worrying about efficiency in
the wrong places and at the wrong times; premature optimization is the
root of all evil (or at least most of it) in programming
Then again, if you really need an optimal way of iterating over the points of a circle, may I suggest Bresenham's circle drawing algorithm, which can provide all points of the circumference with minimal operations. It will again be premature optimization if you are actually going to do anything with the O(n^2) points inside the circle, though.
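For reference, a minimal sketch of the midpoint (Bresenham) circle algorithm, integer-only, which collects the circumference points of one octant and mirrors them into the other seven:
static List<Point> circlePoints(int cx, int cy, int r) {
    List<Point> pts = new ArrayList<>();
    int x = r, y = 0, err = 1 - r;
    while (x >= y) {
        // mirror the current octant point into all eight octants
        pts.add(new Point(cx + x, cy + y)); pts.add(new Point(cx - x, cy + y));
        pts.add(new Point(cx + x, cy - y)); pts.add(new Point(cx - x, cy - y));
        pts.add(new Point(cx + y, cy + x)); pts.add(new Point(cx - y, cy + x));
        pts.add(new Point(cx + y, cy - x)); pts.add(new Point(cx - y, cy - x));
        y++;
        if (err < 0) {
            err += 2 * y + 1;
        } else {
            x--;
            err += 2 * (y - x) + 1;
        }
    }
    return pts;
}
Note that this yields only the outline; if you need the filled disc, you are back to visiting O(r^2) points anyway.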
I'm currently trying to develop an ArUco cube detector for a project. The goal is to have a more stable and accurate pose estimation without using a large ArUco board. For this to work, however, I need to know the orientation of each of the markers. Using the draw3dAxis method, I discovered that the X and Y axes did not consistently appear in the same location. Here is a video demonstrating the issue: https://youtu.be/gS7BWKm2nmg
It seems to be a problem with the Rvec detection. There is a clear shift in the first two values of the Rvec, which stay fairly consistent until the axis swaps. When this axis swap happens the values can change by a magnitude anywhere from 2 to 6. The ArUco library does try to deal with rotations, as shown in the Marker.calculateMarkerId() method:
/**
 * Return the id read in the code inside a marker. Each marker is divided into 7x7 regions,
 * of which the inner 5x5 contain info; the border should always be black. This function
 * assumes that the code has been extracted previously.
 * @return the id of the marker
 */
protected int calculateMarkerId(){
    // check all the rotations of code
    Code[] rotations = new Code[4];
    rotations[0] = code;
    int[] dists = new int[4];
    dists[0] = hammDist(rotations[0]);
    int[] minDist = {dists[0], 0};
    for(int i = 1; i < 4; i++){
        // rotate
        rotations[i] = Code.rotate(rotations[i-1]);
        dists[i] = hammDist(rotations[i]);
        if(dists[i] < minDist[0]){
            minDist[0] = dists[i];
            minDist[1] = i;
        }
    }
    this.rotations = minDist[1];
    if(minDist[0] != 0){
        return -1; // matching id not found
    }
    else{
        this.id = mat2id(rotations[minDist[1]]);
    }
    return id;
}
and MarkerDetector.detect() does call that method and uses the getRotations() method:
// identify the markers
for(int i = 0; i < nCandidates; i++){
    if(toRemove.get(i) == 0){
        Marker marker = candidateMarkers.get(i);
        Mat canonicalMarker = new Mat();
        warp(in, canonicalMarker, new Size(50,50), marker.toList());
        marker.setMat(canonicalMarker);
        marker.extractCode();
        if(marker.checkBorder()){
            int id = marker.calculateMarkerId();
            if(id != -1){
                // rotate the points of the marker so they are always in the same order no matter the camera orientation
                Collections.rotate(marker.toList(), 4 - marker.getRotations());
                newMarkers.add(marker);
            }
        }
    }
}
The full source code for the ArUco library is here: https://github.com/sidberg/aruco-android/blob/master/Aruco/src/es/ava/aruco/MarkerDetector.java
If anyone has any advice or solutions, I'd be very grateful. Please contact me if you have any questions.
I did find the problem. It turns out that the Marker Class has a rotation variable that can be used to rotate the axis to align with the orientation of the marker. I wrote the following method in the Utils class:
protected static void alignToId(Mat rotation, int codeRotation) {
    // get the rotation matrix corresponding to the rotation vector
    Mat R = new Mat(3, 3, CvType.CV_64FC1);
    Calib3d.Rodrigues(rotation, R);
    codeRotation += 1;
    // build the matrix for a rotation of codeRotation * 90 degrees around the Z axis
    double[] rot = {
            Math.cos(Math.toRadians(90) * codeRotation), -Math.sin(Math.toRadians(90) * codeRotation), 0,
            Math.sin(Math.toRadians(90) * codeRotation), Math.cos(Math.toRadians(90) * codeRotation), 0,
            0, 0, 1
    };
    // multiply both matrices
    Mat res = new Mat(3, 3, CvType.CV_64FC1);
    double[] prod = new double[9];
    double[] a = new double[9];
    R.get(0, 0, a);
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++) {
            prod[3 * i + j] = 0;
            for (int k = 0; k < 3; k++) {
                prod[3 * i + j] += a[3 * i + k] * rot[3 * k + j];
            }
        }
    // convert the matrix back to a rotation vector with Rodrigues
    res.put(0, 0, prod);
    Calib3d.Rodrigues(res, rotation);
}
and I called it from the Marker.calculateExtrinsics Method:
Utils.alignToId(Rvec, this.getRotations());
I would like to compare two arrays to see if they have the same values.
If I have a array called
public static float data[][]
which holds Y coordinates of a terrain, how can I check that array with another
public static int coords[][]
without iterating through all the coordinates?
Both arrays have over 1000 values in them. Iterating through them is not an option, since I must iterate through them over four times per second.
I am doing this to attempt to find if two objects are colliding. I have attempted using libraries for this, however I cannot find per-coordinate collision detection as specific as I need it.
Edit: Here is why I am unable to just iterate through this small number of vertices.
The problem is that this is a multiplayer game, and I would have to iterate through all 1000 coordinates for every player. That means just 10 players online is 10,000 iterations, and 100 online is 100,000. You can see how easily that would lag, or at least take up a large percentage of the CPU.
Input of coordinates into the "Data" variable:
try {
// Load the heightmap-image from its resource file
BufferedImage heightmapImage = ImageIO.read(new File(
"res/images/heightmap.bmp"));
//width = heightmapImage.getWidth();
//height = heightmapImage.getHeight();
BufferedImage heightmapColour = ImageIO.read(new File(
"res/images/colours.bmp"));
// Initialise the data array, which holds the heights of the
// heightmap-vertices, with the correct dimensions
data = new float[heightmapImage.getWidth()][heightmapImage
.getHeight()];
// collide = new int[heightmapImage.getWidth()][50][heightmapImage.getHeight()];
red = new float[heightmapColour.getWidth()][heightmapColour
.getHeight()];
blue = new float[heightmapColour.getWidth()][heightmapColour
.getHeight()];
green = new float[heightmapColour.getWidth()][heightmapColour
.getHeight()];
// Lazily initialise the convenience class for extracting the
// separate red, green, blue, or alpha channels
// an int in the default RGB color model and default sRGB
// colourspace.
Color colour;
Color colours;
// Iterate over the pixels in the image on the x-axis
for (int z = 0; z < data.length; z++) {
// Iterate over the pixels in the image on the y-axis
for (int x = 0; x < data[z].length; x++) {
colour = new Color(heightmapImage.getRGB(z, x));
data[z][x] = setHeight;
}
}
}catch (Exception e){
e.printStackTrace();
System.exit(1);
}
And how coordinates are put into the "coords" variable (Oh wait, it was called "Ship", not coords. I forgot that):
try{
File f = new File("res/images/coords.txt");
String coords = readTextFile(f.getAbsolutePath());
for (int i = 0; i < coords.length();){
int i1 = i;
for (; i1 < coords.length(); i1++){
if (String.valueOf(coords.charAt(i1)).contains(",")){
break;
}
}
String x = coords.substring(i, i1).replace(",", "");
i = i1;
i1 = i + 1;
for (; i1 < coords.length(); i1++){
if (String.valueOf(coords.charAt(i1)).contains(",")){
break;
}
}
String y = coords.substring(i, i1).replace(",", "");
i = i1;
i1 = i + 1;
for (; i1 < coords.length(); i1++){
if (String.valueOf(coords.charAt(i1)).contains(",")){
break;
}
}
String z = coords.substring(i, i1).replace(",", "");
i = i1 + 1;
//buildx.append(String.valueOf(coords.charAt(i)));
////System.out.println(x);
////System.out.println(y);
////System.out.println(z);
//x = String.valueOf((int)Double.parseDouble(x));
//y = String.valueOf((int)Double.parseDouble(y));
//z = String.valueOf((int)Double.parseDouble(z));
double sx = Double.valueOf(x);
double sy = Double.valueOf(y);
double sz = Double.valueOf(z);
javax.vecmath.Vector3f cor = new javax.vecmath.Vector3f(Float.parseFloat(x), Float.parseFloat(y), Float.parseFloat(z));
//if (!arr.contains(cor)){
if (cor.y > 0)
arr.add(new javax.vecmath.Vector3f(cor));
if (!ship.contains(new Vector3f((int) sx, (int) sy, (int) sz)))
ship.add(new Vector3f((int) sx, (int) sy, (int) sz));
// arr.add(new javax.vecmath.Vector3f(Float.parseFloat(x), Float.parseFloat(y), Float.parseFloat(z)));
}
Thanks!
You can do it like this, but it is only applicable when both arrays have the same element type:
Arrays.deepEquals(data, coords);
For a single-dimension array you can use:
Arrays.equals(array1, array2);
Arrays.deepEquals(a, b);
Try this, but it will only work if the elements are in the same order.
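Since data is a float[][] and coords is an int[][], deepEquals cannot compare them directly. A minimal element-wise sketch, assuming an exact value match is what is wanted:
// Compare a float[][] against an int[][] element by element, bailing out on the first mismatch.
static boolean sameValues(float[][] data, int[][] coords) {
    if (data.length != coords.length) return false;
    for (int i = 0; i < data.length; i++) {
        if (data[i].length != coords[i].length) return false;
        for (int j = 0; j < data[i].length; j++) {
            if (data[i][j] != coords[i][j]) return false; // the int is widened to float here
        }
    }
    return true;
}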
No way around it, I'm afraid. Comparing data sets to see if they are identical demands looking at all elements, by definition. On a side note, comparing 1000 values is nothing, even on relatively old hardware; you can do it thousands of times per second.
I created a program that draws many polygons automatically every time the user presses a button. The points of the polygons are generated automatically using the random function. The problem is that, since the points of the polygons are randomly generated, some of the polygons overlap with other polygons. How can I avoid this, so that every polygon is shown without overlapping?
.....
List<Polygon> triangles = new LinkedList<Polygon>();
Random generator = new Random();

public void paintComponent(Graphics g) {
    for (int i = 0; i < 10; i++) {
        double xWidth = generator.nextDouble() * 40.0 + 10.0;
        double yHeight = generator.nextDouble() * 40.0 + 10.0;
        xCoord[0] = generator.nextInt(MAX_WIDTH);
        yCoord[0] = generator.nextInt(MAX_HEIGHT);
        xCoord[1] = (int) (xCoord[0] - xWidth);
        xCoord[2] = (int) (xCoord[1] + (xWidth / 2));
        yCoord[1] = yCoord[0];
        yCoord[2] = (int) (yCoord[1] - yHeight);
        triangles.add(new Polygon(xCoord, yCoord, 3));
    }
    Graphics2D g2 = (Graphics2D) g;
    g2.setRenderingHint(RenderingHints.KEY_ANTIALIASING, RenderingHints.VALUE_ANTIALIAS_ON);
    g2.setStroke(new BasicStroke(1));
    g2.setComposite(AlphaComposite.getInstance(AlphaComposite.SRC_OVER, 1.00f));
    g2.setPaint(Color.black); // set the polygon line
    for (Polygon triangle : triangles) g2.drawPolygon(triangle);
    Polygon[] triArray = triangles.toArray(new Polygon[triangles.size()]);
    for (Polygon p : triArray) triangles.remove(p);
}
Check out the game programming wiki on Polygon Collision:
http://gpwiki.org/index.php/Polygon_Collision
You could break your canvas into 10 regions and constrain each of your polygons to its own region. To do this, you could use your i value and a modulus (of 100, or another suitable magnitude) of your randomly generated value, and apply them to your x coordinates and y coordinates as applicable. The result would be a grid of similarly constrained (no larger than the grid cell), but randomly shaped, Polygons.
EDIT:
Taking another look and fooling around a bit, I took the general concept as I described above and made a stab at an implementation:
public void paintComponent(Graphics g) {
    int[] xCoord = new int[3];
    int[] yCoord = new int[3];
    int colCnt = 5;
    int rowCnt = 2;
    int maxCellWidth = getWidth() / colCnt;
    int maxCellHeight = getHeight() / rowCnt;
    for (int i = 0; i < (colCnt * rowCnt); i++) {
        int xMultiple = i % colCnt;
        int yMultiple = i / colCnt;
        for (int j = 0; j < 3; j++) {
            xCoord[j] = generator.nextInt(maxCellWidth)
                    + (maxCellWidth * xMultiple);
            yCoord[j] = generator.nextInt(maxCellHeight)
                    + (maxCellHeight * yMultiple);
        }
        triangles.add(new Polygon(xCoord, yCoord, 3));
    }
    //... the rest of your method
}
As you can see, all of the Polygons have all points randomly generated, as opposed to your method of generating the first point and then making the rest relative to the first. There is a sense of randomness that is lost, however, as the Polygons are laid out in a grid-like pattern.
Create Area objects from your new polygon as well as from all existing polygons.
Subtract the new polygon's area from each existing one. If the subtraction changed the area, the polygons overlap.
Area newArea = new Area(newPolygon);
Area existingArea = new Area(existingPolygon);
Area existingAreaSub = new Area(existingPolygon);
existingAreaSub.subtract(newArea);
boolean intersects = !existingAreaSub.equals(existingArea);
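Equivalently, you could intersect the two areas and test whether anything is left, which reads a bit more directly (a sketch with java.awt.geom.Area):
Area overlap = new Area(newPolygon);
overlap.intersect(new Area(existingPolygon));
boolean intersects = !overlap.isEmpty();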
You could implement a method Polygon.containsPoint(x, y) and repeat your random generation until this method returns false for all drawn Polygons.
I have achieved this on Android using Kotlin (see github project) by using JTS, see here
Step-1:
Add JTS library to your project
implementation group: 'org.locationtech.jts', name: 'jts-core', version: '1.15.0'
Step-2:
Create JTS polygon objects for both polygon
// create polygons One
var polygoneOneArray: ArrayList<Coordinate> = ArrayList()
for (points in polygonOnePointsList) {
polygoneOneArray.add(Coordinate(points.latitude(), points.longitude()))
}
val polygonOne: org.locationtech.jts.geom.Polygon = GeometryFactory().createPolygon(
polygoneOneArray.toTypedArray()
)
// create polygons Two
var polygoneTwoArray: ArrayList<Coordinate> = ArrayList()
for (points in polygoneTwoPointsList) {
polygoneTwoArray.add(Coordinate(points.latitude(), points.longitude()))
}
val polygonTwo: org.locationtech.jts.geom.Polygon = GeometryFactory().createPolygon(
polygoneTwoArray.toTypedArray()
)
Step-3:
Get Common Area of both Polygon
val intersection: org.locationtech.jts.geom.Geometry = polygonOne.intersection(polygonTwo)
Step-4:
Remove common Area from polygonTwo
val difference: org.locationtech.jts.geom.Geometry = polygonTwo.difference(intersection)
Step-5:
Merge polygonOne and the updated polygonTwo
val union: org.locationtech.jts.geom.Geometry = polygonOne.union(difference)
Step-6:
Now pick points from Geometry and draw a final merged Polygon
val array: ArrayList<Coordinate> = union.coordinates.toList() as ArrayList<Coordinate>
val pointList: ArrayList<Point> = ArrayList()
for (item in array) {
pointList.add(Point.fromLngLat(item.y, item.x))
}
var list: ArrayList<List<Point>> = ArrayList<List<Point>>()
list.add(pointList)
style.addSource(
GeoJsonSource(
"source-id${timeStamp}",
Feature.fromGeometry(Polygon.fromLngLats(list))
)
)