I am generating a list of points with only integer components using the GenerateSolidThetaZero function. My goal is to rotate these discrete points by an angle theta in radians so that each rotated point still has integer components. The problem is that I do not want any two points mapping to the same value; I want the same number of unique points before and after the rotation. I used the round function, which helps somewhat, but I still get some non-unique mappings. Basically, I want a way to rotate these points while preserving as much of the structure as possible (losing as few points as possible). I am willing to use any library. Any help or guidance would be great.
Note: In my code the radius is 2 and 13 points are generated. After a rotation of Pi/6 I end up losing 4 points because they map to values that other points were already mapped to.
import java.awt.Point;
import java.util.ArrayList;
import java.util.HashSet;

public class pointcheck {
    // this HashSet will be used to check if a point is already in the rotated list
    public static HashSet<Point> pointSet = new HashSet<Point>();

    public static void main(String args[]) {
        // generates a roughly circular solid; the parameter is the radius
        ArrayList<Point> solid_pointList = GenerateSolidThetaZero(2);
        // stores the original point as the first part of each pair and the rotated point as the second part
        ArrayList<Pair<Point, Point>> point_pair = new ArrayList<Pair<Point, Point>>();
        // goes through all points in solid_pointList and adds each point together with its rotated counterpart
        for (Point t : solid_pointList) {
            point_pair.add(new Pair<Point, Point>(t, rotation_about_origin(t, Math.PI / 6)));
        }
        for (Pair<Point, Point> t : point_pair) {
            System.out.println(t.getFirst() + " " + t.getSecond());
        }
        System.out.println(pointSet.size());
    }

    // takes the point we want to rotate and the angle to rotate it by
    public static Point rotation_about_origin(Point P, double theta) {
        double old_X = P.x;
        double old_Y = P.y;
        double cos_theta = Math.cos(theta);
        double sin_theta = Math.sin(theta);
        double new_X = old_X * cos_theta - old_Y * sin_theta;
        double new_Y = old_X * sin_theta + old_Y * cos_theta;
        Point new_P = new Point((int) Math.round(new_X), (int) Math.round(new_Y));
        // if new_P is already in the rotated solid
        if (pointSet.contains(new_P))
            System.out.println("Conflict " + P + " " + new_P);
        else
            // add new_P to pointSet so we know a point has already rotated to that spot
            pointSet.add(new_P);
        return new_P;
    }

    private static ArrayList<Point> GenerateSolidThetaZero(int r) {
        int rsq = r * r;
        ArrayList<Point> solidList = new ArrayList<Point>();
        for (int x = -r; x <= r; x++)
            for (int y = -r; y <= r; y++)
                if (x * x + y * y <= rsq)
                    solidList.add(new Point(x, y));
        return solidList;
    }

    public static class Pair<F, S> {
        private F first;  // first member of pair
        private S second; // second member of pair

        public Pair(F first, S second) {
            this.first = first;
            this.second = second;
        }
        public void setFirst(F first) {
            this.first = first;
        }
        public void setSecond(S second) {
            this.second = second;
        }
        public F getFirst() {
            return first;
        }
        public S getSecond() {
            return second;
        }
    }
} // end of pointcheck class
How would I be able to rotate the points using angles that aren't integer multiples of 90 degrees? And where should I translate a point to after rotation if its mapping is already taken?
The rotated disk will cover the exact same pixels as the original one. Therefore, you actually want to solve an assignment problem from original pixels to rotated pixels.
The cost for assigning an original pixel (ox, oy) to a corresponding pixel (cx, cy) can be expressed with a potential. For example, the distance:
E_{o,c} = length(R(ox, oy, theta) - (cx, cy))
, where R is the rotation operator. Alternatively, you could also try other norms, e.g. the quadratic distance.
Then, the problem is finding the correspondences that minimize the overall energy:
min_C Sum_{o in O} E_{o,c}
An algorithm that solves this exactly is the Hungarian Algorithm. However, it is quite expensive if you have a large number of pixels.
Instead, here is an idea of an approximation:
In the target pixels, instead of having only the color, also store the rotated position. Then, rotate the original pixels sequentially as you did before. Round the rotated position and check if the according pixel is still occupied.
If not, store the rotated (unrounded) position along with the new color.
If it is occupied, check if the energy would decrease if you swapped the correspondences. If it does, swap them, which leaves you with the former pixel. In any case, you have an unmapped original pixel. Store this pixel in a list.
After this procedure, you have a partial correspondence map and a list of unmapped pixels. Pick any of the unmapped pixels. Analyze the target pixel's neighbors. There will probably always be an unoccupied pixel (although I have no proof for that). If so, choose this one. If not, check all neighboring pixels for the best energy decrease and swap. Continue until the list is empty.
The approximation algorithm is just an idea and I have no proof that it will actually work. But it sounds as if it is worth a try. And it will definitely be faster than the Hungarian algorithm. Though, this approximation will only work with Lp-norms having p>=1 for the potential definition.
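To make the greedy part of this idea concrete, here is a minimal Java sketch (the class and method names are my own, it assumes java.awt.Point plus the java.util collections, and the swap step is left out): each point is rotated exactly, and if the rounded target cell is taken, the nearest free cell in a 3x3 window around it is used instead; points with no free cell nearby would go into the unmapped list for the second pass described above.

import java.awt.Point;
import java.util.*;

// Sketch only: greedy variant of the approximation above, without the swap step.
class GreedyRotation {
    static Map<Point, Point> approximateRotation(List<Point> solid, double theta) {
        Map<Point, Point> mapping = new LinkedHashMap<>();
        Set<Point> occupied = new HashSet<>();
        for (Point p : solid) {
            // exact (unrounded) rotated position
            double rx = p.x * Math.cos(theta) - p.y * Math.sin(theta);
            double ry = p.x * Math.sin(theta) + p.y * Math.cos(theta);
            int cx = (int) Math.round(rx), cy = (int) Math.round(ry);
            Point best = null;
            double bestCost = Double.MAX_VALUE;
            // pick the free cell closest to the exact rotated position (3x3 window)
            for (int dx = -1; dx <= 1; dx++) {
                for (int dy = -1; dy <= 1; dy++) {
                    Point cand = new Point(cx + dx, cy + dy);
                    if (occupied.contains(cand)) continue;
                    double cost = Math.hypot(rx - cand.x, ry - cand.y);
                    if (cost < bestCost) { bestCost = cost; best = cand; }
                }
            }
            if (best != null) {   // may still fail if the whole window is occupied
                occupied.add(best);
                mapping.put(p, best);
            }
        }
        return mapping;
    }
}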
Related
I have a problem getting a vector aligned correctly. I want a vector pointing in the same direction as the player, but with a constant Y value of 0. The point is that, whatever the player's vertical and horizontal rotation, the vector's Y value should be 0. The vector should always point horizontally (Y = 0) while keeping the direction of the player's rotation.
This picture shows the situation from the side. The red line represents an example of the player's viewing direction (up - down), and the green one the effect I want to achieve. Regardless of the direction in which the player is looking, up or down, the green line remains unchanged:
Here, in turn, I have presented this situation from the top. The red line is the player's viewing direction (left - right) and the green is the effect I want to achieve. As you can see, the player's rotation on this axis sets my vector exactly the same.
I was able to write a piece of code, but it doesn't behave correctly: the Y value gets higher and higher as the player looks up or down. I don't know why:
Vector playerDirection = player.getLocation().getDirection();
Vector vector = new Vector(playerDirection.getX(), 0, playerDirection.getZ()).normalize().multiply(3);
How to do it correctly?
tl;dr:
Vector vector = new Vector(-1 * Math.sin(Math.toRadians(player.getLocation().getYaw())), 0, Math.cos(Math.toRadians(player.getLocation().getYaw())));
You are missing a fundamental principle of creating a new Vector based on where a player is looking. I don't know the math of it very well, but I can mess around with the math of people who are better at geometry than I am.
As such, let's try to reduce the number of Vector variables you have defined. Taking a quick peek at the source for Location, we can actually create your Vector directly and avoid defining several of them.
public Vector getDirection() {
    Vector vector = new Vector();

    double rotX = this.getYaw();
    double rotY = this.getPitch();

    vector.setY(-Math.sin(Math.toRadians(rotY)));

    double xz = Math.cos(Math.toRadians(rotY));

    vector.setX(-xz * Math.sin(Math.toRadians(rotX)));
    vector.setZ(xz * Math.cos(Math.toRadians(rotX)));

    return vector;
}
As you can see, the player's pitch and yaw do not map 1:1 onto the direction vector's components. No idea why, but let's repurpose their logic.
Here's how we'll do that:
public Vector getVectorForAdixe(Location playerLoc) {
    Vector vector = new Vector();

    double rotX = playerLoc.getYaw();
    double rotY = 0; // this is the important change from above

    // Original Code:
    //   vector.setY(-Math.sin(Math.toRadians(rotY)));
    // Always resolves to 0, so just do that
    vector.setY(0);

    // Original Code:
    //   double xz = Math.cos(Math.toRadians(rotY));
    // Always resolves to 1, so just do that
    double xz = 1;

    vector.setX(-xz * Math.sin(Math.toRadians(rotX)));
    vector.setZ(xz * Math.cos(Math.toRadians(rotX)));

    return vector;
}
Nice! Now, cleaning it up a bit to remove those comments and unnecessary variables:
public Vector getVectorForAdixe(Location playerLoc) {
    Vector vector = new Vector();

    double rotX = playerLoc.getYaw();

    vector.setY(0);
    vector.setX(-1 * Math.sin(Math.toRadians(rotX)));
    vector.setZ(Math.cos(Math.toRadians(rotX)));

    return vector;
}
Why does this math work like that? No idea! But this should almost certainly work for you. Could even inline it if you really wanted to keep it how you had it originally:
Vector vector = new Vector(-1 * Math.sin(Math.toRadians(player.getLocation().getYaw())), 0, Math.cos(Math.toRadians(player.getLocation().getYaw())));
Closing note, if you want to be able to get the pitch/yaw FROM the vector, that code is here: https://hub.spigotmc.org/stash/projects/SPIGOT/repos/bukkit/browse/src/main/java/org/bukkit/Location.java#310
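Since the returned vector already has length 1 (its components are a sine and a cosine of the same angle), restoring the scaling from your original snippet is just a matter of multiplying it, e.g.:

Vector vector = getVectorForAdixe(player.getLocation()).multiply(3);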
I'm building my own small game engine for learning, so basically pure Java. I have Lines, which are defined by a start and an end point (x and y coordinates).
Now I have a ball with a velocity vector. I want to "bounce" it off a wall, which could be positioned at any possible angle. How do I find the new velocity vector after the collision happened? I know the points S, P1 and P2 (see image).
I thought about calculating the angle and changing the x and y components, but I can't find a way to do this for all possible angles.
I could find many solutions for walls which are parallel to the canvas borders, but no general solution. How do "big" game engines handle this common problem?
edit:
My updated Vector class methods:
public static Vector bounce(Vector normal, Vector velocity) {
    Vector tmp = Vector.multiplication(2 * Vector.dot(normal, velocity), normal);
    return Vector.addition(tmp, velocity);
}

public static Vector multiplication(double multi, Vector n) {
    Vector new_vector = new Vector(n.x * multi, n.y * multi);
    return new_vector;
}

public static double dot(Vector a, Vector b) {
    return a.x * b.x + a.y * b.y; // + a.z*b.z if you're in 3D
}
My test function:
@Test
public void testBounce() {
    Vector normal_vector_corrected = new Vector(0, 1);
    Vector start_velocity = new Vector(3, -3);
    Vector bounced_vector = Vector.bounce(normal_vector_corrected, start_velocity);
    System.out.println("normal vector: " + normal_vector_corrected);
    System.out.println("start_velocity: " + start_velocity);
    System.out.println("bounced_vector " + bounced_vector);
}
The output is this:
normal vector: <Vector x=0,00, y=1,00>
start_velocity: <Vector x=3,00, y=-3,00>
bounced_vector <Vector x=3,00, y=-9,00>
According to my calculations, bounced_vector should be x=3,y=3 instead. Where is my mistake? (My example as picture:)
edit2:
I found that it has to be return Vec.add(tmp, v);. Furthermore, I had to invert the velocity vector.
The "bounced velocity vector" v' is obtained from the original velocity v and the surface normal unit vector n with 2(n . v)n + v where . stands for the vector dot product. This is usually called a reflection; the velocity vector is reflected across the surface normal.
In case you're not familiar with the terminology, the surface normal is a vector that is perpendicular (at 90-degree angle) to the surface. A unit vector is a vector with length 1.
I assume you already have a class to represent vectors, called Vec, with methods to multiply a vector with a scalar and to add two vectors. You could write the bounce operation as:
static Vec bounce(Vec n, Vec v) {
    Vec tmp = Vec.scalarMultiply(-2 * Vec.dot(n, v), n);
    return Vec.add(tmp, v);
}

static double dot(Vec a, Vec b) {
    return a.x * b.x + a.y * b.y; // + a.z*b.z if you're in 3D
}
As for how to get the surface normal, that will depend on whether you're in 2D or 3D. Assuming 2D, it's simple: if (x, y) is the vector from P1 to P2, then (-y, x) is perpendicular to it, and one unit normal would be:
n = (-y/sqrt(x*x+y*y), x/sqrt(x*x+y*y))
The other possible unit normal is -n. You would use one or the other depending on which side of the surface you are.
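As a small sketch of that 2D case, assuming the same Vec class with a (double, double) constructor and public x/y fields:

static Vec unitNormal(Vec p1, Vec p2) {
    // wall direction from P1 to P2
    double x = p2.x - p1.x;
    double y = p2.y - p1.y;
    double len = Math.sqrt(x * x + y * y);
    return new Vec(-y / len, x / len); // the other unit normal is the negation of this
}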
You should store the normal vector with the geometry of the scene so you don't have to calculate it every time.
I'm writing a very basic raycaster for a 3D scene with triangulated objects and everything worked fine until I decided to try casting rays from points other than the origin of the scene (0/0/0).
However, when I changed the origin of the ray to (0/1/0), the intersection test suddenly returned a wrong intersection point for one of the triangles.
I'm deliberately "shooting" the rays in the direction of the center of the triangle, so obviously this should be the intersection point. I just don't know what exactly is leading to the wrong results in my code.
(I'm not using Möller-Trumbore at the moment because I'd like to start out with a simpler, more basic approach, but I will switch to Möller-Trumbore when optimizing the code.)
These are the coordinates of the three vertices of the above-mentioned triangle:
-2.0/2.0/0.0 | 0.0/3.0/2.0 | 2.0/2.0/0.0
This is the center of the triangle:
0.0/2.3333333333333335/0.6666666666666666
This is my ray (origin + t * Direction):
Origin: 0.0/1.0/0.0
Direction (normalized): 0.0/0.894427190999916/0.4472135954999579
This is the obviously wrong intersection point my program calculated (before checking and finding out that the point is not even on the triangle):
0.0/5.0/1.9999999999999996
So yeah, it's not hard to see (even without a calculator) that the ray should hit the triangle at its center at roughly t = 1.5. My code, however, returns the value 4.472135954999579 for t.
Here's my code for the intersection check:
public Vector intersectsWithTriangle(Ray ray, Triangle triangle) {
    boolean intersects = false;
    Vector triangleNormal = triangle.getNormalVector();
    double normalDotRayDirection = triangleNormal.dotProduct(ray.getDirection());

    if (Math.abs(normalDotRayDirection) == 0) {
        // parallel
        return null;
    }

    double d = triangleNormal.dotProduct(triangle.getV1AsVector());
    double t = (triangleNormal.dotProduct(ray.getOrigin()) + d) / normalDotRayDirection;

    // Check if triangle is behind ray
    if (t < 0) return null;

    // Get point of intersection between ray and triangle
    Vector intersectionPoint = ray.getPosAt(t);

    // Check if point is inside the triangle
    if (isPointInTriangle(intersectionPoint, triangle, triangleNormal)) {
        intersects = true;
        return intersectionPoint;
    }
    return null;
}
Any ideas what's wrong with the line that calculates t?
If the ray is given by o + t*v and the triangle plane is defined by normal vector n and point p, then we are looking for t such that n*(o + t*v) = n*p, which gives t = (n*p - n*o)/(n*v). So you seem to have a sign error, and the correct computation for t should be:
double t = (d - triangleNormal.dotProduct(ray.getOrigin())) / normalDotRayDirection;
As long as the ray origin was (0,0,0) the wrong sign did not matter.
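As a sanity check with the numbers from the question, using the unnormalized normal n = (B - A) x (C - A) = (0, 8, -4) computed from the vertices A = (-2, 2, 0), B = (0, 3, 2), C = (2, 2, 0) (the sign or scaling of n does not affect t, since n appears in both numerator and denominator):

d = n*p = n*A = 16
n*o = 0*0 + 8*1 - 4*0 = 8
n*v = 8*0.894427 - 4*0.447214 ≈ 5.367
t = (16 - 8) / 5.367 ≈ 1.49

which lands the ray at o + t*v ≈ (0, 2.333, 0.667), the center of the triangle.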
So, we have a homework question that asks us to create a hypercube with 2^n corners. Each corner has a set of n coordinates in the space x1, x2, x3, ..., xn. So an n=3 hypercube has coordinates such as:
000, 001, 011, 010, etc. in a plane x1, x2, x3.
The point of writing this program is to have a recursive method and an iterative method to "walk" through the hypercube and pass every corner exactly once without crossing its own trail. The professor also demands that our Corner object be a nested class in the hypercube class. So far this is what I've come up with:
public class Hypercube
{
    private Corner[] walk;
    private int size;
    private final int ZERO = 0;
    private int count;

    public Hypercube(int n) throws IllegalHypercubeException {
        if (n < 0) {
            throw new IllegalHypercubeException("Please enter a positive integer");
        } else {
            this.size = n;
            this.count = 0;
            this.walk = new Corner[(int) Math.pow(2, n)];
        }
    }

    public class Corner
    {
        private int[] coordinates;

        public Corner() {
            this.coordinates = new int[size];
        }

        public Corner(int[] coordinates) {
            this.coordinates = coordinates;
        }
    }
}
My main difficulty is setting the coordinates before I can even order them recursively in the walk methods. How am I meant to set the coordinates of each corner of a cube with 2^n corners?
Not going to write the code for it (that's your job), but here's something to orient you:
In 1D, the corners will be {0} and {1}
In 2D, the corners will be
{
{0,0}, {0,1},
{1,0}, {1,1}
}
In 3D, the corners will be
{
{0,0,0}, {0,0,1}, {0,1,0}, {0,1,1},
{1,0,0}, {1,0,1}, {1,1,0}, {1,1,1}
}
If you still haven't had your Aha! moment, here's the spoiler: build the list of corners in N dimensions by prefixing all the corners of the (N-1)-dimensional cube with 0, and then with 1.
When you walk through the hypercube along its edges, you change exactly one coordinate at every step. Note that there are special binary sequences where exactly one bit is flipped between neighboring elements.
Look at Gray codes (an n-bit code for an n-dimensional hypercube): they are well described, and numerous methods exist to generate the sequence both recursively and iteratively.
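For example, here is a short sketch of the iterative route (the class and method names are just illustrative, not part of the assignment code above): the k-th corner of the walk is the binary reflected Gray code k ^ (k >> 1), and its n bits are exactly that corner's coordinates.

// Sketch: visit all 2^n corners, changing exactly one coordinate per step.
public class GrayWalk {
    public static void main(String[] args) {
        int n = 3;
        for (int k = 0; k < (1 << n); k++) {
            int gray = k ^ (k >> 1); // k-th value of the binary reflected Gray code
            int[] coordinates = new int[n];
            for (int bit = 0; bit < n; bit++) {
                coordinates[bit] = (gray >> (n - 1 - bit)) & 1; // most significant bit first
            }
            System.out.println(java.util.Arrays.toString(coordinates));
        }
    }
}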
I'm working on an underwater game where there are some ruins made out of blocks.
Currently, I check for collisions between the submarine's polygon and each block of the ruin, using a function I wrote that returns the vertices of a rectangle.
public static float[] rectangleToVertices(float x, float y, float width,
                                          float height) {
    float[] result = new float[8];
    result[0] = x;
    result[1] = y;
    result[2] = x + width;
    result[3] = y;
    result[4] = x + width;
    result[5] = y + height;
    result[6] = x;
    result[7] = y + height;
    return result;
}
I don't think that is very efficient; some of the ruins have over 10 blocks, and I don't want to run 10 collision checks for a single object.
Is there a way to merge more polygons into one?
This picture can explain better:
The red area is the polygon.
If I understand your question correctly, what you want amounts to removing shared edges.
The simplest solution would be to start with one block, adding its edges to a HashSet (say S1). Then, while iterating over the list of blocks, check if any of those other blocks shares any edge from S1. If so, add all edges of that block to S1. For the edge(s) which already existed in S1, add them to another HashSet (say S2) to keep track of such edges. In the end, compute S1-S2, which will be the set of edges that you want. Use those edges to reconstruct your final polygon.
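Here is a small sketch of that idea, assuming axis-aligned blocks in the rectangleToVertices format and a hypothetical Edge value type (Java records, so equals/hashCode come for free); instead of the two sets, it counts occurrences and keeps every edge that appears exactly once, which gives the same S1 - S2 result when blocks share whole edges:

import java.util.*;

public class EdgeMerge {
    // Hypothetical value types for corners and edges.
    record Point(float x, float y) {}
    record Edge(Point a, Point b) {}

    // canonical ordering so (p, q) and (q, p) compare equal
    static Edge edge(Point p, Point q) {
        boolean swap = p.x() > q.x() || (p.x() == q.x() && p.y() > q.y());
        return swap ? new Edge(q, p) : new Edge(p, q);
    }

    // Outline of the merged shape: every block edge that is not shared with another block.
    static Set<Edge> outline(List<float[]> blocks) { // each float[] as returned by rectangleToVertices
        Map<Edge, Integer> counts = new HashMap<>();
        for (float[] v : blocks) {
            for (int i = 0; i < 4; i++) {
                Point p = new Point(v[2 * i], v[2 * i + 1]);
                Point q = new Point(v[(2 * i + 2) % 8], v[(2 * i + 3) % 8]);
                counts.merge(edge(p, q), 1, Integer::sum);
            }
        }
        Set<Edge> result = new HashSet<>();
        counts.forEach((e, c) -> { if (c == 1) result.add(e); });
        return result;
    }
}

The remaining edges can then be chained end to end to rebuild the boundary polygon.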
As an aside, you might want to take a look at The Skyline Problem.