Shaky unstable JBox2D bodies - java

Edit 2: http://youtu.be/KiCzUZ69gpA - as you can see in this video, the shaking effect is amplified when I also render some text for each body. Observe how the ground body (blue) shakes violently when each body has some text rendered near it, and how it does not when text rendering is commented out. This has to be connected!
Edit: I've made two important additions to the original question: I've added my rendering functions and the camera (translation) methods, and I think the error is actually there, not in JBox2D.
I'm trying to simulate and render a lot of random bodies (2-20) connected with RevoluteJoints. One body can be connected to multiple others, and there are no separate constructions, i.e. all the bodies are interconnected.
However, when watching the live rendering, it is very shaky and unstable. By that I mean that bodies' positions (or maybe angles) seem to be randomly fluctuating for no apparent reason, making the simulation look unstable.
Here's a video of what I'm observing:
http://youtu.be/xql-ypso1ZU
Notice the middle square and the rotating rectangle. The middle square is shifting its position back and forth slightly at seemingly random intervals, and the rotating rectangle is very jittery (take a look at the point it is rotating about).
What could be causing this effect? Is it some known issue with (J)Box2D, or is it an issue with my rendering system? I think I may have somehow misconfigured the physics engine, but some floating point math in the rendering system could also be the culprit.
Here's how I'm creating the bodies and the joints:
private Body setPart(Part part) {
    // body definition
    BodyDef bd = new BodyDef();
    bd.position.set(0f, -10f);
    bd.angle = 0f;
    bd.type = BodyType.DYNAMIC;

    // define the shape of the body
    PolygonShape shape = new PolygonShape();
    shape.setAsBox(part.width / 2, part.height / 2);

    // define the fixture of the body
    FixtureDef fd = new FixtureDef();
    Filter filter = new Filter();
    filter.groupIndex = -1;
    fd.filter = filter;
    fd.shape = shape;
    fd.density = 0.5f;
    fd.friction = 0.3f;
    fd.restitution = 0.5f;

    // create the body and add the fixture to it
    Body body = world.createBody(bd);
    body.createFixture(fd);
    body.setUserData(new PartUserData());
    return body;
}
private void setJoint(PartJoint partJoint) {
    Body bodyOne = partToBody.get(partJoint.partOne);
    Body bodyTwo = partToBody.get(partJoint.partTwo);

    RevoluteJointDef jointDef = new RevoluteJointDef();
    jointDef.bodyA = bodyOne;
    jointDef.bodyB = bodyTwo;
    jointDef.localAnchorA = partJoint.partOne.getAnchor(partJoint.percentOne);
    jointDef.localAnchorB = partJoint.partTwo.getAnchor(partJoint.percentTwo);

    // rotation limits
    jointDef.lowerAngle = GeomUtil.circle(partJoint.rotateFrom);
    jointDef.upperAngle = GeomUtil.circle(partJoint.rotateTo);
    jointDef.enableLimit = true;

    // motor
    jointDef.maxMotorTorque = 10.0f; // TODO limit maximum torque
    jointDef.motorSpeed = GeomUtil.circle(partJoint.angularVelocity);
    jointDef.enableMotor = true;

    world.createJoint(jointDef);
}
The time step is 0.01f.
Here is how I draw bodies:
private void drawBody(Body body) {
    // set up the transforms
    Vector position = camera.translate(body.getPosition());
    currentGraphics.translate(position.x, position.y);
    currentGraphics.rotate(body.getAngle());

    // do the actual rendering
    for (Fixture fixture = body.getFixtureList(); fixture != null; fixture = fixture.getNext()) {
        PolygonShape shape = (PolygonShape) fixture.getShape();
        if (body.getUserData() instanceof PartUserData) {
            fillShape(shape, partFillColor);
            currentGraphics.setStroke(partOutlineStroke);
            outlineShape(shape, partOutlineColor);
        } else {
            fillShape(shape, groundFillColor);
            outlineShape(shape, groundOutlineColor);
        }
    }

    // clean up
    currentGraphics.rotate(-body.getAngle());
    currentGraphics.translate(-position.x, -position.y);
    currentGraphics.setColor(defaultColor);
    currentGraphics.setStroke(defaultStroke);
}
I think that the issue might be the way I'm handling rendering of all the bodies.
This is the algorithm for each body:
1. Translate the Graphics2D object to its position
2. Rotate it by body.getAngle()
3. Render the body
4. Rotate the graphics back
5. Translate the graphics back
Could it be that amongst all these transforms something goes wrong?
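As a side note, the manual rotate-back/translate-back in steps 4-5 could also be replaced by saving and restoring the Graphics2D transform; here is a minimal sketch using the same currentGraphics field (not my actual code):
// Save the whole transform once, restore it once, instead of applying inverses by hand.
AffineTransform saved = currentGraphics.getTransform();
currentGraphics.translate(position.x, position.y);
currentGraphics.rotate(body.getAngle());
// ... render the fixtures here ...
currentGraphics.setTransform(saved); // undoes both the translate and the rotate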
When I removed the calls to the camera's methods, the effect seemed to be reduced. These are the relevant camera methods:
public Vector translate(Vec2 worldPosition) {
    Vector point = new Vector();
    point.x = (int) (worldPosition.x * pixelsPerMeter) - position.x;
    point.y = (int) (worldPosition.y * pixelsPerMeter) - position.y;
    point.x = (int) (point.x * zoom);
    point.y = (int) (point.y * zoom);
    point.x += renderer.getWidth() / 2;
    point.y += renderer.getHeight() / 2;
    return point;
}

public Vector translateRelative(Vec2 worldPosition) {
    Vector point = new Vector();
    point.x = (int) (worldPosition.x * pixelsPerMeter);
    point.y = (int) (worldPosition.y * pixelsPerMeter);
    point.x = (int) (point.x * zoom);
    point.y = (int) (point.y * zoom);
    return point;
}
But what part of them would cause an issue?

tl;dr: I've found a solution, but haven't identified the exact problem. I'm quite sure it's in my translation methods.
It seems that I have identified the scope of the problem and the solution, but I am still not sure what exactly is causing this behavior.
In the translation formulas I posted in the question, all JBox2D vectors are multiplied by a scale called pixelsPerMeter. When I set this scale to a low value, the shaking effect occurs (it's also important to note that there is another factor, called zoom, which is usually greater for a lower pixelsPerMeter).
So, it could be that when multiplying by a relatively low pixelsPerMeter, I have to multiply by a higher zoom factor, and since I'm truncating to ints in both steps, rounding errors could be creeping in. Please see the translation methods I've posted in the question.
Here's a video that demonstrates this: (to be uploaded)
Notice that when I set pixelsPerMeter to 250, the shaking seems to be gone, while when I set it to 25, it's quite visible.

Your solution was the correct one. You are not supposed to use pixel units for the Box2D physics engine :)
http://box2d.org/2011/12/pixels/
and
https://code.google.com/p/box2d/wiki/FAQ#How_do_I_convert_pixels_to_meters?
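Following that advice, the usual pattern is to keep the simulation in meters (bodies roughly 0.1-10 m across) and convert to pixels only when drawing, rounding once at the very end. A minimal sketch with illustrative names (pixelsPerMeter, cameraX, zoom and so on are not from the original code):
// Illustrative world-to-screen conversion: stay in floats, round once at the end,
// instead of truncating to int after each intermediate multiplication.
final float pixelsPerMeter = 50f; // chosen so bodies stay roughly 0.1-10 m in world units
float sx = (worldPosition.x * pixelsPerMeter - cameraX) * zoom + screenWidth / 2f;
float sy = (worldPosition.y * pixelsPerMeter - cameraY) * zoom + screenHeight / 2f;
int px = Math.round(sx);
int py = Math.round(sy);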

Related

Java Arc2D Collision detection (With Rotation)

I have tried to create an NPC character that can "see" the player by using cones of vision.
The NPC will rotate back and forth at all times.
My problem is that the arc has a generic and unchanging position, but when it's drawn to the screen it looks correct.
[Screenshots of the collisions in action][1]
[GitHub link for java files][2]
I'm using Arc2D to draw the shape like this in my NPC class
// Update the shapes used in the npc
rect.setRect(x, y, w, h);
ellipse.setFrame(rect);
visionArc.setArcByCenter(cx, cy, visionDistance, visionAngle, visionAngle * 2, Arc2D.PIE);
// cx, cy: the center of the NPC
// visionDistance: the distance from the arc to the NPC
// visionAngle: a constant value around 45 degrees, with an extent of visionAngle * 2 (around 90 degrees) to make a pie shape
I've tried multiplying the position and the angles by the sin and cosine of the NPC's current angle
something like these
visionArc.setArcByCenter(cx * Math.cos(Math.toRadians(angle)), cy * Math.sin(Math.toRadians(angle)), visionDistance, visionAngle, visionAngle * 2, Arc2D.PIE);
or
visionArc.setArcByCenter(cx, cy, visionDistance, visionAngle - angle, (visionAngle + angle) * 2, Arc2D.PIE);
or
visionArc.setArcByCenter(cx, cy, visionDistance, visionAngle * (Math.cos(Math.toRadians(angle))), visionAngle * 2, Arc2D.PIE);
I've tried a lot but can't seem to find what works. Making the vision angles not constant makes an arc that expands and contracts, and multiplying the position by the sin or cosine of the angle will make the arc fly around the screen, which doesn't really work either.
This is the function that draws the given NPC
public void drawNPC(NPC npc, Graphics2D g2, AffineTransform old) {
    // translate to the position of the npc and rotate
    AffineTransform npcTransform = AffineTransform.getRotateInstance(Math.toRadians(npc.angle), npc.x, npc.y);
    // translate back a few units to keep the npc rotating about its own center point
    npcTransform.translate(-npc.halfWidth, -npc.halfHeight);
    g2.setTransform(npcTransform);

    // g2.draw(npc.rect); // <-- show bounding box if you want
    g2.setColor(npc.outlineColor);
    g2.draw(npc.visionArc);
    g2.setColor(Color.BLACK);
    g2.draw(npc.ellipse);

    g2.setTransform(old);
}
This is my collision detection algorithm. NPC is a superclass of Ninja (shorter range, wider peripheral vision):
public void checkNinjas(Level level) {
    for (int i = 0; i < level.ninjas.size(); i++) {
        Ninja ninja = level.ninjas.get(i);
        playerRect = level.player.rect;
        // Check collision
        if (playerRect.getBounds2D().intersects(ninja.visionArc.getBounds2D())) {
            // Create an area of the object for greater precision
            Area area = new Area(playerRect);
            area.intersect(new Area(ninja.visionArc));
            // After checking if the area intersects a second time, make the NPC "see" the player
            if (!area.isEmpty()) {
                ninja.seesPlayer = true;
            } else {
                ninja.seesPlayer = false;
            }
        }
    }
}
Can you help me correct the actual positions of the arcs for my collision detection? I have tried creating new shapes so I can have one to do math on and one to draw to the screen but I scrapped that and am starting again from here.
[1]: https://i.stack.imgur.com/rUvTM.png
[2]: https://github.com/ShadowDraco/ArcCollisionDetection
After a few days of coding, learning, and testing new ideas, I came back to this program and implemented the collision detection using my original idea (ray casting), and have created the equivalent with rays!
Screenshot of the new product
Github link to the project that taught me the solution
Here's the new math
public void setRays() {
    for (int i = 0; i < rays.length; i++) {
        double rayStartAngleX = Math.sin(Math.toRadians((startAngle - angle) + i));
        double rayStartAngleY = Math.cos(Math.toRadians((startAngle - angle) + i));
        rays[i].setLine(cx, cy, cx + visionDistance * rayStartAngleX, cy + visionDistance * rayStartAngleY);
    }
}
Here is a link to the program I started after I asked this question and moved on to learn more, and an image of what the new product looks like.
(The original GitHub page has been updated with a new branch :) I'm learning GitHub right now too.)
I do not believe that using Arc2D in the way I intended is possible. However, there is a .setArcByTangent method; it may be possible to use that, but I wasn't going to get into it. Rays are cooler.
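For anyone following along, here is a minimal sketch of how the rays can then drive the "sees player" check, assuming rays is the Line2D[] filled by setRays() and playerRect is the player's Rectangle2D (both assumptions, since those declarations aren't shown above):
// The NPC "sees" the player if any vision ray crosses the player's bounds.
public boolean seesPlayer(Rectangle2D playerRect) {
    for (Line2D ray : rays) {
        if (ray.intersects(playerRect)) {
            return true;
        }
    }
    return false;
}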

Box2D setAngularVelocity does not work for high speeds

I am using Box2D for a game, and although I use large constants to set the angular velocity, the fastest speed I can get is 1 revolution per 3.86 seconds.
I checked my source code against the following thread, and everything matches what was suggested both by users there and in tutorials:
setAngularVelocity rotates really slowly
However, I then noticed the following unresolved thread:
http://www.reddit.com/r/libgdx/comments/1qr2m3/the_strangest_libgdxbox2d_behaviour/
and thought that it might actually be the problem. Here is my dispose method:
public void dispose() {
    // Get rid of everything!
    Assets.Clear();
    GameEngine.Clear();
    BallMap.clear();
    PlayerMap.clear();
    shapeRenderer.dispose();
    debugRenderer.dispose();
    world.dispose();
    batch.dispose();
    font.dispose();
}
They are all reinitialized at the beginning as follows:
this.game = game;
this.cameraWidth = cameraWidth*pixelRatio;
this.cameraHeight = cameraHeight*pixelRatio;
batch = new SpriteBatch();
shapeRenderer = new ShapeRenderer();
stateTime = 0F;
Scores = new Integer[]{0, 0};
debugRenderer = new Box2DDebugRenderer();
world = new World(new Vector2(0, 0), true); //Create a world with no gravity
GameEngine.setContactListener(world);
I navigate through screens with the following code:
public void create() {
    scene_menu = new MainMenuScreen(this, cameraWidth, cameraHeight);
    setScreen(scene_menu);
}

public void swtogame() {
    scene_menu.dispose();
    scene_game = new MatchScreen(this, cameraWidth, cameraHeight);
    setScreen(scene_game);
}

public void swtomenu() {
    scene_game.dispose();
    scene_menu = new MainMenuScreen(this, cameraWidth, cameraHeight);
    setScreen(scene_menu);
}
The way I initialize objects:
public Object(World world, short category, short mask, float x, float y, float radius, Sprite image,
        float maxSpeed, float frictionStrength, float linearDamping, float angularDamping, boolean movable,
        float elasticity, float mass) {
    this.world = world;
    this.category = category;
    this.mask = mask;

    // We set our body type
    this.bodyDef = new BodyDef();
    if (movable) { bodyDef.type = BodyType.DynamicBody; } else { bodyDef.type = BodyType.StaticBody; }

    // Set the body's starting position in the world
    bodyDef.position.set(x, y);
    bodyDef.linearDamping = linearDamping;
    bodyDef.angularDamping = angularDamping;

    // Create our body in the world using our body definition
    this.body = world.createBody(bodyDef);

    // Create a circle shape and set its radius
    CircleShape circle = new CircleShape();
    circle.setRadius(radius);

    // Create a fixture definition to apply our shape to
    fixtureDef = new FixtureDef();
    fixtureDef.shape = circle;
    fixtureDef.density = (float) (mass / (Math.PI * radius * radius));
    fixtureDef.friction = frictionStrength;
    fixtureDef.restitution = elasticity;
    fixtureDef.filter.categoryBits = category;
    fixtureDef.filter.maskBits = mask;

    // Create our fixture and attach it to the body
    this.fixture = body.createFixture(fixtureDef);

    // BodyDef and FixtureDef don't need disposing, but shapes do.
    circle.dispose();

    // ... unrelated functions after that
}
Am I disposing correctly? Is this a bug? Is there any way to get around it and use the setAngularVelocity properly?
Because you haven't shown much code, I can't be 100% sure that I'm right, but I think you are hitting the built-in maximum movement limit of 2.0 units per time step. This means that at a typical framerate of 60 Hz, a body covering 2 units per time step is moving at 120 m/s, or 432 km/h (270 mph). Unfortunately, it seems there is no direct way to change this limit in Java, because it is defined in the native C++ libraries.
But I think the real problem is that you are using the wrong scale. Box2D uses MKS units (meters, kilograms, and seconds), and you may have used pixels instead of meters. The Box2D FAQ suggests using
objects [that are] between 0.1 - 10 meters
otherwise you can get strange situations.
See http://www.iforce2d.net/b2dtut/gotchas#speedlimit
and https://code.google.com/p/box2d/wiki/FAQ
I just found the problem, and it was pretty simple. I'm just going to post this here for future Googlers:
The object was actually rotating properly; the problem was in my drawing method. I didn't convert from radians to degrees for my batch.draw call, so it interpreted everything in radians. I know, such an amateur mistake! Thanks a lot for your time.
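For future readers, the fix boils down to something like this (the draw arguments other than the rotation are illustrative); libGDX's SpriteBatch expects the rotation in degrees, while Box2D reports the body angle in radians:
// body.getAngle() is in radians; the rotation parameter of batch.draw is in degrees.
float rotationDeg = body.getAngle() * MathUtils.radiansToDegrees;
batch.draw(region, x, y, originX, originY, width, height, 1f, 1f, rotationDeg);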

Appearance of a triangle strip. Surface normals? Or windings?

Below is a picture of what my outcome is.
I am using flat shading and have put each vertex in its respective triangle object. Then I use these vertices to calculate the surface normals. I have been reading that because my triangles share vertices, calculating the normals may be an issue. But to me this looks like a winding problem, given that every other one is off.
I provided some of my code below to anyone who wants to look through it and get a better idea what the issue could be.
Triangle currentTri = new Triangle();
int triPointIndex = 0;
List<Triangle> triList = new ArrayList<Triangle>();

GL11.glBegin(GL11.GL_TRIANGLE_STRIP);
int counter1 = 0;
float stripZ = 1.0f;
float randY;
for (float x = 0.0f; x < 20.0f; x += 2.0f) {
    if (stripZ == 1.0f) {
        stripZ = -1.0f;
    } else {
        stripZ = 1.0f;
    }
    randY = (Float) randYList.get(counter1);
    counter1 += 1;
    GL11.glVertex3f(x, randY, stripZ);

    Vert currentVert = currentTri.triVerts[triPointIndex];
    currentVert.x = x;
    currentVert.y = randY;
    currentVert.z = stripZ;
    triPointIndex++;
    System.out.println(triList);

    Vector3f normal = new Vector3f();
    float Ux = currentTri.triVerts[1].x - currentTri.triVerts[0].x;
    float Uy = currentTri.triVerts[1].y - currentTri.triVerts[0].y;
    float Uz = currentTri.triVerts[1].z - currentTri.triVerts[0].z;
    float Vx = currentTri.triVerts[2].x - currentTri.triVerts[0].x;
    float Vy = currentTri.triVerts[2].y - currentTri.triVerts[0].y;
    float Vz = currentTri.triVerts[2].z - currentTri.triVerts[0].z;
    normal.x = (Uy * Vz) - (Uz * Vy);
    normal.y = (Uz * Vx) - (Ux * Vz);
    normal.z = (Ux * Vy) - (Uy * Vx);
    GL11.glNormal3f(normal.x, normal.y, normal.z);

    if (triPointIndex == 3) {
        triList.add(currentTri);
        Triangle nextTri = new Triangle();
        nextTri.triVerts[0] = currentTri.triVerts[1];
        nextTri.triVerts[1] = currentTri.triVerts[2];
        currentTri = nextTri;
        triPointIndex = 2;
    }
}
GL11.glEnd();
GL11.glEnd();
You should be setting the normal before calling glVertex3f (...). A call to glVertex* is basically what finalizes a vertex: it associates the current color, normal, texture coordinates, etc. with the vertex at the position you pass and emits a new vertex.
glVertex — specify a vertex
Description
glVertex commands are used within glBegin / glEnd pairs to specify point, line, and polygon vertices. The current color, normal, texture coordinates, and fog coordinate are associated with the vertex when glVertex is called.
When only x and y are specified, z defaults to 0.0 and w defaults to 1.0. When x, y, and z are specified, w defaults to 1.0.
Chances are very good that this is a large part of your problem. Triangle strips are designed to work around implicit winding issues. Every other triangle has reversed winding when you use a strip, but the rasterizer compensates for this by flipping the winding order used for front/back internally on each alternate triangle.
Update:
Understand of course that the rasterizer is smart enough to flip the front/back winding for each alternate triangle when using a strip but your code is not (at least not currently). You need to compensate for the alternately reversed winding when you calculate the normals yourself on the CPU side.
Actually, it's both in one. The direction of the normal depends on the winding used to calculate it. Ultimately, however, it boils down to a normals problem, since that's what determines the lighting calculations.
Winding is also important to OpenGL, but you can't change anything about that in a strip primitive.
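To make that concrete, here is a minimal sketch of the CPU-side compensation, built on the question's own variables (the parity test is illustrative): flip the computed normal for every other triangle in the strip, and emit the normal before the vertex it belongs to.
// Alternate triangles in a strip have reversed winding, so negate the computed
// normal for every other triangle (the triList.size() parity check is illustrative).
if (triList.size() % 2 == 1) {
    normal.x = -normal.x;
    normal.y = -normal.y;
    normal.z = -normal.z;
}
GL11.glNormal3f(normal.x, normal.y, normal.z); // set the current normal first...
GL11.glVertex3f(x, randY, stripZ);             // ...then emit the vertex that uses it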

Box2D origin of multiple fixtures

I think when you apply a force to a body, it is applied at the origin of the body (which could be the center of mass). Now I am trying to create Tetris-like blocks and make them jump by applying a linear impulse like this:
body.applyLinearImpulse(0, 5f, this.body.getPosition().x, this.body.getPosition().y, true);
This works perfectly if the body has only one box fixture, but when you create multiple fixtures the origin gets misplaced, and I can't place it at the center of the fixtures.
Here is a picture of what I mean:
I create fixtures from a matrix like this:
int array[][] = {{0,0,0,0,0},
                 {0,0,1,0,0},
                 {0,1,1,1,0},
                 {0,0,0,0,0},
                 {0,0,0,0,0}};
and I use the array to create fixtures like this:
public void setBody(int[][] blocks) {
    BodyDef def = new BodyDef();
    def.type = BodyType.DynamicBody;
    def.position.set(new Vector2(100 * WORLD_TO_BOX, 100 * WORLD_TO_BOX));
    Body body = world.createBody(def);
    body.setTransform(150 * WORLD_TO_BOX, 200 * WORLD_TO_BOX, -90 * MathUtils.degreesToRadians);

    for (int x = 0; x < 5; x++) { // HARDCODED 5
        for (int y = 0; y < 5; y++) { // HARDCODED 5
            if (blocks[x][y] == 1) {
                PolygonShape poly = new PolygonShape();
                Vector2 v = new Vector2((-5 / 2 + x), (-5 / 2 + y));
                poly.setAsBox(size / 2 * WORLD_TO_BOX, size / 2 * WORLD_TO_BOX, v, 0);
                body.createFixture(poly, 1);
                poly.dispose();
            }
        }
    }
    this.body = body;
}
The WORLD_TO_BOX value is 0.032f, and the size of one block is 32f.
So my question is: how can I manually set the center of mass/origin of my complex multi-fixture body?
I don't believe you get to set the center of mass for a body; it is calculated by the system. From the manual:
You can access the center of mass position in local and world
coordinates. Much of the internal simulation in Box2D uses the center
of mass. However, you should normally not need to access it. Instead
you will usually work with the body transform. For example, you may
have a body that is square. The body origin might be a corner of the
square, while the center of mass is located at the center of the
square.
const b2Vec2& GetPosition() const;
float32 GetAngle() const;
const b2Vec2& GetWorldCenter() const;
const b2Vec2& GetLocalCenter()
I believe you will use GetWorldCenter() as the linear impulse is applied in world coordinates (also per the manual):
You can apply forces, torques, and impulses to a body. When you apply
a force or an impulse, you provide a world point where the load is
applied.
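So, instead of applying the impulse at body.getPosition(), something along these lines should target the computed center of mass (a sketch against the libGDX Box2D API used in the question):
// Apply the jump impulse at the world center of mass, not at the body origin.
Vector2 center = body.getWorldCenter();
body.applyLinearImpulse(0, 5f, center.x, center.y, true);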

3D Picking OpenGL ES 2.0 after model matrix translation

Hey all, I'm trying to implement 3D picking in my program, and it works perfectly if I don't move from the origin. It is perfectly accurate. But if I move the model matrix away from the origin (the view matrix eye is still at 0,0,0), the picking vectors are still drawn from the original location. They should still be drawn from the view matrix eye (0,0,0), but they aren't. Here's some of my code so you can see if you can find out why.
Vector3d near = unProject(x, y, 0, mMVPMatrix, this.width, this.height);
Vector3d far = unProject(x, y, 1, mMVPMatrix, this.width, this.height);
Vector3d pickingRay = far.subtract(near);
//pickingRay.z *= -1;

Vector3d normal = new Vector3d(0, 0, 1);
if (normal.dot(pickingRay) != 0 && pickingRay.z < 0) {
    float t = (-5f - normal.dot(mCamera.eye)) / (normal.dot(pickingRay));
    pickingRay = mCamera.eye.add(pickingRay.scale(t));
    addObject(pickingRay.x, pickingRay.y, pickingRay.z + .5f, Shape.BOX);

    // a line for the picking vector for debugging
    PrimProperties a = new PrimProperties(); // new prim properties for size and center
    Prim result = null;
    result = new Line(a, mCamera.eye, far); // new line object for seeing the look-at vector
    result.createVertices();
    objects.add(result);
}
public static Vector3d unProject(
        float winx, float winy, float winz,
        float[] resultantMatrix,
        float width, float height) {
    winy = height - winy;
    float[] m = new float[16],
            in = new float[4],
            out = new float[4];
    Matrix.invertM(m, 0, resultantMatrix, 0);
    in[0] = (winx / width) * 2 - 1;
    in[1] = (winy / height) * 2 - 1;
    in[2] = 2 * winz - 1;
    in[3] = 1;
    Matrix.multiplyMV(out, 0, m, 0, in, 0);
    if (out[3] == 0)
        return null;
    out[3] = 1 / out[3];
    return new Vector3d(out[0] * out[3], out[1] * out[3], out[2] * out[3]);
}
Matrix.translateM(mModelMatrix, 0, this.diffX, this.diffY, 0); // I use this to move the model matrix based on pinch-zoom input.
Any help would be greatly appreciated! Thanks.
I wonder which algorithm you have implemented. Is it a ray casting approach to the problem?
I didn't focus much on the code itself, but this looks like far too simple an implementation to be a fully operational ray casting solution.
In my humble experience, I would suggest, depending on the complexity of your final project (which I don't know), adopting a color picking solution.
This solution is usually the most flexible and the easiest to implement.
It consists of rendering the objects in your scene with unique flat colors (usually with lighting disabled in your shaders) to a backbuffer or texture, then acquiring the coordinates of the click (touch) and reading the color of the pixel at those coordinates.
Having the color of the pixel and the table of colors of the different objects you rendered makes it possible to determine, from a logical perspective, what the user clicked.
There are other approaches to the object picking problem; this is probably universally recognized as the fastest one.
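To illustrate the read-back step on Android's OpenGL ES 2.0, here is a rough sketch; the flat-color render pass and the color-to-object lookup table are assumed to already exist:
// Read the single pixel under the touch point from the flat-color pass.
// OpenGL's origin is the bottom-left corner, so the Y coordinate is flipped.
ByteBuffer pixel = ByteBuffer.allocateDirect(4).order(ByteOrder.nativeOrder());
GLES20.glReadPixels(touchX, viewportHeight - touchY, 1, 1,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixel);
int r = pixel.get(0) & 0xFF;
int g = pixel.get(1) & 0xFF;
int b = pixel.get(2) & 0xFF;
// Look up (r, g, b) in your table of object colors to find the picked object.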
Cheers
Maurizio
