Collision Detection in TMX Maps using libGDX (Java)

So I'm trying to implement collision detection in my game, and I have a layer in the TMX file called Collision. The libGDX onsite tutorials don't cover interaction with object layers, and it was hard to figure out how to render the map in the first place. This is how I render my screen; I would like to learn how to get my collision layer and then get my sprite to interact with it.
@Override
public void render(float delta) {
    translateCamera();
    Gdx.gl.glClearColor(0, 0, 0, 1);
    Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
    camera.update();
    renderer.setView(camera);
    renderer.render(bgLayers);
    // renderer.render();
    batch.begin();
    batch.draw(playerDirect, Gdx.graphics.getWidth() / 2,
            Gdx.graphics.getHeight() / 2);
    batch.end();
    renderer.render(fgLayers);
}

There is a way to use the object layer. Don't give up hope!
One major advantage of this method over using tile properties is the ease with which you can generate fewer, larger bodies for improved efficiency in Box2D. Plus, even better, those bodies can be any shape you want! Rather than dozens of squared-off bodies, my sample level now has just three funny-shaped (read: more organic-looking) ChainShape-based bodies.
I answered the same question on GameDev the other day, after a serious hunt deep in the jungles of the Web. The tutorial I found didn't quite work for me as-is, so after a little editing I came up with this:
public class MapBodyBuilder {

    // The pixels per tile. If your tiles are 16x16, this is set to 16f.
    private static float ppt = 0;

    public static Array<Body> buildShapes(Map map, float pixels, World world) {
        ppt = pixels;
        MapObjects objects = map.getLayers().get("Obstacles").getObjects();

        Array<Body> bodies = new Array<Body>();

        for (MapObject object : objects) {
            if (object instanceof TextureMapObject) {
                continue;
            }

            Shape shape;
            if (object instanceof RectangleMapObject) {
                shape = getRectangle((RectangleMapObject) object);
            }
            else if (object instanceof PolygonMapObject) {
                shape = getPolygon((PolygonMapObject) object);
            }
            else if (object instanceof PolylineMapObject) {
                shape = getPolyline((PolylineMapObject) object);
            }
            else if (object instanceof CircleMapObject) {
                shape = getCircle((CircleMapObject) object);
            }
            else {
                continue;
            }

            BodyDef bd = new BodyDef();
            bd.type = BodyType.StaticBody;
            Body body = world.createBody(bd);
            body.createFixture(shape, 1);

            bodies.add(body);

            shape.dispose();
        }
        return bodies;
    }

    private static PolygonShape getRectangle(RectangleMapObject rectangleObject) {
        Rectangle rectangle = rectangleObject.getRectangle();
        PolygonShape polygon = new PolygonShape();
        Vector2 size = new Vector2((rectangle.x + rectangle.width * 0.5f) / ppt,
                                   (rectangle.y + rectangle.height * 0.5f) / ppt);
        polygon.setAsBox(rectangle.width * 0.5f / ppt,
                         rectangle.height * 0.5f / ppt,
                         size,
                         0.0f);
        return polygon;
    }

    private static CircleShape getCircle(CircleMapObject circleObject) {
        Circle circle = circleObject.getCircle();
        CircleShape circleShape = new CircleShape();
        circleShape.setRadius(circle.radius / ppt);
        circleShape.setPosition(new Vector2(circle.x / ppt, circle.y / ppt));
        return circleShape;
    }

    private static PolygonShape getPolygon(PolygonMapObject polygonObject) {
        PolygonShape polygon = new PolygonShape();
        float[] vertices = polygonObject.getPolygon().getTransformedVertices();

        float[] worldVertices = new float[vertices.length];

        for (int i = 0; i < vertices.length; ++i) {
            worldVertices[i] = vertices[i] / ppt;
        }

        polygon.set(worldVertices);
        return polygon;
    }

    private static ChainShape getPolyline(PolylineMapObject polylineObject) {
        float[] vertices = polylineObject.getPolyline().getTransformedVertices();
        Vector2[] worldVertices = new Vector2[vertices.length / 2];

        for (int i = 0; i < vertices.length / 2; ++i) {
            worldVertices[i] = new Vector2();
            worldVertices[i].x = vertices[i * 2] / ppt;
            worldVertices[i].y = vertices[i * 2 + 1] / ppt;
        }

        ChainShape chain = new ChainShape();
        chain.createChain(worldVertices);
        return chain;
    }
}
Assuming you've set things up so that the size of your tiles corresponds to 1 square metre (1 square unit, if you prefer) in your Box2D World, the static Bodies this produces will be exactly where you drew them in Tiled. It was so satisfying to see this up and running, believe you me.
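In case it helps, here is a minimal sketch of how the builder could be wired up; the map path, the 16-pixel tile size and the gravity vector are assumptions for illustration:
// Sketch: load the map, create the world, then build static bodies from the "Obstacles" layer.
TiledMap tiledMap = new TmxMapLoader().load("maps/level1.tmx"); // hypothetical path
World world = new World(new Vector2(0, -9.8f), true);
Array<Body> obstacleBodies = MapBodyBuilder.buildShapes(tiledMap, 16f, world); // 16 px per tile assumed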

I'd recommend adding a blocked property to the actual tiles themselves - you can add tile properties via the Tiled editor on the tileset, and then retrieve them from the tileset at runtime. I'm going to quote the documentation:
A TiledMap contains one or more TiledMapTileSet instances. A tile set contains a number of TiledMapTile instances. There are multiple implementations of tiles, e.g. static tiles, animated tiles etc. You can also create your own implementation for special purposes.
Cells in a tile layer reference these tiles. Cells within a layer can reference tiles of multiple tile sets. It is however recommended to stick to a single tile set per layer to reduce texture switches.
Specifically, call getProperties on the tile in a tileset. This will retrieve the properties; you can then check for your custom attribute to tell whether a particular tile is blocked, and implement your own collision logic from there.
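As a rough sketch (assuming your collision layer is a tile layer named "Collision" and that you added a "blocked" property to the relevant tiles in the tileset), the check could look like this:
// Sketch: returns true if the cell at (tileX, tileY) holds a tile marked "blocked".
// The layer name "Collision" and the property key "blocked" are assumptions from this setup.
private boolean isBlocked(TiledMap map, int tileX, int tileY) {
    TiledMapTileLayer layer = (TiledMapTileLayer) map.getLayers().get("Collision");
    TiledMapTileLayer.Cell cell = layer.getCell(tileX, tileY);
    if (cell == null || cell.getTile() == null) {
        return false;
    }
    return cell.getTile().getProperties().containsKey("blocked");
}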


LibGDX Box2D: Cannot get Fixture to render

I have recently been trying to get back into LibGDX's version of Box2D, so I looked back at a demo I created a few months ago. My code looks fine, and according to my Google search results it is fine, but for the life of me I cannot get the Fixture to render.
Here is my minimalist example code. Note: I built a wrapper around the LibGDX Game class; it should be self-explanatory:
public class TestBox2D extends EGGame {

    int width;
    int height;

    static final Vector2 ZERO_GRAVITY = new Vector2(0f, 0f);

    OrthographicCamera camera;
    World world;
    Body body;
    Box2DDebugRenderer box2dDebugRenderer;
    RayHandler rayHandler;

    ... // Removed constructor, nothing special here.

    @Override
    protected void init() {
        width = Gdx.graphics.getWidth() / 2;
        height = Gdx.graphics.getHeight() / 2;

        camera = new OrthographicCamera(width, height);
        camera.position.set(width / 2, height / 2, 0);
        camera.update();

        world = new World(ZERO_GRAVITY, true);
        box2dDebugRenderer = new Box2DDebugRenderer();
        rayHandler = new RayHandler(world);
        rayHandler.setCombinedMatrix(camera.combined);

        // creating Body
        BodyDef bodyDef = new BodyDef();
        bodyDef.type = BodyDef.BodyType.StaticBody;
        bodyDef.position.set(width / 2, height / 2);
        body = world.createBody(bodyDef);

        CircleShape shape = new CircleShape();
        shape.setRadius(1f);
        FixtureDef fixtureDef = new FixtureDef();
        fixtureDef.shape = shape;
        body.createFixture(fixtureDef);
    }

    @Override
    protected void updateGame() {
        world.step(1f / 30f, 6, 2);
        rayHandler.update();
    }

    @Override
    protected void renderGame() {
        box2dDebugRenderer.render(world, camera.combined);
        rayHandler.render();
    }

    @Override
    public void dispose() {
        world.dispose();
    }

    ... // Removed main method, nothing special here.
}
Note that world.getBodyCount() and world.getFixtureCount() both return 1.
Probable causes of the problem:
1. Check whether you have called render on the fixtures in either the RayHandler class or the Box2DDebugRenderer class.
2. You have not set the position of the circle shape. It might be lying on the edge and remain out of camera bounds.
3. Check your units. The radius of the circle might be so small relative to the viewport that it is invisible, or so large that it covers the entire screen.
Hope this helps.
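For points 2 and 3, a sketch of what I mean (the 16-pixel radius is an arbitrary assumption, chosen only so the circle is clearly visible with the pixel-sized camera above):
CircleShape shape = new CircleShape();
shape.setRadius(16f);                     // big enough to see when the camera uses pixel units
shape.setPosition(new Vector2(0f, 0f));   // explicit local offset from the body's position
FixtureDef fixtureDef = new FixtureDef();
fixtureDef.shape = shape;
body.createFixture(fixtureDef);
shape.dispose();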
You can try the following, one of the things Tanmay Patil mentioned; it resizes the body:
Example:
Variables in your class:
long time = 0;
float testSize = 0;
Call this in your render method:
time += (long) (Gdx.graphics.getDeltaTime() * 1000000000L); // accumulate frame time in nanoseconds
if (time >= 100000000) { // roughly every 0.1 s
    time = 0;
    testSize += 0.1f;
    body.getFixtureList().first().getShape().setRadius(testSize);
}
If you don't notice any change, try the opposite:
time += (long) (Gdx.graphics.getDeltaTime() * 1000000000L);
if (time >= 100000000) {
    time = 0;
    testSize -= 0.1f;
    body.getFixtureList().first().getShape().setRadius(testSize);
}
Edit:
On the other hand, this does not affect the question, but you can dispose of the shape here if you want:
    CircleShape shape = new CircleShape();
    shape.setRadius(1f);
    FixtureDef fixtureDef = new FixtureDef();
    fixtureDef.shape = shape;
    body.createFixture(fixtureDef);
    shape.dispose();
}
Fixed.
The issue was that I was calling RayHandler#render() after Box2DDebugRenderer.render(...) while the RayHandler didn't have any Light objects (adding a PointLight allowed it to render). Whatever the reason, it's weird, but calling RayHandler#render() first allows it to work. This might be a bug in LibGDX that I will report.
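In other words, something along these lines worked for me (a sketch; the PointLight parameters are arbitrary):
// In init(): give the RayHandler at least one Light so it has something to render.
new PointLight(rayHandler, 32, Color.WHITE, 100f, width / 2f, height / 2f);

// In renderGame(): let the RayHandler render first, then the debug renderer on top.
rayHandler.render();
box2dDebugRenderer.render(world, camera.combined);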

Box2D setAngularVelocity does not work for high speeds

I am using Box2D for a game, and although I use large constants to set the angular velocity, the fastest speed I can get is 1 revolution per 3.86 seconds.
I checked my source code against the following thread, and everything matches what was suggested both by users there and in tutorials:
setAngularVelocity rotates really slowly
However, I then noticed the following unresolved thread:
http://www.reddit.com/r/libgdx/comments/1qr2m3/the_strangest_libgdxbox2d_behaviour/
and suspected that it might actually be the problem. Here is my dispose method:
public void dispose() {
    // Get rid of everything!
    Assets.Clear();
    GameEngine.Clear();
    BallMap.clear();
    PlayerMap.clear();
    shapeRenderer.dispose();
    debugRenderer.dispose();
    world.dispose();
    batch.dispose();
    font.dispose();
}
They are all reinitialized at the beginning as follows:
this.game = game;
this.cameraWidth = cameraWidth*pixelRatio;
this.cameraHeight = cameraHeight*pixelRatio;
batch = new SpriteBatch();
shapeRenderer = new ShapeRenderer();
stateTime = 0F;
Scores = new Integer[]{0, 0};
debugRenderer = new Box2DDebugRenderer();
world = new World(new Vector2(0, 0), true); //Create a world with no gravity
GameEngine.setContactListener(world);
I navigate through screens with the following code:
public void create() {
    scene_menu = new MainMenuScreen(this, cameraWidth, cameraHeight);
    setScreen(scene_menu);
}

public void swtogame() {
    scene_menu.dispose();
    scene_game = new MatchScreen(this, cameraWidth, cameraHeight);
    setScreen(scene_game);
}

public void swtomenu() {
    scene_game.dispose();
    scene_menu = new MainMenuScreen(this, cameraWidth, cameraHeight);
    setScreen(scene_menu);
}
The way I initialize objects:
public Object(World world, short category, short mask, float x, float y, float radius, Sprite image,
              float maxSpeed, float frictionStrength, float linearDamping, float angularDamping, boolean movable,
              float elasticity, float mass) {
    this.world = world;
    this.category = category;
    this.mask = mask;

    // We set our body type
    this.bodyDef = new BodyDef();
    if (movable) { bodyDef.type = BodyType.DynamicBody; } else { bodyDef.type = BodyType.StaticBody; }

    // Set body's starting position in the world
    bodyDef.position.set(x, y);
    bodyDef.linearDamping = linearDamping;
    bodyDef.angularDamping = angularDamping;

    // Create our body in the world using our body definition
    this.body = world.createBody(bodyDef);

    // Create a circle shape and set its radius
    CircleShape circle = new CircleShape();
    circle.setRadius(radius);

    // Create a fixture definition to apply our shape to
    fixtureDef = new FixtureDef();
    fixtureDef.shape = circle;
    fixtureDef.density = (float) (mass / (Math.PI * radius * radius));
    fixtureDef.friction = frictionStrength;
    fixtureDef.restitution = elasticity;
    fixtureDef.filter.categoryBits = category;
    fixtureDef.filter.maskBits = mask;

    // Create our fixture and attach it to the body
    this.fixture = body.createFixture(fixtureDef);

    // BodyDef and FixtureDef don't need disposing, but shapes do.
    circle.dispose();

    ... // unrelated functions after that
}
Am I disposing correctly? Is this a bug? Is there any way to get around it and use setAngularVelocity properly?
Because you haven't shown much code, I can't be 100% sure that I'm right, but I think you are hitting the built-in maximum movement limit of 2.0 units per time step. This means that at a typical framerate of 60 Hz, a body covering 2 units per timestep is moving at 120 m/s or 432 km/h (270 mph). Unfortunately, it seems there is no direct way to change this limit from Java, because it is defined in the native C++ libraries.
But I think the real problem is that you have the wrong scale. Box2D uses MKS (metres, kilograms, and seconds), and you may have used pixels instead of metres. The Box2D FAQ suggests using
objects [that are] between 0.1 - 10 meters
otherwise you can run into strange situations.
See http://www.iforce2d.net/b2dtut/gotchas#speedlimit
and https://code.google.com/p/box2d/wiki/FAQ
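A common way to deal with this (a sketch; the 32-pixels-per-metre ratio is just an assumption) is to keep a single conversion constant and hand Box2D only metre values:
// Sketch: one place to convert between screen pixels and Box2D metres.
public static final float PIXELS_PER_METER = 32f; // assumed ratio: 32 px == 1 m

public static float toMeters(float pixels) {
    return pixels / PIXELS_PER_METER;
}

public static float toPixels(float meters) {
    return meters * PIXELS_PER_METER;
}

// e.g. a 48-pixel-radius ball becomes a 1.5 m circle, well inside the recommended 0.1 - 10 m range:
// circle.setRadius(toMeters(48f));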
I just found the problem, and it was pretty simple. I'm just going to post this here for future googlers:
The object was actually rotating properly; the problem was in my drawing method. I didn't convert from radians to degrees for batch.draw, which expects degrees, so the rotation I passed in radians appeared far too slow. I know, such an amateur mistake! Thanks a lot for your time.
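For anyone hitting the same thing, the fix boils down to something like this (a sketch; the region, position and size arguments depend on your own draw call):
// body.getAngle() is in radians, but SpriteBatch.draw() expects its rotation in degrees.
float angleDeg = body.getAngle() * MathUtils.radiansToDegrees;
batch.draw(region, x, y,             // position
        width / 2f, height / 2f,     // origin to rotate around
        width, height,               // size
        1f, 1f,                      // scale
        angleDeg);                   // rotation in degrees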

Shaky unstable JBox2D bodies

Edit 2: http://youtu.be/KiCzUZ69gpA - as you can see in this video, the shaking effect is amplified when I also render some text for each body. Observe how the ground body (blue) shakes violently when each body has some text rendered near it, and how it does not when text rendering is commented out. This has to be connected!
Edit: I've made two important additions to the original question: I've added my rendering functions, and the camera (translation) methods, and I think that the error is actually there, not in JBox2D.
I'm trying to simulate and render a lot of random bodies (2-20) connected with RevoluteJoints. One body can be connected to multiple others, and there are no separate constructions, i.e. all the bodies are interconnected.
However, when watching the live rendering, it is very shaky and unstable. By that I mean that bodies' positions (or maybe angles) seem to be randomly fluctuating for no apparent reason, making the simulation look unstable.
Here's a video of what I'm observing:
http://youtu.be/xql-ypso1ZU
Notice the middle square and the rotating rectangle. The middle square is shifting its position back and forth slightly at seemingly random intervals, and the rotating rectangle is very jittery (take a look at the point it is rotating about).
What could this effect be due to? Is it some known issue with (J)Box2D, or is it an issue with my rendering system? I think I may have somehow misconfigured the physics engine, but some floating point math in the rendering system could also be the culprit.
Here's how I'm creating the bodies and the joints:
private Body setPart(Part part) {
    // body definition
    BodyDef bd = new BodyDef();
    bd.position.set(0f, -10f);
    bd.angle = 0f;
    bd.type = BodyType.DYNAMIC;

    // define shape of the body.
    PolygonShape shape = new PolygonShape();
    shape.setAsBox(part.width / 2, part.height / 2);

    // define fixture of the body.
    FixtureDef fd = new FixtureDef();
    Filter filter = new Filter();
    filter.groupIndex = -1;
    fd.filter = filter;
    fd.shape = shape;
    fd.density = 0.5f;
    fd.friction = 0.3f;
    fd.restitution = 0.5f;

    // create the body and add fixture to it
    Body body = world.createBody(bd);
    body.createFixture(fd);
    body.setUserData(new PartUserData());

    return body;
}

private void setJoint(PartJoint partJoint) {
    Body bodyOne = partToBody.get(partJoint.partOne);
    Body bodyTwo = partToBody.get(partJoint.partTwo);

    RevoluteJointDef jointDef = new RevoluteJointDef();
    jointDef.bodyA = bodyOne;
    jointDef.bodyB = bodyTwo;
    jointDef.localAnchorA = partJoint.partOne.getAnchor(partJoint.percentOne);
    jointDef.localAnchorB = partJoint.partTwo.getAnchor(partJoint.percentTwo);

    // rotation
    jointDef.lowerAngle = GeomUtil.circle(partJoint.rotateFrom);
    jointDef.upperAngle = GeomUtil.circle(partJoint.rotateTo);
    jointDef.enableLimit = true;
    jointDef.maxMotorTorque = 10.0f; // TODO limit maximum torque
    jointDef.motorSpeed = GeomUtil.circle(partJoint.angularVelocity);
    jointDef.enableMotor = true;

    world.createJoint(jointDef);
}
The time step is 0.01f.
Here is how I draw bodies:
private void drawBody(Body body) {
    // setup the transforms
    Vector position = camera.translate(body.getPosition());
    currentGraphics.translate(position.x, position.y);
    currentGraphics.rotate(body.getAngle());

    // do the actual rendering
    for (Fixture fixture = body.getFixtureList(); fixture != null; fixture = fixture.getNext()) {
        PolygonShape shape = (PolygonShape) fixture.getShape();
        if (body.getUserData() instanceof PartUserData) {
            fillShape(shape, partFillColor);
            currentGraphics.setStroke(partOutlineStroke);
            outlineShape(shape, partOutlineColor);
        } else {
            fillShape(shape, groundFillColor);
            outlineShape(shape, groundOutlineColor);
        }
    }

    // clean up
    currentGraphics.rotate(-body.getAngle());
    currentGraphics.translate(-position.x, -position.y);
    currentGraphics.setColor(defaultColor);
    currentGraphics.setStroke(defaultStroke);
}
I think that the issue might be the way I'm handling rendering of all the bodies.
This is the algorithm for each body:
1. Translate the Graphics2D object to its position
2. Rotate it by body.getAngle()
3. Render the body
4. Rotate the graphics back
5. Translate the graphics back
Could it be that amongst all these transforms something goes wrong?
When I removed the calls to camera's methods, the effect seems to have been reduced. These are the relevant camera methods:
public Vector translate(Vec2 worldPosition) {
    Vector point = new Vector();
    point.x = (int) (worldPosition.x * pixelsPerMeter) - position.x;
    point.y = (int) (worldPosition.y * pixelsPerMeter) - position.y;
    point.x = (int) (point.x * zoom);
    point.y = (int) (point.y * zoom);
    point.x += renderer.getWidth() / 2;
    point.y += renderer.getHeight() / 2;
    return point;
}

public Vector translateRelative(Vec2 worldPosition) {
    Vector point = new Vector();
    point.x = (int) (worldPosition.x * pixelsPerMeter);
    point.y = (int) (worldPosition.y * pixelsPerMeter);
    point.x = (int) (point.x * zoom);
    point.y = (int) (point.y * zoom);
    return point;
}
But what part of them would cause an issue?
tl;dr: I've found a solution, but haven't identified the exact problem. Quite sure it's with my translation methods.
It seems that I have identified the scope of the problem and the solution, but I am still not sure what exactly is causing this behavior.
In those translation formulas I posted in the question, all JBox2D vectors are multiplied by a scale called pixelsPerMeter. When I set this scale to a low value, the shaking effect occurs (it's also important to note that there is another factor as well, called zoom, which is usually greater for a lower pixelsPerMeter).
So it could be that, when multiplying by a relatively low pixelsPerMeter, I have to multiply by a higher zoom factor, and since I'm converting to ints at both steps, rounding errors creep in. Please see the translation methods I've posted in the question.
Here's a video that demonstrates this: (to be uploaded)
Notice that when I set the pixelsPerMeter to 250, shaking seems to be gone, while when I set it to 25, it's quite visible.
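If the early int conversion is indeed the culprit, one way to test that theory (a sketch only, same mapping as my translate method above) is to keep everything in floats and round once at the very end:
// Sketch: identical mapping to translate(), but intermediate math stays in floats so
// low pixelsPerMeter / high zoom combinations don't accumulate rounding error.
public Vector translate(Vec2 worldPosition) {
    float x = (worldPosition.x * pixelsPerMeter - position.x) * zoom + renderer.getWidth() / 2f;
    float y = (worldPosition.y * pixelsPerMeter - position.y) * zoom + renderer.getHeight() / 2f;
    Vector point = new Vector();
    point.x = Math.round(x);
    point.y = Math.round(y);
    return point;
}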
Your solution was the correct one. You are not supposed to use pixel units for the Box2D physics engine :)
http://box2d.org/2011/12/pixels/
and
https://code.google.com/p/box2d/wiki/FAQ#How_do_I_convert_pixels_to_meters?

Drawing filled polygon with libGDX

I want to draw some (filled) polygons with libGDX. They shouldn't be filled with a graphic/texture. I have only the vertices of the polygon (a closed path) and tried to visualize it with meshes, but at some point this is not the best solution, I think.
My code for a rectangle is:
private Mesh mesh;

@Override
public void create() {
    if (mesh == null) {
        mesh = new Mesh(
                true, 4, 0,
                new VertexAttribute(Usage.Position, 3, "a_position")
        );

        mesh.setVertices(new float[] {
                -0.5f, -0.5f, 0,
                 0.5f, -0.5f, 0,
                -0.5f,  0.5f, 0,
                 0.5f,  0.5f, 0
        });
    }
}

// ...

@Override
public void render() {
    Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
    mesh.render(GL10.GL_TRIANGLE_STRIP, 0, 4);
}
Is there a function or something to draw filled polygons in an easier way?
Since recent updates of LibGDX, @Rus's answer uses deprecated functions. However, I give him/her credit for it; here is the updated version:
PolygonSprite poly;
PolygonSpriteBatch polyBatch = new PolygonSpriteBatch(); // To assign at the beginning
Texture textureSolid;

// Creating the color filling (but textures would work the same way)
Pixmap pix = new Pixmap(1, 1, Pixmap.Format.RGBA8888);
pix.setColor(0xDEADBEFF); // DE is red, AD is green and BE is blue.
pix.fill();
textureSolid = new Texture(pix);
PolygonRegion polyReg = new PolygonRegion(new TextureRegion(textureSolid),
        new float[] {      // Four vertices
                0, 0,      // Vertex 0         3--2
                100, 0,    // Vertex 1         | /|
                100, 100,  // Vertex 2         |/ |
                0, 100     // Vertex 3         0--1
        }, new short[] {
                0, 1, 2,   // Two triangles using vertex indices.
                0, 2, 3    // Take care of the counter-clockwise direction.
});
poly = new PolygonSprite(polyReg);
poly.setOrigin(a, b); // a, b: the origin of your choice (not defined in this snippet)
For good triangulation algorithms, if your polygon is not convex, see the almost-linear ear clipping algorithm from Toussaint (1991):
Efficient triangulation of simple polygons, Godfried Toussaint, 1991
Here is a libGDX example which draws a 2D concave polygon.
Define class members for PolygonSprite and PolygonSpriteBatch:
PolygonSprite poly;
PolygonSpriteBatch polyBatch;
Texture textureSolid;
Create instances; a 1x1 texture with a red pixel is used as a workaround. An array of (x, y) coordinates is used to initialize the polygon.
ctor() {
    textureSolid = makeTextureBox(1, 0xFFFF0000, 0, 0);
    float a = 100;
    float b = 100;
    PolygonRegion polyReg = new PolygonRegion(new TextureRegion(textureSolid),
            new float[] {
                    a * 0, b * 0,
                    a * 0, b * 2,
                    a * 3, b * 2,
                    a * 3, b * 0,
                    a * 2, b * 0,
                    a * 2, b * 1,
                    a * 1, b * 1,
                    a * 1, b * 0,
            });
    poly = new PolygonSprite(polyReg);
    poly.setOrigin(a, b);
    polyBatch = new PolygonSpriteBatch();
}
Draw and rotate polygon
void draw() {
    super.draw();
    polyBatch.begin();
    poly.draw(polyBatch);
    polyBatch.end();
    poly.rotate(1.1f);
}
I believe the ShapeRenderer class now has a polygon method for vertex defined polygons:
ShapeRenderer.polygon()
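As far as I know, ShapeRenderer.polygon() only draws the outline (it must be used with ShapeType.Line), so for a filled polygon you still need one of the other approaches here. A minimal sketch of its use ('camera' is assumed from your own setup):
float[] vertices = { 0, 0, 100, 0, 100, 100, 50, 150, 0, 100 };
ShapeRenderer shapeRenderer = new ShapeRenderer();
shapeRenderer.setProjectionMatrix(camera.combined);
shapeRenderer.begin(ShapeRenderer.ShapeType.Line);
shapeRenderer.setColor(Color.RED);
shapeRenderer.polygon(vertices); // outlines the polygon; does not fill it
shapeRenderer.end();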
You can use the ShapeRenderer API to draw simple, solid-color shapes with libGDX.
The code you've given is a reasonable way to draw solid-color polygons too. It's much more flexible than ShapeRenderer, but also a good bit more complicated. You'll need to use glColor4f to set the color, or add a Usage.Color attribute to each vertex. See the SubMeshColorTest example for more details on the first approach and the MeshColorTexture example for details on the second approach.
Another option to think about is using sprite textures. If you're only interested in simple solid-color objects, you can use a very simple 1x1 texture of a single color and let the system stretch that across the sprite. Much of libGDX and the underlying hardware are really optimized for rendering textures, so you may find it easier to use even if you're not really taking advantage of the texture contents. (You can even use a 1x1 white texture, and then use a SpriteBatch with setColor and draw() to draw different color rectangles easily.)
You can also mix and match the various approaches, too.
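A quick sketch of the 1x1 white texture trick mentioned above (only good for axis-aligned rectangles, but very cheap; 'batch' is assumed to be your SpriteBatch):
// A 1x1 white texture, tinted per draw call via SpriteBatch.setColor().
Pixmap pixmap = new Pixmap(1, 1, Pixmap.Format.RGBA8888);
pixmap.setColor(Color.WHITE);
pixmap.fill();
Texture whiteTexture = new Texture(pixmap);
pixmap.dispose();

batch.begin();
batch.setColor(Color.ORANGE);               // tint
batch.draw(whiteTexture, 50, 50, 200, 100); // a 200x100 orange rectangle
batch.setColor(Color.WHITE);                // reset the tint
batch.end();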
Use a triangulation algorithm and then draw all triangles as GL_TRIANGLE_STRIP:
http://www.personal.psu.edu/cxc11/AERSP560/DELAUNEY/13_Two_algorithms_Delauney.pdf
I just wanted to share my related solution with you, namely for implementing and drawing a walk zone with scene2d. I basically had to put together the different suggestions from the other posts:
1) The WalkZone:
import com.badlogic.gdx.graphics.Pixmap;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.g2d.PolygonRegion;
import com.badlogic.gdx.graphics.g2d.TextureRegion;
import com.badlogic.gdx.math.EarClippingTriangulator;
import com.badlogic.gdx.math.Polygon;
import com.mygdx.game.MyGame;
public class WalkZone extends Polygon {

    private PolygonRegion polygonRegion = null;

    public WalkZone(float[] vertices) {
        super(vertices);
        if (MyGame.DEBUG) {
            Pixmap pix = new Pixmap(1, 1, Pixmap.Format.RGBA8888);
            pix.setColor(0x00FF00AA);
            pix.fill();
            polygonRegion = new PolygonRegion(new TextureRegion(new Texture(pix)),
                    vertices, new EarClippingTriangulator().computeTriangles(vertices).toArray());
        }
    }

    public PolygonRegion getPolygonRegion() {
        return polygonRegion;
    }
}
2) The Screen:
You can then add a listener to the desired Stage:
myStage.addListener(new InputListener() {
    @Override
    public boolean touchDown(InputEvent event, float x, float y, int pointer, int button) {
        if (walkZone.contains(x, y)) player.walkTo(x, y);
        // or even directly: player.addAction(moveTo ...
        return super.touchDown(event, x, y, pointer, button);
    }
});
3) The implementation:
The array passed to the WalkZone constructor is a set of x, y, x, y... points. If you put them counter-clockwise, it works (I didn't check the other way, nor do I know exactly how it works); for example, this generates a 100x100 square:
yourScreen.walkZone = new WalkZone(new float[]{0, 0, 100, 0, 100, 100, 0, 100});
In my project it works like a charm, even with very intricate polygons. Hope it helps!!
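In case it is useful, here is roughly how the debug region from step 1 can be drawn (a sketch; 'polyBatch' is assumed to be a PolygonSpriteBatch owned by the screen):
// Draw the WalkZone overlay after the stage has been drawn.
if (MyGame.DEBUG && walkZone.getPolygonRegion() != null) {
    polyBatch.setProjectionMatrix(myStage.getCamera().combined);
    polyBatch.begin();
    polyBatch.draw(walkZone.getPolygonRegion(), 0, 0);
    polyBatch.end();
}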
Most answers suggest triangulation, which is fine, but you can also do it using the stencil buffer. It handles both convex and concave polygons. This may be a better solution if your polygon changes a lot, since otherwise you'd have to do triangulation every frame. Also, this solution properly handles self intersecting polygons, which EarClippingTriangulator does not.
FloatArray vertices = ... // The polygon x,y pairs.
Color color = ... // The color to draw the polygon.
ShapeRenderer shapes = ...
ImmediateModeRenderer renderer = shapes.getRenderer();
Gdx.gl.glClearStencil(0);
Gdx.gl.glClear(GL20.GL_STENCIL_BUFFER_BIT);
Gdx.gl.glEnable(GL20.GL_STENCIL_TEST);
Gdx.gl.glStencilFunc(GL20.GL_NEVER, 0, 1);
Gdx.gl.glStencilOp(GL20.GL_INVERT, GL20.GL_INVERT, GL20.GL_INVERT);
Gdx.gl.glColorMask(false, false, false, false);
renderer.begin(shapes.getProjectionMatrix(), GL20.GL_TRIANGLE_FAN);
renderer.vertex(vertices.get(0), vertices.get(1), 0);
for (int i = 2, n = vertices.size; i < n; i += 2)
    renderer.vertex(vertices.get(i), vertices.get(i + 1), 0);
renderer.end();
Gdx.gl.glColorMask(true, true, true, true);
Gdx.gl.glStencilOp(GL20.GL_ZERO, GL20.GL_ZERO, GL20.GL_ZERO);
Gdx.gl.glStencilFunc(GL20.GL_EQUAL, 1, 1);
Gdx.gl.glEnable(GL20.GL_BLEND);
shapes.setColor(color);
shapes.begin(ShapeType.Filled);
shapes.rect(-9999999, -9999999, 9999999 * 2, 9999999 * 2);
shapes.end();
Gdx.gl.glDisable(GL20.GL_STENCIL_TEST);
To use the stencil buffer, you must specify the number of bits for the stencil buffer when your app starts. For example, here is how to do that using the LWJGL2 backend:
LwjglApplicationConfiguration config = new LwjglApplicationConfiguration();
config.stencil = 8;
new LwjglApplication(new YourApp(), config);
For more information on this technique, try one of these links:
http://commaexcess.com/articles/7/concave-polygon-triangulation-shortcut
http://glprogramming.com/red/chapter14.html#name13
http://what-when-how.com/opengl-programming-guide/drawing-filled-concave-polygons-using-the-stencil-buffer-opengl-programming/

3D Picking OpenGL ES 2.0 after model matrix translation

Hey all, I'm trying to implement 3D picking into my program, and it works perfectly if I don't move from the origin - it is perfectly accurate. But if I move the model matrix away from the origin (the view matrix eye is still at 0,0,0), the picking vectors are still drawn from the original location. They should still be drawn from the view matrix eye (0,0,0), but they aren't. Here's some of my code, to see if you can find out why:
Vector3d near = unProject(x, y, 0, mMVPMatrix, this.width, this.height);
Vector3d far = unProject(x, y, 1, mMVPMatrix, this.width, this.height);
Vector3d pickingRay = far.subtract(near);
//pickingRay.z *= -1;
Vector3d normal = new Vector3d(0, 0, 1);
if (normal.dot(pickingRay) != 0 && pickingRay.z < 0) {
    float t = (-5f - normal.dot(mCamera.eye)) / (normal.dot(pickingRay));
    pickingRay = mCamera.eye.add(pickingRay.scale(t));
    addObject(pickingRay.x, pickingRay.y, pickingRay.z + .5f, Shape.BOX);

    // a line for the picking vector, for debugging
    PrimProperties a = new PrimProperties(); // new prim properties for size and center
    Prim result = null;
    result = new Line(a, mCamera.eye, far); // new line object for seeing the look-at vector
    result.createVertices();
    objects.add(result);
}
public static Vector3d unProject(
        float winx, float winy, float winz,
        float[] resultantMatrix,
        float width, float height) {
    winy = height - winy;
    float[] m = new float[16],
            in = new float[4],
            out = new float[4];
    Matrix.invertM(m, 0, resultantMatrix, 0);
    in[0] = (winx / width) * 2 - 1;
    in[1] = (winy / height) * 2 - 1;
    in[2] = 2 * winz - 1;
    in[3] = 1;
    Matrix.multiplyMV(out, 0, m, 0, in, 0);
    if (out[3] == 0)
        return null;
    out[3] = 1 / out[3];
    return new Vector3d(out[0] * out[3], out[1] * out[3], out[2] * out[3]);
}
Matrix.translateM(mModelMatrix, 0, this.diffX, this.diffY, 0); // I use this to move the model matrix based on pinch-zooming.
Any help would be greatly appreciated! Thanks.
I wonder which algorithm you have implemented. Is it a ray-casting approach to the problem?
I didn't focus much on the code itself, but it looks like a way too simple implementation to be a fully operational ray-casting solution.
In my humble experience, I would suggest, depending on the complexity of your final project (which I don't know), adopting a color-picking solution.
This solution is usually the most flexible and the easiest to implement.
It consists of rendering the objects in your scene with unique flat colors (usually with lighting disabled in your shaders as well) to a backbuffer - a texture - then taking the coordinates of the click (touch) and reading the color of the pixel at those coordinates.
Having the color of the pixel and the table of colors of the different objects you rendered makes it possible to determine what the user clicked on from a logical perspective.
There are other approaches to the object-picking problem; this is probably universally recognized as the fastest one.
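To make the idea a bit more concrete, here is a rough sketch of the read-back step in plain GL ES 2.0 (Android) terms; the flat-color render pass itself is omitted, and touchX, touchY and viewHeight are assumed to come from your own input handling and surface size:
// After rendering every object with its own flat color, read back the pixel under the
// touch point. GL's origin is bottom-left, hence the Y flip with the view height.
ByteBuffer pixel = ByteBuffer.allocateDirect(4).order(ByteOrder.nativeOrder());
GLES20.glReadPixels((int) touchX, (int) (viewHeight - touchY), 1, 1,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixel);
int r = pixel.get(0) & 0xFF;
int g = pixel.get(1) & 0xFF;
int b = pixel.get(2) & 0xFF;
// Look the (r, g, b) triple up in your color -> object table to find what was picked.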
Cheers
Maurizio
