I'm trying to make a game where the player (the circle) has to collect some stars. The stars are at different positions, and to collect them the player must draw ramps to reach them. Picture below.
http://3w-bg.org/game/pic.PNG
The red line is where the user has drawn on the screen.
So I capture the coordinates when the user touches and drags on the screen, and then I use these coordinates to create a ChainShape for the line. The problem is that the line is drawn nowhere near the touched area. Picture below.
http://3w-bg.org/game/pic2.PNG
To my understanding, the world and the screen positions are not the same. So how can I draw the ChainShape line exactly where the user has touched? I tried camera.project/unproject but that didn't help.
Usually when using Box2D you should have some kind of pixel-to-meter ratio defined. This keeps the coordinates in your physics world small, which preserves numeric stability.
When using a Camera and a constant PIXEL_TO_METER to convert the values, you can convert your coordinates like this:
public static Vector2 screenToPhysics(Camera camera, Vector2 screenPos) {
    // Unproject from screen space (y-down pixels) into camera/world space ...
    Vector3 worldPos = camera.unproject(new Vector3(screenPos.x, screenPos.y, 0));
    // ... then scale down into physics (meter) units.
    return new Vector2(worldPos.x, worldPos.y).scl(1f / PIXEL_TO_METER);
}

public static Vector2 physicsToScreen(Camera camera, Vector2 physicsPos) {
    // Scale physics (meter) units back up to world/pixel units ...
    Vector3 worldPos = new Vector3(physicsPos.x, physicsPos.y, 0).scl(PIXEL_TO_METER);
    // ... then project from world space into screen space.
    Vector3 screenPos = camera.project(worldPos);
    return new Vector2(screenPos.x, screenPos.y);
}
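For the original question, the converted points can then be fed straight into a ChainShape. A minimal sketch, assuming a Box2D World field, a list of captured drag points, and the helper name createRampBody (none of which are from the original code):

import java.util.List;

import com.badlogic.gdx.graphics.Camera;
import com.badlogic.gdx.math.Vector2;
import com.badlogic.gdx.physics.box2d.Body;
import com.badlogic.gdx.physics.box2d.BodyDef;
import com.badlogic.gdx.physics.box2d.ChainShape;
import com.badlogic.gdx.physics.box2d.World;

// Builds a static chain body from the raw screen points captured while the user drags.
public static Body createRampBody(World world, Camera camera, List<Vector2> touchPoints) {
    // Convert every captured screen point into physics (meter) coordinates.
    Vector2[] vertices = new Vector2[touchPoints.size()];
    for (int i = 0; i < touchPoints.size(); i++) {
        vertices[i] = screenToPhysics(camera, touchPoints.get(i));
    }

    ChainShape chain = new ChainShape();
    chain.createChain(vertices); // needs at least two points

    BodyDef bodyDef = new BodyDef();
    bodyDef.type = BodyDef.BodyType.StaticBody;
    Body body = world.createBody(bodyDef);
    body.createFixture(chain, 0f);

    chain.dispose(); // the body keeps its own copy of the shape data
    return body;
}

When rendering the drawn line, run the same physics vertices back through physicsToScreen so the visible line and the collision shape stay in the same place.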
I'm using rectangles for collision detection and a rectangle is created every 3 seconds. I want the rectangle to move upward just like my sprite, but the .translateY() method can't be used on a Rectangle.
This is what I did to my sprites stored in an ArrayList:
for (Sprite sprite : mySprite) {
    sprite.translateY(deltaTime * movementSpeed);
}
And this is what I did to the rectangles, which does not work:
for (Rectangle rect : myRect) {
    rect.setY(deltaTime * movementSpeed);
}
It is possible that your rectangle is being drawn at the position you tell it, but setY is not the same as translateY. A simple explanation:
If, for example, deltaTime = 1 approx. and movementSpeed = 5, you are always setting the rectangle to the same position, with only a minimal variation from delta that you may simply not notice:
your position is rect.setY(5) all the time.
Try this:
for (Rectangle rect : myRect) {
    rect.setY(rect.getY() + (deltaTime * movementSpeed));
}
I hope this helps.
I rotated my sprite 90 degrees and I want to do the same with my rectangle to be able to use them for collision, but the rotate() method is not available on rectangles.
This is what I did:
treeSpr=new Sprite(new Texture(Gdx.files.internal("tree.png")));
treeSpr.setPosition(250,700);
treeSpr.rotate(90f);
//Rectangle
treeRect=new Rectangle(treeSpr.getX(),treeSpr.getHeight(),
treeSpr.getWidth(),treeSpr.getHeight());
The other answer is basically correct; however, I had some issues with the positioning of the polygons using that method. Just some clarification:
LibGDX does not support rotated Rectangles when using the Intersector for collision detection. If you need rotated rectangles, you should use a Polygon for collision detection instead.
Building a Rectangular Polygon:
polygon = new Polygon(new float[]{0,0,bounds.width,0,bounds.width,bounds.height,0,bounds.height});
Don't forget to set the origin of the Polygon if you are going to rotate it:
polygon.setOrigin(bounds.width/2, bounds.height/2);
Now you can rotate the collision polygon:
polygon.setRotation(degrees);
Also, somewhere in your code, you will likely want to update the position of the collision polygon to match your sprite:
polygon.setPosition(x, y);
We can even draw our polygon on screen (for debug purposes):
public void drawDebug(ShapeRenderer shapeRenderer) {
    shapeRenderer.begin(ShapeRenderer.ShapeType.Line);
    shapeRenderer.polygon(polygon.getTransformedVertices());
    shapeRenderer.end();
}
Collision Detection:
Use the overlapConvexPolygons() method of the Intersector:
boolean collision = Intersector.overlapConvexPolygons(polygon1, polygon2);
As mentioned in the other answer, this method only works if:
using convex polygons, which the rectangle is
performing polygon-to-polygon checks, e.g. you cannot mix rectangles and polygons
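Putting those pieces together, a minimal per-frame sketch might look like this (the sprite and otherPolygon names are assumptions for illustration):

// Keep the collision polygon in sync with the sprite every frame.
polygon.setPosition(sprite.getX(), sprite.getY());
polygon.setRotation(sprite.getRotation());

// Then test it against another polygon built the same way.
if (Intersector.overlapConvexPolygons(polygon, otherPolygon)) {
    // handle the collision
}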
Rotation
You could create a Polygon from the rectangle or from the sprite (supplying the vertices in order to the polygon constructor) and use its rotate(float degrees) method:
treePoly = new Polygon(new float[] {
treeRect.x, treeRect.y,
treeRect.x, treeRect.y + treeRect.height,
treeRect.x + treeRect.width, treeRect.y + treeRect.height,
treeRect.x + treeRect.width, treeRect.y
});
treePoly.rotate(45f);
Collision Detection
Collision checks then could be done via the Intersector class:
Intersector.overlapConvexPolygons(polygon1, polygon2)
Keep in mind though, this method only works if:
you use convex polygons, which the rectangle is
you do polygon to polygon checks, e.g.: you cannot mix rectangles and polygons
I think something like this can help; I cannot test it right now:
// Rectangle
treeRect = new Rectangle(treeSpr.getX(),
        treeSpr.getY(),
        treeSpr.getHeight(), // width and height are now swapped
        treeSpr.getWidth()); // because the sprite is rotated 90 degrees
Note: you may need to adjust the origin of the rotation for both.
You can use a ShapeRenderer to check that the result is as expected.
Add this as a class field for testing:
private ShapeRenderer sRDebugRectangel = new ShapeRenderer();
Add this in your update or draw method for testing:
sRDebugRectangel.begin(ShapeType.Filled);
sRDebugRectangel.identity();
sRDebugRectangel.rect(yourRectangle.getX(),
yourRectangle.getY(),
yourRectangle.getWidth(),
yourRectangle.getHeight());
sRDebugRectangel.end();
You can look at my answer to the following question on how to use a ShapeRenderer:
Libgdx, how can I create a rectangle from coordinates?
I am developing a 2D game, and I am currently working on a system for moving the camera over the map. I used the following method: my camera has its own coordinates x, y.
I have an ArrayList with all my sprites for the map, with their coordinates ranging from 0 to mapSize. Every sprite has a Draw function, which simply looks like:
g2d.drawImage(texture, getX(), getY(), getX() + getSizeX(), getY() + getSizeY(), 0, 0, getSizeX(), getSizeY(), null);
I'm always drawing all my sprites, without checking whether they are visible or not.
Does this put a load on the computer when drawing textures that are far outside the screen?
Do I need to check whether an object is visible before rendering it?
My main DrawAll function contains:
public void DrawAll(Graphics2D g2d) {
    g2d.translate(-playerCamera.getX(), -playerCamera.getY());
    for (int i = 0; i < mapSprites.size(); i++) {
        mapSprites.get(i).Draw(g2d);
    }
    // Undo the camera translation before drawing screen-fixed elements.
    g2d.translate(playerCamera.getX(), playerCamera.getY());
    // drawSomeStrings, etc.
}
This is not very good, because lines drawn after the second translate may twitch when the screen moves.
Should I give up on translate and apply the camera offset manually in each object's/sprite's Draw function?
Graphics2D will clip your drawing, so it does not have much impact. If you have a lot of sprites, you should consider using a spatial index to select which sprites are on screen (https://github.com/aled/jsi).
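If you do want to cull manually, a simple visibility check before drawing could look like the sketch below (screenWidth and screenHeight are assumed fields; the Sprite accessors are the ones from the question):

import java.awt.geom.Rectangle2D;

// The part of the world the camera currently shows.
Rectangle2D view = new Rectangle2D.Double(playerCamera.getX(), playerCamera.getY(),
        screenWidth, screenHeight);

for (int i = 0; i < mapSprites.size(); i++) {
    Sprite s = mapSprites.get(i);
    // Only draw sprites whose bounds intersect the visible area.
    if (view.intersects(s.getX(), s.getY(), s.getSizeX(), s.getSizeY())) {
        s.Draw(g2d);
    }
}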
In a Java 2D game, I have a rectangular sprite of a tank. The sprite can rotate in any angle, and travel in the direction of that angle.
This sprite needs to have a bounding box, so I can detect collision to it.
This bounding box needs to:
Follow the sprite around the screen.
Rotate when the sprite rotates.
Obviously it should be invisible, but right now I'm drawing the box on the screen to see if it works. It doesn't.
My problem is this:
When the sprite travels parallel to the x axis or y axis, the box follows correctly and keeps 'wrapping' the sprite precisely.
But when the sprite travels diagonally, the box doesn't follow the sprite correctly.
Sometimes it moves too much along the x axis and too little along the y axis; sometimes the opposite; and maybe sometimes too much or too little on both. I'm not sure.
Could you look at my code and tell me if you see anything wrong?
(Please note: The bounding box most of the time is actually just two arrays of coordinates, each one containing 4 values. The coordinates are used to form a Polygon when collision is checked, or when the box is drawn to the screen).
Relevant code from the Entity class, the superclass of Tank:
int[] xcoo = new int[4]; // coordinates of 4 vertices of the bounding box.
int[] ycoo = new int[4];
double x,y; // current position of the sprite.
double dx,dy; // how much to move the sprite, and the vertices of the bounding box.
double angle; // current angle of movement and rotation of sprite and bounding-box.
// Returns a Polygon object, that's the bounding box.
public Polygon getPolyBounds() { return new Polygon(xcoo, ycoo, xcoo.length); }
public void move(){
    // Move sprite
    x += dx;
    y += dy;

    // Move vertices of bounding box.
    for(int i=0;i<4;i++){
        xcoo[i] += dx;
        ycoo[i] += dy;
    }

    // Code to rotate the bounding box according to the angle, will be added later.
    // ....
}
Relevant code from the Board class, the class that runs most of the game.
This is from the game-loop.
// keysPressed1 is an array of flags to tell which key is currently pressed.

// if left arrow is pressed
if (keysPressed1[0] == true)
    tank1.setAngle(tank1.getAngle() - 3);

// if right arrow is pressed
if (keysPressed1[1] == true)
    tank1.setAngle(tank1.getAngle() + 3);

// if up arrow is pressed (sets the direction to move, based on angle).
if (keysPressed1[2] == true) {
    tank1.setDX(2 * Math.cos(Math.toRadians(tank1.getAngle())));
    tank1.setDY(2 * Math.sin(Math.toRadians(tank1.getAngle())));
    tank1.move(); // should move both the sprite and its bounding box
}
Thanks a lot for your help. If you need me to explain something about the code so you can help me, please say so.
Your sprite is using doubles and your bounding box is using ints, see these declarations:
int[] xcoo = new int[4];
double x, y
And the following updates:
(double dx, dy, showing it is a double)
x += dx
xcoo[i] += dx
In the latter (the bounding box) you are adding a double to an int; the result is cast back to an int, so the decimal places are dropped.
That is why the box does not follow the sprite exactly: an int can never track a double precisely.
To solve this you need xcoo, ycoo and corresponding methods to work with double instead of int.
Update: apparently Polygon only takes integers; to solve that, take a look at the following question: Polygons with Double Coordinates
You should be using Path2D.Double
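A minimal sketch of how that could look, assuming fields similar to the Entity class from the question (the collidesWith helper is an illustration, not part of the original code):

import java.awt.geom.Area;
import java.awt.geom.Path2D;

// Store the corners as doubles so no decimal places are lost when adding dx/dy.
double[] xcoo = new double[4];
double[] ycoo = new double[4];

public Path2D.Double getPolyBounds() {
    Path2D.Double path = new Path2D.Double();
    path.moveTo(xcoo[0], ycoo[0]);
    for (int i = 1; i < 4; i++) {
        path.lineTo(xcoo[i], ycoo[i]);
    }
    path.closePath();
    return path;
}

// Two entities collide if the intersection of their bounding shapes is not empty.
public boolean collidesWith(Entity other) {
    Area a = new Area(getPolyBounds());
    a.intersect(new Area(other.getPolyBounds()));
    return !a.isEmpty();
}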
I have a screen (BaseScreen implements the Screen interface) that renders a PNG image. On click of the screen, it moves the character to the position touched (for testing purposes).
public class DrawingSpriteScreen extends BaseScreen {
    private Texture _sourceTexture = null;
    float x = 0, y = 0;

    @Override
    public void create() {
        _sourceTexture = new Texture(Gdx.files.internal("data/character.png"));
    }
    .
    .
}
During rendering of the screen, if the user touched the screen, I grab the coordinates of the touch, and then use these to render the character image.
@Override
public void render(float delta) {
    if (Gdx.input.justTouched()) {
        x = Gdx.input.getX();
        y = Gdx.input.getY();
    }

    super.getGame().batch.draw(_sourceTexture, x, y);
}
The issue is that the coordinates for drawing the image start from the bottom left (as noted in the LibGDX Wiki), while the coordinates for touch input start from the upper left corner. So the issue I'm having is that when I click at the bottom right, it moves the image to the top right. My coordinates may be X 675, Y 13, which on touch would be near the top of the screen, but the character shows at the bottom, since the drawing coordinates start from the bottom left.
Why is this? Why are the coordinate systems reversed? Am I using the wrong objects to determine this?
To detect collision I use camera.unproject(vector3). I set vector3 as:
x = Gdx.input.getX();
y = Gdx.input.getY();
z=0;
Now I pass this vector to camera.unproject(vector3) and use the x and y of this vector to draw the character.
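In context, that might look something like this inside render() (a sketch only; the camera field is an assumption):

if (Gdx.input.justTouched()) {
    Vector3 touch = new Vector3(Gdx.input.getX(), Gdx.input.getY(), 0);
    camera.unproject(touch); // converts y-down screen coordinates into world coordinates
    x = touch.x;
    y = touch.y;
}
super.getGame().batch.draw(_sourceTexture, x, y);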
You're doing it right. Libgdx generally provides coordinate systems in their "native" format (in this case the native touch screen coordinates, and the default OpenGL coordinates). This doesn't create any consistency but it does mean the library doesn't have to get in between you and everything else. Most OpenGL games use a camera that maps relatively arbitrary "world" coordinates onto the screen, so the world/game coordinates are often very different from screen coordinates (so consistency is impossible). See Changing the Coordinate System in LibGDX (Java)
There are two ways you can work around this. One is transform your touch coordinates. The other is to use a different camera (a different projection).
To fix the touch coordinates, just subtract the y from the screen height. That's a bit of a hack. More generally you want to "unproject" from the screen into the world (see the Camera.unproject() variations). This is probably the easiest.
Alternatively, to fix the camera see "Changing the Coordinate System in LibGDX (Java)", or this post on the libgdx forum. Basically you define a custom camera, and then set the SpriteBatch to use that instead of the default:
// Create a full-screen camera:
camera = new OrthographicCamera(Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
// Set it to an orthographic projection with "y down" (the first boolean parameter)
camera.setToOrtho(true, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
camera.update();
// Create a sprite batch and have it use the above camera
batch = new SpriteBatch();
batch.setProjectionMatrix(camera.combined);
While fixing the camera works, it is "swimming upstream" a bit. You'll run into other renderers (ShapeRenderer, the font renderers, etc) that will also default to the "wrong" camera and need to be fixed up.
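For example, a ShapeRenderer would need the same treatment (a short sketch; the shapeRenderer field is an assumption):

shapeRenderer.setProjectionMatrix(camera.combined);
shapeRenderer.begin(ShapeRenderer.ShapeType.Line);
shapeRenderer.rect(0, 0, 100, 100); // now drawn in the same y-down coordinate system
shapeRenderer.end();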
I had the same problem; I simply did this:
public boolean touchDown(int screenX, int screenY, int pointer, int button) {
    // Flip the y coordinate (gheight is the screen height).
    screenY = (int) (gheight - screenY);
    return true;
}
And every time you want to take input from the user, don't use Gdx.input.getY(); use (Gdx.graphics.getHeight() - Gdx.input.getY()) instead. That worked for me.
You need to use the method project(Vector3 worldCoords) in the class com.badlogic.gdx.graphics.Camera; it projects the given coordinates in world space to screen coordinates.
private Camera camera;
............

@Override
public boolean touchDown(int screenX, int screenY, int pointer, int button) {
    // Create an instance of the vector and initialize it with the coordinates of the input event handler.
    Vector3 worldCoors = new Vector3(screenX, screenY, 0);

    // Project the worldCoors given in world space to screen coordinates.
    camera.project(worldCoors);

    // Use the projected coordinates.
    world.hitPoint((int) worldCoors.x, (int) worldCoors.y);
    OnTouch();

    return true;
}