Libgdx get scaled touch position - java

I'm working on an Android game using libGDX.
@Override
public boolean touchDown(int x, int y, int pointer, int button) {
// TODO Auto-generated method stub
return false;
}
Here, x and y give the position of the touch on the device screen, with values between 0 and the device's screen width and height.
My game resolution is 800x480, and it will keep its aspect ratio on every device.
I want to find a way to get the touch position relative to the game rectangle (the attached image shows exactly what I mean).
Is there a way to do it?
I want to get the touch position relative to my viewport.
I use this to keep the aspect ratio:
http://www.java-gaming.org/index.php?topic=25685.0
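For context, here is a minimal sketch of the letterboxing approach that link describes (assuming it is the usual glViewport technique; the viewport rectangle mirrors the one used in the last answer below):

int screenW = Gdx.graphics.getWidth();
int screenH = Gdx.graphics.getHeight();
float targetRatio = 480f / 800f;                 // virtual height / virtual width
float sourceRatio = (float) screenH / screenW;
// scale so the largest 800x480-proportioned rectangle fits the physical screen
float scale = sourceRatio > targetRatio
        ? screenW / 800f                         // bars on top and bottom
        : screenH / 480f;                        // bars on left and right
Rectangle viewport = new Rectangle();
viewport.width  = 800f * scale;
viewport.height = 480f * scale;
viewport.x = (screenW - viewport.width) / 2f;
viewport.y = (screenH - viewport.height) / 2f;
// tell OpenGL to render only inside that rectangle
Gdx.gl.glViewport((int) viewport.x, (int) viewport.y,
        (int) viewport.width, (int) viewport.height);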

Unproject your touch.
Make a Vector3 object for the user's touch:
Vector3 touch = new Vector3();
And use the camera to convert the screen touch coordinates, to camera coordinates:
@Override
public boolean touchDown(int x, int y, int pointer, int button){
camera.unproject(touch.set(x, y, 0)); //<---
//use touch.x and touch.y as your new touch point
return false;
}

In newer versions of libGDX you can achieve this with the built-in viewports.
First, choose your preferred viewport; the one you want here is FitViewport.
You can read about them here:
https://github.com/libgdx/libgdx/wiki/Viewports
Next, you declare and initialize the viewport and pass your resolution and camera:
viewport = new FitViewport(800, 480, cam);
Then edit the "resize" method of your Screen class to look like this:
@Override
public void resize(int width, int height) {
viewport.update(width, height);
}
Now, wherever you want to get touch points, you need to convert them to the new resolution. Fortunately, the viewport class does this automatically.
Just write this:
Vector2 newPoints = new Vector2(x,y);
newPoints = game.mmScreen.viewport.unproject(newPoints);
Here x and y are the touch points on the screen, and in the second line "newPoints" receives the transformed coordinates. Now you can pass them wherever you want.

After one or two painful hours I finally found the solution.
if (Gdx.input.isTouched()) {
float x = Gdx.input.getX();
float y = Gdx.input.getY();
float yR = viewport.height / (y - viewport.y); // the y ratio
y = 480 / yR;
float xR = viewport.width / (x - viewport.x); // the x ratio
x = 800 / xR;
bubbles.add(new Bubble(x, 480 - y));
}
Edit: this is an old, deprecated way to do it, so don't use it.


Processing - Method for Ellipse/Rect Collision

I am programming a game of sorts which would be kinda long to explain, but in short the player is an ellipse() that follows the mouse around, whilst the rect() is the obstacle that moves down the screen and needs to be dodged by the player, otherwise it's game over. There are multiple rects, as I am using an ArrayList to store each obstacle object.
Currently, the player can just pass straight through the rect without anything happening to it. I have tried to solve it multiple times but it got extremely messy and I couldn't understand much due to being extremely new to Java (only 1 week of experience), so I have just placed the empty code below.
tl;dr: I need to figure out how to get an ellipse/rect collision to work (in its own method). I only have one week of Processing/Java experience. I've cut out most of the code that you don't need to look at, mainly keeping the variables used to define the shapes and the code for the shapes, in case you need that. Also, if possible, could the collision method be placed inside the Enemy class.
Enemy Class (all the variables used to define the rect)
class Enemy {
int enemyNumber; // used to determine enemy type
//VARIABLES FOR ENEMY
boolean redEnemy = false; // determine enemy colour
color enemyColour = color(#B9B9E8); // sets default colour to blue
PVector position, velocity;
float xDist, yDist; // x and y distance for Bar
float smallCircleRad, bigCircleRad; // radius for circles
// **************************************************************************
Enemy() { //CONSTRUCTOR
position = new PVector(width/2, random(-300000, -250));
//println(position.y);
velocity = new PVector(0, 10);
smallCircleRad = 200;
bigCircleRad = 400;
xDist = width;
yDist = 200;
enemyNumber = int(random(1, 6));
}
// **************************************************************************
void redBar(float xPos, float yPos, float xDist, float yDist) {
redEnemy = true;
noStroke();
enemyColour = color(#E38585);
fill(enemyColour);
rect(xPos, yPos, xDist, yDist);
}
void blueBar(float xPos, float yPos, float xDist, float yDist) {
redEnemy = false;
noStroke();
enemyColour = color(#B9B9E8);
fill(enemyColour);
rect(xPos, yPos, xDist, yDist);
}
Player Class (all the variables used to define the ellipse)
class Player {
int r = 50; //player radius
float playerX = width/2; //starting x coordinate
float playerY = height/2+500; //starting y coordinate
float speed = 20; //player speed
float angle; //angle used to calculate trajectory for player
void playerChar() { //method for player model and general rules
stroke(10);
rectMode(CENTER);
fill(playerColour);
ellipse(playerX, playerY, r*2, r*2);
}
Make your life easier by treating the player as a rectangle instead of a circle. You can still draw them as a circle, but for collision detection, use a rectangle. This is called a bounding box and is very popular in collision detection.
Then you can use rectangle-rectangle collision detection, which is much simpler.
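For example, here is a minimal sketch of such a bounding-box check inside the Enemy class (as requested), assuming the bar is drawn from its top-left corner with size xDist by yDist and the player's box is centered on (playerX, playerY) with half-size r; the method name is illustrative:

// Axis-aligned bounding-box (AABB) overlap test: the two boxes collide unless one
// is entirely to one side of the other. Sketch only; adjust if rectMode(CENTER) is used.
boolean hitsPlayer(Player p) {
  return p.playerX + p.r > position.x &&            // player's right edge past bar's left edge
         p.playerX - p.r < position.x + xDist &&    // player's left edge before bar's right edge
         p.playerY + p.r > position.y &&            // player's bottom edge below bar's top edge
         p.playerY - p.r < position.y + yDist;      // player's top edge above bar's bottom edge
}

You would call it each frame for every enemy, e.g. if (enemy.hitsPlayer(player)) { /* game over */ }.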
Some basic google searches return these results:
Axis-Aligned Bounding Box
What is the fastest way to work out 2D bounding box intersection?
Processing Collision Detection
If for some reason you absolutely need the player to be a circle when calculating the collision, then I'd start by googling something like "circle rectangle collision detection".
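For completeness, a minimal sketch of that circle-vs-rectangle test, again inside Enemy and using the posted field names (the method name is illustrative): clamp the circle's center to the rectangle, then compare the distance to that clamped point against the radius.

boolean hitsCircle(float cx, float cy, float radius) {
  // find the point on the bar closest to the circle's center
  float closestX = constrain(cx, position.x, position.x + xDist);
  float closestY = constrain(cy, position.y, position.y + yDist);
  // collision if that closest point lies within the circle
  float dx = cx - closestX;
  float dy = cy - closestY;
  return dx * dx + dy * dy < radius * radius;
}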
If you still can't get it figured out, please post a MCVE in a new question and we'll go from there. Good luck.

Offset between click detection and button graphics in libgdx

I have a libgdx application that contains a class Button. The constructor of Button takes three arguments: the filename of the graphics, the position, and the game (the latter being used for callbacks of various sorts).
The button scales itself based on the graphics provided, thus setting its width and height based on the properties of the graphics.
The main class, Game, when a click is detected compares the coordinates of the click up against the coordinates of the button combined with its width and height.
Now, the main issue is that there is a little bit of a horizontal offset between the button and the click coordinates, so the effect is that the graphics show up a few pixels to the right of the clickable area. I cannot for the life of me figure out the source of this discrepancy, so I would greatly appreciate some fresh eyes to see where I'm going wrong here.
Button: the constructor and the polling method for the clickable area.
public Rectangle getClickArea() {
return new Rectangle(pos.x - (img.getWidth() / 2), pos.y + (img.getHeight() / 2), w, h);
}
public Button(String assetfile, int x, int y, Game game) {
this.game = game;
img = new Pixmap(new FileHandle(assetfile));
pos = new Vector2(x, y);
this.w = img.getWidth();
this.h = img.getHeight();
}
A relevant snippet from InputHandler. It listens for input and passes on the event. Note that the vertical click position is subtracted from the vertical size of the screen, because the vertical origin is flipped for input events:
public boolean touchDown(int screenX, int screenY, int pointer, int button) {
tracker.click(screenX, Settings.windowSize_Y - screenY);
return false;
}
ClickTracker (referenced as tracker in the above snippet), the class that does the actual comparison between clicks and clickables:
public void click(int x, int y) {
Vector2 clickPos = new Vector2(x, y);
for (Tickable c : world.getPaintables())
{
if (!(c instanceof Unit))
continue;
if (((Clickable)c).getClickArea().contains(clickPos)) {
System.out.println("Clicked on unit");
}
}
for (Clickable c : clickables)
{
if (c.getClickArea().contains(clickPos)) {
c.clicked(x, y);
}
}
}
In short: The vertical alignment works as intended, but the horizontal is slightly off. The button graphics appear maybe around 10-20 pixels to the right of the clickable area.
I'll gladly post more info or code if needed, but I believe I have the relevant parts covered.
Edit:
As Maciej Dziuban requested, here's the snippet that draws the UI elements. batch is a SpriteBatch as provided by libgdx:
for (Paintable p : ui) {
batch.draw(new Texture(p.getImg()), p.getImgPos().x, p.getImgPos().y);
}
the getImgPos() is an interface method implemented by all drawable items:
public Vector2 getImgPos() {
return new Vector2(pos.x - (getImg().getWidth() / 2), pos.y);
}
It's worth noting that half of the horizontal image size is subtracted from the X pos, as X pos refers to the bottom center.
You have an inconsistency in your position transformations:
Your clickArea's corner is pos translated by [-width/2, height/2] vector.
Your drawArea's corner is pos translated by [-width/2, 0] vector
They clearly should be the same, so if you want pos to represent the bottom center of your entity (as you've explicitly stated), you have to change your getClickArea() method so it matches getImgPos():
public Rectangle getClickArea() {
return new Rectangle(pos.x - (img.getWidth() / 2), pos.y, w, h);
}
Side note: as Tenfour04 noticed, you create a new texture each frame, and this is a huge memory leak. You should make it a field initialized in the constructor, or even a static variable if several buttons share the texture. Don't forget to call dispose() on resources. For more powerful asset management, check out this article (note it may be overkill in small projects).
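For illustration, a sketch of that fix using the posted Button fields; the texture field, the dispose() method, and the getTexture() accessor mentioned afterwards are assumptions, not part of the original code:

// Create the Texture once in the constructor instead of once per draw call.
public Button(String assetfile, int x, int y, Game game) {
    this.game = game;
    img = new Pixmap(new FileHandle(assetfile));
    texture = new Texture(img);          // uploaded to the GPU a single time
    pos = new Vector2(x, y);
    this.w = img.getWidth();
    this.h = img.getHeight();
}

public void dispose() {
    texture.dispose();                   // release GPU memory when the button goes away
    img.dispose();
}

The render loop would then reuse it, e.g. batch.draw(p.getTexture(), p.getImgPos().x, p.getImgPos().y);, instead of constructing a new Texture every frame.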

How to match user click and the sprite object position in libGDX framework

I am using libGDX java framework for developing a practice game in Eclipse.
My game is in landscape mode and I am using a sprite image for the game assets. Actually, I am trying to follow the kilobolt ZombieBird tutorial.
I have set up the orthographic camera like this:
cam = new OrthographicCamera();
cam.setToOrtho(true, 250, 120);
I have done this because my background texture region is 250 x 120 px in the sprite image.
So basically my sprite image is small and gets scaled according to the device, but all the computation is done relative to 250 x 120 px. For example, to change the position of an object I have defined Vector2 position = new Vector2(x, y);, and if I write position.x = 260; the sprite will go off the screen even if my device width is 500 px.
Problem:
Now I have to make the moving sprite vanish when someone clicks on it (just imagine zombies moving around, and if I click on them they die). So I am using the following code to match the user's click coordinates with the object's coordinates.
int x1 = Gdx.input.getX();
int y1 = Gdx.input.getY();
if(position.x == x1 && position.y == y1){
// do something that vanish the object clicked
}
The problem is that position.x and position.y return coordinates relative to the ortho cam's width and height, which is 250x120 px, while the click coordinates are relative to the device's width and height, which may be anything depending on the device. Because of this, even if I click right on the object, the click coordinates and the object's position coordinates differ hugely, so I never get matching values.
Is there any solution for this, or am I doing it wrong?
You have to unproject the device coordinates using the camera. The camera has a built-in function to do this, so it's fairly simple. Furthermore, to determine whether the sprite is clicked, you have to check whether the clicked point is anywhere inside the sprite, not just equal to the sprite's position. Do something like this:
int x1 = Gdx.input.getX();
int y1 = Gdx.input.getY();
Vector3 input = new Vector3(x1, y1, 0);
cam.unproject(input);
//Now you can use input.x and input.y, as opposed to x1 and y1, to determine if the moving
//sprite has been clicked
if(sprite.getBoundingRectangle().contains(input.x, input.y)) {
//Do whatever you want to do with the sprite when clicked
}
As an alternative to kabb's answer, you can just use math to convert screen coordinates to camera coordinates:
//Example:
float ScreenWidth = Gdx.graphics.getWidth();
float ScreenHeight = Gdx.graphics.getHeight();
// on a 1080p screen this would return ScreenWidth = 1080, ScreenHeight = 1920;
//now you get the screen co-ordinates and convert them to cam co-ordinates:
float x1 = Gdx.input.getX();
float y1 = Gdx.input.getY();
float x1cam = (x1 / ScreenWidth) * CamWidth;
float y1cam = (y1 / ScreenHeight) * CamHeight;
// now you can use the if statement in kabb's answer

libgdx create texture from overlay using pixmap

I am trying to create a method which returns a texture modified by an overlay, using libGDX and Pixmap.
Assuming I have 2 images:
A Base Image in FileHandle textureInput
And an overlay image in FileHandle overLay
It should produce this texture:
So it should use the RGB values from the textureInput and the alpha values from the overLay and create the final image. I believe I can do this using the Pixmap class but I just can't seem to find exactly how.
Here is what I gather should be the structure of the method:
public Texture getOverlayTexture(FileHandle overLay, FileHandle textureInput){
Pixmap inputPix = new Pixmap(textureInput);
Pixmap overlayPix = new Pixmap(overLay);
Pixmap outputPix = new Pixmap(inputPix.getWidth(), inputPix.getHeight(), Format.RGBA8888);
// go over the inputPix and add each byte to the outputPix
// but only where the same byte is not alpha in the overlayPix
Texture outputTexture = new Texture(outputPix, Format.RGBA8888, false);
inputPix.dispose();
outputPix.dispose();
overlayPix.dispose();
return outputTexture;
}
I am just looking for a bit of direction as to where to go from here. Any help is really appreciated. I apologize if this question is too vague or if my approach is entirely off.
Thanks!
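One direct, if unglamorous, way to fill in that loop with the Pixmap API (a sketch assuming both images have the same dimensions; the accepted answer below uses blending instead):

// Combine RGB from inputPix with alpha from overlayPix, pixel by pixel.
// Pixmap.getPixel() returns an RGBA8888-packed int regardless of the source format.
// Note: disable Pixmap blending first (Pixmap.setBlending(Blending.None) in older
// libGDX versions, outputPix.setBlending(...) in newer ones) so drawPixel writes
// the packed value verbatim instead of alpha-blending it onto the empty pixmap.
for (int py = 0; py < inputPix.getHeight(); py++) {
    for (int px = 0; px < inputPix.getWidth(); px++) {
        int rgb   = inputPix.getPixel(px, py) & 0xffffff00;   // keep R, G, B
        int alpha = overlayPix.getPixel(px, py) & 0x000000ff; // keep A
        outputPix.drawPixel(px, py, rgb | alpha);
    }
}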
I finally found the way to do this.
How my game is set up is that each item draws itself: each item is handed a SpriteBatch and can do what it needs with it, which I did for various reasons. There is an item manager containing a list of items. Each item has various attributes and its own render method along with other independent methods.
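A rough sketch of that item-manager arrangement, just to make the structure concrete; the class and field names here are illustrative guesses, not taken from the actual project:

public class ItemManager {
    private final Array<Item> items = new Array<Item>();

    public void render(SpriteBatch batch, int layerCount) {
        // hand the batch to every item once per render layer; each item decides
        // whether it belongs to the layer currently being drawn
        for (int renderLayer = 0; renderLayer < layerCount; renderLayer++) {
            for (Item item : items) {
                item.render(batch, renderLayer);
            }
        }
    }
}

Here is what finally worked: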
A normal item's render method which does not use any alpha masking:
public void render(SpriteBatch batch, int renderLayer) {
if(renderLayer == Integer.parseInt(render_layer)){ // be in the correct render layer
batch.draw(item.region,
item.position.x, // position.x
item.position.y, // position.y
0, //origin x
0, //origin y
item.region.getRegionWidth() , //w
item.region.getRegionHeight(), //h
item.t_scale, //scale x
item.t_scale, //scale y
item.manager.radiansToDegrees(item.rotation)); //angle
}
}
So it is handed a spritebatch that it draws to with the correct image, location, scale, and rotation, and that is that.
After playing around with what I found here: https://gist.github.com/mattdesl/6076846 for a while, this finally worked for an item that needs to use alpha masking:
public void render(SpriteBatch batch, int renderLayer) {
if(renderLayer == Integer.parseInt(render_layer)){
batch.enableBlending();
//draw the alpha mask
drawAlphaMask(batch, item.position.x, item.position.y, item.region.getRegionWidth(), item.region.getRegionHeight());
//draw our foreground elements
drawForeground(batch, item.position.x, item.position.y, item.region.getRegionWidth(), item.region.getRegionHeight());
batch.disableBlending();
}
}
There is a TextureRegion, referred to as alphaRegion in the code below, which contains a black shape.
It can be any image, but let's say in this instance it's this shape/image:
Here is the function called above that uses that image:
private void drawAlphaMask(SpriteBatch batch, float x, float y, float width, float height) {
//disable RGB color, only enable ALPHA to the frame buffer
Gdx.gl.glColorMask(false, false, false, true);
// Get these values so I can be sure I set them back to how it was
dst = batch.getBlendDstFunc();
src = batch.getBlendSrcFunc();
//change the blending function for our alpha map
batch.setBlendFunction(GL10.GL_SRC_ALPHA, GL10.GL_ZERO);
//draw alpha mask sprite
batch.draw(alphaRegion,
x, // position.x
y, // position.y
0, // origin x
0, // origin y
alphaRegion.getRegionWidth(), // w
alphaRegion.getRegionHeight(), // h
item.t_scale, // scale x
item.t_scale, // scale y
item.manager.radiansToDegrees(item.rotation)); // angle
//flush the batch to the GPU
batch.flush();
}
There are a variety of "materials" to apply to any shape. In any instance one of them is assigned to the spriteRegion variable. Let's say right now it is this:
So the drawForeground method called above uses that image like this:
private void drawForeground(SpriteBatch batch, float clipX, float clipY, float clipWidth, float clipHeight) {
//now that the buffer has our alpha, we simply draw the sprite with the mask applied
Gdx.gl.glColorMask(true, true, true, true);
batch.setBlendFunction(GL10.GL_DST_ALPHA, GL10.GL_ONE_MINUS_DST_ALPHA);
batch.draw(spriteRegion,
clipX, // corrected center position.x
clipY, // corrected center position.y
0, //origin x
0, //origin y
spriteRegion.getRegionWidth() , //w
spriteRegion.getRegionHeight(), //h
item.t_scale, //scale x
item.t_scale, //scale y
item.manager.radiansToDegrees(item.rotation)); //angle
//remember to flush before changing GL states again
batch.flush();
// set it back to however it was before
batch.setBlendFunction(src, dst);
}
That all worked right away in the desktop build, and can produce "Brick Beams" (or whatever) in the game nicely:
However in Android and GWT builds (because after all, I am using libgdx) it did not incorporate the alpha mask, and instead rendered the full brick square.
After a lot of looking around I found this: https://github.com/libgdx/libgdx/wiki/Integrating-libgdx-and-the-device-camera
And so to fix this in Android I modified the MainActivity.java onCreate method like this:
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
AndroidApplicationConfiguration cfg = new AndroidApplicationConfiguration();
cfg.useGL20 = false;
cfg.r = 8;
cfg.g = 8;
cfg.b = 8;
cfg.a = 8;
initialize(new SuperContraption("android"), cfg);
if (graphics.getView() instanceof SurfaceView) {
SurfaceView glView = (SurfaceView) graphics.getView();
// force alpha channel - I'm not sure we need this as the GL surface
// is already using alpha channel
glView.getHolder().setFormat(PixelFormat.TRANSLUCENT);
}
}
And that fixes it for Android.
I still cannot figure out how to make it work properly in GWT, as I cannot work out how to tell libGDX to tell GWT to tell WebGL to pay attention to the alpha channel. I'm also interested in how to do something like this in an easier or less expensive way (though this approach seems to work fine).
If anyone knows how to make this work with GWT, please post as another answer.
Here is the non-working GWT build if you want to see the texture issue:
https://supercontraption.com/assets/play/index.html

libgdx coordinate system differences between rendering and touch input

I have a screen (BaseScreen implements the Screen interface) that renders a PNG image. On click of the screen, it moves the character to the position touched (for testing purposes).
public class DrawingSpriteScreen extends BaseScreen {
private Texture _sourceTexture = null;
float x = 0, y = 0;
@Override
public void create() {
_sourceTexture = new Texture(Gdx.files.internal("data/character.png"));
}
.
.
}
During rendering of the screen, if the user touched the screen, I grab the coordinates of the touch, and then use these to render the character image.
@Override
public void render(float delta) {
if (Gdx.input.justTouched()) {
x = Gdx.input.getX();
y = Gdx.input.getY();
}
super.getGame().batch.draw(_sourceTexture, x, y);
}
The issue is that the coordinates for drawing the image start from the bottom-left corner (as noted in the LibGDX wiki), while the coordinates for touch input start from the upper-left corner. So when I click on the bottom right, the image moves to the top right. My coordinates may be X 675, Y 13, which as a touch would be near the top of the screen, but the character shows up at the bottom, since the drawing coordinates start from the bottom left.
Why is this? Why are the coordinate systems reversed? Am I using the wrong objects to determine this?
To detect collisions I use camera.unproject(vector3). I set the vector as:
x = Gdx.input.getX();
y = Gdx.input.getY();
z = 0;
Now I pass this vector to camera.unproject(vector3). Use the x and y of this vector to draw your character.
You're doing it right. Libgdx generally provides coordinate systems in their "native" format (in this case the native touch screen coordinates, and the default OpenGL coordinates). This doesn't create any consistency but it does mean the library doesn't have to get in between you and everything else. Most OpenGL games use a camera that maps relatively arbitrary "world" coordinates onto the screen, so the world/game coordinates are often very different from screen coordinates (so consistency is impossible). See Changing the Coordinate System in LibGDX (Java)
There are two ways you can work around this. One is transform your touch coordinates. The other is to use a different camera (a different projection).
To fix the touch coordinates, just subtract the y from the screen height. That's a bit of a hack. More generally you want to "unproject" from the screen into the world (see the Camera.unproject() variations). This is probably the easiest.
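For instance, a minimal sketch of the unproject route applied to the screen from the question; the OrthographicCamera field and the reusable Vector3 are assumptions added here, not part of the original code:

private OrthographicCamera camera;           // sized to the screen, y-up
private final Vector3 tmp = new Vector3();   // reused to avoid allocating every frame

@Override
public void create() {
    _sourceTexture = new Texture(Gdx.files.internal("data/character.png"));
    camera = new OrthographicCamera();
    camera.setToOrtho(false, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
}

@Override
public void render(float delta) {
    if (Gdx.input.justTouched()) {
        // unproject converts the y-down touch coordinates into the camera's y-up world coordinates
        camera.unproject(tmp.set(Gdx.input.getX(), Gdx.input.getY(), 0));
        x = tmp.x;
        y = tmp.y;
    }
    camera.update();
    super.getGame().batch.setProjectionMatrix(camera.combined);
    super.getGame().batch.draw(_sourceTexture, x, y);
}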
Alternatively, to fix the camera see "Changing the Coordinate System in LibGDX (Java)", or this post on the libgdx forum. Basically you define a custom camera, and then set the SpriteBatch to use that instead of the default:
// Create a full-screen camera:
camera = new OrthographicCamera(Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
// Set it to an orthographic projection with "y down" (the first boolean parameter)
camera.setToOrtho(true, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
camera.update();
// Create a full screen sprite renderer and use the above camera
batch = new SpriteBatch(Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
batch.setProjectionMatrix(camera.combined);
While fixing the camera works, it is "swimming upstream" a bit. You'll run into other renderers (ShapeRenderer, the font renderers, etc) that will also default to the "wrong" camera and need to be fixed up.
I had the same problem; I simply did this:
public boolean touchDown(int screenX, int screenY, int pointer, int button) {
screenY = (int) (gheight - screenY);
return true;
}
Every time you want to take input from the user, don't use Gdx.input.getY(); instead use (Gdx.graphics.getHeight() - Gdx.input.getY()). That worked for me.
You need to use the method project(Vector3 worldCoords) in the class com.badlogic.gdx.graphics.Camera, which projects the given coordinates in world space to screen coordinates.
private Camera camera;
............
@Override
public boolean touchDown(int screenX, int screenY, int pointer, int button) {
// Create a vector and initialize it with the coordinates from the input event handler
Vector3 worldCoors = new Vector3(screenX, screenY, 0);
// Project the given worldCoors from world space to screen coordinates
camera.project(worldCoors);
// Use the projected coordinates
world.hitPoint((int) worldCoors.x, (int) worldCoors.y);
OnTouch();
return true;
}
